Call for Papers
CLEF 2022: Conference and Labs of the
Evaluation Forum
Important Dates (Time zone: Anywhere on Earth)
- Submission of Long & Short Papers: 2 May 2022
- Notification of Acceptance: 30 May 2022
- Camera Ready Copy due: 20 June 2022
- Conference: 5-8 September 2022
Aim and Scope
The CLEF Conference addresses all aspects of Information Access in any
modality and language. It combines the presentation of research papers with a
series of workshops presenting the results of lab-based comparative evaluation
benchmarks.
CLEF 2022 is the 13th CLEF conference, continuing the popular CLEF
campaigns that have run since 2000 and contributed to the systematic evaluation
of information access systems, primarily through experimentation on shared tasks.
The CLEF conference has a clear focus on experimental IR as carried out within
evaluation forums (e.g., CLEF Labs, TREC, NTCIR, FIRE, MediaEval, RomIP,
SemEval, and TAC) with special attention to the challenges of multimodality,
multilinguality, and interactive search, also considering specific classes of
users such as children, students, and impaired users, across different tasks
(e.g., academic, professional, or everyday life). We invite paper submissions on
significant new
insights demonstrated on IR test collections, on analysis of IR test collections
and evaluation measures, as well as on concrete proposals to push the boundaries
of the Cranfield style evaluation paradigm.
All submissions to the CLEF main conference will be reviewed on the
basis of relevance, originality, importance, and clarity. CLEF welcomes papers
that describe rigorous hypothesis testing regardless of whether the results are
positive or negative. CLEF also welcomes analyses of past runs/results/data and
new data collections. Methods are expected to be described so that they are
reproducible by others, and the logic of the research design should be clearly
laid out in the paper. The conference proceedings will be published in the
Springer Lecture Notes in Computer Science (LNCS).
Topics
Relevant topics for the CLEF 2022 Conference include but are not limited to:
- Information Access in any language or modality: information retrieval,
image retrieval, question answering, information extraction and
summarisation, search interfaces and design, infrastructures, etc.
- Analytics for Information Retrieval: theoretical and practical results in
the analytics field specifically targeted at information access: data
analysis, data enrichment, etc.
- User studies, either lab-based or based on crowdsourcing.
- In-depth analysis of past results/runs, both statistical and fine-grained.
- Evaluation initiatives: conclusions, lessons learned, impact, and
projection of any evaluation initiative after completing its cycle.
- Evaluation: methodologies, metrics, statistical and analytical tools,
component based, user groups and use cases, ground-truth creation, impact
of multilingual/multicultural/multimodal differences, etc.
- Technology transfer: economic impact/sustainability of information access
approaches, deployment and exploitation of systems, use cases, etc.
- Interactive Information Retrieval evaluation: the interactive evaluation
of information retrieval systems using user-centered methods, evaluation of
novel search interfaces, novel interactive evaluation methods, simulation
of interaction, etc.
- Specific application domains: Information access and its evaluation in
application domains such as cultural heritage, digital libraries, social
media, health information, legal documents, patents, news, books, and in
the form of text, audio and/or image data.
- New data collections: presentation of new data collections with potentially
high impact on future research, specific collections from companies or labs,
multilingual collections.
- Work on data from rare languages, and on collaborative and social data.
Format
Authors are invited to electronically submit original papers, which have not
been published and are not under consideration elsewhere, using the LNCS
proceedings format:
http://www.springer.com/it/computer-science/lncs/conference-proceedings-guidelines
Two types of papers are solicited:
- Long papers: 12 pages max (excluding references). Intended to report
complete research works.
- Short papers: 6 pages max (excluding references). Position papers, new
evaluation proposals, developments and applications, etc.
Papers will be peer-reviewed by 3 members of the program committee. Selection
will be based on originality, clarity, and technical quality.
Paper submission
Papers should be submitted in PDF format to the following address:
https://www.easychair.org/conferences/?conf=clef2022
Organisation
General Chairs
Alberto Barrón-Cedeño, Università di Bologna, Italy
Giovanni Da San Martino, Università di Padova, Italy
Mirko Degli Esposti, Università di Bologna, Italy
Fabrizio Sebastiani, Consiglio Nazionale delle Ricerche, Italy
Program Chairs
Craig Macdonald, University of Glasgow, United Kingdom
Gabriella Pasi, Università degli Studi di Milano Bicocca, Italy
Evaluation Lab Chairs
Allan Hanbury, Technische Universität Wien, Austria
Martin Potthast, Universität Leipzig, Germany