We kindly invite you to participate in LongEval 2023, a shared task on 
longitudinal evaluation of NLP models at CLEF 2023.

CALL FOR CONTRIBUTIONS

LongEval @ CLEF 2023

Longitudinal Evaluation of Model Performance

https://clef-longeval.github.io/

Lab description:

LongEval aims at identifying the types of models that offer better temporal 
persistence for NLP tasks on data that evolves over both shorter and longer 
time periods. LongEval is built on a common framework for its Information 
Retrieval and Text Classification tasks: for each system, we evaluate its 
effectiveness when operating on test data acquired at the same time t as the 
training data, on data acquired at time t', shortly after t (sub-task 1), and 
on data acquired at time t'', long after t (sub-task 2). For each sub-task of 
each task, two evaluation measures are proposed: an absolute quality measure, 
and the relative drop compared to the system's result on the initial time t 
test set.
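To make the evaluation setup concrete, here is a minimal sketch of the two 
kinds of measures in Python. The function name and the score values are 
illustrative assumptions, not part of any official LongEval toolkit; the 
absolute quality measure would be a standard metric for the task (e.g. nDCG 
for retrieval or accuracy for classification).

```python
# Hypothetical sketch (names and numbers assumed, not from the official
# LongEval toolkit): an absolute quality score is computed on each test set,
# and a relative drop compares later scores to the time-t baseline.

def relative_drop(score_t: float, score_later: float) -> float:
    """Relative performance drop between time t and a later test set.

    Positive values mean the system degraded over time; negative values
    mean it improved.
    """
    if score_t == 0:
        raise ValueError("baseline score at time t must be non-zero")
    return (score_t - score_later) / score_t

# Illustrative numbers: a system scoring 0.80 at time t, 0.72 shortly
# after t (sub-task 1), and 0.60 long after t (sub-task 2).
drop_short = relative_drop(0.80, 0.72)  # 10% relative drop
drop_long = relative_drop(0.80, 0.60)   # 25% relative drop
```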

LongEval 2023 proposes two tasks:

        • Task 1: Information Retrieval. For this task, the data is a sequence 
of Web document collections and queries, each containing a few million 
documents and hundreds of queries, provided by Qwant. Relevance assessments are 
to be computed using a Click Model acquired from real users of the Qwant search 
engine. As the initial corpus contains only French documents, an automatic 
translation into English will be provided.
        • Task 2: Text Classification. For this task, the training data is the 
TM-Senti sentiment analysis dataset, extended with a development set and three 
novel human-annotated test sets for submission evaluation. TM-Senti is a 
general large-scale English-language Twitter sentiment dataset spanning the 
9-year period from 2013 to 2021. Tweets are labeled for sentiment as either 
“positive” or “negative”; the annotation is performed using distant 
supervision based on a manually curated list of emojis and emoticons. 

You can register for the task at: 
https://clef2023-labs-registration.dei.unipd.it/ 

Lab Organizers:

Rabab Alkhalifa, Iman Bilal, Hsuvas Borkakoty, Jose Camacho-Collados, Romain 
Deveaud, Alaa El-Ebshihy, Luis Espinosa-Anke, Gabriela Gonzalez-Saez, Petra 
Galuscakova, Lorraine Goeuriot, Elena Kochkina, Maria Liakata, Daniel Loureiro, 
Harish Tayyar Madabushi, Philippe Mulhem, Florina Piroi, Martin Popel, 
Christophe Servan, and Arkaitz Zubiaga.

Important dates:

Release of training data: 03/01/2023

Release of test data: 30/04/2023

Runs submissions date: 30/06/2023

LongEval Workshop: during CLEF 2023, Thessaloniki, 18-21 September 2023.


_______________________________________________
Corpora mailing list -- [email protected]
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to [email protected]