๐—๐—ผ๐˜‚๐—ฟ๐—ป๐—ฎ๐—น ๐—ก๐—ฎ๐˜๐˜‚๐—ฟ๐—ฎ๐—น ๐—Ÿ๐—ฎ๐—ป๐—ด๐˜‚๐—ฎ๐—ด๐—ฒ ๐—ฃ๐—ฟ๐—ผ๐—ฐ๐—ฒ๐˜€๐˜€๐—ถ๐—ป๐—ด - ๐—ฆ๐—ฝ๐—ฒ๐—ฐ๐—ถ๐—ฎ๐—น ๐—œ๐˜€๐˜€๐˜‚๐—ฒ ๐—ผ๐—ป ๐—Ÿ๐—ฎ๐—ป๐—ด๐˜‚๐—ฎ๐—ด๐—ฒ ๐— ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€ ๐—ณ๐—ผ๐—ฟ 
๐—Ÿ๐—ผ๐˜„-๐—ฅ๐—ฒ๐˜€๐—ผ๐˜‚๐—ฟ๐—ฐ๐—ฒ ๐—Ÿ๐—ฎ๐—ป๐—ด๐˜‚๐—ฎ๐—ด๐—ฒ๐˜€

URL: https://loreslm.github.io/specialissue
Neural language models have revolutionised natural language processing (NLP)
and provide state-of-the-art results for many tasks. However, their
effectiveness depends largely on the resources available for pre-training, so
language models (LMs) often struggle with low-resource languages in both
training and evaluation. Recently, there has been a growing trend towards
developing and adopting LMs for low-resource languages. This special issue aims
to provide a forum for researchers to share and discuss their ongoing work on
LMs for low-resource languages.

๐—ง๐—ผ๐—ฝ๐—ถ๐—ฐ๐˜€
We invite submissions on a broad range of topics related to the development and 
evaluation of neural language models for low-resource languages, including but 
not limited to the following.
- Building language models for low-resource languages.
- Adapting/extending existing language models/large language models for 
low-resource languages.
- Corpora creation and curation technologies for training language models/large 
language models for low-resource languages.
- Benchmarks to evaluate language models/large language models in low-resource 
languages.
- Prompting/in-context learning strategies for low-resource languages with 
large language models.
- Review of available corpora to train/fine-tune language models/large language 
models for low-resource languages.
- Multilingual/cross-lingual language models/large language models for 
low-resource languages.
- Applications of language models/large language models for low-resource 
languages (e.g. machine translation, chatbots, and content moderation).

๐—œ๐—บ๐—ฝ๐—ผ๐—ฟ๐˜๐—ฎ๐—ป๐˜ ๐——๐—ฎ๐˜๐—ฒ๐˜€
Paper submission: December 31, 2025
First decision: March 31, 2026 - April 30, 2026
Revised version submission: May 1, 2026 - June 1, 2026
Final decision: August 30, 2026

๐—ฆ๐˜‚๐—ฏ๐—บ๐—ถ๐˜€๐˜€๐—ถ๐—ผ๐—ป
Submissions should be formatted according to the journal's author guidelines
(https://www.cambridge.org/core/journals/natural-language-processing/information/author-instructions/preparing-your-materials)
and submitted through the manuscript submission system
(https://mc.manuscriptcentral.com/nlp). To ensure your manuscript is considered
for this special issue, please select "Language Models for Low-Resource
Languages" under Special Issue Designation when uploading your manuscript.

Guest Editors
Hansi Hettiarachchi, Lancaster University, UK
Tharindu Ranasinghe, Lancaster University, UK
Paul Rayson, Lancaster University, UK
Ruslan Mitkov, Lancaster University, UK
Mohamed Gaber, Queensland University of Technology, Australia

Guest Editorial Board
Gรกbor Bella - IMT Atlantique, France
Ana-Maria Bucur - University of Bucharest, Romania
ร‡aฤŸrฤฑ ร‡รถltekin - University of Tรผbingen, Germany
Vera Danilova - Uppsala University, Sweden
Ona de Gibert - University of Helsinki, Finland
Ignatius Ezeani - Lancaster University, UK
Amal Htait - Aston University, UK
Ali HรผrriyetoฤŸlu - Wageningen University & Research, Netherlands
Danka Jokic - University of Belgrade, Serbia
Diptesh Kanojia - University of Surrey, UK
Taro Watanabe - Nara Institute of Science and Technology, Japan
Muhidin Mohamed - Aston University, UK
Alistair Plum - University of Luxembourg, Luxembourg
Damith Premasiri - Lancaster University, UK
Guokan Shang - Mohamed bin Zayed University of Artificial Intelligence, France
Ravi Shekhar - University of Essex, UK


Best Regards
Tharindu Ranasinghe on behalf of the Guest Editors

