Ethical LLMs 2025: The First Workshop on Ethical Concerns in Training, Evaluating and Deploying Large Language Models<https://sites.google.com/view/ethical-llms-2025> @ RANLP 2025<https://ranlp.org/ranlp2025/>

Call for papers:

Scope
Large Language Models (LLMs) represent a transformative leap in Artificial 
Intelligence (AI), delivering remarkable language-processing capabilities that 
are reshaping how we interact with technology in our daily lives. With their 
ability to perform tasks such as summarisation, translation, classification, 
and text generation, LLMs have demonstrated unparalleled versatility and power. 
Drawing from vast and diverse knowledge bases, these models hold the potential 
to revolutionise a wide range of fields, including education, media, law, 
psychology, and beyond. From assisting educators in creating personalised 
learning experiences to enabling legal professionals to draft documents or 
supporting mental health practitioners with preliminary assessments, the 
applications of LLMs are both expansive and profound.

However, alongside their impressive strengths, LLMs also face significant
limitations that raise critical ethical questions. Unlike humans, these models 
lack essential qualities such as emotional intelligence, contextual empathy, 
and nuanced ethical reasoning. While they can generate coherent and 
contextually relevant responses, they do not possess the ability to fully 
understand the emotional or moral implications of their outputs. This gap 
becomes particularly concerning when LLMs are deployed in sensitive domains 
where human values, cultural nuances, and ethical considerations are paramount. 
For example, biases embedded in training data can lead to unfair or 
discriminatory outcomes, while the absence of ethical reasoning may result in 
outputs that inadvertently harm individuals or communities. These limitations 
highlight the urgent need for robust research in Natural Language Processing 
(NLP) to address the ethical dimensions of LLMs. Advancements in NLP research 
are crucial for developing methods to detect and mitigate biases, enhance 
transparency in model decision-making, and incorporate ethical frameworks that 
align with human values. By prioritising ethics in NLP research, we can better 
understand the societal implications of LLMs and ensure their development and 
deployment are guided by principles of fairness, accountability, and respect 
for human dignity.

This workshop will examine these pressing issues, fostering a collaborative effort to shape the future of LLMs as tools that not only excel in technical performance but also uphold the highest ethical standards.

Submission Guidelines
We follow the RANLP 2025 standards for submission format and guidelines. 
EthicalLLMs 2025 invites the submission of long papers, up to eight pages in length, and short papers, up to six pages in length. These page limits apply only to the main body of the paper. At the end of the paper (after the conclusions but before the references), papers must include a mandatory section discussing the limitations of the work and, optionally, a section discussing ethical considerations. Papers may include unlimited pages of references and an unlimited appendix.
To prepare your submission, please make sure to use the RANLP 2025 style files 
available here:

  *   LaTeX<https://ranlp.org/ranlp2025/wp-content/uploads/2025/05/ranlp2025-LaTeX.zip>
  *   Word<https://ranlp.org/ranlp2025/wp-content/uploads/2025/05/ranlp2025-word.docx>

Papers should be submitted through Softconf/START using the following link: 
https://softconf.com/ranlp25/EthicalLLMs2025/

Topics of interest
The workshop invites submissions on a broad range of topics related to the 
ethical development and evaluation of LLMs, including but not limited to the 
following.

  1.  Bias Detection and Mitigation in LLMs
Research focused on identifying, measuring, and reducing social, cultural, and 
algorithmic biases in large language models.

  2.  Ethical Frameworks for LLM Deployment
Approaches to integrating ethical principles—such as fairness, accountability, 
and transparency—into the development and use of LLMs.

  3.  LLMs in Sensitive Domains: Risks and Safeguards
Case studies or methodologies for deploying LLMs in high-stakes fields such as 
healthcare, law, and education, with an emphasis on ethical implications.

  4.  Explainability and Transparency in LLM Decision-Making
Techniques and tools for improving the interpretability of LLM outputs and 
understanding model reasoning.

  5.  Cultural and Contextual Understanding in NLP Systems
Strategies for enhancing LLMs’ sensitivity to cultural, linguistic, and social 
nuances in global and multilingual contexts.

  6.  Human-in-the-Loop Approaches for Ethical Oversight
Collaborative models that involve human expertise in guiding, correcting, or 
auditing LLM behaviour to ensure responsible use.

  7.  Mental Health and Emotional AI: Limits of LLM Empathy
Discussions on the role of LLMs in mental health support, highlighting the 
boundary between assistive technology and the need for human empathy.

Organisers

Damith Premasiri – Lancaster University, UK
Tharindu Ranasinghe – Lancaster University, UK
Hansi Hettiarachchi – Lancaster University, UK

Contact
If you have any questions regarding the workshop, please contact Damith: 
[email protected]


