The 2nd Workshop on DHOW: Diffusion of Harmful Content on Online Web
The workshop will be conducted in a *hybrid* format to ensure maximum
participation, accommodating attendees both *online* and in person.
Submission deadline: *July 11, 2025 (AoE)*
*Workshop site*: https://dhow-workshop.github.io/2025/
*Co-located with ACMMM 2025*
https://acmmm2025.org/
Dublin, Ireland, 27-31 October 2025
*Important Dates*
Submission deadline: extended to *July 11, 2025*
Notification of acceptance: August 01, 2025
Camera-ready papers due: August 11, 2025
Workshop date: October 27/28, 2025
*Workshop Description*
With the advancement of digital technologies and devices, online content
is easily accessible, but harmful content spreads alongside it. Such
content appears on many platforms and in multiple languages. The topic
of harmful content is broad and spans multiple research directions, yet
from the user's perspective, all of it causes harm. Individual
phenomena, such as misinformation and hate speech, are often studied in
isolation: research typically focuses on a single platform, a single
language, or a particular issue. As a result, spreaders of harmful
content switch platforms and languages to reach their target audience.
Harmful content is not limited to social media; it also appears in news
media, where spreaders share it through posts, news articles, comments,
and hyperlinks. There is therefore a need to study harmful content by
combining cross-platform, cross-lingual, and multimodal data across
topics.
This workshop will bring research on harmful content under one umbrella,
so that work on different topics (hate speech, misinformation,
disinformation, self-harm, offensive content, etc.) can yield novel
methods and recommendations for users, leveraging text analysis together
with image, audio, and video recognition to detect harmful content in
diverse formats. The workshop will also cover ongoing issues such as
wars and elections in 2025.
We believe this workshop will provide a unique opportunity for
researchers and practitioners to exchange ideas, share the latest
developments, and collaborate on addressing the challenges associated
with harmful content spread across the Web. We expect the workshop to
generate insights and discussions that will help advance the field of
societal artificial intelligence (AI) toward the development of a safer
internet. In addition to attracting high-quality research contributions,
one of the aims of the workshop is to mobilise researchers working in
related areas to form a community.
*Submissions Topics*
•Studying different types of harmful content
•Computational fact-checking & Misinformation Detection
•Role of Generative AI in Mitigating Harmful Content
•Harassment, Bullying, and Hate Speech Detection
•Explainable AI for Harmful Content Analysis
•Multimodal and Multilingual Harmful Content Detection (e.g., fake
news, spam, and troll detection)
•Deepfake and Synthetic Media
•Ethical & Societal Implications of AI in Content Moderation
•Qualitative and Quantitative Studies of Harmful Content
•Psychological Effects of Harmful Content, such as Impacts on Mental Health
•Approaches for Data Collection and Annotation of Harmful Content
Using Multimodal Large Models
•User Studies on the Effects of Harmful Content on Human Beings
*Submissions*
- Submission Instructions: https://dhow-workshop.github.io/2025/#call
- Submission Link:
https://openreview.net/group?id=acmmm.org/ACMMM/2025/Workshop/DHOW
*Workshop organizers*
•Thomas Mandl (University of Hildesheim, Germany)
•Haiming Liu (University of Southampton, United Kingdom)
•Gautam Kishore Shahi (University of Duisburg-Essen, Germany)
•Amit Kumar Jaiswal (University of Surrey, United Kingdom)
•Durgesh Nandini (University of Bayreuth, Germany)
DHOW 2025