πŸ“¬ 🌐 **Call for Papers: DHOW 2026 – Diffusion of Harmful Content on the Web**

πŸ“ *Co-located with WebSci 2026 | Braunschweig, Germany | May 26–29, 2026*


✨ We’re excited to announce the extended CFP for the DHOW 2026 workshop!
Join us in tackling one of the most pressing challenges of our digital age: the **spread of harmful content across platforms, languages, and formats**.

πŸ” Why this matters:
With the rise of AI, social media, and global connectivity, harmful content β€” from misinformation πŸ“° and hate speech πŸ”₯ to deepfakes 🎭 and self-harm triggers πŸ›‘ β€” spreads faster than ever. But here’s the catch: researchers often study these issues in isolation β€” one platform, one language, one content type. πŸ‘‰ The result is **"harmful content hopping"**: spreaders simply shift to other platforms, languages, or formats to evade detection.

🎯 Our mission?
Bring together diverse research under one umbrella 🀝 to:
- Study **cross-platform, multi-lingual, multimodal** harmful content
- Combine **text, image, audio, and video analysis** πŸ–ΌοΈπŸ”ŠπŸŽ₯
- Explore the role of **Generative AI** πŸ€– and **Explainable AI** 🧠 in detection & defense
- Understand **psychological impacts** 🧠 and **user experiences** πŸ‘₯
- Address urgent topics like **elections, war, and disinformation** 🌍

πŸ“Š We’re looking for scientific contributions on:
- πŸ“Š Analysis of hate speech, misinformation, disinformation, self-harm, offensive content
- βœ… Computational fact-checking & AI-driven detection
- πŸ€– Role of Generative AI in mitigating harm
- 🎭 Deepfakes & their societal impact
- 🌍 Multi-lingual & cross-platform detection (e.g., spam, bots, trolls)
- πŸ§ͺ Qualitative & quantitative studies on mental health effects
- πŸ“₯ LLM-assisted data collection & annotation
- πŸ‘₯ User studies & human-AI collaboration in defense
- 🧩 Explainable AI for transparency & trust

πŸ“… Important Dates:
- πŸ“… Submission deadline: **March 15, 2026 (AOE)**
- βœ… Notification of acceptance: **March 29, 2026**
- πŸ“€ Camera-ready due: **April 2, 2026**
- πŸŽ‰ Workshop: **May 26–29, 2026**

πŸ”— Submit your work here:
πŸ‘‰ [Submission Portal (OpenReview)](https://openreview.net/group?id=acmmm.org/WebSci/2026/Workshop/DHOW)
🌐 [Workshop Website](https://dhow-workshop.github.io/2026/)

πŸ‘₯ We’re building a community!
This workshop is more than a venue β€” it’s a **collaborative space** for researchers, practitioners, and policymakers to share insights, spark innovation, and shape a safer, more responsible internet 🌱.

πŸ‘¨β€πŸ’» Organizing Team:
- Thomas Mandl (University of Hildesheim, Germany)
- Haiming Liu (University of Southampton, UK)
- Gautam Kishore Shahi (University of Duisburg-Essen, Germany)
- Amit Kumar Jaiswal (IIT BHU Varanasi, India)
- Durgesh Nandini (University of Bayreuth, Germany)
- Luis-Daniel IbÑñez (University of Southampton, UK)


#DHOW2026 #WebSci2026 #HarmfulContent #AIforGood #SocietalAI #DigitalSafety #ResearchCommunity
_______________________________________________
Corpora mailing list -- [email protected]
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to [email protected]
