<https://www.aimyths.org/>

The project

Whether it's claims about AI revolutionizing the insurance industry or enabling 
Orwellian mass surveillance, everyone seems to be talking about 'artificial 
intelligence' these days. Unfortunately, much of this talk is riddled with 
myths, misconceptions and inaccuracies. 

The aim of this website is to help disentangle and debunk some of these 
misleading ideas. We'll explore how these ideas appear in the media, and point 
you towards high-quality resources for further reading.

The myths to tackle were chosen in two ways. First, a group of stakeholders 
from civil society, academia, government and industry discussed the problem of 
AI bulls**t at RightsCon 2019 in Tunis. Together, we brainstormed a list of the 
most insidious misconceptions and myths about AI. 

Based on this preliminary list, a survey was sent out allowing people to rank 
these myths and to contribute additional ones. A combination of the survey 
results and further consultations with the target audience led to the final 
eight topics.

To be clear, none of these misconceptions and myths admits a simple, 
straightforward refutation. Rather, each idea is explored: obvious 
misconceptions are refuted and more complex perspectives are introduced. These 
resources are intended to guide you towards further materials, as presented in 
the bibliographies and guides at the end of each section. 

If you have any comments, criticisms, or requests, please don’t hesitate to get 
in touch. This is a live resource, and we're happy to update it based on your 
feedback.

Acknowledgments

This project was funded and supported as part of Daniel Leufer's Mozilla Open 
Web Fellowship - a collaboration between the Ford Foundation and Mozilla. It 
would not have been possible without the incredible support, encouragement and 
direction provided by the Mozilla Fellowship team, but a special thanks must go 
to Amy Schapiro Raikar for her unfailing support and direction.

For the duration of Daniel's fellowship, he was hosted by the digital rights 
organisation Access Now. Enormous thanks must go to all the staff at Access Now 
for their fantastic work which contributed to and inspired this project. Fanny 
Hidvégi in particular deserves the utmost gratitude for inspiring and guiding 
this project from its initial conception through to its completion. Her 
commitment to combatting AI bulls**t and protecting people's rights has been a 
constant source of inspiration.

This project also benefitted enormously from the collaboration of the Harvard 
Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society. 
Jessica Fjeld provided support for this project from the get-go and organised 
the collaboration with two students from the Cyberlaw Clinic, Rachel Jang and 
Kathryn Mueller, whose work was instrumental to the project.

The material on this site has been immeasurably improved thanks to the 
thoughtful, insightful, and critical comments provided by a number of 
reviewers, including Agata Foryciarz, Sarah Chander, Alexa Steinbrück, and 
especially Vera Tylzanowski, who read every bit of the site more than once. Any 
faults or typos that remain are entirely Daniel's responsibility. 

Lastly, the utmost gratitude must be expressed to all the authors whose work 
inspired and is featured on this site. We have drawn endless inspiration from 
the community of people working to make AI systems safer, to protect people's 
rights and freedoms and to combat AI bulls**t. We hope that this site does 
justice to all of that work, and helps guide readers in further study.
