CRASSH and the Centre for the Study of Existential Risk are pleased to announce 
that Professor Stuart Russell (Berkeley) will be speaking on "The Long-Term 
Future of (Artificial) Intelligence" on Friday May 15th at 4:15pm at the 
Winstanley Lecture Theatre (Trinity): http://cser.org/event/future-of-ai/

This event is free and open to all, but places are limited. Book your ticket 
via the event page (http://cser.org/event/future-of-ai/).

Abstract: The news media in recent months have been full of dire warnings about 
the risk that AI poses to the human race, coming from well-known figures such 
as Stephen Hawking, Elon Musk, and Bill Gates.  Should we be concerned? If so, 
what can we do about it?  While some in the mainstream AI community dismiss 
these concerns, I will argue instead that a fundamental reorientation of the 
field is required.

Professor Stuart Russell (Berkeley) is one of the biggest names in modern 
artificial intelligence worldwide. His Artificial Intelligence: A Modern 
Approach (co-written with Google's head of research, Peter Norvig) is a leading 
textbook in the field.
 
He is also one of the most prominent people thinking about the long-term 
impacts and future of AI. He has raised concerns about the potential future use 
of fully autonomous weapons in war. Thinking longer-term, he has posed the 
question "What if we succeed?" in developing strong AI, and has suggested that 
such success might represent the biggest event in human history. He has 
organised a number of prominent workshops and meetings on this topic, and 
this January wrote an open letter calling for a realignment of the field 
towards research on the safe and beneficial development of AI, now signed by a 
who's who of AI leaders worldwide (http://futureoflife.org/misc/open_letter).


Further information: http://www.crassh.cam.ac.uk/events/26219
CSER website: http://cser.org/event/future-of-ai/


A few nice pieces by or about Professor Russell:
"The long-term future of AI" (from his own website): 
https://www.cs.berkeley.edu/~russell/research/future/
"Of myths and moonshine" - his response to the Edge.org question on the myth of 
AI: http://edge.org/conversation/jaron_lanier-the-myth-of-ai#26015
"Concerns of an artificial intelligence pioneer" - interview in Quanta: 
https://www.quantamagazine.org/20150421-concerns-of-an-artificial-intelligence-pioneer/

_____________________________________________________
To unsubscribe from the CamPhilEvents mailing list,
or change your membership options, please visit
the list information page: http://bit.ly/CamPhilEvents

List archive: http://bit.ly/CamPhilEventsArchive

Please note that CamPhilEvents doesn't accept email
attachments. See the list information page for further 
details and suggested alternatives.
