The field of AI itself has had a major project underway for a couple of years to address these issues. It's called AI100, and it is funded by one of the wealthy founders of the field, Eric Horvitz at Microsoft. The project is headquartered at Stanford University.
It is funded for a century. Yes, that's a century, on the grounds that whatever is decided in five or ten years will need to be revisited five or ten years on, again and again. Its staff consists of leading members of the field (who really know what the field can and cannot do), and they will be joined by ethicists, economists, philosophers, and others (maybe they already have been) as the project moves along. (Their first report was rather scathing about Ray Kurzweil and the singularity, but that's another issue.)

Musk, Hawking et al. are very good at getting publicity, but their first great solution last summer was to send a petition to the U.N., which they did with great fanfare. Of course nothing happened, and nothing could. This is the level of naiveté (and, sorry, self-importance) these men exhibit. I also find them more than a bit hypocritical. Musk is not giving up his smartphone, and Hawking concedes that he loves what AI has done for him personally (in terms of vocal communication), but maybe others shouldn't be allowed to handle this…

Finally, and this is where my anger really boils: they sound to me like the worst kind of patronizing, privileged white guys imaginable. There's no sense in their aggrieved messages that billions of people around the globe are struggling and have lives that could be vastly improved with AI. Maybe it behooves them to imagine the good AI can do for those people, instead of stamping their feet because AI is going to upset their personal world. Which it will. It must be very hard to be the smartest guy on the block for so long, and then here comes something even smarter.

Pamela

> On Jun 6, 2016, at 11:42 AM, Edward Angel <an...@cs.unm.edu> wrote:
>
> There is a large group of distinguished people, including Elon Musk, Stephen Hawking, Bill Joy and Martin Rees, who believe that AI is an existential threat and that the probability of the human race surviving another 100 years is less than 50/50. Stephen Hawking has said he has no idea what to do about it. Bill Joy's (non-)solution is better ethical education for workers in the area. I can't see how open source will prevent the dangers they worry about. Martin Rees has an institute at Cambridge that worries about these things.
>
> Ed
> _______________________
>
> Ed Angel
>
> Founding Director, Art, Research, Technology and Science Laboratory (ARTS Lab)
> Professor Emeritus of Computer Science, University of New Mexico
>
> 1017 Sierra Pinon
> Santa Fe, NM 87501
> 505-984-0136 (home)   an...@cs.unm.edu
> 505-453-4944 (cell)   http://www.cs.unm.edu/~angel
>
>> On Jun 5, 2016, at 4:04 PM, Pamela McCorduck <pam...@well.com> wrote:
>>
>> I have some grave concerns about AI being concentrated in the hands of a few big firms—Google, Facebook, Amazon, and so on. Elon Musk says the answer is open sourcing, but I'm skeptical. That said, I'd be interested in hearing other people's solutions. Then again, you may not think it's a problem.
>>
>>> On Jun 5, 2016, at 3:22 PM, Robert Wall <wallrobe...@gmail.com> wrote:
>>>
>>> Hi Tom,
>>>
>>> Interesting article about Google and their foray [actually a Blitzkrieg, as they are buying up all of the brain trust in this area] into the world of machine learning, presumably to improve the search customer experience.
>>> Could their efforts actually have unintended consequences for both the search customer and the marketing efforts of the website owners? It is interesting to consider. For example, in the former case, Google picking WebMD as the paragon website for the healthcare industry flies in the face of my own experience and, say, this New York Times Magazine article: A Prescription for Fear <http://www.nytimes.com/2011/02/06/magazine/06FOB-Medium-t.html?login=email&_r=0> (Feb 2011). Will this actually make WebMD the de facto paragon in the minds of the searchers? In the latter case, successful web marketing becomes increasingly subject to the latest Google search algorithms instead of the previously more expert in-house marketing departments. Of course, this is the nature of SEO--to game the algorithms to attract better rankings. But it seems those in-house marketing departments will need to up their game:
>>>
>>> In other ways, things are a bit harder. The field of SEO will continue to become extremely technical. Analytics and big data are the order of the day, and any SEO that isn't familiar with these approaches has a lot of catching up to do. Those of you who have these skills can look forward to a big payday.
>>>
>>> Also, with respect to those charts anticipating exponential growth for AGI technology--even eclipsing human intelligence by mid-century--there is much reason to see this as overly optimistic [see, for example, Hubert Dreyfus' critique of Good Old-Fashioned AI, "What Computers Can't Do"]. These charts remind me of the "ultraviolet catastrophe" around the end of the 19th century. There are physical limitations that may well tamp down progress and keep it to ANI. With respect to AGI, there have been some pointed challenges to this "Law of Accelerating Returns."
>>>
>>> On this point, I thought this Aeon article, "Creative Blocks: The very laws of physics imply that artificial intelligence must be possible. What's holding us up?" <https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence> (Oct 2012), is on point concerning the philosophical and epistemological roadblocks. Another, "Where do minds belong?" <https://aeon.co/essays/intelligent-machines-might-want-to-become-biological-again> (Mar 2016), discusses the technological roadblocks in an insightful, highly speculative, but entertaining manner.
>>>
>>> Nonetheless, this whole discussion is quite intriguing, no matter your stance, hopes, or fears. 😎
>>>
>>> Cheers,
>>>
>>> Robert
>>>
>>> On Sat, Jun 4, 2016 at 4:26 PM, Tom Johnson <t...@jtjohnson.com> wrote:
>>>
>>> See http://techcrunch.com/2016/06/04/artificial-intelligence-is-changing-seo-faster-than-you-think/?ncid=tcdaily
>>>
>>> Among other points: "...why doing regression analysis over every site, without having the context of the search result that it is in, is supremely flawed."
>>> TJ
>>>
>>> ============================================
>>> Tom Johnson
>>> Institute for Analytic Journalism -- Santa Fe, NM USA
>>> 505.577.6482 (c)
>>> 505.473.9646 (h)
>>> Society of Professional Journalists <http://www.spj.org/> - Region 9 <http://www.spj.org/region9.asp> Director
>>> Check out It's The People's Data <https://www.facebook.com/pages/Its-The-Peoples-Data/1599854626919671>
>>> http://www.jtjohnson.com
>>> t...@jtjohnson.com
>>> ============================================
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com