My argument is against the singularity model proposed by Good and Vinge: that if humans could produce a superhuman intelligence, then so could it, only faster. Yudkowsky and Corwin tested whether such an intelligence could be contained if it became unfriendly (it couldn't). https://wiki.lesswrong.com/wiki/AI_boxing
My paper http://mattmahoney.net/rsi.pdf (which Yudkowsky criticized on SL4) argues that this model doesn't work: in order to improve, an AI has to acquire computing resources and learn from an external environment. The most powerful intelligence currently is the internet. Obviously it is not contained.

On Mon, Apr 20, 2020, 10:01 AM Basile Starynkevitch <[email protected]> wrote:

> On 20/04/2020 15:54, TimTyler wrote:
>
>> On 2020-04-18 19:50, Matt Mahoney wrote:
>>
>>> A self improving agent recursively creates a more intelligent
>>> version of itself with no external input.
>>
>> It is an odd definition. We live in an age of "big data". We have a
>> massive glut of sensors and sensor input. The blind and deaf agent
>> would seem to be of only minor interest. The self-improving systems
>> we are actually interested in are likely to be connected to cameras,
>> microphones and the internet.
>
> The input could come either from a human user interacting with the
> system, or from self observation of its behavior (for example, a
> measure of the virtual address space, of the elapsed time, etc.).
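To make the self-observation Basile describes concrete, here is a minimal sketch in Python, assuming Linux (reading /proc/self/statm and the toy workload are my own illustrative choices, not anyone's actual system): a process measures its own virtual address space and elapsed time before and after doing some work.

    # A minimal sketch of "self observation": a program measuring its
    # own virtual address space and elapsed time. Assumes Linux (/proc).
    import os
    import time

    def virtual_memory_bytes():
        # /proc/self/statm reports sizes in pages; the first field is
        # the total program size (virtual address space).
        with open("/proc/self/statm") as f:
            pages = int(f.read().split()[0])
        return pages * os.sysconf("SC_PAGE_SIZE")

    start = time.monotonic()
    before = virtual_memory_bytes()

    data = [0] * 10_000_000  # illustrative work that grows the address space

    after = virtual_memory_bytes()
    elapsed = time.monotonic() - start

    print("virtual memory grew by", after - before, "bytes")
    print("elapsed time:", elapsed, "seconds")

Such measurements are one form of input available even to an agent with no cameras or microphones, which is exactly why "no external input" is a stronger condition than it first appears.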
