Not all that coy or secretive… we published a detailed whitepaper for our (aborted) token sale, dozens of articles and videos, and benchmark tests in various places, including https://www.facebook.com/groups/RealAGI/
From: Linas Vepstas <[email protected]>
Sent: Sunday, February 3, 2019 8:17 PM
To: AGI <[email protected]>
Subject: Re: [agi] The future of AGI

I have no clue what Peter is actually thinking because he's coy and secretive. But I'm not pessimistic. I'm just perplexed why no one ever seems to try the obvious things. Or why I can never seem to explain obvious things to anyone and have them understand it. I am quite certain that one can do better than neural nets, and more easily too, and have explained exactly how more times than I can count, but my words are not connecting with anyone who understands them. So, whatever. Day at a time.

--linas

On Sun, Feb 3, 2019 at 5:28 PM <[email protected]> wrote:

I’m not that pessimistic at all. Our own AGI project has made steady progress over the past 17 years in spite of only spending about $10 million – about 150 man-years of focused effort. We’ve managed to successfully commercialize an early version of our proto-AGI engine in a company that now employs about 100 people: www.smartaction.com. For the last 5 years my full-time team of about 10 people has been working on the next-generation engine: www.AGIinnovations.com / www.Aigo.ai. We are now ready to commercialize this more advanced platform. Our focus has been limited to natural language comprehension/learning, question answering/inference, and conversation management.

I think that $100 million could go a long way toward functional, demonstrable proto-AGI. It seems to me that DeepMind hasn’t made good use of the $200 or $300 million spent so far – they lack a proper theory of intelligence. I don’t know why Vicarious, the other well-funded AGI company, hasn’t made better progress in perception/action – my guess, for the same reason….
I think all of the theoretical calculations of processing power are wildly off the mark – we’re not trying to reverse-engineer a bird, just to build a flying machine. My articles are here: https://medium.com/@petervoss/my-ai-articles-f154c5adfd37

Peter Voss

From: Linas Vepstas <[email protected]>
Sent: Friday, February 1, 2019 10:26 PM
To: AGI <[email protected]>
Subject: Re: [agi] The future of AGI

Thanks Matt, very nice post! We're on the same wavelength, it seems.

-- Linas

On Thu, Jan 31, 2019 at 3:17 PM Matt Mahoney <[email protected]> wrote:

When I asked Linas Vepstas, one of the original developers of OpenCog led by Ben Goertzel, about its future, he responded with a blog post. He compared research in AGI to astronomy: anyone can do amateur astronomy with a pair of binoculars, but to make important discoveries you need expensive equipment like the Hubble telescope. https://blog.opencog.org/2019/01/27/the-status-of-agi-and-opencog/

OpenCog began 10 years ago in 2009 with high hopes of solving AGI, building on the lessons learned from the prior 12 years of experience with WebMind and Novamente. At the time, its major components were DeSTIN, a neural vision system that could recognize handwritten digits; MOSES, an evolutionary learner that output simple programs to fit its training data; RelEx, a rule-based language model; and AtomSpace, a hypergraph-based knowledge representation for both structured knowledge and neural networks, intended to tie the other components together.

Initial progress was rapid. There were chatbots, virtual environments for training AI agents, and dabbling in robotics. The timeline in 2011 had OpenCog progressing through a series of developmental stages leading up to "full-on human-level AGI" in 2019-2021, and consulting with the Singularity Institute for AI (now MIRI) on the safety and ethics of recursive self-improvement.
Of course this did not happen. DeSTIN and MOSES never ran on hardware powerful enough to solve anything beyond toy problems. RelEx had all the usual problems of rule-based systems, such as brittleness, parse ambiguity, and the lack of an effective learning mechanism for unstructured text. AtomSpace scaled poorly across distributed systems and was never integrated. There is no knowledge base. Investors and developers lost interest….

--
cassette tapes - analog TV - film cameras - you

Artificial General Intelligence List / AGI
Permalink: https://agi.topicbox.com/groups/agi/Ta6fce6a7b640886a-Mf660884959aa0f79e145458c

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Ta6fce6a7b640886a-M0dbc19fbfb4b5c75b7d4ae77
Delivery options: https://agi.topicbox.com/groups/agi/subscription
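[Editor's note: the thread describes AtomSpace as a hypergraph-based knowledge representation. The sketch below is a hypothetical, heavily simplified illustration of that idea in Python – atoms are typed nodes or links, and links can contain other links, which is what makes the structure a hypergraph rather than an ordinary graph. The class names and type strings mimic OpenCog conventions for flavor only; the real AtomSpace API differs.]

```python
# Minimal sketch of a hypergraph knowledge store in the spirit of
# AtomSpace (hypothetical simplification; the real OpenCog API differs).

class Atom:
    def __init__(self, atom_type):
        self.atom_type = atom_type

class Node(Atom):
    """A named atom, e.g. (ConceptNode "cat")."""
    def __init__(self, atom_type, name):
        super().__init__(atom_type)
        self.name = name
    def __repr__(self):
        return f'({self.atom_type} "{self.name}")'

class Link(Atom):
    """An atom whose 'outgoing set' is an ordered tuple of other atoms.
    Because links may contain links, the store is a hypergraph."""
    def __init__(self, atom_type, outgoing):
        super().__init__(atom_type)
        self.outgoing = tuple(outgoing)
    def __repr__(self):
        inner = " ".join(repr(a) for a in self.outgoing)
        return f"({self.atom_type} {inner})"

class AtomSpace:
    """A flat container of atoms; real implementations also deduplicate
    and index atoms, which is omitted here."""
    def __init__(self):
        self.atoms = []
    def add(self, atom):
        self.atoms.append(atom)
        return atom

space = AtomSpace()
cat = space.add(Node("ConceptNode", "cat"))
animal = space.add(Node("ConceptNode", "animal"))
# An InheritanceLink asserting "cat is an animal".
isa = space.add(Link("InheritanceLink", [cat, animal]))
# A link over a link: the assertion itself becomes an argument,
# something a plain (non-hyper) graph cannot express directly.
believed = space.add(Link("EvaluationLink",
                          [Node("PredicateNode", "believed"), isa]))
print(believed)
```

The point of the last line is the part the thread alludes to: structured knowledge (the inheritance assertion) and statements *about* that knowledge live in one uniform store.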
