Hi Matt, I'm giving myself a few minutes out from my self-exile to restate, like a broken record, what's going on with this. I know I'll get deafening silence and nothing will come of it. We are in a period like that; I've read about them in the history of science. The grip of received wisdom so imbues practitioners that they don't see the jail they're in. But I'll give it a little go.
What we're doing is great tech. Amazing. More than 60 years of it. An incredible story. But none of it is artificial intelligence. I'll repeat that: NONE. Not one instance of actual artificial intelligence has ever been built, general or otherwise. It's all automation based on models of natural intellect. Its intelligence is exactly, permanently, irreversibly zero. It always was and it always will be. And that's OK! Expect it. When you build an artificial version of a natural thing, you have to replicate the physics of the natural thing. That's the way science works.

Can a computed model of natural intelligence be literally identical to the original natural intelligence? You'd think I'd say no. But I don't say that, because it is _trivially_ true. Yes, you could do it, but you'd never bother. Why? Because the compute, model, and knowledge requirements are so vast (as you point out with a few numbers) that you'd have to solve every problem in order to build a computer-based machine that can solve every problem. So why would you? That is NOT an artificial version of natural general intelligence.

The definitive empirical test and goal that proves this position is the 'artificial scientist', whose job is by definition defined by a lack of knowledge: the handling of the radically unknown, the things you don't even know that you don't know. Coping with that is one definition of natural general intelligence. It's what we humans do with alacrity (strangely, in this context... we are not doing so well with that very kind of ignorance) :-) Natural general intelligence at human level, operating at its zenith, is the natural human scientist. Meaning human-level artificial general intelligence, an artificial version of a natural thing, must become an artificial scientist... something that by definition you cannot fake and cannot model, because it is the JOB of the scientist to create models. Humans are not a model of a modeller of the unknown. They are (bio)physics directed at modelling the unknown. The human brain uses physics as computation. But that biological instantiation is not the _computer_ of the kind being used by the 'science of AGI' as it is currently configured.

The failure to create AGI using computers is permanent and guaranteed. It will NEVER EVER END. All you will ever get is the pushing of the boundary of fragile failure to tolerable levels, followed by its brilliant and successful use in a niche where its failures become well understood. Endless recursive boundary-nudging narrow AI like this is all there is and all there ever will be until we stop using computers. I am so tired of reading paragraphs that start with the words 'artificial (general) intelligence' and then seamlessly morph into using the words 'computer' and 'software' as if these things were intrinsically part of the one project. They are not necessarily part of the same project, and that position never gets properly posed or empirically tested by examining AGI done without computers. What if you are wrong? How would you know you're wearing the shackles and blinders I claim are in place?

Another way of viewing the same problem: the entire enterprise is theoretical science. So far it has zero empirical science. Not one actual empirical science experiment on AGI has ever been proposed, let alone done. Dressing a computer in a robot suit is NOT empirical science on intelligence. It is theoretical science with an elaborate data I/O system. The real human-level AGI project has not started.
When it starts you will see engineers and scientists doing a 'moon-shot': putting brain physics on chips and putting those chips in robot suits. Then they'll be seen first testing for inability and ignorance, before requiring the machines to autonomously acquire knowledge and ability... in an area where even the builders have no idea what the machines will face. Those novel machines, pitted against their computer-based counterparts as one of the controls, do not use computers. That's what the real empirical science of successful, real AGI will look like. It'll have a big bespoke, dedicated chip foundry, a bespoke robot fabrication facility, and a dedicated test facility with independent oversight. It will take 10-15 years. It's like CERN and the Higgs boson. Look out for that. When you see it, you'll know the solution to AGI is on the way. Until then... resuming exile... :-)

cheers
colin

On Fri, Feb 1, 2019 at 8:17 AM Matt Mahoney <[email protected]> wrote:

> When I asked Linas Vepstas, one of the original developers of OpenCog led by Ben Goertzel, about its future, he responded with a blog post. He compared research in AGI to astronomy. Anyone can do amateur astronomy with a pair of binoculars. But to make important discoveries, you need expensive equipment like the Hubble telescope.
> https://blog.opencog.org/2019/01/27/the-status-of-agi-and-opencog/
>
> OpenCog began 10 years ago in 2009 with high hopes of solving AGI, building on the lessons learned from the prior 12 years of experience with WebMind and Novamente. At the time, its major components were DeStin, a neural vision system that could recognize handwritten digits; MOSES, an evolutionary learner that output simple programs to fit its training data; RelEx, a rule-based language model; and AtomSpace, a hypergraph-based knowledge representation for both structured knowledge and neural networks, intended to tie together the other components. Initial progress was rapid. There were chatbots, virtual environments for training AI agents, and dabbling in robotics. The timeline in 2011 had OpenCog progressing through a series of developmental stages leading up to "full-on human level AGI" in 2019-2021, and consulting with the Singularity Institute for AI (now MIRI) on the safety and ethics of recursive self-improvement.
>
> Of course this did not happen. DeStin and MOSES never ran on hardware powerful enough to solve anything beyond toy problems. RelEx had all the usual problems of rule-based systems: brittleness, parse ambiguity, and the lack of an effective mechanism for learning from unstructured text. AtomSpace scaled poorly across distributed systems and was never integrated. There is no knowledge base. Investors and developers lost interest.
>
> Meanwhile the last decade transformed our lives with smart phones, social networks, and online maps. Big companies like Apple, Google, Facebook, and Amazon powered it with AI: voice recognition, face recognition, natural language understanding, and language translation that actually works. It is easy to forget that none of this existed 10 years ago. Just those four companies now have a combined market cap of USD $3 trillion, enough to launch hundreds of Hubble telescopes if they wanted to.
>
> Of course we have not yet solved AGI. We still do not have vision systems as good as the human eye and brain. We do not have systems that can tell when a song sounds good or what makes a video funny.
> We still pay people $87 trillion per year worldwide to do work that machines are not smart enough to do. And in spite of dire predictions that AGI will take our jobs, that figure is increasing at 3-4% per year, continuing a trend that has lasted centuries.
>
> Over a lifetime your brain processes 10^19 bits of input, performing 10^25 operations on 10^14 synapses at a cost of 10^-15 joule per operation. This level of efficiency is a million times better than we can do with transistors, and Moore's Law is not going to help. Clock speeds stalled at 2-3 GHz a decade ago. We can't make transistors smaller than about 10 nm, the spacing between P or N dopant atoms, and we are almost there now. If you want to solve AGI, then figure out how to compute by moving atoms instead of electrons. Otherwise Moore's Law is dead.
>
> Even if we can extend Moore's Law using nanotechnology and biological computing (and I believe we will), there are other obstacles to the coming Singularity.
>
> First, the threshold for recursive self-improvement is not human-level intelligence, but human-civilization-level intelligence. That's higher by a factor of 7 billion. But that's already happening. It's the reason our economy and population are both growing at a faster-than-exponential rate.
>
> Second is Eroom's Law. The price of new drugs doubles every 9 years. Global life expectancy has been increasing 0.2 years per year since the early 1900s, but that rate has slowed a bit since 1990. Testing new medical treatments is expensive because testing requires human subjects, and the value of human life is increasing as the economy grows.
>
> Third, Moore's Law doesn't cover software or knowledge collection, two of the three components of AGI (the other being hardware). Human knowledge collection is limited to how fast you can communicate, about 150 words per minute per person. Software productivity has remained constant at 10 lines per day since 1950. If you were hoping for an automated method to develop software, keep in mind that the 6 x 10^9 bits of DNA that is you (equivalent to 300 million lines of code) required 10^50 copy and transcription operations on 10^37 bits of DNA to write over the last 3.5 billion years.
>
> Comments?
>
> -- Matt Mahoney, [email protected]
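For what it's worth, the quoted figures do hang together on a quick back-of-envelope check. Below is a minimal sanity-check sketch; it assumes an 80-year lifespan and the commonly cited ~20 W brain power budget, neither of which appears in the post, and the helper eroom_multiplier is purely illustrative of the doubling rule as stated.

```python
# Sanity check of the back-of-envelope figures quoted above.
# Assumptions NOT in the original post: an 80-year lifespan and a
# ~20 W average brain power draw (standard rough values).

LIFETIME_S = 80 * 365.25 * 24 * 3600   # ~2.5e9 seconds in 80 years

# Figures as quoted in the post
LIFETIME_OPS = 1e25          # synaptic operations per lifetime
ENERGY_PER_OP = 1e-15        # joules per operation (claimed)
LIFETIME_INPUT_BITS = 1e19   # sensory input per lifetime

# Implied average power: ~4 W, the same order as the brain's ~20 W budget
avg_power_w = LIFETIME_OPS * ENERGY_PER_OP / LIFETIME_S
print(f"implied average brain power: {avg_power_w:.1f} W")

# Implied sensory input rate: ~4e9 bits/s, plausible for raw sensory input
input_rate_bps = LIFETIME_INPUT_BITS / LIFETIME_S
print(f"implied input rate: {input_rate_bps:.2e} bits/s")

# Eroom's Law as stated: drug development cost doubles every 9 years,
# i.e. cost(t) = cost(0) * 2**(t / 9)
def eroom_multiplier(years: float) -> float:
    """Cost multiplier after the given number of years."""
    return 2 ** (years / 9)

print(f"Eroom cost multiplier over 27 years: {eroom_multiplier(27):.0f}x")  # 8x
```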
