Hi Matt,

The purpose of the paper is to critically examine the operational state of the practice of the science/engineering of artificial (general) intelligence. The discussion will both require and supply scientific evidence in support of claims about the state of that science: evidence that is a measurement of scientific behaviour itself. It is unusual in its self-referential nature, but not intractably so.
By its nature, such evidence can be unfamiliar and tricky to curate. If the practice of a particular science is fundamentally deformed, and a case must be constructed to bring that to light and effect remediation, the evidence can be a measurement of lack: for example, the lack of certain kinds of practitioner involved in a project, the failure of a project to meet certain goals, or the absence of any focus on a particular necessary aspect of the work. We have to provide 'evidence of absence'. I hope that in doing so we avoid 'absence of evidence'. I am hoping to find out how to locate and properly deliver that kind of evidence with the help of this discussion.

One example I have is a lot of data (a gut-wrenching literature survey of hundreds of papers) about the relatively closed community that developed neuromorphic computers. It is a community positioned at the conjunction of engineering/chip design, neuroscience and computer science. Within that community there is a measurable historical lack of exposure to AGI/AI awareness, it being directed more at neuroscience and brain-inspired signal processing. More recently, neuromorphic computing and AI have become conjoined. Regardless, in the history there is a set of attitudes and a lexicon that is both inconsistently used within the community and often at odds with those of other related communities that might be thought to be directed at the same or similar goals. That's one example. The fact that it is impossible to find any attempt at, or even a proposal for, an AI that doesn't use the physics of a 'computer' (definition alert!) to do AI, is another.

The one thing that will be absent from all of it is opinion. I hope. Nobody is entitled to opinions. You're entitled to what you can argue for with evidence. Preference is not an argument, no matter the strength of conviction. The scientific evidence that this is a fact of science generally is already in, and I will not be defending that.
There's no end of self-referentiality in this endeavour!

Colin

On Sat, Jun 29, 2019 at 4:00 AM Matt Mahoney <[email protected]> wrote:
> Colin, will your proposed paper contain an experimental results section? I realize you favor the neuroscience approach to AGI. We need neuroscience to figure out how the brain does what it does, as well as computer science to test the theories that it suggests. Have you done any experiments on human or animal brains?
>
> I'm not interested in yet another untested design or proposal for AGI. I wrote one in 2008. I'm not going to build it because it will cost USD $1 quadrillion.
> http://mattmahoney.net/agi2.html
>
> Nor am I interested in theories about consciousness. We will spend $1 quadrillion globally on human labor over the next 12 years doing work that machines aren't smart enough to do. To solve this problem, we need machines that can see, hear, navigate, understand language, evaluate art, model human behavior, and do everything else that people can do. But there is no need to simulate human weaknesses like emotions or poor memory in order for it to pass the Turing test. I don't care if you think it thinks or not, any more than Turing cared.
>
> I am not interested in economic or social theories on what will happen when machines put us all out of work. The more we automate, the more the cost of labor rises. Do you understand why?
>
> I am interested in actual experimental results that further goals toward AGI. My results, if you care to include them:
>
> First: A 13-year-long evaluation of over 1000 versions of 200 text compression algorithms. http://mattmahoney.net/dc/rationale.html
>
> My conclusions are:
> 1. The best language models are based on neural networks (which we now know is true for vision and robotics as well).
> 2. Intelligence (measured by prediction accuracy) increases with the log of computing speed and the log of memory.
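Matt's second conclusion is a scaling law. As a sketch of what it asserts (the coefficients a, b, c below are hypothetical placeholders, not values fitted to the compression benchmark):

```python
import math

def predicted_accuracy(speed_ops, memory_bytes, a=1.0, b=1.0, c=0.0):
    """The claimed scaling law: prediction accuracy grows with the log of
    computing speed and the log of memory. Coefficients a, b, c are
    placeholders, not values fitted to the benchmark data."""
    return a * math.log2(speed_ops) + b * math.log2(memory_bytes) + c

# Under this law, doubling either resource buys a constant increment,
# so linear gains in accuracy require exponential gains in hardware:
base = predicted_accuracy(1e9, 1e9)
assert math.isclose(predicted_accuracy(2e9, 1e9) - base, 1.0)  # +a per doubling of speed
assert math.isclose(predicted_accuracy(1e9, 2e9) - base, 1.0)  # +b per doubling of memory
```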
>
> Second: my cost estimate for AGI: http://mattmahoney.net/costofai.pdf
>
> The cost is based on the computing power of the human brain (1 to 10 petaflops, 0.1 to 1 petabyte) times 7 billion people, collecting 10^17 bits of global human knowledge not yet on the internet at $0.01 per bit, and the complexity of our DNA, equivalent to 300M lines of code.
>
> If you want to prevent another AI winter, then solve the power problem. A petaflop computer uses 1 MW of electricity. 7 billion of these would use 7000 TW. Global energy production is 15 TW. You can't reduce power consumption by making transistors smaller because you can't make transistors smaller than atoms. If you want to get anywhere close to the Landauer limit of 3 x 10^-21 J per bit operation, then you need to compute by moving atoms instead of electrons, like our cells and our brains do at a global cost of 0.7 TW.
>
> On Thu, Jun 27, 2019 at 10:56 PM Colin Hales <[email protected]> wrote:
> >
> > On Fri, Jun 28, 2019 at 10:33 AM Steve Richfield <[email protected]> wrote:
> >>
> >> Colin,
> >>
> >> The obvious thing missing from neuroscience and AGI is application of the Scientific Method.
> >>
> >> Theory: give enough computer scientists enough keyboards and time, and they will eventually figure out or stumble on whatever it takes to have general intelligence.
> >>
> >> Experiment: let the world's programmers work on this for half a century.
> >>
> >> Results: Zero, nada, nothing. Experiment failed. Time for another theory.
> >>
> >> My/Our? Theory: Use math to predict what might work to do the needed processing, physics to evaluate whether biological neurons might be capable of such things, neuroscience to see if these actually occur in biology, computer science (AGI) to simulate large systems of identified components, etc.
> >>
> >> To illustrate, we have argued in the past whether the Hall effect is significantly responsible for mutual inhibition.
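A quick back-of-envelope check of the cost and power figures Matt quotes above, using only the numbers stated in the message (the ~100 W per person is implied by the 0.7 TW biological figure, not stated directly):

```python
import math

# Back-of-envelope check of the figures quoted in the message above.
PEOPLE = 7e9
W_PER_PETAFLOP_MACHINE = 1e6      # 1 MW per petaflop computer (quoted)
GLOBAL_PRODUCTION_TW = 15         # quoted global energy production

# 7 billion petaflop machines -> 7000 TW, hundreds of times world output:
silicon_tw = PEOPLE * W_PER_PETAFLOP_MACHINE / 1e12
assert silicon_tw == 7000
assert silicon_tw > 400 * GLOBAL_PRODUCTION_TW

# Biology runs the same 7 billion brains at 0.7 TW, i.e. ~100 W per person
# (roughly whole-body metabolic power; this per-person figure is implied):
watts_per_person = 0.7e12 / PEOPLE
assert watts_per_person == 100

# Knowledge collection: 10^17 bits at $0.01 per bit = $1 quadrillion:
assert math.isclose(1e17 * 0.01, 1e15)

# Landauer limit: 10^15 bit ops/s at 3e-21 J per op is ~3 microwatts,
# about eleven orders of magnitude below the 1 MW of a real machine:
landauer_w = 1e15 * 3e-21
assert math.isclose(landauer_w, 3e-6)
assert W_PER_PETAFLOP_MACHINE / landauer_w > 1e11
```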
> >> This micro-dispute can only exist in our current broken "system", because once a new integrated field has emerged, some bright physicist would spend a week running numbers through the equations to provide a definitive answer that we would both accept.
> >>
> >> What we seem to need here is some sort of "constitution" for people to digitally sign onto. I thoroughly expect a coming AGI disaster much like the Perceptron Winter. Maybe if we point the way to the future via competent research BEFORE the crash, we can preserve future research while these folks join the ranks of the homeless.
> >>
> >> Let's wring out any differences we might have and put this together.
> >>
> >> Thoughts?
> >>
> >> Steve
> >
> > Yes. Let's. There is a lot to sort out.
> >
> > I have just embarked on writing a paper to sort this out once and for all. It's my last attempt to get this very issue sorted out. The writing will benefit from a serious pile of adversarial collaboration from yourself and others. Ben? You interested?
> >
> > I have one and only one perspective on the issue that I have not tried. Maybe it will push it over the line. I have written this cross-disciplinary thing out from so many disciplinary perspectives I have lost count. All shot blanks. And a sorry story it is. I have one approach left. Before that, this is my personal position and preferred way to handle it if it happens in this place:
> >
> > 1) I have taken the IP warrior hat off, and all my ideas will be in the paper, including the chip design concept. 100% ownership of something that goes nowhere = ...let me do the math ... hmmm. $Bugger-all in any currency.
> > 2) Co-authors. This must be a collaboration with at least 3 authors. I have some ideas for prospective people. Anyone who can make a viable textual contribution that makes it into the final version gets authorship. Explicit acknowledgement will cover everything else.
> > It would be very cool to be able to put the names of a couple of hundred people in the acknowledgements.
> > 3) The text shall be fed to the commentariat in an arXiv context for serious adversarial critique prior to submission to any journal.
> > 4) The paper shall be of the ilk (scholastic standard) of those that caused the trajectory of the state of the AGI art to go the way it has, e.g. Turing, von Neumann... and e.g. my fave, probably the most influential (required reading!): Pylyshyn, Z.W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences 3, 111-132. https://www.southampton.ac.uk/~harnad/Temp/.pylyshynBBS.pdf
> > 5) It shall be published in a journal with suitable impact.
> >
> > I already know what the outcome is in terms of its changes to the science of AGI. I have already prepared the question leading to it in the final chapter of my book. But that's all moot. Let's re-discover it in the paper's own narrative. Shoot it to death if you can. Put me out of my misery!
> >
> > It's kind of weird that such a paper would be produced somewhat under the gaze of an AGI forum. But I'm OK with that if you are. We can manage that aspect offline a bit, if needed. It would be good if we can carry the whole forum along with us to its conclusion. If we can do that, surely it counts for something? Personally I think it apt that a serious left turn in AGI science should come from a place like this, and a social media community of this kind, where stakeholders abound. It would be very cool to be able to tell any potential reviewers to join the forum to read the archives covering the creation of the work!
> >
> > If the social media side gets too hard to manage we can bail and go offline. BTW you can bail any time. I'll be doing this anyway, one way or another. Just tell me to EFF OFF and I will. :-)
> >
> > Comments? ... Good to go? Or not?
> >
> > Colin
> >
> > Artificial General Intelligence List / AGI / see discussions + participants + delivery options Permalink
>
> --
> -- Matt Mahoney, [email protected]

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T87761d322a3126b1-Maf552335f19fccf43b0514bb
Delivery options: https://agi.topicbox.com/groups/agi/subscription
