On Sat, Jan 2, 2021 at 12:30 AM <[email protected]> wrote:
> On Thursday, December 31, 2020, at 8:40 PM, Colin Hales wrote:
> >
> > (i) Observation of a natural context (empirical science).
> >
> > (ii) Observation of artificial versions of the natural context. Call
> > this engineered or replicated nature a 'scientifically-artificial'
> > version of nature (empirical science).
> >
> > (iii) Creation of abstract models predictive of the properties of the
> > natural context observable in (i) and (ii) (theoretical science).
>
> Dude, the brain doesn't depend on particles, atoms, cells, and a lot of
> other stuff.

A simulation of a natural thing is NOT the natural thing. A computed model
of combustion is not burning. You can sit in a flight simulator until the
end of the universe and you will NOT be transported anywhere by the
simulator. Under what conditions can a computed model of a brain be
'braining'?

The paper is literally about the formal proof of a universally assumed
claim: that an artificial brain can be made without using any of the
natural physics of the original brain. Such a thing has never happened
anywhere in science, is unique to the brain, and is something AI assumes
true. Never proved.

"We can simulate bubbles without atoms." No. If you want actual bubbles
then you must make bubbles. The whole point of the paper is NOT simulating.
What part of this diagram fails to communicate the situation:

[image: image.png]

(i) natural combustion (atoms)
(ii) (scientifically-)artificial combustion (atoms)
(iii) a 'simulation' of abstract combustion physics (a theory / 'law of
nature') with a GP-computer

You cannot BURN without the atoms behaving combustion-ly.
You cannot fly without the atoms behaving flight-ly.
You cannot digest without the atoms behaving chemically.
You cannot pump without the atoms behaving pump-ly.

So why is it the case that you can brain without the atoms behaving
'brain-ly'? How would you prove this unique anomaly in science if you were
asked to? I am not saying you can't do it.
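(An aside, to make the combustion point concrete with a toy model of my own;
the one-step reaction form and every constant below are made up for
illustration and come from no paper. The program updates a floating-point
"temperature"; the computer never burns.)

```python
import math

def simulate_combustion(steps=1000, dt=1e-3):
    # Toy one-step exothermic model: fuel burns at an Arrhenius-style
    # rate A * fuel * exp(-Ea / T), and the released heat raises T.
    # All constants are invented; only the qualitative runaway matters.
    A, Ea, heat_per_unit = 50.0, 5.0, 300.0
    fuel, T = 1.0, 1.0  # dimensionless fuel fraction and temperature
    for _ in range(steps):
        rate = A * fuel * math.exp(-Ea / T)
        burned = min(fuel, rate * dt)  # cannot burn more fuel than exists
        fuel -= burned
        T += heat_per_unit * burned    # numbers change; nothing gets hot
    return fuel, T

fuel_left, final_T = simulate_combustion()
print(f"fuel remaining: {fuel_left:.3f}, model temperature: {final_T:.1f}")
```

The model predicts ignition and temperature rise, yet no heat of combustion
exists anywhere in the machine: exactly the simulation/reality gap at issue.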
I am saying that it has never been the case, ever in science, that the
natural thing can be carried out without any of the natural physics.

> GPT-2 does not simulate

This is formally a false statement, although word-semantics will now
obfuscate the argument. In general I tend not to use the word "simulate".
GPT-2/3/4/5/6/7/8/9/10 is and will always be a MODEL of a natural thing.
It could be called simulation. It is something constructed to conserve
human-generated data relations. A model of a thing.

> / need atoms or other physics; the "atom" for GPT-2, or at least in my
> pre-AGI, is "memories" of sequences like abcdef...
>
> (i) and (iii) ok ... but (ii) IS already being done when we create things
> like GPT-2 and then observe GPT-2, because (ii) says "Observation of
> artificial versions" and GPT-2 IS an artificial "brain" that we then
> observe. (My AI is not a black box and does not use backpropagation like
> GPT-2 does, yet it will perform on par with or better than GPT-2, and is
> much more natural, just like a real human brain; you'll see soon, in a
> month.)
>
> Colin, if you have an AI (one you came up with, or not) that you want to
> run, you do it in either of 2 ways: run it on a computer, simulated, or
> on a hardware chip etc. "for real", or at least more real than a computer.
> Neuromorphic chips are hardware that run AI algorithms specifically and
> can therefore run them faster; that's why they are called hardware
> accelerators. They are less general at computing, but faster for AI
> algorithms; that's the trade-off. You said you can run your AI on a
> computer, so how does your AI work then!? Does it use backprop? How does
> it find patterns in data, e.g. 'z' is the least common letter, or "eat"
> usually follows "dog"? The only thing that exists in the universe is
> patterns, else all would be random and nothing could use past experience
> memories to improve prediction and decision making.
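(A minimal sketch of the kind of pattern-finding described above: letter
frequencies and word bigrams over a toy corpus. This is my own illustration
and not the mechanism of anyone's AI in this thread; the corpus is invented
to match the "eat usually follows dog" example.)

```python
from collections import Counter

corpus = "we saw the dog eat and the dog eat again"
words = corpus.split()

# Letter frequencies: which letters are common, which rare ('z' never appears).
letter_counts = Counter(c for c in corpus if c.isalpha())

# Word bigrams: which word usually follows which.
bigrams = Counter(zip(words, words[1:]))

def predict_next(word):
    # Most frequent successor of `word` in the corpus, or None if unseen.
    followers = [(count, b) for (a, b), count in bigrams.items() if a == word]
    return max(followers)[1] if followers else None

print(predict_next("dog"))  # -> eat
print(letter_counts.most_common(3))
```

Counting and predicting from counts is of course only the crudest form of
"finding patterns in data", but it shows the shape of the claim being made.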
> And, if you want to run your AI on a real hardware accelerator merely to
> speed it up, why are you suggesting you can only make AI if you run it on
> an accelerator? I thought you said you can run it on a computer? You say
> you want to observe it empirically on an accelerator chip, but why? It
> will not behave differently; it will only, ONLY, run faster. There's
> nothing to observe there then but speed. You can observe everything about
> your AI on a computer.
>
> Lastly, if you have time to read this too: as said, data comes in from
> random sources, and desired data collection goes out to attain desired
> data from non-random sources. It can do this in the brain, e.g. decide
> the AGI goal is similar to the food goal, so now it starts thinking about
> AGI rather than food as much, hence collecting data from tests in its
> brain. You mentioned external lab/world tests to prove theories, simply
> as reinforcement; well, this is still data collection from a specific
> domain. A brain can do that in its head; it does not need a lab, world,
> or body to be a scientist. It just updates its hobby in its brain to
> change where it collects data from. Repeat.

The hardware chip I propose is NOT an abstraction of anything. No models.
No GP-computer. No simulation, no emulation, no mimicry. Neuromorphic
chips are exactly what you say they are. But the real point is completely
missed. Natural intelligence is based on specific physics. All I am saying
is that perhaps, using the actual physics, we might be able to make
*artificial* intelligence. And that maybe we should try it for the first
time ever. And that maybe it would directly and scientifically speak to
the problem of the equivalence of GP-computed models of nature and nature
itself, in this special context.

My complete failure to get the point across is truly dispiriting. On to
the next one ....
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T2f2a092379e757d2-Mf78b3c4a3a841f04a322841e
Delivery options: https://agi.topicbox.com/groups/agi/subscription
