On Wednesday, December 23, 2020, at 5:54 PM, Colin Hales wrote:
> Thanks for opening this door.
> 
> The *paper* (not me) claims (with empirical evidence) that a science that 
> assumes a claim "cognition can be achieved by algorithms in GP-computers", an 
> equivalence of nature and abstract models not achieved anywhere else in the 
> history of the science of natural phenomena, if it is to be fully and 
> formally tested conclusively, must include null hypothesis testing that does 
> not presuppose it to be true. 
> 
> Section 5 details the proposed change to the testing (through introduction of 
> the neuromorphic chip and its empirical science) ... and at the end of 
> section 5 in black and white:
> 
> *"Note that none of the
> above discussion is intended to imply that GP-computers cannot reach
> equivalence with natural brain function under circumstances not yet understood.
> That potentiality is not the issue here and is not contested. The issue here is
> how neuroscience and the science of AI must be configured to empirically
> determine any potential equivalence and the context in which it may happen."*

Then perhaps I can soften your basic claim to "it has not been *proven* that 
cognition can be achieved by algorithms"? Or are you trying to say that, if a 
GP-computer *did* manage to implement cognition, it would be by virtue of some 
physical side effect that happens to arise when a GP computer executes certain 
algorithms, and not by virtue of the algorithms themselves?

As far as I am concerned, intelligence is not even a physical phenomenon. It is 
an abstract/informational phenomenon that occurs in some physical vehicle. The 
appropriate analogy here is not an artificial wing, stomach, or heart, but a 
*story*. 

If you tell me a story and then order me to "copy down that story," I need not 
concern myself with replicating its physical nature -- i.e. a sequence of 
pressure waves in the air between your mouth and my ears. My first priority 
will be replicating the information, i.e. the words. I could, for instance, 
create an "artificial story" by printing a book containing those words. Said 
book would have no physics *whatsoever* in common with the original sound 
waves, but it would still contain the story. Surely you would not say we need 
an elaborate empirical proof to know that *Moby Dick* the audiobook could just 
as easily be a printed book while remaining *Moby Dick*? Here, then, is an 
existing example of physics independence.

Much like a story, AI is intended to produce *informational* results, which are, 
by their nature as information, inherently independent of their physical medium. 
The difference between AI and the various technologies whose goal is to produce 
*physical* results is therefore readily understandable and justified.

To continue the analogy, our present problems in AI are less like a poor 
understanding of how sound waves work, and more like not knowing all the words 
of the story. Some people are trying to reconstruct the words by simulating the 
movements of the storyteller's mouth, but this is not the only possible method. 
The words, if we could get them, would not "model" the story, they would BE the 
story.

If your hope is to replicate the audio rather than the story -- if you desire 
an artificial brain, rather than artificial intelligence -- then you and I are 
not even aiming at the same goal, though my smaller goal would be contained 
inside your larger one.

I read Section 5.2 of your paper, and I still don't understand how you envision 
the XChip being used to (maybe) prove the possibility of AI in GP-computers. If 
you implemented the exact same algorithm on a GP-computer and on the XChip, 
then observed that the XChip was demonstrably intelligent while the GP-computer 
was not, you might regard that as evidence that the XChip's physics are necessary 
for intelligence and that GP-computers cannot achieve it. But if the XChip did 
not succeed in producing demonstrable intelligence, then what would you have 
learned?
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf319c0e4c79c9397-Ma3f92bd19d7dc675f5dd948f
Delivery options: https://agi.topicbox.com/groups/agi/subscription
