Matt,
For years I have been trying to get one particular idea across, and you
continually fail to engage with it. This is a problem with the functioning
of a science. You seem unable to get your head around the idea that the
operation of a science includes the creation of artificial versions of a
natural phenomenon. Constructing abstract models and exploring them
computationally is only one third of the scientific process. And never, in
the history of any other science, has this kind of modelling been treated
as an artificial version of the natural thing itself. That treatment is
unprecedented. It happens only in AI.

Fixing that is what this paper is about. What do you get if you create an
artificial version of natural brain/intelligence by the same method used
everywhere else in science? A neuromorphic chip. The paper does not deny
the potential equivalence of these two different approaches.

Page 16, end of section 5:

*"Note that none of the above discussion is intended to imply that
GP-computers cannot reach equivalence with natural brain function under
circumstances not yet understood. That potentiality is not the issue here
and is not contested. The issue here is how neuroscience and the science of
AI must be configured to empirically determine any potential equivalence
and the context in which it may happen. With method (1) the equivalence is
presupposed. With method (2), the equivalence of GP-computers and brains
can be empirically settled by a fully formed neuroscience."*

It is the neuromorphic chip that does (2) and properly tests the anomalous
and unprecedented equivalence of brains and GP-computers presupposed within
AI.

Please read the paper and try to get your head around what I am trying to
do for the science of AI. I know it runs up against generationally
entrenched presuppositions about GP-computers. That can't be helped.
Challenge your own perspective. I did! I started off with the same
presuppositions.

The science of AI is fundamentally malformed, and I am trying to get this
into the open for a decent critique. Perhaps you could contribute to it?
That would be much appreciated.

cheers
colin

On Thu, Dec 24, 2020 at 5:33 AM Matt Mahoney <[email protected]>
wrote:

>
>
> On Wed, Dec 23, 2020, 12:45 PM WriterOfMinds <[email protected]>
> wrote:
>
>>
>> I don't agree with him, but watching all of you talk past each other is
>> frustrating me.
>>
>
> Me too. Colin's argument is that nobody else has produced AGI either, so
> his theory is as good as any other.
>
> It's ridiculous. We already understand how neural networks work. They are
> the most successful models we have for vision, language, and robotics, but
> they require massive computing power that's only recently become available
> if you have millions to invest. It has nothing to do with the magic of
> consciousness emanating from electromagnetic fields. His theory doesn't
> even try to explain how that might work.
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf319c0e4c79c9397-M94e2ef4f14817f14931f5121
Delivery options: https://agi.topicbox.com/groups/agi/subscription
