On Mon, Jul 1, 2019 at 7:08 AM Matt Mahoney <[email protected]> wrote:

> Colin, you seemed confused. AGI is not science. It is engineering. Science
> is about finding theories that make useful predictions and testing them
> with experiments. Engineering is about designing and building solutions to
> problems.
>
> The goal of AGI is not to understand brains or build an artificial brain.
> It is to figure out how to make machines that can do all the things that
> humans can do. The funding for AGI comes from the desire to automate human
> labor.
>
> We study neurobiology and cognitive science in the same way that the
> aircraft industry studied birds. It's not to build artificial birds.
>

This is a direct quote from the text I sent:
---------------------------------------
"Overall

In the depicted manner, empirical observation (Middle/Left) and theory
(Right) mutually interact as the science becomes more accurate. Observation
can precede theory and vice versa. The role of each activity in science
depends on the natural context. Next note that in (a) left we replicate
oxidation of the form similar to, but not identical with, the burning bush
(a) middle.* In (b) left we replicate flight, not the (b) middle bird.* And
so forth. As part of this process (traditionally called reductionism) we
reveal the essential underlying physics of the original nature – that
physics upon which the nature is critically dependent in the sense that if
it is not there, then the nature ceases to continue functioning or is
degraded/limited to some predictable degree."
---------------------------------------

Please read my text carefully before you commence discussion. For the
sake of comparison: "In AGI we are trying to replicate intelligence, not
the brain". Like the bird: replicate the function, not the original way of
doing it. Some of the physics of the brain is essential. Some isn't. Which
bits? Nobody knows; I want to find out. Throwing all the physics away does
not do that.

I also repeatedly used the phrase "engineered" in the context of science.
What happened when Lavoisier, in (a), created "engineered combustion"? He
was on the LEFT side of the diagram. He was inspired by nature ((a) middle)
and he discovered a theory ((a) RIGHT): oxidation. It's the same everywhere
in science. An experiment is _engineered_ as part of science. "When an
experiment works it is a scientific success; when it fails, it's an
engineering failure" ... the old cliché: engineering as part of science.

Engineering also exists in computer science. In the text, under (f)
computer science, I describe the *engineering of a computer* in (f) left.
I gave an example: neuromorphic computers. Physicists, electrical
engineers, and computer scientists all manifesting the rules (the
causality) of the definition of a computer (the 'theory' in (f) RIGHT)
onto a silicon substrate.

All I want to do is the same thing for the (e) brain, just as is done
everywhere else in science on the LEFT side of the diagram! The empirical
work of a scientific experiment that creates an artificial version of a
natural brain belongs in (e) LEFT, not (e) RIGHT. Engineering it is part
of the process, and it does not involve the use of "computers"; it is
literally the replication of natural computation done with brain physics.

Doing (e) RIGHT and thinking you are doing (e) LEFT is the real problem. It
has orphaned an entire empirical science activity. Not one instance. Ever.
That's what's wrong. Saying the science is broken is not telling anyone
that "they are doing the science wrong". It is saying the science is
missing.

If "I want to solve the problem of AGI", where "AGI = an artificial version
of natural intelligence" is your mission, surely you'd want to know what
the science that does that looks like? And if you consider yourself an
expert, then surely everyone should be on the same page as to what the
science looks like when you're doing it? It looks like (e). All of it. Not
part of it. In this it is like every other science of a natural phenomenon.

By contrast, if "I want to solve the problem of AGI", where "AGI = using
computers to create an artificial version of natural intelligence", then
that is a whole other proposition, and it is confined to (e) RIGHT.

Nobody gets to cherry-pick what comprises science practice.

You must choose which bit you want to be involved in: empirical,
theoretical, or both. You must understand the implications of that choice.
I choose *both*. The fact that everyone in the area is doing only the
theoretical part (as demonstrated in the Ben Goertzel video recently
posted here) is just a historical/cultural accident caused by the birth of
computers.

In Lavoisier's day everyone was up to their armpits in a 100-year-old
theory of combustion: phlogiston. Burning things were 'de-phlogistonated'.
Scientists had to adjust when they learned (through (a) LEFT) that
phlogiston, which was very useful as a theory, was based on an erroneous
hypothesis proved false by Lavoisier in (a) LEFT. He provided a critical
experiment. I do that for AGI. In contrast, however, I am predicting, this
time, that the "substrate independence hypothesis" will be proved "true but
impractical". But I don't know.

Please read my words carefully. What is going on here is serious. When I
say "the science is malformed" I mean it. If you then conclude "I/we are
doing science wrong", that is your reaction, and it is factually
incorrect. What is going on in AGI is the 100% use of computers in (e)
RIGHT theoretical science. It's perfectly valid theoretical *science*.
Valuable, important, necessary, yada yada yada. Brilliant. What I am
saying is that *it's not the empirical science of artificial general
intelligence, which must engineer an artificial version of the natural
original* for the purposes of completing the science and proving what
computers can and can't do in reaching AGI. That done, and only when that
is done, can anybody claim, with a level of scientifically proved
certainty, how the use of computers in (e) RIGHT can be on the trajectory
of functional equivalence with the nature in (e) MIDDLE.

I can't say this with any more overt clarity. It's all in the text I have
delivered.

Maybe the next delivery of the text will help. Let's see.

cheers
colin

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T87761d322a3126b1-Mc47c5af17bc14a12b502f580
Delivery options: https://agi.topicbox.com/groups/agi/subscription
