On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
> On Wed, Jan 17, 2024 at 11:09 PM Tim Daly <[email protected]> wrote:
> 
> They can raise the issue all they like. What they are not seeing is that
> artificial stupidity (AI) systems are limited. As I said above. The only
> intelligence you will find in these systems is the stuff generated by human
> intelligence. No artificial stupidity (AI) system can ever exceed the
> limitations of the programming entailed in them.

Well, humans are at least as limited: your claim is as true as the
claim that "humans can never exceed the limitations of the
programming entailed in them".  In the case of humans, "programming"
means both what is hardcoded in the genome and the chemical
machinery of the body, and "learned stuff".  Already at age 1,
toys, a customized environment and interactions with other humans
make a significant difference to learning.  At later stages there
are stories which were perfected over thousands of years, school
curricula and books.  There were myths that people from non-western
cultures are less "intelligent" than people from western cultures.
Deeper research showed that part of our "intelligence" is really
"programmed" (learned), and the "programming" in different cultures
was different.

In a slightly different spirit: in the fifties there were efforts
to define intelligence, and researchers of that time postulated
several abilities that every intelligent being should have.
Based on that there were "proofs" that artificial intelligence
is impossible.  One such "proof" goes as follows: people can
prove math theorems.  But Goedel and Church proved that no
machine can prove every math theorem.  So no machine will match
humans.  The fallacy of this argument is a classic abuse of
quantifiers: humans can prove some (easy) math theorems.
No machine or human can prove _each_ math theorem.  Actually,
we still do not know how hard proving is, but the common belief
is that the complexity of proving is exponential in the length
of the proof.  What is proven is that there is no computable
bound on the length of the shortest proof.  Clearly this
difficulty, that is, the large length of proofs, affects humans
as much as computers.
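
The quantifier slip can be made explicit.  Writing Proves(a, t)
for "agent a proves theorem t" (an illustrative predicate, not
standard notation from the cited results), the argument confuses
the following statements:

```latex
% What Goedel/Church-style results rule out: one machine that
% proves every theorem.
\neg \exists m\, \forall t\; \mathrm{Proves}(m, t)
% What humans actually achieve: some theorems get proved by
% some human.
\exists t\, \exists h\; \mathrm{Proves}(h, t)
% But the impossibility applies to humans just as well,
\neg \exists h\, \forall t\; \mathrm{Proves}(h, t)
% so it cannot separate humans from machines.
```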

To put it differently: if you put strong requirements on
intelligence, like the ability to prove each math theorem, then
humans are not intelligent.  If you lower your requirements so
that humans are deemed intelligent, then an appropriately
programmed computer is likely to qualify.

One more thing: early in the history of AI there was Eliza.
It was a simple pattern matcher, clearly having no intelligence,
yet it was able to fool some humans into believing that they
were communicating with another human (ok, at least for some
time).  Some people take this as a reason to consider all solved
AI problems a kind of fake, showing that the problem was not
about intelligence.  But IMO there is a different possibility:
that all our intelligence is "fake" in a similar vein.  In other
words, we do not solve the general problem but use tricks which
happen to work in real life.  Or to put it differently, we may
be much more limited than we imagine.  Eliza clearly shows that
we can be easily fooled into assuming that something has far
more abilities than it really has (and that "something" may
really be "we").
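
The trick Eliza used can be sketched in a few lines.  The rules
below are illustrative, not Weizenbaum's original 1966 script,
but they show the same mechanism: match a surface pattern, echo
a fragment back, understand nothing.

```python
import re

# Illustrative Eliza-style rules: a regex and a response template
# that reuses the captured fragment.  No parsing, no semantics.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(line: str) -> str:
    """Return a canned reflection of the input line."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            # Strip trailing punctuation from the echoed fragment.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when nothing matches

print(respond("I feel lost in this discussion"))
```

A handful of such rules, run in a loop, was enough to hold up a
"conversation" for a while, which is the point above: the
appearance of ability is cheap to produce and easy to accept.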

-- 
                              Waldek Hebisch
