Tim,

This is a good example of the problem with the language we are using.

On Thu, Jan 18, 2024 at 5:49 PM Tim Daly <[email protected]> wrote:

> AlphaGeometry: An Olympiad-level AI system for Geometry
>
> https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/
>
> On Thursday, January 18, 2024 at 12:32:34 AM UTC-5 Tim Daly wrote:
>
>> While the question of human intelligence is interesting
>> please note that my post topic has been restricted to the
>> application of GPT-like / AI-like systems to the automation
>> of proof and computer algebra.
>>
>> Given the changes I'm seeing and the research I'm doing, it
>> is likely that there will be a very rapid change from the legacy
>> version of computer algebra we now use, taking advantage of
>> the growing power of these systems.
>>
>> I've been involved and published in AI-like systems since 1973,
>> including such "AI-like" areas as machine vision, robotics, expert
>> systems, speech, neural nets, knowledge representation, and planning.
>> All of these were considered "intelligent behavior" ... until they
>> weren't. Indeed, in the 1960s computer algebra-like systems were
>> considered "intelligent".
>>
>> Intelligence is a "suitcase word" [0], interesting but irrelevant.
>>
>> The topic at hand is "computational mathematics", meaning the application
>> of recent technology (proof assistants, theorem databases, algorithm
>> proofs, and reasoning assistants) to moving beyond 50-year-old legacy
>> computer algebra. Proving computer algebra algorithms correct and
>> providing useful interfaces is the long-term goal.
>>
>> Tim
>>
>> [0] Marvin Minsky, in his fascinating book "The Emotion Machine",
>> described words such as "emotions," "memory," "thinking," and
>> "intelligence" as suitcase words - words that mean "nothing by itself,
>> but hold a bunch of things inside that you have to unpack."
>>
>> On Wednesday, January 17, 2024 at 10:03:53 PM UTC-5 [email protected]
>> wrote:
>>
>>> Good morning Waldek,
>>>
>>> My response delves into some philosophical and metaphysical territory.
>>>
>>> On Thu, Jan 18, 2024 at 2:56 AM Waldek Hebisch <[email protected]> wrote:
>>>
>>>> On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
>>>> > On Wed, Jan 17, 2024 at 11:09 PM Tim Daly <[email protected]> wrote:
>>>> >
>>>> > They can raise the issue all they like. What they are not seeing is
>>>> that
>>>> > artificial stupidity (AI) systems are limited. As I said above. The
>>>> only
>>>> > intelligence you will find in these systems is the stuff generated by
>>>> human
>>>> > intelligence. No artificial stupidity (AI) system can ever exceed the
>>>> > limitations of the programming entailed in them.
>>>>
>>>> Well, humans are at least as limited: your claim is as true as the
>>>> claim that "humans can never exceed the limitations of the programming
>>>> entailed in them".  In the case of humans, "programming" means both things
>>>>
>>>
>>> Though we are limited in every sort of way, we are capable of exceeding
>>> any programming that is involved in our development. What is not so
>>> obvious is that there is a category error in comparing humans to any
>>> sort of machine that we build. Though we can build machines that are
>>> stronger, faster, and able to handle tedious tasks far better than we
>>> can, due to our very distinctive intelligence we are able to do what no
>>> machine can do, and that is solve the problems that arise.
>>>
>>>
>>>> hardcoded in the genome and the chemical machinery of the body, and
>>>> "learned stuff".  Already at age 1, toys, a customized environment, and
>>>> interactions
>>>>
>>>
>>> You speak here about [programming] hard-coded in the genome and chemical
>>> machinery of the human body. But I don't think you realise just how far
>>> beyond us that machinery is. Within a single living cell, the automatic
>>> adaptive capabilities are so far beyond our current levels of technology
>>> that, in comparison, the entire industrial capability we have built on
>>> this planet leaves us no more advanced than a caveman.
>>>
>>> The more we learn about cellular biology, with its internal
>>> transportation, control, and manufacturing systems, the more we should
>>> recognise that we are not capable of building any computerised system
>>> that will ever be intelligent.
>>>
>>>> with other humans make a significant difference to learning.  At later
>>>> stages there are stories which were perfected over thousands of years,
>>>> school curricula, and books.  There were myths that people from
>>>> non-western cultures are less "intelligent" than people from western
>>>> cultures.  Deeper research showed that part of our "intelligence"
>>>> is really "programmed" (learned), and the "programming" in different
>>>> cultures was different.
>>>>
>>>
>>> Though part of our intelligence is learned, there is something so far
>>> beyond this that, even though it is an active area of research, we
>>> haven't a clue what it is.
>>>
>>>
>>>> In a slightly different spirit: in the fifties there were efforts to
>>>> define intelligence, and researchers of that time postulated
>>>> several abilities that every intelligent being should have.
>>>> Based on these there were "proofs" that artificial intelligence
>>>> is impossible.  One such "proof" goes as follows: people
>>>> can prove math theorems.  But Goedel and Church proved that
>>>> no machine can prove every math theorem.  So no machine will match
>>>> humans.  The fallacy of this argument is a classic abuse of
>>>> quantifiers: humans can prove some (easy) math theorems.
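The quantifier slip can be made explicit (this formalisation is mine, not part of the original argument; "decides" is shorthand for answering correctly):

```latex
% True (Church): no single machine M settles every mathematical statement.
\neg\,\exists M \;\forall \varphi \;\bigl( M \text{ decides } \varphi \bigr)
% Also true: humans prove some theorems.
\exists \varphi \;\bigl( \text{humans prove } \varphi \bigr)
% The fallacy reads the second claim as the much stronger
\forall \varphi \;\bigl( \text{humans prove } \varphi \bigr)
% which is what would be needed to place humans beyond every machine.
```

The two true statements are compatible: a machine failing on *some* statement does not put it behind humans, who also prove only *some* statements.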
>>>>
>>>
>>> Here you are now arguing against the various incompleteness theorems, and
>>> in doing so you are entering the realm of metaphysics and philosophy.
>>> These incompleteness theorems highlight something quite interesting: the
>>> basis of our logic and mathematics (the fundamental axioms used) cannot
>>> be proved true from within that logic and mathematics. We can argue for
>>> them from metaphysics and philosophy, but we are unable to prove them
>>> from within the systems that are based on those fundamental axioms.
>>>
>>> What you have missed here is that theorems developed within the
>>> systems are provable based on those unprovable axioms and previously
>>> proven theorems. That is how we build up the various systems of
>>> mathematics in use. However, new mathematical systems require insights
>>> not possible for any computerised system. These new insights are in the
>>> realm of human intelligence, which does exceed any programming that
>>> individual humans may be exposed to.
>>>
>>>> No machine or human can prove _each_ math theorem.  Actually,
>>>> we still do not know how hard proving is, but the common belief
>>>>
>>>
>>> All theorems that follow from the initial set of axioms can be proved.
>>> How hard it is to prove a given theorem is another matter.
>>>
>>>
>>>> is that the complexity of proving is exponential in the length of the
>>>> proof.  What is proven is that there is no computable
>>>> bound on the length of the shortest proof.  Clearly this difficulty,
>>>> that is, the great length of proofs, affects humans as much as
>>>> computers.
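As a toy illustration of that exponential growth (the branching factor below is illustrative, not taken from any real prover): if each proof step can be chosen from k possible inference applications, exhaustive search over proofs of length up to n must in the worst case consider on the order of k^n candidates.

```python
# Toy model of exhaustive proof search (illustrative only, not a real
# prover): with K choices per step, the number of candidate step
# sequences of length 1..n grows exponentially in n.
K = 10  # hypothetical branching factor per proof step

def candidates_up_to(n):
    """Total number of candidate step sequences of length 1 through n."""
    return sum(K ** i for i in range(1, n + 1))

for n in (5, 10, 20):
    print(n, candidates_up_to(n))
# Already at n = 20 the count exceeds 10**20 candidates.
```

This is why the absence of any computable bound on shortest-proof length bites: neither a human nor a machine can afford a blind search once proofs get long.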
>>>>
>>>
>>> Maybe. The thing of interest here is that it is humans who can find
>>> alternative ways of proving a theorem that are not part of the
>>> exhaustive processes previously used.
>>>
>>> As you would well know, the Halting problem is not solvable by any
>>> Turing machine or equivalent formalism. But we can see the problem
>>> immediately. We make leaps and find solutions that are not possible
>>> within a logical/mathematical system.
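The diagonal argument behind that unsolvability can be sketched in a few lines (a toy construction; `claimed_halts` is a hypothetical, and necessarily wrong, decider of my own invention):

```python
# Sketch of the diagonalization behind the Halting problem: given ANY
# claimed halting decider, we can build a program that it misjudges.
def make_counterexample(claimed_halts):
    """Return a program g that does the opposite of the decider's prediction."""
    def g():
        if claimed_halts(g):  # decider says g halts -> g loops forever
            while True:
                pass
        # decider says g does not halt -> g halts immediately
    return g

# A (hypothetical, wrong) decider that answers "does not halt" for everything:
claimed_halts = lambda prog: False
g = make_counterexample(claimed_halts)
g()  # g returns immediately, refuting the decider's verdict on g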
>>>
>>>
>>>> To put it differently: if you put strong requirements on
>>>> intelligence, like the ability to prove each math theorem, then
>>>> humans are not intelligent.  If you lower your requirements
>>>> so that humans are deemed intelligent, then an appropriately
>>>> programmed computer is likely to qualify.
>>>>
>>>
>>> Your comment above is false. It is false logically, rationally,
>>> metaphysically, and philosophically. The ability to prove a mathematical
>>> theorem, if based on previous axioms and theorems, is not a sign of
>>> intelligence. I would agree that we do this as part of our intelligence,
>>> but it is not itself a sign of intelligence. The fundamental axioms,
>>> which are designed and specified, may well be a sign of intelligence,
>>> but no computerised system does this. I have spent 40 years looking at
>>> these kinds of things, and every computerised system depends on human
>>> intellectual capacity (intelligence) in order to even exist.
>>>
>>>>
>>>> One more thing: early in the history of AI there was Eliza.
>>>> It was a simple pattern matcher clearly having no intelligence,
>>>> yet it was able to fool some humans into believing that they
>>>> were communicating with another human (ok, at least for some time).
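That pattern-matching mechanism can be sketched in a few lines (toy rules of my own, not Weizenbaum's original DOCTOR script):

```python
import re

# Minimal Eliza-style responder: a handful of regex rules that reflect
# the user's words back with zero understanding.
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when no rule matches

print(respond("I need a holiday"))  # Why do you need a holiday?
```

The entire "conversation" is string substitution; any impression of understanding is supplied by the human reader.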
>>>>
>>>
>>> The ability to fool people does not challenge our intelligence. Every
>>> human is flawed and can be fooled. We see that every day in every society.
>>> This does not lessen what humanity has in terms of being intelligent. We
>>> act irrationally, we act in completely adverse ways. We make decisions that
>>> make no sense. We even choose options that are not actually available to
>>> us. We can see this in the recent Iowa caucus, where the winner wasn't even
>>> among the available options.
>>>
>>> Machine programming doesn't allow this. Certainly, we can see all sorts
>>> of errors, and when they are fully studied, we see why they occurred.
>>> They are logically defined.
>>>
>>>> Some people take this to mean that all solved AI problems are
>>>> a kind of fake, and that solving them showed the problem was not about
>>>> intelligence.  But IMO there is a different possibility: that
>>>> all our intelligence is "fake" in a similar vein.  In other
>>>>
>>>
>>> If our intelligence is [fake], then the logical consequence of that
>>> argument is that there is no mathematics, no scientific investigation, no
>>> rationality, no intelligence, no progress in any form. This is a
>>> contradiction. We do do mathematics, we do do scientific investigation,
>>> we do do philosophy and metaphysics, we do do rational things.
>>>
>>>> words, we do not solve the general problem but use tricks which
>>>> happen to work in real life.  Or to put it differently,
>>>>
>>>
>>> Yes, we do use tricks, and that is itself one sign of intelligence.
>>> Certainly we are limited, and I have been saying so for a long time.
>>> However, for all of our limitations, we are in a category very different
>>> from all other things, living or otherwise, and we often fail to
>>> recognise, let alone understand, that category difference.
>>>
>>> I have seen too many people who dismiss philosophy and metaphysics as
>>> being useless. Yet it is by these that we show forth our intelligence and
>>> by which we can justify mathematics and scientific investigation as valid,
>>> rational and proper intellectual pursuits.
>>>
>>>
>>>> we may be much more limited than we imagine.  Eliza clearly
>>>> shows that we can be easily fooled into assuming that
>>>> something has far more abilities than it really has
>>>> (and "something" may really be "us").
>>>>
>>>> --
>>>>                               Waldek Hebisch
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "FriCAS - computer algebra system" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to [email protected].
>>>>
>>>> To view this discussion on the web visit
>>>> https://groups.google.com/d/msgid/fricas-devel/Zaf4t0bQZRQAsouA%40fricas.org
>>>> .
>>>>
