Tim,

The first video is interesting for a number of reasons.

The first reason is that mathematicians don't like tedium, and proof
assistants can contribute to that tedium (see the Formal Mathematics talk,
around the 18-19 minute mark).

The next section, on Teaching, highlights an especially good reason for
proof assistants: the confidence we can have that the theorems being taught
and used have been verified. Many times in my undergraduate engineering, we
were introduced to theorems relevant to the subjects being taught, and it
was assumed that the proofs of those theorems were correct. Yet the problem
we faced was that any previous theorems used as a basis for the theorem
being proved were themselves assumed to be true; in many cases we were
never shown that those base theorems were in fact true. At that time, these
tools were not available, or were not known to be available. We were taught
to be pragmatic: if a theorem was useful in practical ways, we used it. It
is only in later years that formal proofs of all theorems (especially in
engineering) have become a matter of interest and concern.

The next section, on mathematical discovery, concerns me. He is assuming
that there is an underlying intelligence in these tools, and in doing so he
fails to see that these are just tools that people can use to do what they
need to do. He does recognise that pure mathematicians don't appear to have
determined where or how these tools can be used, and I think that is tied
up with the baggage of assuming there is intelligence in the tools. People,
and this includes mathematicians and scientists, treat tools quite
differently when they assume that the tools they are using have some level
of intelligence. I have observed this strange shift for well over 30 years,
and it is getting worse. Instead of using a tool as a tool and remaining in
control of it, people start to defer to the tool once they get the idea
that it has some level of intelligence. I see this happening with the
general public, and I see it happening with mathematicians and scientists.

I don't have a problem with automation per se. I have used automation to
remove many tasks from people over the years. But I don't implicitly trust
automation, because I know just how fragile all such systems are, and I try
to ensure that work-arounds are available in the event of failure.

His insight into formalism and collaboration describes something that is
often done but rarely appreciated.

In his section on Dependent Types, there is an implicit problem underlying
his discussion of type checking. Interestingly, this particular subject was
being researched and discussed from the late 1970s to the mid 1980s in the
Computer Science department at the University of St Andrews. What it
boiled down to was that certain type checks are only decidable at runtime
and not at compile time. Type checking became one of those inane and often
vitriolic discussions; the middle-ground people, who were of the opinion
that both static and runtime typing should be used, were the target of both
sides of the argument. It is well known that compile-time type checking is
undecidable in certain circumstances, and so such systems work to restrict
what kinds of type checking are allowed. This leads to certain classes of
problems not being amenable to static type checking.
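A minimal Lean 4 sketch (my own, not from his talk) of why a dependent type
checker must evaluate expressions just to compare types:

```lean
-- In a dependently typed language, types can contain ordinary
-- expressions, so comparing two types can require evaluating them.
-- `Fin (2 + 2)` and `Fin 4` are the same type only after the checker
-- reduces `2 + 2` to `4`:
example : Fin (2 + 2) = Fin 4 := rfl
```

When the expression inside a type depends on runtime input, that comparison
cannot be completed at compile time, which is exactly the restriction
described above.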

If runtime type checking is allowed as well, then the problems with static
type checking in dependent type theory are mitigated.

His approach is based purely on static type checking, even though there is
an implication in what he says of a possible mitigation here.
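A hedged sketch of that kind of mitigation in Lean 4 (the function and its
name are my own invention, not his): a runtime test can produce evidence
that the static checker then consumes.

```lean
-- `if h : cond` tests `cond` at runtime and, on success, binds a proof
-- `h` of it; the static checker then uses `h` to discharge the bounds
-- check that it could not decide at compile time.
def safeHead (l : List Nat) : Option Nat :=
  if h : 0 < l.length then
    some (l.get ⟨0, h⟩)  -- `⟨0, h⟩ : Fin l.length` carries the runtime proof
  else
    none

#eval safeHead [7, 8, 9]  -- some 7
#eval safeHead []         -- none
```

So the runtime check and the static check cooperate rather than compete,
which is the middle ground mentioned above.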

In the section on types and expressions, at the end of that section, he
makes a distinction between #eval and #check. There is no real distinction
here: in both cases you are evaluating expressions. The end results of
those evaluations are different, and I have no issue with that, but both
are evaluations of expressions. I find making the distinction here somewhat
problematic; it complicates the discussion where there should be no such
complication. It is the same kind of complication that has arisen in the
static/dynamic type-checking arguments.
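For concreteness, the two commands in Lean 4 (a small sketch of my own):
both take an expression that the system must elaborate, and even #check can
force evaluation, since checking a dependent type requires reducing the
expressions inside it.

```lean
-- #check elaborates an expression and reports its type
#check 2 + 2  -- 2 + 2 : Nat

-- #eval reduces an expression and reports its value
#eval 2 + 2   -- 4

-- Checking this term forces the checker to evaluate `2 + 2`,
-- so even #check involves evaluation:
#check (rfl : 2 + 2 = 4)
```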

This is enough comment at this time.

On Fri, Jan 19, 2024 at 4:08 AM Tim Daly <[email protected]> wrote:

> Jeremy Avigad has two youtube videos showing how strong these
> interactive proof assistants have become:
>
> Formal mathematics, dependent type theory, and the Topos Institute
> https://www.youtube.com/watch?v=Kpa8cCUZLms&ab_channel=ToposInstitute
>
> The Design of Mathematical Language
> https://www.youtube.com/watch?v=wUUcuaegljk&ab_channel=leanprovercommunity
>
> It seems to me that it should be possible to encode the Risch algorithm
> proof.
>
> On Thursday, January 18, 2024 at 2:06:39 AM UTC-5 [email protected]
> wrote:
>
>> Tim,
>>
>> This is a good example of the problem of the language we are using.
>>
>> On Thu, Jan 18, 2024 at 5:49 PM Tim Daly <[email protected]> wrote:
>>
>>> AlphaGeometry: An Olympiad-level AI system for Geometry
>>>
>>> https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/
>>>
>>> On Thursday, January 18, 2024 at 12:32:34 AM UTC-5 Tim Daly wrote:
>>>
>>>> While the question of human intelligence is interesting
>>>> please note that my post topic has been restricted to the
>>>> application of GPT-like / AI-like systems to the automation
>>>> of proof and computer algebra.
>>>>
>>>> Given the changes I'm seeing and the research I'm doing it
>>>> is likely that there will be a very rapid change from the legacy
>>>> version of computer algebra we now use, taking advantage of
>>>> the growing power of these systems.
>>>>
>>>> I've been involved and published in AI-like systems since 1973,
>>>> including such
>>>> "AI-like" areas of machine vision, robotics, expert systems, speech,
>>>> neural
>>>> nets, knowledge representation, planning, etc. All of these were
>>>> considered
>>>> "intelligent behavior" ... until they weren't. Indeed, in the 1960s
>>>> computer
>>>> algebra-like systems were considered "intelligent".
>>>>
>>>> Intelligence is a "suitcase word" [0], interesting but irrelevant.
>>>>
>>>> The topic at hand is "computational mathematics", meaning the
>>>> application
>>>> of recent technology (proof assistants, theorem databases, algorithm
>>>> proofs, and reasoning assistants) to moving beyond 50-year-old legacy
>>>> computer algebra. Proving computer algebra algorithms correct and
>>>> providing useful interfaces is the long-term goal.
>>>>
>>>> Tim
>>>>
>>>> [0] "Marvin Minsky, in his fascinating book "The Emotion Machine" ...
>>>> described
>>>> words such as "emotions," "memory," "thinking" and "intelligence" as
>>>> suitcase
>>>> words - words that mean "*nothing by itself, but holds a bunch of
>>>> things inside *
>>>> *that you have to unpack*."
>>>>
>>>> On Wednesday, January 17, 2024 at 10:03:53 PM UTC-5 [email protected]
>>>> wrote:
>>>>
>>>>> Good morning Waldek,
>>>>>
>>>>> My response will delve into some areas of philosophy and
>>>>> metaphysics.
>>>>>
>>>>> On Thu, Jan 18, 2024 at 2:56 AM Waldek Hebisch <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> On Thu, Jan 18, 2024 at 01:16:55AM +1100, Hill Strong wrote:
>>>>>> > On Wed, Jan 17, 2024 at 11:09 PM Tim Daly <[email protected]>
>>>>>> wrote:
>>>>>> >
>>>>>> > They can raise the issue all they like. What they are not seeing is
>>>>>> that
>>>>>> > artificial stupidity (AI) systems are limited. As I said above. The
>>>>>> only
>>>>>> > intelligence you will find in these systems is the stuff generated
>>>>>> by human
>>>>>> > intelligence. No artificial stupidity (AI) system can ever exceed
>>>>>> the
>>>>>> > limitations of the programming entailed in them.
>>>>>>
>>>>>> Well, humans are at least as limited: your claim is as true as the claim
>>>>>> that "humans can not ever exceed the limitations of the programming
>>>>>> entailed in them".  In case of humans, programming means both things
>>>>>>
>>>>>
>>>>> Though we are limited in every sort of way, we are capable of
>>>>> exceeding any programming that is involved in our development. What is
>>>>> not so obvious is that there is a category error when comparing humans to
>>>>> any sort of machine that we build. Though we can build machines stronger,
>>>>> faster and able to handle tedious tasks far better than we can, due to
>>>>> our very distinctive intelligence we are able to do what no machine can
>>>>> do, and that is solve the problems that arise.
>>>>>
>>>>>
>>>>> hardcoded in genome and chemical machinery of the body and "learned
>>>>>> stuff".  Already at age 1 toys, customized environment and
>>>>>> interactions
>>>>>>
>>>>>
>>>>> You speak here about [programming] hard-coded in the genome and
>>>>> chemical machinery of the human body. But I somehow don't think you realise
>>>>> just how far that machinery is beyond us. Within a single living cell, the
>>>>> automatic adaptive capabilities are so far beyond our current levels of
>>>>> technology that, in comparison, we are no more advanced than some caveman,
>>>>> even counting the entire planetary industrial capabilities that we have
>>>>> already built.
>>>>>
>>>>> The more we learn about cellular biology with its internal
>>>>> transportation, control and manufacturing systems the more we should be
>>>>> recognising that we are not capable of building any computerised system
>>>>> that will ever be intelligent.
>>>>>
>>>>> with other humans make a significant difference to learning.  At later
>>>>>> stages there are stories which were perfected for thousands of years,
>>>>>> school curricula and books.  There were myths that people from
>>>>>> non-western cultures are less "intelligent" than people from western
>>>>>> culture.  Deeper research showed that part of our "intelligence"
>>>>>> is really "programmed" (learned) and "programming" in different
>>>>>> cultures was different.
>>>>>>
>>>>>
>>>>> Though part of our intelligence is learned there is something that is
>>>>> so far beyond this, so that, even though this is an active area of
>>>>> research, we haven't a clue as to what this is.
>>>>>
>>>>>
>>>>>> In slightly different spirit, in the fifties there were efforts to
>>>>>> define intelligence, and researchers from that time postulated
>>>>>> several abilities that every intelligent being should have.
>>>>>> Based on that there were "proofs" that artificial intelligence
>>>>>> is impossible.  One of such "proofs" goes as follows: people
>>>>>> can prove math theorems.  But Goedel and Church proved that
>>>>>> no machine can prove all math theorems.  So no machine will match
>>>>>> humans.  The fallacy of this argument is a classic abuse of
>>>>>> quantifiers: humans can prove some (easy) math theorems.
>>>>>>
>>>>>
>>>>> Here you are now arguing against the various incompleteness theorems
>>>>> and in doing so, you are entering the realm of metaphysics and philosophy.
>>>>> These incompleteness theorems highlight something quite interesting in 
>>>>> that
>>>>> the basis of our logic and mathematics (the fundamental axioms used) 
>>>>> cannot
>>>>> be proved true from within our logic and mathematics. We can argue for 
>>>>> them
>>>>> from metaphysics and philosophy but we are unable to prove them from 
>>>>> within
>>>>> the systems that are based on those fundamental axioms.
>>>>>
>>>>> What you have missed here is that any theorems developed within the
>>>>> systems are provable based on those unprovable axioms and previously 
>>>>> proven
>>>>> theorems. That is how we can build up the various systems of mathematics 
>>>>> in
>>>>> use. However, new mathematical systems require insights not possible to 
>>>>> any
>>>>> computerised system. These new insights are in the realm of human
>>>>> intelligence which does exceed any programming that individual humans may
>>>>> be exposed to.
>>>>>
>>>>> No machine or human can prove _each_ math theorem.  Actually,
>>>>>> we still do not know how hard proving is, but common belief
>>>>>>
>>>>>
>>>>> All theorems can be proved if they are based on the initial set of
>>>>> axioms used. How hard it is to prove the theorem is another matter.
>>>>>
>>>>>
>>>>>> is that the complexity of proving is exponential in the length of the
>>>>>> proof.  What is proven is that there is no computable
>>>>>> bound on the length of the shortest proof.  Clearly this difficulty,
>>>>>> that is the large length of proofs, affects humans as much as
>>>>>> computers.
>>>>>>
>>>>>
>>>>> Maybe. The thing of interest here is that it is humans who can find
>>>>> alternative ways of proving a theorem which is not a part of the 
>>>>> exhaustive
>>>>> processes previously used.
>>>>>
>>>>> As you would well know, the Halting problem is not solvable by any
>>>>> Turing machine or corresponding alternative. But we can see the problem
>>>>> immediately. We make leaps and find solutions that are not possible within
>>>>> a logical/mathematical system.
>>>>>
>>>>>
>>>>>> To put it differently, if you put strong requirements on
>>>>>> intelligence, like the ability to prove each math theorem, then
>>>>>> humans are not intelligent.  If you lower your requirements,
>>>>>> so that humans are deemed intelligent, then an appropriately
>>>>>> programmed computer is likely to qualify.
>>>>>>
>>>>>
>>>>> Your comment above is false. It is false logically, rationally,
>>>>> metaphysically and philosophically. The ability to prove a mathematics
>>>>> theorem, if based on previous axioms and theorems, is not a sign of
>>>>> intelligence. I would agree that we do this as a part of our intelligence,
>>>>> but this is not a sign of intelligence. The axioms used at the fundamental
>>>>> level which are designed and specified may well be a sign of intelligence,
>>>>> but no computerised system does this. I have spent 40 years looking at
>>>>> these kinds of things and every computerised system is based on the human
>>>>> intellectual capacity (intelligence) for these systems to even exist.
>>>>>
>>>>>>
>>>>>> One more thing: early in the history of AI there was Eliza.
>>>>>> It was a simple pattern matcher clearly having no intelligence,
>>>>>> yet it was able to fool some humans into believing that they
>>>>>> were communicating with another human (ok, at least for some time).
>>>>>>
>>>>>
>>>>> The ability to fool people does not challenge our intelligence. Every
>>>>> human is flawed and can be fooled. We see that every day in every society.
>>>>> This does not lessen what humanity has in terms of being intelligent. We
>>>>> act irrationally, we act in completely adverse ways. We make decisions 
>>>>> that
>>>>> make no sense. We even choose options that are not actually available to
>>>>> us. We can see this in the recent Iowa caucus where the winner wasn't even
>>>>> on the available options.
>>>>>
>>>>> Machine programming doesn't allow this. Certainly, we can see all
>>>>> sorts of errors and when fully studied, we see why those errors occurred.
>>>>> They are logically defined.
>>>>>
>>>>> Some people take this to consider all solved AI problems
>>>>>> as a kind of fake and show that the problem was not about
>>>>>> intelligence.  But IMO there is a different possibility: that
>>>>>> all our intelligence is "fake" in a similar vein.  In other
>>>>>>
>>>>>
>>>>> If our intelligence is [fake] then the logical consequence of that
>>>>> argument is that there is no mathematics, no scientific investigation, no
>>>>> rationality, no intelligence, no progress in any form. This is a
>>>>> contradiction. We do do mathematics, we do do scientific investigation, we
>>>>> do do philosophy and metaphysics, we do do rational things.
>>>>>
>>>>> words, we do not solve the general problem but use tricks which
>>>>>> happen to work in real life.  Or to put it differently,
>>>>>>
>>>>>
>>>>> Yes we do do tricks and this is one sign of intelligence here.
>>>>> Certainly we are limited and I have been saying such things for a long
>>>>> time. However, for all of our limitations, we are in a category that is
>>>>> very different to all other things, living or otherwise and we often fail
>>>>> to recognise, let alone understand that category difference.
>>>>>
>>>>> I have seen too many people who dismiss philosophy and metaphysics as
>>>>> being useless. Yet it is by these that we show forth our intelligence and
>>>>> by which we can justify mathematics and scientific investigation as valid,
>>>>> rational and proper intellectual pursuits.
>>>>>
>>>>>
>>>>> we may be much more limited than we imagine.  Eliza clearly
>>>>>> shows that we can be easily fooled into assuming that
>>>>>> something has much more abilities than it really has
>>>>>> (and "something" may be really "we").
>>>>>>
>>>>>> --
>>>>>>                               Waldek Hebisch
>>>>>>
>>>>>> --
>>>>>> You received this message because you are subscribed to the Google
>>>>>> Groups "FriCAS - computer algebra system" group.
>>>>>> To unsubscribe from this group and stop receiving emails from it,
>>>>>> send an email to [email protected].
>>>>>>
>>>>> To view this discussion on the web visit
>>>>>> https://groups.google.com/d/msgid/fricas-devel/Zaf4t0bQZRQAsouA%40fricas.org
>>>>>> .
>>>>>>
