Tim, some comments are below.
On Wed, Jan 17, 2024 at 11:09 PM Tim Daly <[email protected]> wrote:
> I'm not the only one raising the question of AI and GPT systems and
> their effects on Mathematics. The best and brightest of mathematicians,
> of which I'm not one, are raising the issue.

They can raise the issue all they like. What they are not seeing is that artificial stupidity (AI) systems are limited, as I said above. The only intelligence you will find in these systems is that generated by human intelligence. No artificial stupidity (AI) system can ever exceed the limitations of the programming built into it.

> This is a lecture by Jeremy Avigad,
> Professor of Mathematics and Philosophy at CMU titled:
> Is Mathematics Obsolete?
> https://www.youtube.com/watch?v=lE48QtO4xBQ&ab_channel=SantaFeInstitute
>
> Timothy Gowers, a Fields Medalist, raises similar issues:
> Can Computers Do Mathematical Research?
> https://mathscholar.org/2020/09/can-computers-do-mathematical-research/
>
> Gowers talks about computer proofs of mathematical theorems and
> automatic mathematical reasoning. Gowers 'says that this may lead to a
> brief golden age, when mathematicians still dominate in original thinking
> and computer programs focus on technical details, “but I think it will last a
> very short time,” given that there is no reason that computers cannot eventually
> become proficient at the more creative aspects of mathematical research as well.'

Here we see a failure to understand the limitations found in all computer systems. We can build systems that can be used to develop proofs, and they may even be able to do exhaustive searches to find those proofs. But none of these systems will be able to do the original thinking that is inherent in human intelligence and intuition.
I have been observing the many different areas in which we have progressed with computerised systems, and the more I see, the clearer it becomes that we will never get a thinking machine from any level of technology we are able to invent. As I said in a previous comment about watching my granddaughter grow up, I have been watching my children and my grandchildren, as well as many, many other people. No matter how seemingly complex the systems we build, nothing in them approaches even the animals, let alone humanity. Irrespective of any seeming intelligence that the researchers speak of, there is no intelligence within these systems. If you ever do see some artificial system show distinctive signs of real intelligence, take it that a natural intelligence is trying to fool you. There are many people who willingly commit fraud to achieve control over others.

> Certainly, as I've mentioned, proof assistants provide all of the elements of
> games which makes them perfect for reinforcement learning, the final
> training stage of GPT systems. All this requires is a bright PhD candidate.

Proof assistants are relatively simple tools that help a human intelligence with the tedium involved. Nothing wrong with that, as this is a useful amplification of our own abilities. But in reality we are far too stupid to understand what intelligence actually is. Plenty of people, philosophers and theologians and physical scientists, are working on understanding this, and in all essentials we have gotten nowhere in understanding what intelligence is, let alone what we would need to do to build anything even resembling the least of living intelligent systems. I am far more concerned with what various people and groups will do with our current levels of technology than I am that we will ever build a real artificial intelligence.
What seems to be lost is that we build real intelligent systems all the time; we call them children.

> What few people so far have done is realize the impact this will have on
> computer algebra. Certainly a GPT system can answer questions about
> the proper handling of sqrt(4) and/or working through both the positive
> and negative root assumptions in a calculation by conducting the calculation
> applying the simplification for 2 and -2 and presenting both results. All the
> GPT prompt would have to say is "assuming sqrt(4) is -2 compute ...".

Properly used, the impact will be to build comprehensive proofs in mathematics that have potential use in other areas of civilisation. But as we often see, the human motivations involved here are control, power, wealth and influence. The insights that come from mathematics, and how we can apply them, in all cases come from our human intelligence.

One must be wary of the hype surrounding these systems. We need to [follow the money] to see who gains from this hype. All expert systems have points of failure that are never seen until far too late. These failure points come from coding errors, data errors, human maliciousness, invalid assumptions and so on. If anyone, no matter who, assumes that we can build intelligent systems then they have fooled themselves and have forgotten that we are still far too primitive (as in caveman primitive) in our technological advancement.

As for the rapidity of change, we forget the lessons of the past and are doomed to repeat them. In just 150 years, we have forgotten or discarded so much knowledge of what we could do. Artificial systems can be used to correlate the enormous amounts of information that are still available, but they cannot correlate what we have already discarded, and that information far exceeds what we have kept.
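As an aside on the sqrt(4) point quoted above: carrying both root assumptions through a calculation needs no AI at all; it is plain bookkeeping that any ordinary program can do. A minimal sketch (the helper names here are my own invention for illustration, not part of any of the systems discussed):

```python
def square_roots(x):
    """Return both real square roots of a non-negative number x."""
    r = x ** 0.5
    return [r, -r]

def evaluate_under_each_root(x, expr):
    """Evaluate expr once per root branch, labelling each assumption."""
    return {f"sqrt({x}) = {root:+g}": expr(root) for root in square_roots(x)}

# e.g. compute 3*sqrt(4) + 1 under both the positive and negative assumption
results = evaluate_under_each_root(4, lambda r: 3 * r + 1)
print(results)  # {'sqrt(4) = +2': 7.0, 'sqrt(4) = -2': -5.0}
```

The point being that branching on both roots is mechanical; any insight about which branch matters still comes from the human.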
An example of this discarding: some years ago, an emeritus professor at the University of Adelaide was contacted by a colleague of his from England. This colleague was involved in clearing out a closed-down lab that hadn't been used in 60-70 years. The emeritus professor took all the records in that lab and, based on those records, found an anomaly in one of the extended Michelson-Morley experiments. I believe the specific lab had been running the experiment over a period of two years. None of this information had been published or was even known about. There are many areas in which all sorts of experimental data and technology are now not available, or at least not available in any electronic form.

All too often, people assume that our computer systems are reasonable and workably useful. To some extent they are. Often, however, those useful, reasonable results are only flukes; the actual code generating them is utter garbage. If there is any serious complexity within these systems then there is a high likelihood that the system only has the appearance of being correct. I have been looking at Metamath, and it has a simplicity that allows for a good semblance of correctness. Axiom and FriCAS, as well as Maxima, Mathematica, SymPy and all the other associated systems, are filled with [kludges] that cause interesting problems for their users. Artificial stupidity (AI) systems are even more unreliable without a human looking at their results. That doesn't mean that useful work cannot be done, but you have to use these systems carefully and with an understanding of the limitations of each. The work I have undertaken on rebuilding Axiom/FriCAS is to try to mitigate the underlying problems/features within these systems.
Now, I have personal reasons for doing this, in furtherance of specific research areas I have in both quantum and non-quantum worlds, based on a particular engineering mindset. Always assume (and it is a fair assumption) that there is one more bug in any code in any computerised system, especially in any artificial stupidity system.

Note: I continue to use the term [artificial stupidity] because of the 40 years I have been involved in the various related areas.

At any rate, enough of the rant from me. Have a blessed rest of the week as it comes to you.

> The mixtral (Mixture of Experts)[0] system allows a custom expert to be
> added to an existing system. A computer algebra expert is certainly possible.
>
> This future is not far off. Consider that the ollama system[1] lists 61
> systems, of which "wizard-math"[2] is one. The OpenAI "apps" page is
> said to have several hundred systems since November 2023.
>
> The jerk[3] in AI systems is difficult to track. Alvin Toffler[4] underestimated
> the rapidity of change. It is not a matter of "if", only a matter of "when".
>
> Tim
>
> [0] https://huggingface.co/docs/transformers/model_doc/mixtral
>
> [1] https://ollama.ai/library
>
> [2] wizard-math "Model focused on math and logic problems"
>
> [3] Jerk (Wikipedia) aka the third derivative
> *jerk* ... is the rate of change of an object's acceleration
> <https://en.wikipedia.org/wiki/Acceleration> over time
>
> [4] Alvin Toffler "Future Shock"
> https://cdn.preterhuman.net/texts/literature/general/Alvin%20Toffler%20-%20Future%20Shock.pdf
>
> On Wednesday, January 17, 2024 at 2:05:44 AM UTC-5 [email protected] wrote:
>
>> Tim,
>>
>> That use should be something that would benefit people. Good luck with it.
>>
>> On Wed, 17 Jan 2024, 1:18 pm Tim Daly, <[email protected]> wrote:
>>
>>> I have GPT systems running locally. It is possible to feed the systems
>>> text files for training.
>>> Once trained they can answer questions related to the training. It will be
>>> interesting to convert the book to text and train the system on the book.
>>> At minimum it might make an interesting "help" facility capable of
>>> responding to natural language questions.
>>>
>>> On Monday, January 15, 2024 at 10:05:54 PM UTC-5 [email protected] wrote:
>>>
>>>> Tim, notice though that the only intelligence here is what is provided
>>>> by human intelligence. None of the various systems have any inherent
>>>> intelligence.
>>>>
>>>> My youngest granddaughter has just turned 1 and I have been spending
>>>> time watching her develop intelligence and the difference between her and
>>>> any of the current systems is possibly on the order of a million orders of
>>>> magnitude with the systems being the primitive level.
>>>>
>>>> As I said above, these systems can have possible use cases but only if
>>>> humans are controlling them intelligently. It is a matter of finding
>>>> specific use cases that work for us.
>>>>
>>>> An appropriate analogy is summed up as: fire is a good servant but a
>>>> destructive master.
>>>>
>>>> On Tue, 16 Jan 2024, 1:00 pm Tim Daly, <[email protected]> wrote:
>>>>
>>>>> There is an interesting example of proof earlier in the video:
>>>>> https://youtu.be/3pb_-oLfWJ4?t=298
>>>>>
>>>>> Reinforcement learning (using the Bellman equation[0]) needs
>>>>> the ability to at least decide if it succeeds and can use information
>>>>> about success in intermediate steps to rapidly improve. It also
>>>>> needs a set of actions that can be performed at each step.
>>>>> Reinforcement learning has conquered gaming with this structure.
>>>>>
>>>>> LEAN provides a database of theorems, definitions, and tactics
>>>>> that can be applied at each step of a proof. It provides feedback
>>>>> about the local success of each action. It provides global success
>>>>> once the proof completes.
>>>>> Thus all of the elements needed by reinforcement learning are
>>>>> available in proof systems. LEAN's theorems are strongly typed.
>>>>>
>>>>> When training GPT-like systems, reinforcement training is the last
>>>>> step (see Karpathy's slide 3):
>>>>> https://karpathy.ai/stateofgpt.pdf
>>>>> It is straightforward to customize a GPT-like system with LEAN's
>>>>> mathematical knowledge to focus on proofs.
>>>>>
>>>>> We use group theory as our organizational skeleton. One of the steps
>>>>> to integrating LEAN and computer algebra is aligning LEAN's axiom
>>>>> structure with our category/domain structure (my current research).
>>>>> Once that is done then theorems, such as a commutative theorem,
>>>>> can be inherited and available to prove a function in a domain.
>>>>>
>>>>> We use Buchberger's Groebner basis algorithm. This algorithm has
>>>>> already been proven in Coq and is in use[1].
>>>>>
>>>>> It is only a matter of time before the axioms which support the Risch
>>>>> algorithm are proven. That will form the basis of proof of our current
>>>>> implementation.
>>>>>
>>>>> Because we are strongly typed and use a group theory scaffold we
>>>>> are in a unique position to formalize our algorithms.
>>>>>
>>>>> The combination of proof systems and computer algebra will be a
>>>>> large step in computational mathematics.
>>>>>
>>>>> Tim
>>>>>
>>>>> [0] Bellman Equation
>>>>> https://en.wikipedia.org/wiki/Bellman_equation
>>>>>
>>>>> [1] Connecting Gröbner Bases Programs with Coq to do Proofs in
>>>>> Algebra, Geometry and Arithmetics
>>>>> https://arxiv.org/abs/1007.3615
>>>>>
>>>>> On Monday, January 15, 2024 at 6:59:46 PM UTC-5 Dima Pasechnik wrote:
>>>>>
>>>>>> On Mon, Jan 15, 2024 at 10:45 PM Hill Strong <[email protected]> wrote:
>>>>>> >
>>>>>> > No artificial stupidity (AI) will ever or should ever be considered
>>>>>> > as [a trustworthy co-author in mathematical research].
>>>>>> > At best, these systems could be a possibly useful tool in the hands
>>>>>> > of those who know their subject field. In the hands of everyone else
>>>>>> > they will be like young children using power tools - a dangerous
>>>>>> > proposition at best and an utter disaster at worst.
>>>>>> >
>>>>>> > There is far too much unthinking hype about the subject even by the
>>>>>> > researchers in the field.
>>>>>>
>>>>>> Well, we all know Doron Zeilberger's co-author:
>>>>>> https://sites.math.rutgers.edu/~zeilberg/ekhad.html
>>>>>>
>>>>>> In this sense it's not new at all. More powerful, yes, but still...
>>>>>>
>>>>>> > On Tue, 16 Jan 2024, 12:10 am Tim Daly, <[email protected]> wrote:
>>>>>> >>
>>>>>> >> "When integrated with tools such as formal proof verifiers,
>>>>>> >> internet search, and symbolic math packages, I expect, say,
>>>>>> >> 2026-level AI, when used properly, will be a trustworthy
>>>>>> >> co-author in mathematical research, and in many other fields
>>>>>> >> as well" -- Terence Tao
>>>>>> >> https://youtu.be/3pb_-oLfWJ4?t=358
>>>>>> >>
>>>>>> >> I might add that it would be important that the computer algebra
>>>>>> >> algorithms be proven correct.
>>>>>> >>
>>>>>> >> --
>>>>>> >> You received this message because you are subscribed to the Google
>>>>>> >> Groups "FriCAS - computer algebra system" group.
>>>>>> >> To unsubscribe from this group and stop receiving emails from it,
>>>>>> >> send an email to [email protected].
>>>>>> >> To view this discussion on the web visit
>>>>>> >> https://groups.google.com/d/msgid/fricas-devel/11e29cc8-7c91-4ad2-a56b-d380db9a33e3n%40googlegroups.com.
