Let me point out this piece by Brian Hayes /contra/ Welsh, in my view very balanced and competent, with a delightful comparison between Knuth and ChatGPT.

The crucial point, for me, is this: "we should blame the [...] unavoidable weakness of a protocol in which the most frequent answer is, by default, the right answer".

<http://bit-player.org/2023/ai-and-the-end-of-programming>


Earlier this year Matt Welsh <https://www.mdw.la/> announced the end of programming <https://dl.acm.org/doi/10.1145/3570220>. He wrote, in /Communications of the ACM/:

   I believe the conventional idea of “writing a program” is headed for
   extinction, and indeed, for all but very specialized applications,
   most software, as we know it, will be replaced by AI systems that
   are trained rather than programmed. In situations where one needs a
   “simple” program (after all, not everything should require a model
   of hundreds of billions of parameters running on a cluster of GPUs),
   those programs will, themselves, be generated by an AI rather than
   coded by hand.

A few weeks later, in an online talk <https://acm-org.zoom.us/webinar/register/WN_vf0SPZY7TeWMH-5_IaloIQ#/registration>, Welsh broadened his deathwatch. It’s not only the art of programming that’s doddering toward the grave; all of computer science is “doomed.” (The image below is a screen capture from the talk.)

[...]

I wanted a problem that can be solved by computational means, but that doesn’t call for doing much arithmetic, which is known to be one of the weak points of LLMs. I settled on a word puzzle invented 150 years ago by Lewis Carroll and analyzed in depth by Donald E. Knuth in the 1990s.
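
[A note for readers new to the puzzle: a word ladder (Carroll called them "doublets") turns one word into another by changing a single letter at a time, with every intermediate step a valid dictionary word, e.g. HEAD, HEAL, TEAL, TELL, TALL, TAIL. As a minimal sketch of the rules (mine, not code from Hayes or Knuth), a proposed ladder can be checked like this:]

   def is_valid_ladder(ladder, dictionary):
       """True if every word is in the dictionary and consecutive
       words differ in exactly one letter position."""
       if any(word not in dictionary for word in ladder):
           return False
       return all(
           len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1
           for a, b in zip(ladder, ladder[1:])
       )

   words = {"head", "heal", "teal", "tell", "tall", "tail"}
   print(is_valid_ladder(["head", "heal", "teal", "tell", "tall", "tail"], words))  # True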

[...]

At this point I must concede that ChatGPT, with a little outside help, has finally done what I asked of it. It has written a program that can construct a valid word ladder. But I still have reservations. Although the programs written by GPT-4 and by Knuth produce the same output, the programs themselves are not equivalent, or even similar.

[...]

Why do the chatbots favor the inferior algorithm? You can get a clue just by googling “word-ladder program.” Almost all the results at the top of the list come from websites such as Leetcode <https://leetcode.com/problems/word-ladder/>, GeeksForGeeks <https://www.geeksforgeeks.org/word-ladder-length-of-shortest-chain-to-reach-a-target-word/#> and RosettaCode <https://rosettacode.org/wiki/Word_ladder>. These sites, which apparently cater to job applicants and competitors in programming contests, feature solutions that call for generating all 125 single-letter variants of each word, as in the GPT programs. Because sites like these are numerous—there seem to be hundreds of them—they outweigh other sources, such as Knuth’s book (if, indeed, such texts even appear in the training set). Does that mean we should blame the poor choice of algorithm not on GPT but on Leetcode? I would point instead to the unavoidable weakness of a protocol in which the most frequent answer is, by default, the right answer.
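
[To make the bots' choice concrete, here is a rough sketch (mine, not code from the post) of the Leetcode-style strategy described above: a breadth-first search that, at each step, generates every single-letter variant of the current word, 125 of them for a five-letter word, and keeps the ones found in the dictionary.]

   import string
   from collections import deque

   def word_ladder(start, goal, dictionary):
       """Shortest ladder from start to goal, or None if there is none.

       Each step regenerates all 25 * len(word) single-letter variants
       and filters them against the dictionary, the approach favored by
       the GPT programs and the contest-prep sites."""
       queue = deque([[start]])
       seen = {start}
       while queue:
           ladder = queue.popleft()
           word = ladder[-1]
           if word == goal:
               return ladder
           for i in range(len(word)):
               for c in string.ascii_lowercase:
                   if c == word[i]:
                       continue
                   variant = word[:i] + c + word[i + 1:]
                   if variant in dictionary and variant not in seen:
                       seen.add(variant)
                       queue.append(ladder + [variant])
       return None

   print(word_ladder("smile", "spite", {"smile", "smite", "spite"}))
   # -> ['smile', 'smite', 'spite']

[Knuth's program, if I remember The Stanford GraphBase correctly, instead works from a precomputed graph of the five-letter words, so the variants are never regenerated on the fly.]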

[...]

For decades, architects of AI believed that true intelligence (whether natural or artificial) requires a mental model of the world. To make sense of what’s going on around you (and inside you), you need intuition about how things work, how they fit together, what happens next, cause and effect. Lenat insisted that the most important kinds of knowledge are those you acquire long before you start reading books. You learn about gravity by falling down. You learn about entropy when you find that a tower of blocks is easy to knock over but harder to rebuild. You learn about pain and fear and hunger and love—all this in infancy, before language begins to take root. Experiences of this kind are unavailable to a brain in a box, with no direct access to the physical or the social universe.

LLMs appear to be the refutation of these ideas. After all, they are models of language, not models of the world. They have no /embodiment/, no physical presence that would allow them to learn via the school of hard knocks. Ignorant of everything but mere words, how do they manage to sound so smart, so worldly?

[...]

