Dear Harry,

You gave an excellent description of how the text is generated. The probabilities for which word comes next are extracted from the input texts (so no syntactic or semantic rules, just statistics).
Besides these probabilities, there are also random number generators, which are used to vary the generated text. So nothing new or creative can appear, only what was entered, and most of the time in a distorted form.

Claudius Teodorescu

On Sun, 21 Sept 2025 at 14:19, Mauricio Najarro via INDOLOGY <[email protected]> wrote:

> Just in case people find it useful, here’s an important and well-known
> critique of LLMs from people currently working and thinking carefully about
> all this: https://dl.acm.org/doi/10.1145/3442188.3445922
>
> Mauricio
>
> Sent from my iPhone
>
> On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY <[email protected]> wrote:
>
> Csaba Dezso wrote:
>
>> My question to the AI savvies among us would be: is confabulation /
>> hallucination an integral and therefore essentially ineliminable feature of
>> LLMs?
>
> I have extremely limited knowledge and experience of AI, but my
> understanding of LLMs is that they work by choosing the next most
> statistically likely word in their answer (again, I'm not exactly clear how
> they determine that), so their answers aren't based on any kind of
> reasoning.
>
> Harry Spier
>
> _______________________________________________
> INDOLOGY mailing list
> [email protected]
> https://list.indology.info/mailman/listinfo/indology

-- 
Kind regards,
Claudius Teodorescu
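The mechanism described above — next-word probabilities extracted from the input texts, plus a random number generator to vary the output — can be sketched as a toy bigram model. This is a deliberate simplification for illustration only (real LLMs use neural networks over subword tokens, not raw word counts), and all function names here are invented for the example:

```python
import random
from collections import defaultdict

def build_bigram_counts(corpus):
    """Count how often each word follows each other word in the input text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def sample_next(counts, word, rng):
    """Pick a follower of `word`, weighted by how often it was observed."""
    followers = counts[word]
    choices = list(followers)
    weights = [followers[w] for w in choices]
    return rng.choices(choices, weights=weights, k=1)[0]

def generate(corpus, start, length, seed=0):
    counts = build_bigram_counts(corpus)
    # The random number generator is what produces variation between runs;
    # a fixed seed makes the "variation" reproducible.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        if not counts[out[-1]]:
            break  # no observed follower: the model cannot invent a new one
        out.append(sample_next(counts, out[-1], rng))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
print(generate(corpus, "the", 5))
```

Note that every word the sketch emits must already occur in the corpus, which illustrates the point that only what was entered can come back out, possibly recombined.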
