Just in case people find it useful, here’s an important and well-known critique 
of LLMs from people currently working and thinking carefully about all this: 
https://dl.acm.org/doi/10.1145/3442188.3445922

Mauricio 

Sent from my iPhone

> On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY 
> <[email protected]> wrote:
> 
> 
> Csaba Dezso wrote:
> 
>> My question to the AI savvies among us would be: is confabulation / 
>> hallucination an integral and therefore essentially ineliminable feature 
>> of LLMs? 
> 
> I have extremely limited knowledge and experience of AI, but my 
> understanding of LLMs is that they work by choosing the next most 
> statistically likely word in their answer (again, I'm not exactly clear 
> how they determine that), so their answers aren't based on any kind of 
> reasoning. 
> Harry Spier
> 
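Since Harry's description raises the question of what "choosing the next most statistically likely word" means mechanically, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the tiny vocabulary, the made-up scoring function standing in for the neural network); it is not how any real model is implemented, only the shape of the next-token loop: score every candidate word, convert the scores to probabilities, pick one, append it, and repeat.

import math
import random

# A toy, hand-made "model": it assigns a score (logit) to every word
# in a tiny vocabulary. In a real LLM these scores come from a neural
# network with billions of parameters; the numbers here are invented
# purely for illustration.
VOCAB = ["manuscript", "edition", "reading", "is", "the", "."]

def toy_logits(context: str) -> list[float]:
    return [random.gauss(0.0, 1.0) for _ in VOCAB]  # stand-in scores

def softmax(logits: list[float]) -> list[float]:
    # Turn raw scores into a probability distribution over the vocabulary.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_word(context: str) -> str:
    probs = softmax(toy_logits(context))
    # The model then picks a word according to these probabilities.
    # Note what is optimized: statistical plausibility given the
    # context, not factual truth, which is why fluent but unsupported
    # continuations ("hallucinations") are always possible.
    return random.choices(VOCAB, weights=probs)[0]

context = "The oldest surviving"
for _ in range(5):
    context += " " + next_word(context)
print(context)

What the sketch makes concrete is that nothing in the loop checks truth: the only objective at each step is a plausible continuation, which bears directly on Csaba's question of whether hallucination is eliminable.
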
_______________________________________________
INDOLOGY mailing list
[email protected]
https://list.indology.info/mailman/listinfo/indology
