Also of interest is James Gleick's "The Parrot in the Machine", New York
Review of Books, July 24, 2025.
Incidentally, and unrelatedly, the same issue contains a heartbreaking
report by David Shulman on the current situation in the West Bank.
Shana Tova!
Eli
Zitat von Mauricio Najarro via INDOLOGY <[email protected]>:
Just in case people find it useful, here’s an important and
well-known critique of LLMs from people currently working and
thinking carefully about all this:
https://dl.acm.org/doi/10.1145/3442188.3445922
Mauricio
On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY
<[email protected]> wrote:
Csaba Dezso wrote:
My question to the AI savvies among us would be: is confabulation
/ hallucination an integral and therefore essentially ineliminable
feature of LLMs?
I have extremely limited knowledge and experience of AI, but my
understanding of LLMs is that they work by choosing the next most
statistically likely word in their answer (though I'm not exactly
clear how they determine that), so their answers aren't based on
any kind of reasoning.
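For readers curious what "choosing the next most statistically likely word" could mean in practice, here is a toy sketch in Python. Everything in it (the tiny corpus, the function names) is invented for illustration; real LLMs use neural networks over subword tokens rather than raw word counts, but the greedy "pick the most probable continuation" step is analogous:

```python
# Toy next-word predictor from bigram counts (illustration only; not how a
# real LLM is implemented, though the decoding idea is analogous).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation of `word` (greedy decoding)."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # "cat", since it follows "the" most often here
```

The point the example makes concrete is that the model outputs whatever continuation is statistically likely given its training data, whether or not it is true; that statistical character is what the confabulation question above is getting at.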
Harry Spier
_______________________________________________
INDOLOGY mailing list
[email protected]
https://list.indology.info/mailman/listinfo/indology
--
Prof. Dr. Eli Franco
Hegergasse 8/15
Wien 1030
Austria