It's apparently a feature, not a bug, according to research from OpenAI: "We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty..."

https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf
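To make the paper's incentive argument concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not code from the paper; the function names and scoring values are assumptions): under accuracy-only grading, a guess has positive expected score whenever there is any chance of being right, while "I don't know" always scores zero, so a model optimised against such benchmarks learns to guess rather than abstain.

    # Toy illustration of the incentive argument: compare the expected
    # score of guessing vs. abstaining under different grading schemes.

    def expected_score(p_correct, wrong_penalty=0.0):
        """Expected score of answering: +1 if right, -wrong_penalty if wrong."""
        return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

    ABSTAIN_SCORE = 0.0  # saying "I don't know" earns nothing either way

    for p in (0.1, 0.3, 0.5):
        plain = expected_score(p)                          # accuracy-only grading
        penalised = expected_score(p, wrong_penalty=1.0)   # wrong answers cost a point
        print(f"p(correct)={p:.1f}: guess={plain:+.2f} vs abstain={ABSTAIN_SCORE:+.2f}"
              f" | with penalty: guess={penalised:+.2f}")

    # Under plain accuracy, guessing beats abstaining for every p > 0, so
    # training and evaluation against such metrics select for confident
    # confabulation. A penalty c for wrong answers flips the incentive
    # toward abstaining whenever p < c / (1 + c).

On this view hallucination is not ineliminable in principle, but it is the behaviour that standard benchmarks reward, which matches Csaba's experience below of confident but fabricated references.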
Best wishes,
Daniel

> On 21 Sep 2025, at 07:57, Csaba Dezso via INDOLOGY <[email protected]> wrote:
>
> Dear Colleagues,
> Recently I have experimented with using ChatGPT as a research tool. You can follow our interaction here:
>
> https://chatgpt.com/share/68cefb37-52a4-800e-9da0-9960fbe2d5ad
>
> The general answers I got looked promising and often intriguing, but when it came to more precise references, I got mostly hallucinations. It even confabulated Sanskrit quotations. (ChatGPT did not notice the mistake I made in the question, writing Īśvarakṛṣṇa instead of Īśvaradatta. Claude did notice it, but then it also went on hallucinating.)
> My question to the AI savvies among us would be: is confabulation / hallucination an integral and therefore essentially ineliminable feature of LLMs?
>
> Best wishes,
> Csaba Dezső
