Dear Colleagues,
Recently I have experimented with using ChatGPT as a research tool. You can 
follow our interaction here:

https://chatgpt.com/share/68cefb37-52a4-800e-9da0-9960fbe2d5ad

The general answers I got looked promising and often intriguing, but when it 
came to more precise references, I got mostly hallucinations. It even 
confabulated Sanskrit quotations. (ChatGPT did not notice the mistake I made in 
the question, writing Īśvarakṛṣṇa instead of Īśvaradatta. Claude did notice it, 
but then it also went on hallucinating.)
My question to the AI-savvy among us would be: is confabulation / 
hallucination an integral and therefore essentially ineliminable feature of 
LLMs? 

Best wishes,
Csaba Dezső
_______________________________________________
INDOLOGY mailing list
[email protected]
https://list.indology.info/mailman/listinfo/indology
