That will happen soon if people do not follow Antonia's advice.

Dr. Jeffery D. Long
Carl W. Zeigler Professor of Religion, Philosophy, & Asian Studies
School of Arts & Humanities
Elizabethtown College, Elizabethtown, PA
https://etown.academia.edu/JefferyLong
Series Editor, Explorations in Indic Traditions: Ethical, Philosophical, and Theological (Lexington Books)

"One who makes a habit of prayer and meditation will easily overcome all difficulties and remain calm and unruffled in the midst of the trials of life." (Holy Mother Sarada Devi)
"We are a way for the Cosmos to know itself." (Carl Sagan)
> On Sep 21, 2025, at 1:53 PM, Uskokov, Aleksandar via INDOLOGY <[email protected]> wrote:
>
> How long before it appears as such in a publication?
>
> From: INDOLOGY <[email protected]> on behalf of Lyne Bansat-Boudon via INDOLOGY <[email protected]>
> Sent: Sunday, September 21, 2025, 1:30:29 PM
> To: Madhav Deshpande <[email protected]>; Antonia Ruppel <[email protected]>
> Cc: Indology List <[email protected]>
> Subject: Re: [INDOLOGY] AI hallucinations
>
> Another example, found at random during a series of linked searches. I quote:
>
> Aperçu IA ["AI Overview"]
> "Dhvanyaloka" se traduit en français par "monde des significations implicites" ou "lumière de la suggestion"
>
> 'Dhvanyaloka' translates into English as 'world of implied meanings' or 'light of suggestion'.
>
> Automatism as an intellectual principle.
>
> Best wishes,
> Lyne
>
> Lyne Bansat-Boudon
> Directeur d'études (Religions of India)
> École pratique des hautes études, Section des sciences religieuses
> Honorary senior member, Institut universitaire de France
>
> From: INDOLOGY <[email protected]> on behalf of Antonia Ruppel via INDOLOGY <[email protected]>
> Sent: Sunday, 21 September 2025, 16:42
> To: Madhav Deshpande <[email protected]>
> Cc: Indology List <[email protected]>
> Subject: Re: [INDOLOGY] AI hallucinations
>
> I think the simple rule for using AI for knowledge purposes is: use it to do grunt work in cases where it is easier for you to proof the result than to do the work yourself. I've been using DeepSeek to generate running vocabulary commentaries (which then still take a fair while to get from 75-80% correct to actually correct); friends of mine who write code say that they find doing the work themselves a lot easier than asking AI to do it and then checking the result for the inevitable bugs.
>
> AI is made to sound convincing: when you ask it about something where you don't know the answer, you have no way of knowing whether what it tells you is right or merely sounds right. It *is* good for brainstorming, if you are looking for ideas and then intend to follow up on its answers to check whether any of the references it gives (to articles, legal precedents, historical events or Pāṇinian rules) refer to things that actually exist.
>
> And of course, the constant use of AI that its creators are trying to push us towards uses up huge amounts of natural resources (such as drinking-quality water to cool the machinery) and requires more energy than can safely be generated if we are serious about wanting to prevent further climate change.
>
> Antonia
>
> On Sun, 21 Sept 2025 at 16:28, Madhav Deshpande via INDOLOGY <[email protected]> wrote:
>
> Several times when I have asked ChatGPT and other AI chatbots something about Pāṇini, they have given me rules that were irrelevant and had the wrong numbers. These chatbots cannot be trusted for specifics.
>
> Madhav M. Deshpande
> Professor Emeritus, Sanskrit and Linguistics
> University of Michigan, Ann Arbor, Michigan, USA
> Senior Fellow, Oxford Centre for Hindu Studies
> Adjunct Professor, National Institute of Advanced Studies, Bangalore, India
> [Residence: Campbell, California, USA]
>
> On Sun, Sep 21, 2025 at 7:03 AM Harry Spier via INDOLOGY <[email protected]> wrote:
>
> Thank you, Claudius.
>
> I've wondered whether, in addition to this statistical generation of the text, there is some kind of "algorithmic monitoring" to eliminate undesirable answers (undesirable for perhaps good reasons, or not-so-good ones).
>
> For example, a few months ago, when AI was coming up on the list, I typed "What are the advantages of AI" into Google and got an AI-generated paragraph or two. But when I then typed "What are the disadvantages of AI" into Google, I did not get any AI-generated answer. A few weeks later I repeated the experiment and the situation had changed: I got AI-generated answers in Google for both "What are the advantages of AI" and "What are the disadvantages of AI?".
>
> Harry Spier
>
> On Sun, Sep 21, 2025 at 8:03 AM Claudius Teodorescu <[email protected]> wrote:
>
> Dear Harry,
>
> You gave an excellent definition of how the text is generated. The probabilities for what word comes next are extracted from the input texts (so no syntactic or semantic rules, just statistics).
>
> Besides these probabilities, there are also random number generators, which are used to vary the generated text.
>
> So nothing new or creative can appear: only what was entered, and most of the time in a distorted form.
>
> Claudius Teodorescu
>
> On Sun, 21 Sept 2025 at 14:19, Mauricio Najarro via INDOLOGY <[email protected]> wrote:
>
> Just in case people find it useful, here's an important and well-known critique of LLMs from people currently working and thinking carefully about all this: https://dl.acm.org/doi/10.1145/3442188.3445922
>
> Mauricio
>
> On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY <[email protected]> wrote:
>
> Csaba Dezso wrote:
> > My question to the AI savvies among us would be: is confabulation/hallucination an integral and therefore essentially ineliminable feature of LLMs?
>
> I have extremely limited knowledge and experience of AI, but my understanding of LLMs is that they work by choosing the next most statistically likely word in their answer (again, I'm not exactly clear how they determine that), so their answers aren't based on any kind of reasoning.
> Harry Spier
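[Editor's note: since the thread describes the generation mechanism only in prose, here is a minimal, self-contained sketch of what Harry and Claudius describe above: next-word probabilities counted from a training text, plus a random number generator to vary the output. The toy corpus, function names, and "temperature" knob are all invented for illustration; real LLMs use neural networks over subword tokens rather than bigram counts, but the sampling principle is the same.]

# A toy bigram "language model": the probability of the next word comes
# only from counting which word followed which in the training text, as
# Claudius describes; a random number generator then varies the output.
import random
from collections import Counter, defaultdict

corpus = ("the rule is cited by the commentator "
          "the rule is quoted by the grammarian "
          "the commentator cites the rule").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, temperature=1.0):
    """Pick the next word from the counted frequencies.

    temperature < 1 sharpens the distribution (more predictable);
    temperature > 1 flattens it (more varied, and more often wrong).
    """
    counts = follows[prev]
    words = list(counts)
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(words, weights=weights)[0]

def generate(start, n=8, temperature=1.0):
    out = [start]
    for _ in range(n):
        if out[-1] not in follows:   # dead end: continuation never seen
            break
        out.append(next_word(out[-1], temperature))
    return " ".join(out)

print(generate("the", temperature=0.5))   # conservative
print(generate("the", temperature=1.5))   # more varied, less faithful

[Run twice, the same starting word can yield different continuations; that, and nothing deeper, is where the apparent "creativity" (and the confidently wrong Pāṇini-sūtra numbers Madhav reports) comes from.]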
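[Editor's note: Harry's earlier question about "algorithmic monitoring" can also be made concrete. Nobody outside the companies involved knows whether or how Google filters its AI answers, so the following is purely a hypothetical toy, with an invented blocklist standing in for whatever policy, if any, was actually in force.]

# Purely hypothetical: a post-generation filter that decides whether an
# AI-generated answer is displayed at all. The blocklist is invented for
# illustration and implies nothing about what any real system does.
BLOCKED_PHRASES = {"disadvantages of ai"}

def moderated_answer(query, raw_answer):
    """Return the generated answer, or None to suppress it entirely."""
    if any(phrase in query.lower() for phrase in BLOCKED_PHRASES):
        return None   # the user simply sees no AI overview
    return raw_answer

print(moderated_answer("What are the advantages of AI?", "It is fast."))
print(moderated_answer("What are the disadvantages of AI?", "..."))  # -> None

[Relaxing or removing such a rule would produce exactly the change Harry observed a few weeks later, though his observation is of course equally consistent with other explanations.]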
_______________________________________________
INDOLOGY mailing list
[email protected]
https://list.indology.info/mailman/listinfo/indology
