Jean Louis <[email protected]> writes:

> * David Masterson <[email protected]> [2026-03-09 23:41]:
>> OpenAI has proven that LLMs have a fundamental problem -- they lie and
>> their lying is getting more pronounced in the newer models.  The basic
>> problem is they are trained to *not* say "I don't know" because saying
>> that would break the foundation of their business plan.  Something to
>> incorporate in your draft...
>> 
>> https://www.science.org/content/article/ai-hallucinates-because-it-s-trained-fake-answers-it-doesn-t-know
>
> Just that you are generalizing when saying "LLMs have a fundamental
> problem" -- did you make a study to prove that fundamental problem?

Did you read the article?

»“Fixing hallucinations would kill the product,” says Wei Xing, an AI
 researcher at the University of Sheffield.«

Not everyone quoted in the article agrees that the problem is fundamentally unfixable, but the article reports on a study and asks experts in the field what that study means for AI.

Best wishes,
Arne
-- 
Being unpolitical
means being political
without noticing it.
https://www.draketo.de
