* David Masterson <[email protected]> [2026-03-09 23:41]:
> Ihor Radchenko <[email protected]> writes:
> 
> > Christian Moe <[email protected]> writes:
> >
> >> Looks like it's time to add a note to this effect to
> >> https://orgmode.org/worg/org-contribute.html, and possibly elsewhere as
> >> well. The points you make that go beyond copyright are well put, and I
> >> think it is worth setting them out. Even if the copyright issue is
> >> somehow resolved to GNU's satisfaction, that doesn't mean we want to
> >> open the floodgates.
> >
> > Yes, I plan to write a draft, once this and another thread settle.
> > I have also come up with an idea about copyright handling that we can
> > use before the official GNU guidance on LLMs is issued. But I still
> > need to check: several people on private GNU lists have raised concerns.
> >
> > And GNU guidance seems to be coming (after they consult lawyers). Maybe
> > in several months or so.
> 
> Hmm.  You've probably also seen this, but...
> 
> OpenAI has proven that LLMs have a fundamental problem -- they lie, and
> the lying is getting more pronounced in the newer models.  The basic
> problem is that they are trained *not* to say "I don't know", because
> saying that would break the foundation of their business plan.
> Something to incorporate in your draft...
> 
> https://www.science.org/content/article/ai-hallucinates-because-it-s-trained-fake-answers-it-doesn-t-know

Just a note that you are generalizing when you say "LLMs have a
fundamental problem" -- is there a study proving that the problem is
fundamental?

And this is not necessarily a deliberate act of “lying,” but rather a
byproduct of their training objectives.

-- 
Jean Louis
