Okay, I must come back to this.

* Rudolf Adamkovič <rud...@adamkovic.org> [2025-02-28 16:00]:
> Jean Louis <bugs@gnu.support> writes:
> 
> > [...] that answer is not innovative.
> 
> Thank you for the explanation.  I think I now see what you mean.
> 
> In that case, we are simply talking about two different topics.
> 
> I stated that large language models can solve never-seen instances of a
> problem.  That is, the model has seen the problem in the training set
> but not the particular instance in the test set.

That is actually always the case, so the test by itself establishes nothing.

If you input "I am an Emacs enthusiast" as a system message, the model
can infer numerous related concepts without needing explicit
question-answer pairs in its training data; it resolves likely answers
probabilistically. This is fundamentally how Large Language Models
(LLMs) function.

You can't take the mere fact that a question wasn't in the dataset as
proof that the Large Language Model (LLM) invented something new...
come on.

Imagine if all the questions people ever ask were already inside the
dataset. That would not make any sense.

Let me ask DeepSeek about it:

LLMs and Novelty: LLMs like GPT-4 do not "invent" in the human sense. They 
generate responses based on patterns and information in their training data. If 
a question or concept wasn't explicitly in the dataset, the model can still 
produce a novel response by combining and rephrasing learned information in new 
ways. This might appear inventive, but it’s fundamentally an interpolation or 
extrapolation of existing knowledge, not true creation.

So the Large Language Model (LLM) said basically the same thing as I
did, the same thought in different words.

I am speaking from what I have learned about vectors and tensors, and
I am a beginner.

> I use the terms (1) problem, (2) problem instance, (3) training set,
> (4) test set, per their standard, precise, mathematical definitions.
> I said nothing more and nothing less, and with the standard
> terminology, my statement holds true, as can be trivially
> demonstrated, and as I explained.

Hmm. I am not fully getting that paragraph. A training set is just
that: material for tuning the vectors in a probabilistic manner. If
training pushes the prediction accuracy very high, the model starts to
appear "smarter", while in reality it just matches probabilities
better than a model that was trained less.
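The "tuning the vectors" idea can be sketched as a toy next-token
model. Everything here is invented for illustration (the candidate
tokens, the update rule); it only shows that "training" is nudging
numbers until the observed continuation becomes the most probable one:

```python
import math

# Toy "model": one logit (a plain number) per candidate next token.
logits = {"editor": 0.0, "banana": 0.0, "compiler": 0.0}

def softmax(ls):
    """Turn raw logits into a probability distribution."""
    m = max(ls.values())
    exps = {t: math.exp(v - m) for t, v in ls.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def train_step(observed, lr=1.0):
    """One cross-entropy gradient step: raise the observed token's
    probability, lower the others.  "Learning" is only this nudging."""
    probs = softmax(logits)
    for t in logits:
        target = 1.0 if t == observed else 0.0
        logits[t] -= lr * (probs[t] - target)

before = softmax(logits)["editor"]
for _ in range(20):
    train_step("editor")   # the corpus keeps saying "Emacs is an editor"
after = softmax(logits)["editor"]
assert after > before      # "smarter" only in the sense of probability
```

After enough steps the model predicts "editor" with high probability,
yet nothing happened except arithmetic on three numbers.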

Another statement by DeepSeek:

Limitations of LLMs: LLMs don’t have true understanding, intent, or
creativity. They rely on statistical correlations in their training
data. While they can generate outputs that seem novel or insightful,
these are still rooted in the data they were trained on, not in
independent thought or invention.

> Now to your views.  As for "innovation" and "true invention", I have
> nothing to say to that end.  In fact, I have no idea what what those
> terms mean, if we were to measure.  I have the same issue with the
> linked GNU article on "artificial intelligence".

That a computer can be programmed to find, by probability, something a
human can't find as fast makes it a great tool, but not necessarily an
innovation.

What is innovation?

* Overview of noun innovation

The noun innovation has 3 senses (first 2 from tagged texts)
1. (3) invention, innovation -- (a creation (a new device or process) resulting 
from study and experimentation)
2. (1) invention, innovation, excogitation, conception, design -- (the creation 
of something in the mind)
3. initiation, founding, foundation, institution, origination, creation, 
innovation, introduction, instauration -- (the act of starting something for 
the first time; introducing something new; "she looked forward to her 
initiation as an adult"; "the foundation of a new scientific society")

(note that a single word has multiple definitions depending on the context)

Model doesn't study anything. It doesn't see. It doesn't evaluate.

We are humans who anthropomorphize the Large Language Model (LLM)
because of our fascination.

The Hells Angels often refer to their motorcycles as 'babies', which
can be seen as a form of personification or anthropomorphism,
attributing human-like qualities to inanimate objects such as bikes.

We have got new words in the area of artificial intelligence.

So we have got a new context.

to train a child:

discipline, train, check, condition -- (develop (children's) behavior by 
instruction and practice; especially to teach self-control)

but to train a model is a different context; it is not related to humans, so 
dictionaries must get updated:

- to train the LLM means:

To train an LLM (Large Language Model) means to expose it to large
amounts of data so it can learn patterns, relationships, and
structures in the text to generate coherent and contextually relevant
responses.

- to learn in the LLM context means:

To learn, in the context of an LLM, means to adjust its internal
parameters (weights) based on patterns and relationships in the
training data, enabling it to predict and generate text that aligns
with the input it receives.

"Adjusting internal parameters" does not equate to "learning" in the
normal sense; rather, it is merely a modification of numerical values
that influence the probability of outcomes.
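A minimal sketch of that point: below, "learning" is nothing but
counting co-occurrences and normalizing the counts into probabilities.
The "knowledge" is just numbers in a table; the tiny corpus is made up
for illustration:

```python
from collections import Counter, defaultdict

corpus = "emacs is an editor . emacs is free software .".split()

# "Training": adjust numerical values (here, counts) -- nothing else.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_probs(word):
    """Inference: read the stored numbers back as probabilities."""
    c = counts[word]
    total = sum(c.values())
    return {t: n / total for t, n in c.items()}

print(next_token_probs("is"))   # {'an': 0.5, 'free': 0.5}
```

No understanding, intent, or evaluation happened anywhere; the model
"knows" that "is" is followed by "an" or "free" only as stored ratios.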

>   My *narrow* statement was true, and you are taking a *wider* view.
> 
> Thank you again for taking the time to explain your view to me!

But all that wasn't important to me.

I wish to find anything truly innovative that was invented by some
"AI" Large Language Model (LLM).

I can't find it on the Internet.

-- 
Jean Louis

---
via emacs-tangents mailing list 
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)