Jean Louis <bugs@gnu.support> writes:

> I just asked for examples, which I can't find easily.
As someone with a fair amount of experience in the field, it would
probably do you well to read some relevant research on the subject to
understand how some of these models work. Regardless, let's see if we
can address your question. The question is a little ill-posed, so some
further clarification would help.

1. By large language models, is it fair to assume that you are
   specifically focused _only_ on transformer-based neural networks?

2. Please provide 10 positive and 10 negative examples of what you
   mean by "innovation". For the examples to be relevant, please
   ensure that the innovative thing is text-based. Otherwise, while
   you may be describing what innovation means to you, it's not
   applicable in the domain where you seek examples of innovation by
   LLMs.

For each positive and negative example, please share what makes the
example "innovative" vs not. Please do not use any aid from an LLM in
generating these examples (lest the answers dilute your intended
meaning of the term).

-- 
Suhail

--- via emacs-tangents mailing list
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)