* Eli Zaretskii <e...@gnu.org> [2025-03-09 17:39]:
> > Date: Sun, 9 Mar 2025 16:08:24 +0300
> > From: Jean Louis <bugs@gnu.support>
> > Cc: van...@sdf.org, emacs-tangents@gnu.org
> >
> > Too many opinions, zero examples.
>
> You don't accept expert opinions without examples? That would mean
> you will accept only a very small amount of opinions.
Sure, you are an expert, and not just in the subject for which we know you publicly. But the question wasn't about getting "yes/no" answers; it was about examples of where an LLM, or any other type of artificial intelligence, made some true innovation, created something new. We talk and talk, but examples are missing.

- "Innovations created by artificial intelligence"
  https://duckduckgo.com/?q=Innovations+created+by+artificial+intelligence&ia=web

I can't find anything. The LLM itself is an innovation, but I am searching for an innovation created by an LLM.

> A complete example of using ML to solve non-trivial problems will take
> hours to describe and explain to someone who is not proficient in the
> domain. I don't have that kind of time sorry.

What you mentioned here wasn't the subject. Trivial or non-trivial problems can be solved with various tools: an LLM, a hammer, or many others. A book is also a tool. Finding a solution in a book isn't an innovation in itself; the solution was already there.

If a Large Language Model (LLM) could truly innovate, then at the point where that innovation is achieved, I don't think there would be any need to continue "training" the model.

LLM answers are inferred from tensors, and tensors are generated from defined written knowledge, or from trash knowledge; there are all kinds of it. Inferring information isn't innovation. Inference just means making educated guesses based on what you already know; nothing fancy there. Innovation involves coming up with new ideas and solutions (Wikipedia: innovation). They are different: inference is like guessing the next number in a sequence, while innovation is more about creating that whole series from scratch.
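To make the sequence analogy concrete, here is a toy sketch in Python (my own illustration, not anything proposed in this thread) of what inference amounts to: a bigram model that "predicts" the next token purely from frequencies it has already observed. It can only recombine what was in its training data; it cannot produce a continuation it has never seen.

```python
from collections import Counter, defaultdict

# Toy "language model": count how often each token follows each
# other token in a small training text.
training_text = "the cat sat on the mat the cat ate".split()

following = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation observed after `token`."""
    counts = following.get(token)
    if not counts:
        return None  # never seen: the model has nothing to infer from
    return counts.most_common(1)[0][0]

print(predict_next("the"))   # "cat", because "cat" followed "the" twice
print(predict_next("dog"))   # None, "dog" is outside the training data
```

The model interpolates within its training distribution; creating the training text itself is the part the model never does, which is the distinction I am drawing between inference and innovation.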
https://en.wikipedia.org/wiki/Innovation

On its own, the LLM does basically nothing: it can't think, can't innovate. It takes some input and context and uses the huge body of information inferred from tensors to generate some bullshit; if that aligns with the observer's expectations, it is impressive, and if not, click it again.

A program is not "creative"; rather, the author of the program is creative, as he created the tool that can implement the creation. Anthropomorphism is both entertaining and dangerous.

Here are some references that I could find about registering AI innovations:

https://asipi.org/lima2019/wp-content/uploads/sites/17/2019/11/Invenciones-por-Inteligencia-Artificial-Mitch-Feller-Barbara-Fiacco-Bennett-Rockney-comprimido.pdf

However, I have not yet found a single new innovation.

Let's conclude: there are no examples of any true innovation by an LLM in this discussion. They are difficult to find. Maybe they are out there, but hard to find.

Jean Louis

--
via emacs-tangents mailing list
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)