> Date: Mon, 24 Mar 2025 14:27:56 +0530 (IST)
> Cc: emacs-tangents@gnu.org
> From: Madhu <enom...@meer.net>
>
> * Eli Zaretskii <e...@gnu.org> <86o6xsgt6s....@gnu.org>
> Wrote on Sun, 23 Mar 2025 08:24:59 +0200
> > Not the models with which I have experience, no.  They would be
> > useless for me if they did that.  Because I have real hard problems
> > to solve, and they must be solved correctly and better than the
> > alternative methods which are not based on LLM, when compared by some
> > meaningful metrics.
>
> Are you talking of LLMs or of other ML models?
For the purpose of this discussion, what is the difference?  If by
"LLM" you mean only those models that produce text (either for human
consumption or text of programs), then no, I'm not talking about LLMs
of that kind.

> I can readily understand what you're saying if the domain is, say,
> OCR or similar, where the inputs and outputs are clearly definable.

"Clearly definable" is in the eye of the beholder.  Some problems have
only probabilistic solutions, in which case the output is some kind of
weight or score.

> > And even in other, much simpler, cases this is not so IME.  Consider
> > the "text completion" feature, for example, which is present in many
> > tools (including Emacs, but in Emacs it is not based on LLM): you type
> > text and the application offers you likely completions based on what
> > you typed till now.  If the completion is not what you intend to
> > write, why would you "have no choice but accept"?  You just keep
> > typing instead of pressing TAB to accept.
>
> I believe much of the hype and banking fraud about LLMs revolves
> around the plan to get the endtimes population to accept the
> decision-making made on their behalf by those who govern them, but
> through an unassailable, unquestionable intermediary, without directly
> implicating the authority which governs them.

I'm not sure this aspect is at all relevant to the present discussion.

---
via emacs-tangents mailing list
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)