* Van Ly via Emacs news and miscellaneous discussions outside the scope of other Emacs mailing lists <emacs-tangents@gnu.org> [2025-03-10 11:10]:
> > Sure. Though ANY output of ANY LLM, I only consider as a generated
> > text draft for human to be verified.
>
> Thank you for taking the trouble to generate the LLM response.
>
> > I don't think there is the answer below that you wanted to get. Can
> > it think? I don't think so.
>
> Your email address is bounced and that is why I've not messaged
> directly.

My email is bugs@gnu.support; maybe mailing systems change it on the
fly, or I used some other email by mistake.

> When you know to say, "I don't know." How do you think "AI"
> techniques may arrive at that thought?

I have been using a plethora of models; you may see the list inside:

https://github.com/gnusupport/LLM-Helpers/blob/main/bin/rcd-llm-dmenu-launcher.sh

A good thing, I find, is that I started with a 4 GB GPU, as that
experience of starting small gave me quite a good feeling for what can
be done with which model in relation to my personal and business
needs. You may see there are quite a number of models there, and more
beyond those that are not in that file.

There was never a single situation where a model said "I don't know".
That is the level of conversation I would like to reach one day in the
future, where the model is objectively real.

I have no idea how AI techniques could arrive at the thought that the
model doesn't know something. Maybe by minimizing the temperature and
somehow teaching the model (I don't know how) to provide only
information that is objectively real; see the sketch at the end of
this message.

But I have seen that it claims to know things it cannot know. You can
ask for the last knowledge date given to the model and then provide
news from after that date, and the model will claim it is not known,
not accepting the possibility that something happened in the future,
after its training.

Look at this:

Explain me transformation of power from alien forces using beams to
minimize people to size of ants

Response
--------

The transformation of power from alien forces using beams to minimize
people to the size of ants is a fascinating concept often explored in
science fiction. This process typically involves the use of advanced
technology or energy manipulation to alter the physical dimensions of
objects or beings. Here's a breakdown of how this transformation might
work:

Energy Source: The alien forces possess a highly advanced energy
source capable of emitting powerful beams. This energy is often
depicted as being more advanced than anything humans currently
understand, possibly involving quantum mechanics or other exotic
physics.

Beam Emission: The aliens direct these beams at the target, which in
this case are humans. The beams are finely tuned to interact with the
molecular structure of the target.

....

The above shows that the model pretends to know things, even though I
myself have no idea what I was writing about. There is always
something to say. Blah blah.

I don't use it for accuracy; I use it for guidance, text correction,
list generation, brainstorming, and improving my expressions.

ChatGPT is bullshit | Ethics and Information Technology:
https://link.springer.com/article/10.1007/s10676-024-09775-5

Jean Louis
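
P.S. To make the temperature idea above concrete, here is a minimal
sketch, not a tested recipe. It assumes a llama.cpp-compatible server
on localhost:8080 exposing the OpenAI-style /v1/chat/completions
endpoint; the model name "local-model" and the "answer I do not know"
system prompt are placeholders of mine, for illustration only:

    #!/bin/sh
    # Minimal sketch: query a local model with temperature 0, so it
    # always picks the most probable token instead of sampling.
    # Assumption: a llama.cpp-compatible server on localhost:8080 with
    # the OpenAI-style /v1/chat/completions endpoint; "local-model" is
    # a placeholder for whatever model is actually loaded.
    curl -s http://localhost:8080/v1/chat/completions \
         -H 'Content-Type: application/json' \
         -d '{
           "model": "local-model",
           "temperature": 0,
           "messages": [
             {"role": "system",
              "content": "If you are not certain, answer exactly: I do not know."},
             {"role": "user",
              "content": "What happened in the news yesterday?"}
           ]
         }'

Note that temperature 0 only removes the sampling randomness; it does
not make the model factual, so the "I do not know" instruction is a
hope, not a guarantee.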