* Van Ly via Emacs news and miscellaneous discussions outside the scope of other Emacs mailing lists <emacs-tangents@gnu.org> [2025-03-11 09:56]:
> > * Van Ly via Emacs news and miscellaneous discussions outside the scope of
> > other Emacs mailing lists <emacs-tangents@gnu.org> [2025-03-10 11:10]:
> >
> > There was never a single situation where model said "I don't
> > know". That is the level of conversation I would like to get one time
> > in future, where model is objectively real.
>
> Yes. That remains unsolved. The models don't operate on error bars.
Some models do ask continuous questions, but only when instructed
specifically to do so. I have always wondered why a model doesn't ask
me for more information when that is necessary. That sits in the same
corner of the problem as the "I don't know" mystery. These days I am
getting very rude answers, which gives the model an emotional
character; I like it, it is very entertaining.

https://gitea.com/gnusupport/LLM-Helpers/src/branch/main/bin/rcd-llm-text-query.sh

> > I have no idea how AI techniques could arrive to the thought that
> > model doesn't know something. Maybe minimizing temperature and
> > teaching model somehow (don't know how) to provide only information
> > that is objectively real.
>
> This short story appeared on my feed.
>
> ask a foolish question
> * robert sheckley
> => https://www.gutenberg.org/cache/epub/33854/pg33854-images.html a short
> story

Good story! There are parallels with how a Large Language Model (LLM)
talks, though in the opposite way. There is so much knowledge out
there, easy to find, that the LLM training process never assessed.

Jean Louis

---
via emacs-tangents mailing list
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)
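Concretely, the "instructed specifically" part can go into the system
prompt and the "minimizing temperature" part into the request
parameters. What follows is only a minimal sketch, assuming an
OpenAI-compatible endpoint such as a local llama.cpp server; the URL,
model name, prompts and sample question are placeholders, and this is
not the actual rcd-llm-text-query.sh:

#!/bin/sh
# Sketch only: instruct the model to admit uncertainty and to ask for
# missing information, with a low temperature. The endpoint and model
# name are assumptions (a local llama.cpp server speaking the OpenAI
# chat-completions protocol on localhost:8080).
curl -s http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d @- <<'EOF'
{
  "model": "local-model",
  "temperature": 0.1,
  "messages": [
    {"role": "system",
     "content": "If you are not sure, answer exactly: I don't know. If the request lacks necessary information, ask one clarifying question before answering."},
    {"role": "user",
     "content": "Convert my file to UTF-8."}
  ]
}
EOF

With an underspecified question like the one above, a model that
follows the system prompt should ask which file is meant and what its
current encoding is before answering; whether it really does so, or
really says "I don't know", still depends on the model, which is the
unsolved part.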