* Dr. Arne Babenhauserheide <arne_...@web.de> [2025-03-09 23:03]:
> Maybe this is the misconception: before Alpha Go, no computer was able
> to beat even moderate human players.
> 
> Go was the bane of game-AI.

https://arstechnica.com/information-technology/2022/11/new-go-playing-trick-defeats-world-class-go-ai-but-loses-to-human-amateurs/

I read about it there, and it seems the Go AI still has weaknesses.

Though you may see it is a game invented long ago, with computer versions from well before AlphaGo:
https://www.myabandonware.com/game/go-kyh

It wasn't AI that took the old game and made a new version, and it
wasn't "artificial intelligence" that decided "let me train myself to
beat humans at this game".

Here is a scientific paper, from a study with 100+ NLP researchers, that concludes:
https://arxiv.org/html/2409.04109v1

"Despite this, no evaluations have shown that LLM systems can take the very 
first step of producing novel, expert-level ideas, let alone perform the entire 
research process."

Here is the answer by ChatGPT:
https://chatgpt.com/share/67ce7cb5-94f4-800a-8fe0-32fc60b30d00
I don't consider the answer authoritative; that is why I am looking
for examples. LLM answers are like drafts, pointers for further
review. I never consider them conclusive.

It seems that people wonder about the same subject as I do:

Large language models are powerful imitators, but not innovators
https://the-decoder.com/large-language-models-are-powerful-imitators-but-not-innovators/#summary

Discovering novel functions in everyday tools is not about finding the 
statistically nearest neighbor from lexical co-occurrence patterns. Rather, it 
is about appreciating the more abstract functional analogies and causal 
relationships between objects that do not necessarily belong to the same 
category or are associated in text. In these examples, people must use broader 
causal knowledge, such as understanding that tracing an object will produce a 
pattern that matches the object’s shape, to produce a novel action that has not 
been observed or described before, in much the same way a scientific theory, 
for example, allows novel interventions on the world (Pearl, 2000). Compared 
with humans, large language models are not as successful at this type of 
innovation task. On the other hand, they excel at generating responses that 
simply demand some abstraction from existing knowledge.

https://journals.sagepub.com/doi/10.1177/17456916231201401

If it were that easy to say "a text-generating LLM can innovate", then with
all the hundreds of LLM models, such huge fake intelligence would be expected
to easily prove that it can innovate.

Rather, scientists demonstrate that it can't. It's a good tool for
brainstorming, but it all comes from existing knowledge.

-- 
Jean Louis

---
via emacs-tangents mailing list 
(https://lists.gnu.org/mailman/listinfo/emacs-tangents)