A position that is agreeable, logical and prescient. It reflects in part the
views of Geoffrey Hinton, of those who have contributed to the development of
neural networks in recent years, and of the San Francisco consensus.

In light of the threads of the last few days - and of the immediate responses
to the research cited above - it would be worth rediscovering the concept of
"auctoritas" before making poorly grounded, absolutist claims across dozens of
very long posts.

It would also be worth reading the literature, eg:

Matteo Wong, ‘The GPT Era Is Already Ending’ (*The Atlantic*, 6 December 2024)
<https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906>.

Zhong-Zhi Li and others, ‘From System 1 to System 2: A Survey of Reasoning
Large Language Models’ (2025) arXiv:2502.17419 [cs.AI]
<https://arxiv.org/abs/2502.17419>.

Hunter Lightman and others, ‘Let’s Verify Step by Step’ (2023)
arXiv:2305.20050 <https://doi.org/10.48550/arXiv.2305.20050>.

Guan Wang and others, ‘OpenChat: Advancing Open-source Language Models with
Mixed-Quality Data’ (2023) arXiv:2309.11235 [cs.CL]
<https://arxiv.org/abs/2309.11235>.

Guan Wang and others, ‘Hierarchical Reasoning Model’ (2025)
arXiv:2506.21734 [cs.AI] 2–5 <https://arxiv.org/abs/2506.21734> (presenting
HRM as a ‘brain-inspired’ recurrent architecture designed for reasoning).

Zhong-Zhi Li and others, ‘Toward Large Reasoning Models: A Survey’ (2025)
arXiv:2501.09686 [cs.AI] 2–5 <https://arxiv.org/abs/2501.09686>.

Jason Wei and others, ‘Chain-of-Thought Prompting Elicits Reasoning in
Large Language Models’ (2022) 35 Advances in Neural Information Processing
Systems 24824, 24825–24830 <https://dl.acm.org/doi/10.5555/3600270.3602070>
(defining chain-of-thought as intermediate reasoning steps and reporting
large gains on GSM8K math word problems).

Takeshi Kojima and others, ‘Large Language Models are Zero-Shot Reasoners’
(2022) 35 Advances in Neural Information Processing Systems 22199,
22201–22205 <https://dl.acm.org/doi/10.5555/3600270.3601883> (on the
‘zero-shot’ procedure where models are prompted to ‘think step by step’ to
generate reasoning chains during inference).

Shuofei Qiao and others, ‘Reasoning with Language Model Prompting: A Survey’
(ACL 2023) 4–6 <https://aclanthology.org/2023.acl-long.294/> (providing a
comprehensive review of how CoT functions as a procedural layer on top of
predictive architectures).

Shunyu Yao and others, ‘Tree of Thoughts: Deliberate Problem Solving with
Large Language Models’ (2023) arXiv:2305.10601
<https://arxiv.org/abs/2305.10601>.

Maciej Besta and others, ‘Graph of Thoughts: Solving Elaborate Problems
with Large Language Models’ (2023) arXiv:2308.09687
<https://arxiv.org/abs/2308.09687> (extending beyond CoT/ToT by modelling
‘thoughts’ as a graph with dependencies and feedback loops, ie structured
exploration rather than a single linear narrative).

Lilian Weng, ‘LLM Powered Autonomous Agents’ (*Lil’Log*, 23 June 2023)
<https://lilianweng.github.io/posts/2023-06-23-agent>.

Lei Wang and others, ‘A Survey on Large Language Model Based Autonomous
Agents’ (2024) 18(6) *Frontiers of Computer Science* 186345
<https://doi.org/10.1007/s11704-024-40231-1>.

David Ha and Jürgen Schmidhuber, ‘World Models’ (2018) arXiv:1803.10122
[cs.LG] <https://arxiv.org/abs/1803.10122>.

David Ha and Jürgen Schmidhuber, ‘Recurrent World Models Facilitate Policy
Evolution’ (2018) arXiv:1809.01999 [cs.LG] <https://arxiv.org/abs/1809.01999>;
published version
<https://proceedings.neurips.cc/paper/2018/file/2de5d16682c3c35007e4e92982f1a2ba-Paper.pdf>.

‘World Models Race 2026: How LeCun, DeepMind, and World Labs Are Redefining
the Path to AGI’ (*Introl Blog*, 3 January 2026)
<https://introl.com/blog/world-models-race-agi-2026> (discussing a move
toward a form of reasoning grounded in causality and physics rather than
statistical mimicry).

Julian Schrittwieser and others, ‘Mastering Atari, Go, chess and shogi by
planning with a learned model’ (2020) 588 Nature 604, 604
<https://www.nature.com/articles/s41586-020-03051-4> (presenting MuZero as
‘combining a tree-based search with a learned model’).

For the definitive overview of neurosymbolic AI as the ‘third wave’ of
artificial intelligence focused on structured reasoning and explainability,
see Artur d’Avila Garcez and Luís Lamb, ‘Neurosymbolic AI: The 3rd Wave’
(2023) 56(11) Artificial Intelligence Rev 12387, 12388–12392
<https://arxiv.org/abs/2012.05876>.

Giuseppe Marra and others, ‘From Statistical Relational to Neurosymbolic
Artificial Intelligence: A Survey’ (2024) 328 *Artificial Intelligence* 104062
<https://www.sciencedirect.com/science/article/pii/S0004370223002084?via%3Dihub>.

Lukas Nel, ‘Do Large Language Models Know What They Don’t Know?
KalshiBench: A New Benchmark for Evaluating Epistemic Calibration via
Prediction Markets’ (2025) arXiv:2512.16030 [cs.CL]
<https://arxiv.org/abs/2512.16030> (reporting systematic miscalibration and
overconfidence, with models expressing confidence that outstrips
correctness).

Melanie Mitchell and David C Krakauer, ‘The Debate Over Understanding in
AI’s Large Language Models’ (2023) 120(13) Proceedings of the National
Academy of Sciences e2215907120
<https://www.pnas.org/doi/full/10.1073/pnas.2215907120>.

Melanie Mitchell, ‘AI’s Challenge of Understanding the World’ (2023)
382(6671) Science eadm8175
<https://www.science.org/doi/10.1126/science.adm8175>.

François Chollet and Mike Knoop, ‘ARC Prize 2025: Technical Report’ (2026)
arXiv:2601.10904v1 [cs.AI] <https://arxiv.org/html/2601.10904v1>
(characterizing the 2025 reasoning breakthroughs as still being
‘fundamentally related to model knowledge’ rather than human-like cognitive
flexibility).

Qianjun Pan and others, ‘A Survey of Slow Thinking-based Reasoning LLMs
using Reinforced Learning and Inference-time Scaling Law’ (2025)
arXiv:2505.02665 <https://arxiv.org/abs/2505.02665> (synthesising
‘slow thinking’ methods: inference-time compute scaling,
search/verification, and iterative refinement loops to improve structured
reasoning beyond single-pass plausibility).

Yixin Ji and others, ‘A Survey of Test-Time Compute: From Intuitive
Inference to Deliberate Reasoning’ (2025) arXiv:2501.02497
<https://arxiv.org/abs/2501.02497> (defining ‘test-time compute’ as a
mechanism that allows a model to ‘work’ a problem during inference, using
techniques like self-correction, repeated sampling, and tree search to
solve tasks that are impossible for a single forward pass).

Yann LeCun, ‘A Path Towards Autonomous Machine Intelligence’ (2022)
OpenReview <https://openreview.net/pdf?id=BZ5a1r-kVsf>.

Rizwan Qureshi, ‘Thinking Beyond Tokens: From Brain-Inspired Intelligence
to Cognitive Foundations for Artificial General Intelligence’ (2025)
arXiv:2507.00951 <https://arxiv.org/abs/2507.00951>.

Mohamed Amine Ferrag, Norbert Tihanyi and Merouane Debbah, ‘From LLM
Reasoning to Autonomous AI Agents: A Comprehensive Review’ (2025)
arXiv:2504.19678 [cs.AI] <https://arxiv.org/abs/2504.19678>.

Jason Wei and others, ‘Emergent Abilities of Large Language Models’ (2022)
arXiv:2206.07682 [cs.CL] <https://doi.org/10.48550/arXiv.2206.07682>.

Melanie Mitchell, ‘Debates on the Nature of Artificial General
Intelligence’ (2024) 383(6689) Science eado7069
<https://www.science.org/doi/10.1126/science.ado7069>.

Shane Legg and Marcus Hutter, ‘Universal Intelligence: A Definition of
Machine Intelligence’ (2007) arXiv:0712.3329 [cs.AI]
<https://arxiv.org/abs/0712.3329>.

Ben Goertzel and Cassio Pennachin (eds), *Artificial General Intelligence*
(Springer 2007) 1–5.

Meredith Ringel Morris and others, ‘Levels of AGI: Operationalizing
Progress on the Path to AGI’ (2024) arXiv:2311.02462 [cs.AI] 2–7
<https://arxiv.org/abs/2311.02462> (categorising LLMs as ‘Level 1: Emergent
AGI’, defined as systems that are equal to or better than an unskilled
human at a wide range of tasks).

Blaise Agüera y Arcas and Peter Norvig, ‘Artificial General Intelligence Is
Already Here’ (*Noema Magazine*, 10 October 2023)
<https://www.noemamag.com/artificial-general-intelligence-is-already-here>.

Sébastien Bubeck and others, ‘Sparks of Artificial General Intelligence:
Early Experiments with GPT-4’ (2023) arXiv:2303.12712 [cs.AI] 1–5
<https://arxiv.org/abs/2303.12712>.

Blaise Agüera y Arcas, *What Is Intelligence? Lessons from AI About
Evolution, Computing, and Minds* (MIT Press 2025) (postulating predictive
AI as AGI, following the ‘predictive brain’ hypothesis from neuroscience).

Eric Schmidt, ‘The San Francisco Consensus’ (2024) 2 The Digitalist Papers
<https://www.digitalistpapers.com/vol2/schmidt>.

Leopold Aschenbrenner, ‘Situational Awareness: The Decade Ahead’ (June
2024) <https://situational-awareness.ai>.

‘Geoffrey Hinton – Interview’ (*NobelPrize.org*, 8 October 2024)
<https://www.nobelprize.org/prizes/physics/2024/hinton/1925103-interview-transcript>.

Violet Xiang and others, ‘Just Enough Thinking: Efficient Reasoning with
Adaptive Length Penalties Reinforcement Learning’ (2025)
arXiv:2506.05256 [cs.AI] <https://arxiv.org/abs/2506.05256>.


Giancarlo


On Tue, 12 May 2026 at 07:46, Giuseppe Attardi via nexa <
[email protected]> wrote:

> A different point of view, from Nello Cristianini, a well-known Machine
> Learning researcher:
>
>
> https://www.repubblica.it/tecnologia/2026/05/12/news/nello_cristianini_libro_forma_mentis_ne_umani_ne_pappagalli_ma_un_altra_forma_di_intelligenza-425337273/
>
> — Beppe
>
>
