A demystification of the thesis that artificial neural networks are 
black boxes.

Understanding Artificial Neural Networks: Mysterianism about Known Mechanism is 
Mysticism
Guest, Olivia; Nuñez Hernández, Nancy Abigail; Blokpoel, Mark
May 7, 2026


Abstract 
Mysterianism is the idea that human cognition, the mind, cannot be understood. 
Taking this concept and applying it to known mechanism — such that claims are 
made that we do not know how engineered systems, such as artificial neural 
networks (ANNs), work, or that they constitute black boxes that we can only 
open with difficulty — is inappropriate at best and malicious at worst. We do 
know the mechanistic structure of such models because we designed and built 
them. We also know their functional role (what they are for), as well as the 
mathematical function they are asked to approximate (map inputs to target 
outputs). Because mysterianist beliefs about known systems, such as ANNs, are 
often expressed, scientists need to sit up and take notice. We provide an error 
theory as to what is going on to help unpick this metatheoretical blunder. 
Ultimately, the problem is that 'understanding' is not a technical term in 
these cases: the word is co-opted for a specific narrative to sell 'artificial 
intelligence' through mystification. All computational systems, from pendulums 
to databases, will behave in ways we cannot predict or control — this is not a 
unique property of ANNs — and experts do indeed grasp the computational 
properties of these systems nonetheless.


https://zenodo.org/records/20071869
