DaveW - I am also a fan of McGilchrist's POV on these things. Also Hawkins, Levin, Solms (with Friston's free energy)... maybe Deacon as well.
The arc of this dialogue (if it emerges on FriAM) is promising to me. Some questions it triggered for me (not as show-stoppers, but maybe stimulating more discussion?):
* How different are humans (homo sapiens, and maybe N and D and others as yet unidentified) pre-writing from those post widespread literacy?
* How different are humans (hominids in general?) from other primates/mammals, who all share the fact of a neocortex?
* How different are intelligent mammals living in acutely different environmental Umwelten (e.g. cetacea)?
* How different are creatures with neocortices, or even vertebrate-styled central nervous systems, vs. the aforementioned?
* Do the seemingly capable range of CNS-free creatura (jellyfish, starfish, sponges, slime-molds) share anything with us?

Shifting over to the digital domain:

* Is the addition of language in any way a "digital" feature? Proto-digital? Is it *just* quantization? Abstraction for manipulation? (a small numeric sketch further down tries to make the quantization question concrete)
* Does moving from oral culture to written culture yield such a threshold?
* Within written culture, does adding legal/bureaucratic modes do so?
* Aristotelian and other formalized modes of logic?
* Digitalization (Jacquard Loom and beyond)?
* Formal programming languages?
* ???

The first wave of AI had run its course by the time I entered the world of computing (mid-to-late 70s), and the first wave of complexity science was emerging as I began my own professional career (early 80s), very nearby (LANL's CNLS, then SFI). I was fascinated by both and participated peripherally (neither being my day job). What felt like significant phase-changes in both knowledge and perspective on these topics have come in waves for me:
* self-organizing systems
* learning classifier systems / SVMs
* (modern) neural networks and other ML paradigms
* deep learning, GANs
* transformer models (circa 2017)

Selfhood:

* Aristotle - Kant - Berkeley - Peirce - Heidegger - ...
* Wiener - Varela & Maturana - Brooks - Searle
* BCS, Origin of Objects (occasionally referenced here)
* beyond

<disclaimer> the following timeline table was produced by GPT 4.5 based on my prompting with most of the names; GPT filled in a few </disclaimer>
Year - Major Event / Contribution
*350 BCE* - Aristotle's Metaphysics: Substance & objects
*1710* - Berkeley's Idealism: Objects depend on perception
*1781* - Kant's Transcendental Idealism
*1872* - Peirce's Pragmatism: Objects derive meaning through interaction
*1927* - Heidegger's /Being and Time/: Relational selfhood
*1948* - Wiener's Cybernetics: Feedback & self-organizing systems
*1972* - Varela's Autopoiesis: Living systems define themselves
*1985* - Brooks' Behavior-Based AI: Intelligence without representation
*1996* - /Origin of Objects/: Objects emerge through interaction
*2004* - Friston's Free Energy Principle: Markov blankets define selfhood
*2010s* - AI systems autonomously redefine object categories
*2022+* - Active Inference AI: Selfhood as a dynamic learning system

Another kink or twist is the conceit of "neuromorphic" or "quantum" somehow adding magic "woo" to what is otherwise a mechanistic and strongly monist position. I'm not convinced that either "solves" the monist/dualist paradox, nor do I think either is meaningless. Both are likely to provide qualitative shifts, even if only through the kind of emergence that comes up as scaling thresholds are met.
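An aside on the "is it just quantization?" question above (and, looking ahead, on the square-wave-for-sine-wave example in Dave's message quoted below): here is a minimal numpy sketch of what the crudest possible quantization does to a signal. It is entirely my own toy, assumes nothing from anyone's cited work, and uses a made-up 100 Hz tone reduced to one bit with numpy.sign.

import numpy as np

fs = 8000                       # sample rate (Hz)
f0 = 100                        # tone frequency (Hz)
t = np.arange(fs) / fs          # one second of samples
sine = np.sin(2 * np.pi * f0 * t)
square = np.sign(sine)          # crudest possible "digitization": one bit per sample

def top_components(x, n=4):
    # the n strongest frequency bins, as (frequency in Hz, magnitude relative to the peak)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    idx = np.argsort(spec)[::-1][:n]
    return [(float(freqs[i]), round(float(spec[i] / spec.max()), 3)) for i in sorted(idx)]

print("sine  :", top_components(sine))    # essentially a single line at 100 Hz
print("square:", top_components(square))  # 100 Hz plus odd harmonics at 300, 500, 700 Hz

The original tone survives, but a family of odd harmonics appears that was never in the source: quantization both discards structure and manufactures structure of its own. Not an argument for or against anything above, just a concrete handle on what "mere" quantization does to a signal.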
I know this is _yet another_ *idea salad* or maybe a *salad bar*... not all the offerings are necessarily compatible with all the others. I'm still trying to compose the "perfect" salad plate for myself.
On 3/1/25 9:50 AM, Prof David West wrote:
Just an observation, related, I think, to Jon's post, about biological entities. Specifically, about humans.

A standard issue human being exists in a maelstrom of sensory inputs. Every nerve ending and most individual cells receive constant stimulus: 100 billion nerve endings, 30-37 trillion cells. The human organism evolved to "make sense" of this massive and constant input, both the inputs as a whole and by the organism as a whole. Initial "sense-making" probably focused on simple gradient detection: low-high intensity, intermittent-constant, attractive-repellent, safe-dangerous, and, likely, some kind of spatial organization (here, there, up, down, right, left). Then came the most primitive of categorizations: 'self' versus 'other'. Shortly thereafter, a host of additional categorizations (as yet unnamed), like this, that, those. Brian Cantwell Smith, in On the Origin of Objects, discusses this extensively. An extension to the self-other category happens here: Us versus Them.

The driving force, to this point, is simply survival. This also leads to the next advances: specialization within the organism (we get a brain) and "filtering," the prioritization of some inputs over others, especially with regard to those objects along the attractive-repellent and safe-dangerous gradients. Not only do we get a brain, we get one with two hemispheres. Consider a bird: it must simultaneously locate and consume a seed and maintain constant awareness of its environment, lest it become food itself. The two hemispheres each assume primary responsibility for one of those two needs. In humans, the left worries about manipulating the world and the right maintains our awareness of, and place in, the world. [I am now channeling Iain McGilchrist, nearly 3,000 pages in The Master and His Emissary and The Matter with Things, vols. I and II.]

Then language happened. When communication was exclusively oral, auditory, and visual (and local), it retained an appeal to the whole brain, the whole organism; e.g., stories, rich in context, evoking memories of shared experiences and places. Written language, however, gave a bit of ascendancy to left-brain skills. Telegraph and radio technologies removed context and evocation, diminishing communication to the exchange of mere words. Shared context, evocation of shared experience, and non-verbal communication (e.g., body language, intonation, even pheromones) were lost. Shannon killed "meaning" (and admitted as much) with his information theory. Digitization stripped data, e.g., the frequencies lost when a square wave replaces a sine wave. Computing added algorithms and finally realized the Cartesian (Leibniz, Pascal, et al.) assertion that thought was nothing more than the formal manipulation of precisely defined "tokens of thought."

Computational Thinking reigns supreme as the epitome of the left-brain mode of thinking, but only at the cost of ignoring or refusing to recognize most of the ways that a human, as a whole organism, makes sense of the totality of the stimuli it receives. Spurious claims that humans cannot sense or be aware of, and therefore cognition cannot be affected by, much of the stimulation being received are easily proven false: the "cocktail party effect"; the human eye's ability to detect a single photon; subsonic sound inducing fear; the human ability to accurately differentiate between live, analog-recorded, and digitally recorded music; pheromonal responses; alterations in brain chemistry; etc., etc.
Huxley's thesis is that, for survival purposes, many sensations and gradations of sensation are 'filtered' (kept below the threshold of conscious awareness, though still being received), and that mescaline inhibits those filters so that a more complete apprehension of the surrounding world is obtained.

Similarly ignored is how the organism-as-a-whole, and the right brain specifically, processes inputs-as-a-whole to affect and support cognition. Muscle memory, embodied metaphor, and situated cognition (how the physical environment impacts thinking, e.g., the Moroccan tailor who can lay out patterns on cloth to minimize waste in the shop, but not in a classroom or office) are examples.

Then there is the whole notion of culture. Ninety percent of what a human being "knows" is tacit knowledge about one's culture. It invisibly (below the threshold of conscious awareness) shapes, constrains, and supports human cognition.

AI advocates (especially those claiming the imminence of AGI) are guilty of extreme hubris. They are exemplars of left-brain, computational thinking and, because of that, they assume that that mode of thought is the be-all and end-all of cognition. In point of fact, left-brain (scientific, mathematical, computational) thinking addresses, and sometimes resolves, only the simplest of problems. Left-brained thinking is relatively simple to replicate with a program. This does not mean the program is, in any way, "intelligent" beyond the most simplistic and limited definition of that word; certainly nothing even approximating the whole-organism, grounded intelligence of a human being.

If, in a year or so, ChatGPT or a sibling is capable of recognizing itself in a mirror, something a human infant can do at 18-24 months, I might change my mind.

davew