On 7/19/25 1:20 PM, Santafe wrote:
> This is a nice framework, Glen, even if one then has to do a lot of work to find out whether there are good cites for some of the proposed themes.
> 
> It has had me thinking over the past day about the “alternation” between Hofstadter-Lakoff, and whoever the Logicians of the moment are (Carnap-Quine or Putnam or whoever).
> 
> One of the big themes that I assume would be behind the Hofstadter-Lakoff position, and in different ways Damasio, would be this premise:
> 
> — Take some subset of things the brain does that involve producing, or bringing into current activity, some “that which is not”, where “not” means “not currently streaming in through the senses”, and also, up to the previous moment, “not whatever was in the active role”.  My long, clumsy phrase above is often just called “memory”, though one could equally well regard it as “imagination”, if one thinks that imagination is a kind of synthetic or constructive manipulation of the same primitives as memory.
> 
> — Suppose that the basic mechanism for the process in the last bullet is resonance by some kind of content-similarity.  The newly produced thing “which is not” is then not identical to whatever was currently in the active role, and can properly be called something “produced”, or “brought into the active role”.  But neither is it very far from, or free of, whatever active content led to its selection/production.
> 
> — The above content-resonance-based program would be so different as to be nearly an “opposite” of an address-based lookup; in the idealized limit, address-based lookup provides complete independence between the address and the content.  That description of a machine process overlaps heavily with the defining aim of logic, and of the logical-system aspect of mathematics (as characterized by Hilbert), in the sense that the symbols are supposed to take on dynamics in their own isolated, synthetic world, without dependence on “binding”, to such an extent that one can put aside even understanding what binding is or how it is done, and still intend to make arguments about properties of this synthetic domain.
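> A minimal sketch of that contrast, for concreteness (the labels, vectors, and cue below are invented purely for illustration, not a claim about any real system):

```python
import math

# Address-based lookup: the key is arbitrary and carries no information
# about the content it retrieves; address and content are independent.
store = {0x2F: "grandmother's kitchen", 0x7A: "smell of bread"}
fetched = store[0x7A]  # nothing about 0x7A "resembles" bread

def cosine(u, v):
    """Cosine similarity: a crude stand-in for content-resonance."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Content-resonance retrieval: the cue is itself a partial content, and
# retrieval returns whichever stored item resonates most strongly with
# it -- near to, but not identical with, what was already active.
memories = {
    "grandmother's kitchen": [0.9, 0.1, 0.3],
    "smell of bread":        [0.8, 0.2, 0.4],
    "traffic noise":         [0.1, 0.9, 0.0],
}
cue = [0.9, 0.1, 0.35]  # a noisy pattern currently "in the active role"
recalled = max(memories, key=lambda k: cosine(cue, memories[k]))
```

> In the second half, perturbing the cue changes what is recalled, whereas in the first half the content retrieved is wholly insensitive to the “shape” of the address.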
> 
> Then suppose one had to make full operational systems out of all of one primitive or all of the other.  Or nearly so.  I wouldn’t say the NN-based MLs are fully content-similarity based, in the sense that there is a lot of structure there that doesn’t rely on content similarity to take its form.  It is what the engineers fix as the design.  Probably in brains that is also true to a considerable extent; Broca’s and Wernicke’s areas go into more-or-less stereotypical places, and visual cortex already has a lot of organization before there is anything for it to process.  But brains might make much more use of content-similarity to take their form and connectivity than ML systems currently do.  These are the kinds of problems Chuck Stevens used to worry about: how do brains function continuously, while also growing, and seem to use the content of their ongoing activity in essential ways as part of the directing input for their growth?
> 
> I guess such a full operational system would look rather different from one with the von Neumann architecture as its central design paradigm.
> 
> But would I want to say that either then cross-cuts the other so strongly that they are skew, that neither can be in any sense what the other is?  I assume I would not, and the reason would be the capacity for simulation.
> 
> People do engage in deliberative activities (counting things out, working through logic puzzles along rule-system pathways, etc.), and since almost surely most of this fine-grained activity is going on in brains, I want to claim it is okay to focus the attention of a few sentences on what they do.  Even if we found that they used a nearly all-associative architecture to do it, that wouldn’t change the fact that at the end there is a collection of states and events that carry the logical a-semantic tags faithfully.  I would expect (after all, this is biology) that, for some classes of symbol-like things that need to be used often by all people, the simulation hierarchy also gets hacked and tweaked a lot, to move its overall input-output function down to a much more rigid and primitive level.  Jackendoff’s “3-system” picture of message-passing phonology, grammar, and semantics seems to claim certain quite symbol-based programs working very fast and dense at low levels in at least the first two of the three.
> 
> I imagine that this fencing-of-views is conducted on something like the following structure.  One side says that we can identify primitives that are much simpler than the simulations they produce, with the latter being high-order syntheses from the former, and that therefore the primitives are “more fundamental”.  As long as one knows that “more fundamental” is just a tag for the longer argument about “more primitive w.r.t. synthesis”, that can be okay.  But if the simulation brings into existence something whose organization (deliberation with characteristics of logic and symbol-addressable content) has a compact description fully different-in-kind from that of the primitives, I don’t think one gets to deny that the new architecture has come into existence as a thing-in-itself in the world, even if it was produced by way of simulation.  I think my view here connects to your (Glen’s) earlier arguments that things really need to be produced to get credit for being carried out.  I have argued (in a paper that at this rate may never actually see the far side of a production process) that these symbolic things, even if just learned and used as deliberative sequences in private thought, have about the same artifact-status as the un-willed natural phenomena in the world, and are different in nature from whatever our ongoing practice with, and experience of, them is.
> 
> All kinds of statements of the elementary, I guess, and things everybody in the literature-conversation and here would already take as known and obvious, so not addressing high-order questions, and thereby not interesting either.  But maybe some terms for clearing underbrush?  If they are not already wrong?
> 
> Eric

