Dorian,
On Thu, Sep 4, 2025 at 2:12 AM Dorian Aur <[email protected]> wrote:
> ...
> in EDI systems, the network coherence factor is indeed *inspired by phase
> synchrony*, though its implementation in *memristor-based substrates*
> naturally diverges from traditional spike-based models like those in
> neuronal systems.
>
> Whereas neuronal synchrony is typically framed in terms of spike timing
> correlations, in *EDI*, coherence emerges through *field-aligned signal
> propagation*, where the *timing*, *energy phase*, and *recursive trajectory
> alignment* of memristive states reinforce each other dynamically. It’s
> less about discrete spike coincidences and more about *continuous,
> phase-sensitive alignment across the network*.
>
> We’re not simply measuring oscillation phase across devices, but rather
> capturing how signal propagation patterns entrain one another over time, a
> kind of analog synchrony driven by *shared context and energy
> minimization*. You could think of it as *"propagation phase coherence"*
> rather than spike phase synchrony.
>
> This makes it particularly suited to detecting and reinforcing semantic
> convergence, especially in the presence of divergent inputs collapsing
> toward shared attractors. It’s here that memristors shine, offering
> nonlinear, history-dependent modulation that makes phase alignment not just
> possible, but dynamically stable and meaningful.
>
I can see how continuous waves could be better in some ways for summing and
feeding back downstream interference.
The summing is crucial. I gave the A->X/Y->B example. But in practice what
you want is for many of these to stack: A->X/Y->B, C->X/Y->D, E->X/Y->F,
etc. The more contexts two elements share, the more they can be assessed as
semantically (and, conversely, syntactically) similar.
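To make that stacking concrete, here's a toy sketch of my own (illustrative names only, not anything from EDI or an existing system): score two slot-fillers by counting the A_B-style contexts they share.

```python
def shared_contexts(occurrences, x, y):
    """Return the set of (left, right) contexts in which both x and y occur."""
    def contexts(element):
        return {(l, r) for (l, mid, r) in occurrences if mid == element}
    return contexts(x) & contexts(y)

# Toy corpus of A->X/Y->B style triples: (left, slot-filler, right).
occurrences = [
    ("A", "X", "B"), ("C", "X", "D"), ("E", "X", "F"),
    ("A", "Y", "B"), ("C", "Y", "D"),
]

# X and Y share the contexts A_B and C_D, but not E_F.
print(sorted(shared_contexts(occurrences, "X", "Y")))
# → [('A', 'B'), ('C', 'D')]
```

The more triples of the form C->X/Y->D stack up, the larger this intersection grows, which is the sense in which shared contexts measure similarity.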
Using the LLM language of "prompts": given a prompt AXB, you want the
system to expand out X: X = {Y, ...}
So you want the entire set of shared contexts A_B, C_D, E_F... to inform an
attractor around X, which will actually define the meaning of X.
You can relate this back to LLM embeddings. X is "embedded" in a vector
space of its contexts, with the components of the vector being "weights"
along the dimensions of the different contexts.
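The embedding analogy can be sketched minimally, assuming simple occurrence counting for the "weights" (names and counts here are illustrative):

```python
from collections import Counter

def context_vector(occurrences, element):
    """Weight `element` along each (left, right) context dimension by count."""
    return Counter((l, r) for (l, mid, r) in occurrences if mid == element)

# Toy triples: (left, slot-filler, right). A_B occurs twice for X.
occurrences = [
    ("A", "X", "B"), ("A", "X", "B"), ("C", "X", "D"), ("E", "X", "F"),
    ("A", "Y", "B"), ("C", "Y", "D"),
]

vx = context_vector(occurrences, "X")
print(vx[("A", "B")], vx[("C", "D")])  # X's weights along A_B and C_D
# → 2 1
```

Each context pair is one dimension; the count is the weight, so the vector is the "embedding" of X in its space of contexts.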
But to do this dynamically, you want to sum all those contexts. And to do
that you really want to expand them.
Spikes are very all-or-nothing. Analog waves might be easier to sum, since
feedback can be continuous.
Currently I'm imagining that this "summing" might happen by way of these
inhibition "landscapes". The prompt sequence is presented, and "holes" in
its inhibition of noise spread. So for a prompt AXB, B creates a "hole"
which allows noise to spread also to Y, which causes D, F, etc. to spike
(and create their own "holes"...). The sum of the "holes" then creates the
sum of contexts that defines the grouping generated around X.
It's the ability to go "backwards" through the inhibition "holes" that
allows this summing. Just expanding forward over synapses from AXB won't
sum over all shared contexts.
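Read literally, the discrete hole-spreading cascade might look something like this toy sketch (my reading of the description above, not an implementation of any real system): a hole at B lets Y spike, Y's other contexts D and F open their own holes, and so on until the group around X stops growing.

```python
# Toy triples (left, slot-filler, right); X and Y fully share three contexts.
occurrences = [
    ("A", "X", "B"), ("C", "X", "D"), ("E", "X", "F"),
    ("A", "Y", "B"), ("C", "Y", "D"), ("E", "Y", "F"),
]

def spread_holes(occurrences, seed):
    """Spread inhibition 'holes' backwards from the seed context element."""
    holes = {seed}      # context elements released from inhibition
    group = set()       # slot-fillers recruited around the X/Y slot
    frontier = {seed}
    while frontier:
        nxt = set()
        for (l, mid, r) in occurrences:
            if r in frontier and mid not in group:
                group.add(mid)              # mid spikes through the hole at r
                for (l2, m2, r2) in occurrences:
                    if m2 == mid and r2 not in holes:
                        holes.add(r2)       # mid's other contexts open holes
                        nxt.add(r2)
        frontier = nxt
    return group, holes

group, holes = spread_holes(occurrences, "B")
print(sorted(group), sorted(holes))
# → ['X', 'Y'] ['B', 'D', 'F']
```

The fixed point collects every shared context (B, D, F), which is exactly the "sum" a single forward expansion from AXB would miss.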
Analog waves might do it better: if there were a way for B to affect Y, and
Y to recruit D, F, etc., they could be summed in real time rather than in
discrete steps over a cascade of spikes.
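One speculative way to sketch that continuous version: treat each element and context as a node with a real-valued activation, clamp the prompt's "hole" at B, and let activation leak through the links each timestep so all the shared contexts sum in one settling process. The parameters here are arbitrary, chosen only so this toy network converges.

```python
# Element<->context links from the X/Y example above; purely illustrative.
links = [("X", "B"), ("X", "D"), ("X", "F"),
         ("Y", "B"), ("Y", "D"), ("Y", "F")]

adj = {}
for a, b in links:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)

act = {n: 0.0 for n in adj}
act["B"] = 1.0  # the prompt AXB drives a continuous "hole" at B

leak, gain = 0.5, 0.2  # small enough that the update settles to a fixed point
for _ in range(200):
    new = {n: leak * act[n] + gain * sum(act[m] for m in adj[n]) for n in adj}
    new["B"] = 1.0  # keep the driven context clamped
    act = new

# Y accumulates input from B, D and F simultaneously, in "real time",
# rather than via a discrete spike cascade.
print(act["Y"] > act["D"])  # → True
```

The point of the sketch is only that summing happens as one continuous relaxation toward an attractor, rather than as a sequence of all-or-nothing events.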
Given the insight, it might not be too hard to do.
Given there's not much general interaction on this thread, feel free to
write to me directly to discuss it.
Cheers,
Rob
------------------------------------------
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-M4ce26074fa9dd1d2387e4c8d
Delivery options: https://agi.topicbox.com/groups/agi/subscription