On Tue, Aug 13, 2024 at 10:21 PM James Bowery <jabow...@gmail.com> wrote:

> Not being competent to judge the value of your intriguing categorical
> approach, I'd like to see how it relates to:
>
> * abductive logic programming
>

Yes, abductive logic is a good point.
Abduction means inferring an explanation for what is observed.
For example, a woman opens the bedroom door, sees a man in bed with
another woman, and then all parties start screaming at each other at
high pitch.
Explanations: "wife discovers husband's affair", "she's jealous and
furious", etc.
In classical logic-based AI, such explanations can be obtained by
learning logic rules and applying them backwards (from conclusions to
premises).
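To make the backward rule application concrete, here is a toy sketch in
Python.  The rules and the coverage-based scoring are my own
hypothetical illustration, not any particular abductive-logic-programming
system:

# A toy sketch of abduction as backward rule application.
# The rules and scoring below are hypothetical illustrations.

# Each rule maps a premise (candidate explanation) to a conclusion
# (an observable fact): premise -> conclusion.
RULES = [
    ("wife discovers husband's affair", "man in bed with another woman"),
    ("wife discovers husband's affair", "screaming at high pitch"),
    ("she's jealous and furious",       "screaming at high pitch"),
    ("kids playing a prank",            "screaming at high pitch"),
]

def abduce(observations):
    """Apply rules backwards: gather premises whose conclusions match
    the observations, ranked by how many observations they explain."""
    coverage = {}
    for premise, conclusion in RULES:
        if conclusion in observations:
            coverage[premise] = coverage.get(premise, 0) + 1
    return sorted(coverage.items(), key=lambda kv: -kv[1])

scene = {"man in bed with another woman", "screaming at high pitch"}
for explanation, n in abduce(scene):
    print(f"{explanation}  (explains {n} observation(s))")

Running this on the bedroom scene prints "wife discovers husband's
affair" first, since that premise explains the most observations.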
In the modern paradigm of LLMs, all these inferences can be achieved in one
fell swoop:

[image: auto-encoder.png -- diagram of an auto-encoder architecture]

In our example, the bedroom scene (raw data) appears at the input.
A high-level explanation then emerges in the latent layers (i.e. the
yellow strip, but also distributed among the other layers).
The auto-encoder architecture (also known as predictive coding, among
other names) beautifully captures all the operations of a logic-AI
system: rule matching, rule application, pruning of conclusions
according to interestingness, etc.
All of these are mingled together in the "black box" of a deep neural
network.
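For concreteness, here is a minimal auto-encoder sketch in Python with
PyTorch.  The layer sizes are arbitrary assumptions for illustration,
not the exact architecture in auto-encoder.png:

# A minimal auto-encoder sketch in PyTorch.  Layer sizes are
# arbitrary assumptions, not the figure's exact architecture.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder: raw scene -> narrow latent code (the "yellow strip").
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: latent explanation -> reconstruction of the scene.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)        # abduction-like step: compress to an explanation
        return self.decoder(z), z  # reconstruction plus the latent code

model = AutoEncoder()
x = torch.randn(32, 784)          # a batch of dummy "scenes"
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # predictive-coding-style objective

The bottleneck forces the network to find a compact explanation of its
input; everything the logic system would do with explicit rules is here
folded into the encoder and decoder weights.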
My big question is whether we can _decompose_ the above process into
smaller parts, i.e. give it some fine structure, so that the whole
process can be accelerated.
But this is hard, because the current Transformer already has a fixed
structure, which itself remains somewhat mysterious...

YKY

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T04cca5b54df55d05-M0159c437978edb6fbb39e6fb