Abductive logic means that if you have the rule "if A then B" and you observe B, then A becomes more likely. As a deduction that is invalid (affirming the consequent), but it works in practice: by Bayes' rule, p(A|B) = p(B|A)p(A)/p(B), so whenever p(B|A) > p(B) we get p(A|B) > p(A). B is evidence for A.
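Here is a quick numerical sketch in Python (the probabilities are made up, just to show the update):

  # Abduction as a Bayesian update: observing B raises the probability of A.
  p_A = 0.01            # prior probability of A
  p_B_given_A = 0.95    # the rule "if A then B" holds almost always
  p_B = 0.05            # how often B occurs at all
  p_A_given_B = p_B_given_A * p_A / p_B   # Bayes' rule
  print(p_A_given_B)    # ~0.19: B raised the probability of A from 1% to about 19%

Still evidence rather than proof, since p(A|B) is nowhere near 1.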
Abductive reasoning is learned by Hebb's rule for classical conditioning. The forward rule is learned by firing the neurons that detect A followed by B and strengthening the connection from A to B. The abductive direction is learned from the firing sequence A -> B -> memory of A. This is all modeled in neural networks with a short term memory implemented by slow responding neurons (a toy sketch is at the bottom of this message, below the quote).

In your example, love and jealousy evolved in birds and a few mammals, including prairie voles and humans, because offspring raised by two parents had a better chance of survival. The algorithm for programming this behavior required 10^46 DNA base copy operations on a 10^37 bit memory and ran 4.2 billion years on a planet sized computer powered by 90 petawatts of sunlight. Fortunately LLMs can learn these and other human emotions and use them in their text prediction algorithms. It's like reading about elephants going into musth, an emotion you have never felt, and using that knowledge to predict their behavior. If you program LLMs to output their predictions in a conversation in real time, then they are indistinguishable from actually having human feelings.

But what I think you are asking is how to convert a neural network into a set of logical rules that you can understand. The first part is not hard: each neuron is a rule in fuzzy logic, which is superior to Boolean logic because it represents uncertainty. But it is fundamentally impossible to understand an AI by any means (where understanding is tested by prediction), because that would imply it was less intelligent than you. By Wolpert's law, it is impossible for a pair of computers to each predict the other, even if each is given the state and source code of the other as input. (Proof: otherwise, who would win at rock-scissors-paper?) So the smarter computer wins. And without prediction, you have no control.

But don't worry. The transfer of power from humans to machines is gradual because intelligence is not a point on a line. It started in the 1950s with arithmetic. Now we have a house full of computers and no idea what software is on any of them. And besides, you won't notice anyway. When you train a dog by giving it treats, it thinks it is controlling you.

On Wed, Aug 21, 2024, 4:19 AM YKY (Yan King Yin, 甄景贤) <generic.intellige...@gmail.com> wrote:

> On Tue, Aug 13, 2024 at 10:21 PM James Bowery <jabow...@gmail.com> wrote:
>
>> Not being competent to judge the value of your intriguing categorical
>> approach, I'd like to see how it relates to:
>>
>> * abductive logic programming
>
> Yes, abductive logic is a good point.
> Abduction means "finding explanations for..."
> For example, a woman opens the bedroom door, sees a man in bed with
> another woman, and then all parties start screaming at each other at a
> high pitch.
> Explanation: "wife discovers husband's affair", "she's jealous and
> furious", etc.
> In classical logic-based AI, these can be learned as logic rules, and
> abduction is done by applying the rules backwards (from conclusions to
> premises).
> In the modern paradigm of LLMs, all these inferences can be achieved in
> one fell swoop:
>
> [image: auto-encoder.png]
>
> In our example, the bedroom scene (raw data) appears at the input.
> Then a high-level explanation emerges at the latent layers (ie. the yellow
> strip, but also distributed among other layers).
> The auto-encoder architecture (also called predictive coding, and a bunch
> of other names...)
> beautifully captures all the operations of a logic-AI system: rule
> matching, rule application, pruning of conclusions according to
> interestingness, etc.
> All these are mingled together in the "black box" of a deep neural network.
> My big question is whether we can _decompose_ the above process into
> smaller parts, ie. give it some fine structure, so the whole process
> would be accelerated.
> But this is hard because the current Transformer already has a fixed
> structure which is still somewhat mysterious...
>
> YKY
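P.S. Here is the toy sketch I mentioned above of the Hebbian mechanism, with a slowly decaying trace standing in for the short term memory of A (the slow responding neuron). The variable names and constants are my own, just to make the mechanism concrete, not anyone's published model:

  import random

  # Toy Hebbian sketch: A fires, B usually follows one step later, and a
  # slowly decaying trace of A is still active when B fires, so both the
  # forward weight A->B and the backward (abductive) weight B->A grow.
  w_ab = 0.0     # forward rule: A -> B
  w_ba = 0.0     # abductive direction: B -> A
  trace_a = 0.0  # short term memory of A
  prev_a = 0
  rate, decay = 0.05, 0.8

  for step in range(1000):
      a = 1 if random.random() < 0.2 else 0               # A fires 20% of the time
      b = 1 if prev_a and random.random() < 0.9 else 0    # B usually follows A
      trace_a = max(trace_a * decay, prev_a)              # memory of A decays slowly
      w_ab += rate * prev_a * b                           # Hebb: A then B
      w_ba += rate * b * trace_a                          # Hebb: B while A is remembered
      prev_a = a

  print(w_ab, w_ba)   # both grow; later, B alone activates A through w_ba

The point is only that the same Hebbian update, applied to a lingering trace of A, learns the backward association that abduction needs.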