Agreed. It could be, though, that the causal-inference engine (for lack of a 
better term) is but one of many reasoning subsystems in the overall AI 
application. Perhaps, for specific functionality, such a subsystem would be 
activated as a temporary, primary node of an AI schema. But was it the 
primary basis for learning and adapting? Unless we viewed the logical 
architecture and traced the adaptive competency throughout it, we would not 
know.


________________________________
From: EdFromNH via AGI <[email protected]>
Sent: Friday, 14 September 2018 12:06 AM
To: [email protected]
Subject: Re: [agi] Judea Pearl on AGI

If Demis Hassabis, the current leader of Google's DeepMind AI subsidiary, was 
able several years ago to create an artificially intelligent program that could 
learn to play each of many different video games much better than human 
players -- just from feedback from playing each such game -- his program 
obviously had to model the causal structure inherent in whatever videogame it 
was learning. So there has already been a lot of success in AIs doing a good 
job of automatically learning causal inference.

On Thu, Sep 13, 2018 at 3:45 PM Nanograte Knowledge Technologies via AGI 
<[email protected]<mailto:[email protected]>> wrote:
Most interesting. Thanks for sharing. From the little I understand about this 
large body of work, this makes sense to me. However, I would contend that 
adopting what some call a network structure (closing loops in a 3-entity 
structure) would lead to confusing results.

For example, one cannot reliably infer a vertex from that, which may then skew 
the rest of the structural results. I think it's a classic "copout" in 
systems design: when in doubt, close the loop to open the associative 
option, i.e., A => B and C, and B => C. Result: A indirectly causes C, but it 
was already inferred that A directly caused C. Did it, or didn't it?

This would present as a self-made paradox, not so?
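For what it's worth, in Pearl's framework the triangle A -> B, A -> C, B -> C is not treated as a paradox: the direct effect (A -> C) and the indirect effect (A -> B -> C) simply add up to the total effect. A minimal sketch, using a toy linear structural causal model with coefficients I've made up purely for illustration:

```python
import random

# Toy linear SCM for the triangle A -> B, A -> C, B -> C.
# The coefficients below are illustrative assumptions, not from the thread.
A_TO_B = 2.0   # effect of A on B
A_TO_C = 1.0   # direct effect of A on C
B_TO_C = 0.5   # effect of B on C

def sample(a):
    """Generate (B, C) under the intervention do(A = a), with small noise."""
    b = A_TO_B * a + random.gauss(0, 0.01)
    c = A_TO_C * a + B_TO_C * b + random.gauss(0, 0.01)
    return b, c

# Estimate the total effect of A on C by intervening: do(A=1) vs do(A=0).
random.seed(0)
n = 10_000
c0 = sum(sample(0)[1] for _ in range(n)) / n
c1 = sum(sample(1)[1] for _ in range(n)) / n
total_effect = c1 - c0

direct = A_TO_C              # path A -> C
indirect = A_TO_B * B_TO_C   # path A -> B -> C
print(total_effect)          # close to direct + indirect = 2.0
```

The point of the sketch: "A directly causes C" and "A indirectly causes C via B" are both true at once, and the causal calculus keeps them apart as separate path contributions rather than letting them contradict each other.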


________________________________
From: Robert Levy via AGI <[email protected]<mailto:[email protected]>>
Sent: Thursday, 13 September 2018 10:08 PM
To: AGI
Subject: [agi] Judea Pearl on AGI

I don't think I've seen a discussion on this mailing list yet about Pearl's 
hypothesis that causal inference is the key to AGI.  His breakthroughs on 
causation have been in use for almost 2 decades.  The new Book of Why, other 
than being the most accessible presentation of these ideas to a broader 
audience, is interesting in that it expressly goes into applying causal 
calculus to AGI.
Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / 
see discussions<https://agi.topicbox.com/groups/agi> + 
participants<https://agi.topicbox.com/groups/agi/members> + delivery 
options<https://agi.topicbox.com/groups/agi/subscription> 
Permalink<https://agi.topicbox.com/groups/agi/T0f9fecad94e3ce7e-Mf8d761b549558b23eeb9b432>
