On Tuesday, January 14, 2025, at 6:12 PM, Matt Mahoney wrote:
> Wolpert's theorem is fundamental to the alignment problem. Either you can 
> predict what AI will do, or it can predict you, but not both. We use 
> prediction accuracy to measure intelligence. If you want to build an AI that 
> is smarter than you, then you can't predict what it will do. This means you 
> can't control it either, because that would require you to predict the 
> effects of your actions.

Yes, it is intuitively obvious, but it does seem there may be a theoretical 
special case. Not that it would be practical or achievable, but it does make 
one wonder about the structure of things.
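
For what it's worth, here is a minimal, runnable sketch of the diagonalization 
intuition behind the "you can't both predict each other" claim. This is my own 
toy framing, not Wolpert's formal inference-device setup; the "cooperate"/"defect" 
labels and the function names are just illustrative.

    from typing import Callable

    Action = str
    Predictor = Callable[[], Action]   # the human's committed prediction of the agent

    def make_contrarian(predict_me: Predictor) -> Callable[[], Action]:
        """Agent that reads the human's prediction of itself and does the opposite."""
        def act() -> Action:
            return "defect" if predict_me() == "cooperate" else "cooperate"
        return act

    # Whatever prediction rule the human commits to, the contrarian falsifies it,
    # so the human's prediction accuracy about this agent is zero.
    for human_prediction in ("cooperate", "defect"):
        predictor: Predictor = lambda p=human_prediction: p
        agent = make_contrarian(predictor)
        assert agent() != predictor()

    print("Any fixed prediction the agent can read about itself comes out wrong.")

The actual theorem is stated for Wolpert's inference devices rather than toy 
strategies, but the self-referential structure is the same flavor: the agent 
that can evaluate your predictor can always act so that it fails.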
