FLI (Max Tegmark et al) proposes controlling a superior intelligence
through a chain of progressively more intelligent agents. The paper claims
success rates of up to 52% under some conditions in a series of game
simulations with a 400 point Elo difference (roughly a 91% chance of the
better player winning), with lower success rates as the advantage increases.
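For reference, the quoted win probability follows from the standard Elo
expected-score formula, E = 1 / (1 + 10^(-diff/400)). A quick sketch (my
own illustration, not code from the paper):

```python
def elo_win_prob(diff: float) -> float:
    """Expected score of the higher-rated player under the standard
    Elo model, E = 1 / (1 + 10 ** (-diff / 400)), ignoring draws."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

print(round(elo_win_prob(400), 3))  # ~0.909, i.e. about a 91% chance
print(round(elo_win_prob(800), 3))  # the edge grows with the rating gap
```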

https://arxiv.org/abs/2504.18530

It's not the first time I have seen the idea proposed, although it is the
first attempt I'm aware of at actually testing it. But I'm still dubious.
It's as if insects, unable to control humans directly, could control dogs,
and the dogs would then control humans.
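One way to put numbers on that doubt: in a toy model (my assumption, not
the paper's analysis) where each oversight link in the chain succeeds
independently with probability p, a k-link chain succeeds with probability
p**k, so reliability compounds away as the chain grows:

```python
def chain_success(p: float, k: int) -> float:
    """Toy model: probability that all k independent oversight links
    in the chain succeed, each with per-link success probability p."""
    return p ** k

# Even a fairly reliable per-link rate decays quickly with chain length.
for k in (1, 2, 4, 8):
    print(k, round(chain_success(0.9, k), 3))
```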

Or maybe it isn't a problem. I mean, our phones are already a billion times
smarter than us. So what if they control us as long as it feels like we
control them?

-- Matt Mahoney, [email protected]

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tab8f757c89546375-M3a7cac9d823d6867f7a604c7