FYI, I've written some papers relevant to this:

At AGI-08:
https://www.ssec.wisc.edu/~billh/g/hibbard_agi.pdf

At AGI-11:
https://www.ssec.wisc.edu/~billh/g/hibbard_agi11a.pdf

In JAGI as a co-author (Sam really wrote it, I just
made a few comments):
https://sciendo.com/article/10.2478/jagi-2021-0001
https://arxiv.org/abs/2101.12047


On Mon, 9 Dec 2024, dissip...@gmail.com wrote:
On Monday, December 09, 2024, at 1:17 PM, James Bowery wrote:
      Is this inadequate to prevent the random agent strategy?

In addition to what you quoted, another point I forgot to add: in Matching Pennies you can play randomly as a deliberate strategy, observing your opponent without taking more than a 50% loss while also hiding your own strategy from them. You can do the same in PtB, but hiding your current strategy is less advantageous there, because your choice is just one of potentially many (depending on how many agents are in the game), so your strategy is not being observed directly and you are not exploiting any non-randomness that might be occurring in the current rounds.
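
To make the 50%-loss bound concrete, here is a small simulation (my own illustration, not from the thread): a uniformly random Matching Pennies player, playing as the mismatcher, wins about half the rounds even against an opponent that tries to model it, because random choices leave nothing to exploit. The frequency_exploiter opponent is a made-up stand-in for any pattern-matching strategy.

import random

def random_player(_history):
    # Ignores all history; its play carries no exploitable pattern.
    return random.randint(0, 1)

def frequency_exploiter(history):
    # Guesses the opponent's most frequent past move and plays to match it.
    if not history:
        return random.randint(0, 1)
    return 1 if sum(history) * 2 >= len(history) else 0

rounds, wins = 10000, 0
opponent_history = []
for _ in range(rounds):
    a = random_player(opponent_history)        # the random, strategy-hiding player (mismatcher)
    b = frequency_exploiter(opponent_history)  # tries to match a's move
    if a != b:                                  # mismatcher wins when the pennies differ
        wins += 1
    opponent_history.append(a)

print("random player's win rate:", wins / rounds)  # ~0.5 against any opponent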

On Monday, December 09, 2024, at 11:30 AM, Matt Mahoney wrote:
      It is also a proof of Wolpert's theorem, that two computers cannot
      mutually predict the other's actions. Imagine a variation of the game
      where each player receives the source code and initial state of their
      opponent as input before the start of the game. Who wins?

Wolpert's theorem is the reason AI is dangerous. We measure intelligence by
prediction accuracy. If an AI is more intelligent than you, then it can
predict your actions, but you can't predict (and therefore can't control)
its actions.
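
A toy diagonalization sketch of that point (my own illustration, not Wolpert's actual formalism): if a predictor could always compute an opponent's move from its source, then an opponent built to consult that predictor and do the opposite refutes it. The predictor below is a deliberately weak stand-in; a "faithful" predictor that fully simulated the contrarian would never halt, which is the point.

def predictor(opponent):
    # Pretend oracle: returns the move (0 or 1) it believes `opponent` will make.
    # This toy version just runs the opponent against a fixed dummy predictor;
    # a faithful simulation of contrarian(predictor) would recurse forever.
    return opponent(lambda _agent: 0)

def contrarian(pred):
    # Asks the predictor what it (contrarian) will do, then does the opposite.
    return 1 - pred(contrarian)

actual = contrarian(predictor)    # what the contrarian really plays
claimed = predictor(contrarian)   # what the predictor says it will play
print(actual, claimed)            # they disagree; no fixed predictor can be right here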

Wolpert's theorem just emphasizes the need for meta-learning in AGI systems. No single agent can dominate every game of PtB, but if the system is fed the game results from current and past games, it should learn how to create a better agent for the next PtB games, especially taking meta-game information into account.
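
A hypothetical sketch of what that meta-learning loop could look like (the thread doesn't specify PtB's rules or interfaces, so the agent builders and the payoff signal here are placeholders): keep a pool of candidate agent constructors, score each by its results in past games, and favor the empirically strongest one, with some exploration, when building the agent for the next game.

import random
from collections import defaultdict

# Placeholder agent builders; each returns an agent mapping a move history to a move.
AGENT_BUILDERS = {
    "random":    lambda: (lambda history: random.randint(0, 1)),
    "copy_last": lambda: (lambda history: history[-1] if history else 0),
    "majority":  lambda: (lambda history: int(sum(history) * 2 >= len(history)) if history else 0),
}

scores = defaultdict(list)  # builder name -> payoffs from past games (the "meta-game" data)

def choose_builder(epsilon=0.1):
    # Epsilon-greedy selection over past game results.
    if random.random() < epsilon or not scores:
        return random.choice(list(AGENT_BUILDERS))
    return max(scores, key=lambda k: sum(scores[k]) / len(scores[k]))

def record_result(name, payoff):
    scores[name].append(payoff)

# Usage: before each new game, build the agent the meta-learner currently favors,
# play the game, then feed the outcome back in.
name = choose_builder()
agent = AGENT_BUILDERS[name]()
record_result(name, payoff=1.0)  # placeholder payoff from the finished game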

