David Doshay wrote:

As an aside, the pro in question won the US Open, so comments about him being a weak pro seem inappropriate. I spoke with him a number of times, and I firmly believe that he took the match as seriously as any other public exhibition of his skill that involves handicap stones for the opponent. He has an open mind about computer Go, unlike some other pro players I talked to here at the congress.

Kim did state before the match that in his opinion computers would never be as strong as the best humans. I don't believe he was asked afterward whether he had any reason to change that opinion.

After the banquet last night, I was talking to Peter Drake when Kim walked up and started asking questions about how MoGo played go. Peter explained it very well, but I'm not sure Kim completely understood.

BTW, David, I also pointed out to Chris Garlock that you'd been misquoted, shortly after the story went up on the AGA website, but he didn't reply.


Also BTW, let me introduce myself to the list and ask a question. I'm a 2D go player, and also an AI researcher affiliated with Dartmouth. I did my Ph.D. at MIT on games and puzzles. However, I never seriously worked on computer go, because I was always convinced go was "AI-complete" -- that we would have strong go programs when we had general-purpose AI, and not before. Mostly my current work is on general-purpose AI heavily inspired by neuroscience.

However, with the advent of the Monte Carlo programs I'm about ready to change my mind. I'm tempted to try to work in the area and see whether I can contribute anything.


Now, my question. Sorry if this has already been beaten to death here. After the match, one of the MoGo programmers mentioned that doubling the computation led to a 63% win rate against the baseline version, and that so far this scaling seemed to continue as computation power increased.

So -- quick back-of-the-envelope calculation; tell me where I am wrong. A 63% win rate is about half a stone of advantage in go, so we need 4x processing power (two doublings) to gain a full stone. With Moore's law doubling computing power roughly every two years, that's about 4 years per stone. Kim estimated that the game with MoGo would be hard at 8 stones. Eight stones at 4 years each suggests that in 32 years a supercomputer comparable to the one that played in this match would be as strong as Kim.
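For concreteness, the arithmetic above can be sketched as follows. Every number here is an assumption taken from the post (the 63%-per-doubling figure, the half-stone equivalence, the 2-year Moore's law period, and Kim's 8-stone estimate), not measured data:

```python
# Back-of-the-envelope sketch of the scaling argument.
# Assumptions (all from the post, none verified):
#   - doubling computation -> 63% win rate vs. the previous version
#   - a 63% win rate ~ half a stone of go strength
#   - Moore's law: computing power doubles roughly every 2 years
#   - MoGo is about 8 stones weaker than a top pro (Kim's estimate)

stones_per_doubling = 0.5   # half a stone gained per doubling of power
years_per_doubling = 2.0    # Moore's law period
stones_to_gain = 8          # Kim's handicap estimate

doublings_needed = stones_to_gain / stones_per_doubling  # 16 doublings
years_needed = doublings_needed * years_per_doubling

print(years_needed)  # -> 32.0
```

The same chain gives the intermediate figure in the post: two doublings (4x power) per full stone, hence about 4 years per stone.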

This calculation is optimistic in assuming that the 63% win rate per doubling can be scaled meaningfully and indefinitely, especially when strength is measured against other opponents rather than against a weaker version of the same program. It's also pessimistic in assuming there will be no improvement in the Monte Carlo technique itself.

But still, 32 years seems like a surprisingly long time, much longer than the 10 years that seems intuitively reasonable. Naively, it would seem that improvements in the Monte Carlo algorithms could gain some small number of stones in strength for fixed computation, but that would just shrink the 32 years by maybe a decade.

How do others feel about this?

I guess I should also go on record as believing that if it really does take 32 years, we *will* have general-purpose AI before then.


Bob Hearn

_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
