From: Bob Hearn <[EMAIL PROTECTED]>
>Now, my question. Sorry if this has already been beaten to death here. After
>the match, one of the MoGo programmers mentioned that doubling the computation
>led to a 63% win rate against the baseline version, and that so far this
>scaling seemed to continue as computation power increased.
>So -- quick back-of-the-envelope calculation, tell me where I am wrong. 63%
>win rate = about half a stone advantage in go. So we need 4x processing power
>to increase by a stone. At the current rate of Moore's law, that's about 4
>years. Kim estimated that the game with MoGo would be hard at 8 stones. That
>suggests that in 32 years a supercomputer comparable to the one that played in
>this match would be as strong as Kim.
>This calculation is optimistic in assuming that you can meaningfully scale the
>63% win rate indefinitely, especially when measuring strength against other
>opponents, and not a weaker version of itself. It's also pessimistic in
>assuming there will be no improvement in the Monte Carlo technique.
>But still, 32 years seems like a surprisingly long time, much longer than the
>10 years that seems intuitively reasonable. Naively, it would seem that
>improvements in the Monte Carlo algorithms could gain some small number of
>stones in strength for fixed computation, but that would just shrink the 32
>years by maybe a decade.
>How do others feel about this?
>I guess I should also go on record as believing that if it really does take 32
>years, we *will* have general-purpose AI before then.

I suspect that MoGo -- good as it is -- is far from the optimal algorithm. In ten years' time, new methods will emerge that should yield considerable improvements.
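For concreteness, here is the arithmetic behind the 32-year figure as a small Python sketch. The 4x-per-stone and doubling-period numbers are the assumptions from the quoted post, not measured values:

```python
# Back-of-the-envelope sketch of the scaling argument quoted above:
# 63% win rate per 2x compute is read as roughly half a stone per
# doubling, i.e. one stone per 4x compute (assumptions, not data).
import math

def years_to_gain(stones, factor_per_stone=4.0, doubling_years=2.0):
    """Years of Moore's-Law growth needed to gain `stones` of strength."""
    total_factor = factor_per_stone ** stones   # e.g. 4^8 = 65536x for 8 stones
    doublings = math.log2(total_factor)         # two doublings per stone
    return doublings * doubling_years

print(years_to_gain(8))                      # 2-year doubling -> 32.0 years
print(years_to_gain(8, doubling_years=1.5))  # 18-month doubling -> 24.0 years
```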
In addition, the 800-core supercomputer used was not today's "state of the art": the MoGo team almost obtained a 3000-core supercomputer for this exhibition, which would have been nearly 4x as large. As Computer Go becomes more exciting, we may be able to borrow still more impressive hardware -- the current state of the art is 65k or even 128k processors.

Third, the 32-year figure is highly sensitive to one's assumed rate for Moore's Law. A doubling every 18 months is a quadrupling every 36 months -- three years per stone rather than four -- and this factor alone shrinks the 32 years to 24. We may see an even faster rate of growth: GPUs have been improving faster than general-purpose CPUs, and the coming multicore processors may have more in common with GPUs than with previous generations of x86 cores -- we may revert to simpler RISC cores, which use less silicon.

In short, reaching the top of the hardware pyramid would be roughly a thousand-fold improvement in processing power -- about 4 to the 5th power, or more than half way to the eight-stone goal. During the same period, the petaflops race and Moore's Law would continue to increase the power of the Top 500. Stir in some algorithmic improvements, and we should be within range in something closer to ten years, not 32.

If "general-purpose AI" means an AI that can solve every problem at the expert level, that is probably not a prerequisite for solving one problem at the expert level. We're not asking for a program that can skillfully play a teaching game against a weaker player, as a human pro would, nor are we asking that it be able to dance the salsa; it just needs to beat a pro in an even game.

We have only just started optimizing the search. What do humans know that computers don't? How do pros manage to play well without the ability to examine trillions of playouts?
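The hardware-gap arithmetic can be checked the same way: under the assumed 4x-compute-per-stone reading, a given raw speedup buys log-base-4 of that factor in stones. This is a sketch of my own numbers, not the MoGo team's:

```python
# How many "stones" of strength a raw speedup buys, under the assumed
# 4x-compute-per-stone conversion (an assumption, not an established fact).
import math

def stones_from_speedup(factor, factor_per_stone=4.0):
    """Stones gained from a `factor`-times increase in compute."""
    return math.log(factor) / math.log(factor_per_stone)

print(round(stones_from_speedup(1000), 2))          # thousand-fold ~= 4^5 -> ~4.98 stones
print(round(stones_from_speedup(128_000 / 800), 2)) # 800 cores -> 128k is 160x -> ~3.66 stones
```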
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/