As with most algorithms, an efficient serial search (alpha-beta, UCT, etc.)
becomes less efficient when made parallel.  I think you can still see
significant improvement from parallel machines, but you are likely to hit
diminishing returns.

I can think of two parallel approaches:
1. Instruct multiple instances to simulate from the same starting point
(getting higher-confidence estimates more quickly)
2. Instruct different instances to examine different subtrees.

Option #1 has the drawback that it may sample a subtree more than necessary
before rejecting it.  It may also lose some subtree information between
processes, since each instance keeps its own statistics.
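To make option #1 concrete, here is a minimal Python sketch: several
processes run independent playouts from the same position and their counts
are pooled into one estimate.  Everything in it (random_playout, the dummy
position) is a placeholder of my own invention, not code from any real
program:

import random
from multiprocessing import Pool

# Placeholder playout: a real engine would supply its own position
# type and playout policy here.
def random_playout(position):
    # Return 1 for a win, 0 for a loss, from one random game.
    return 1 if random.random() < 0.5 else 0

def simulate_batch(args):
    position, n_playouts = args
    return sum(random_playout(position) for _ in range(n_playouts))

def pooled_estimate(position, n_workers=4, playouts_per_worker=10000):
    # Option #1: every worker simulates from the same starting point,
    # and the counts are pooled into one higher-confidence win-rate
    # estimate.  No statistics flow between processes, so a losing
    # subtree gets re-sampled in every worker before being rejected.
    with Pool(n_workers) as pool:
        wins = pool.map(simulate_batch,
                        [(position, playouts_per_worker)] * n_workers)
    return sum(wins) / (n_workers * playouts_per_worker)

if __name__ == "__main__":
    print(pooled_estimate(position=None))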

Option #2 has the drawback that different instances may dwell on a
particular subtree for too long before being instructed to simulate
something else.
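And a matching sketch of option #2, again with made-up placeholders
(legal_moves, play, random_playout) standing in for a real engine.  The
static one-subtree-per-worker split is exactly where the dwelling problem
comes from: a worker keeps sampling its assigned subtree even after it
looks hopeless.

import random
from multiprocessing import Pool

# Placeholder stand-ins for a real engine's move generator, board
# update, and playout policy.
def legal_moves(position):
    return list(range(9))        # pretend there are nine candidate moves

def play(position, move):
    return (position, move)      # pretend child position

def random_playout(position):
    return 1 if random.random() < 0.5 else 0

def evaluate_subtree(args):
    position, move, n_playouts = args
    child = play(position, move)
    wins = sum(random_playout(child) for _ in range(n_playouts))
    return move, wins / n_playouts

def best_move_by_subtree(position, n_playouts=10000):
    # Option #2: each worker examines a different subtree (here, one
    # candidate move each).  Returns the (move, win-rate) pair that
    # scored best.  Nothing rebalances the work between workers.
    jobs = [(position, move, n_playouts) for move in legal_moves(position)]
    with Pool() as pool:
        results = pool.map(evaluate_subtree, jobs)
    return max(results, key=lambda r: r[1])

if __name__ == "__main__":
    print(best_move_by_subtree(position=None))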



On 4/11/07, Tom Cooper <[EMAIL PROTECTED]> wrote:

Thank you, Sylvain, for conducting these experiments.  In my opinion, we
have had some very enlightening results posted here recently.  I have to
admit, I'm surprised at how well the program seems to scale.  Fortunately,
I didn't make a bet. :)

Taking for granted that these results indeed show what they seem to, and
combining them with the success of Monte-Carlo methods on 7x7 and 9x9
boards, I'll have to change my opinion about the future of computer go
quite radically.

It now seems believable to me that computer go will go the way of
computer chess, and within the next decade or so.  Or maybe Chrilly will
make a monster go machine even before that.

Could somebody please comment on the likely usefulness of massively
parallel machines to UCT-like algorithms?

Thanks again.
Tom.

_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
