Re: [SPAM] Re: [computer-go] Re: First ever win of a computer against a pro 9P as black (game of Go, 9x9).

2009-10-30 Thread Olivier Teytaud
> I think in correspondence chess humans still hold against computers
>
> Petri

Are there sometimes games organized like that? This is really impressive to me. (Maybe MCTS might win against alpha-beta in chess with huge time settings :-) )

Re: [SPAM] Re: [computer-go] Re: First ever win of a computer against a pro 9P as black (game of Go, 9x9).

2009-10-30 Thread Petri Pitkanen
I can't recall any official challenges. I do remember some such statement in some other challenge, but failed to google it up. Human vs. computer chess challenges are not likely to happen anymore. What would be the point for the human? Hydra could probably beat anyone. And as processors get faster any of t...

Re: [SPAM] Re: [computer-go] Re: First ever win of a computer against a pro 9P as black (game of Go, 9x9).

2009-10-30 Thread Seo Sanghyeon
2009/10/30 Olivier Teytaud:
>> I think in correspondence chess humans still hold against computers
>>
>> Petri
>
> Are there sometimes games organized like that? This is really impressive to
> me.

Arno Nickel played three games with Hydra over a few months in 2005. He won 2.5-0.5. http://en.wikipedia.org/wiki/Arno_Nickel

Re: [SPAM] Re: [SPAM] Re: [computer-go] Re: First ever win of a computer against a pro 9P as black (game of Go, 9x9).

2009-10-30 Thread Olivier Teytaud
Thanks a lot for this information. This is really very interesting and not widely known. Maybe chess is less closed than I would have believed :-)

Olivier

> Arno Nickel played three games with Hydra over a few months in 2005.
> He won 2.5-0.5
>
> http://en.wikipedia.org/wiki/Arno_Nickel
>
> I...

Re: [SPAM] Re: [computer-go] First ever win of a computer against a pro 9P as black (game of Go, 9x9).

2009-10-30 Thread Seo Sanghyeon
2009/10/30 terry mcintyre:
> This may be useful in computer Go. One of the reasons human pros do well is
> that they compute certain sub-problems once, and don't repeat the effort
> until something important changes. They know in an instant that certain
> positions are live or dead or seki; they k...

[computer-go] MPI vs Thread-safe

2009-10-30 Thread Brian Sheppard
> I personally just use root parallelization in Pachi

I think this answers my question; each core in Pachi independently explores a tree, and the master thread merges the data. This is even though you have shared memory on your machine.

> Have you read the Parallel Monte-Carlo Tree Search paper?
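A minimal sketch of the root parallelization scheme described above: every worker searches its own tree from the root position, and only the per-move win/visit counts at the root are merged at the end. All names here (RootStats, run_worker, best_root_move) are illustrative, and the random "playout" is a stand-in for a real MCTS descent; this is not Pachi's actual code.

```cpp
// Hypothetical sketch of root parallelization: each worker builds an
// independent tree; only root statistics are summed when choosing a move.
#include <random>
#include <thread>
#include <vector>

struct RootStats {
    std::vector<long> visits;
    std::vector<long> wins;
    explicit RootStats(int moves) : visits(moves, 0), wins(moves, 0) {}
};

void run_worker(int num_moves, int playouts, unsigned seed, RootStats* out) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> pick(0, num_moves - 1);
    std::bernoulli_distribution result(0.5);       // stand-in for a real playout
    for (int i = 0; i < playouts; ++i) {
        int move = pick(rng);                      // real code: UCT tree descent
        out->visits[move] += 1;
        out->wins[move]   += result(rng) ? 1 : 0;
    }
}

int best_root_move(int num_moves, int workers, int playouts_per_worker) {
    std::vector<RootStats> local(workers, RootStats(num_moves));
    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w)
        pool.emplace_back(run_worker, num_moves, playouts_per_worker,
                          1000u + w, &local[w]);
    for (auto& t : pool) t.join();

    // Merge step: sum visits and wins over all independent trees, then pick
    // the most-visited root move, as in plain root parallelization.
    RootStats total(num_moves);
    for (const auto& s : local)
        for (int m = 0; m < num_moves; ++m) {
            total.visits[m] += s.visits[m];
            total.wins[m]   += s.wins[m];
        }
    int best = 0;
    for (int m = 1; m < num_moves; ++m)
        if (total.visits[m] > total.visits[best]) best = m;
    return best;
}
```

Compared with sharing one tree between threads this needs no locking at all; the price is that the workers duplicate each other's work near the root, which is part of why the scaling question comes up later in this thread.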

Re: [computer-go] MPI vs Thread-safe

2009-10-30 Thread Jason House
Sent from my iPhone

On Oct 30, 2009, at 9:53 AM, "Brian Sheppard" wrote:

confirming the paper's finding that the play improvement is larger than multiplying number of sequential playouts appropriately.

Well, this is another reason why I doubt the results from the Mango paper. Paral...

[computer-go] MPI vs Thread-safe

2009-10-30 Thread Brian Sheppard
> I only share UCT wins and visits, and the MPI version only
> scales well to 4 nodes.

The scalability limit seems very low. Just curious: what is the policy for deciding what to synchronize? I recall that MoGo shared only the root node.
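For reference, sharing only the root node the way this message recalls MoGo doing can be as small as one all-reduce per statistics array per synchronization step. The sketch below is a guess at the communication pattern, not MoGo's actual code; the MPI calls themselves are standard.

```cpp
// Hypothetical sketch of "share only the root node" over MPI: every rank keeps
// its own tree, and periodically the win/visit counts of the root's children
// are summed across all ranks.
#include <mpi.h>
#include <vector>

void sync_root(std::vector<long long>& visits,   // visits[i] for root child i
               std::vector<long long>& wins) {   // wins[i]   for root child i
    std::vector<long long> visits_sum(visits.size());
    std::vector<long long> wins_sum(wins.size());
    MPI_Allreduce(visits.data(), visits_sum.data(), (int)visits.size(),
                  MPI_LONG_LONG, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(wins.data(), wins_sum.data(), (int)wins.size(),
                  MPI_LONG_LONG, MPI_SUM, MPI_COMM_WORLD);
    visits.swap(visits_sum);   // every rank now sees the global root statistics
    wins.swap(wins_sum);
}
```

A real implementation would exchange increments since the last synchronization rather than absolute totals, otherwise repeated all-reduces double-count; the sketch only shows how little data a root-only policy has to move.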

[computer-go] Reservations about Parallelization Strategy

2009-10-30 Thread Brian Sheppard
While re-reading the parallelization papers, I tried to formulate why I thought that they couldn't be right. The issue with Mango reporting a super-linear speed-up was an obvious red flag, but that doesn't mean that their conclusions were wrong. It just means that Mango's exploration policy needs t

Re: [computer-go] MPI vs Thread-safe

2009-10-30 Thread Petr Baudis
On Fri, Oct 30, 2009 at 07:53:15AM -0600, Brian Sheppard wrote:
> > I personally just use root parallelization in Pachi
>
> I think this answers my question; each core in Pachi independently explores
> a tree, and the master thread merges the data. This is even though you have
> shared memory on yo...

RE: [computer-go] MPI vs Thread-safe

2009-10-30 Thread David Fotland
I share all UCT nodes with more than N visits, where N is currently 100, but performance doesn't seem very sensitive to N. Does MoGo share RAVE values as well over MPI? I agree that low scaling is a problem, and I don't understand why. It might be the MFGO bias. With low numbers of playouts M...
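A rough sketch of the selection policy described above (share every UCT node with more than N visits), assuming a simple pointer-based tree; the Node layout is invented for illustration and is not Many Faces' data structure.

```cpp
// Hypothetical sketch: walk the tree and collect every node whose visit count
// exceeds a threshold (N = 100 in the message above), so that only those nodes
// get packed into the next MPI message.
#include <cstdint>
#include <vector>

struct Node {
    uint64_t hash;               // identifies the position across processes
    int visits;
    int wins;
    std::vector<Node*> children;
};

void collect_shared(const Node* n, int threshold,
                    std::vector<const Node*>& out) {
    if (n == nullptr || n->visits <= threshold)
        return;                  // children of this node can only have fewer visits
    out.push_back(n);
    for (const Node* c : n->children)
        collect_shared(c, threshold, out);
}
```

Because a child can never accumulate more visits than its parent, cutting the recursion at the first node below the threshold visits exactly the set of qualifying nodes.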

RE: [SPAM] Re: [computer-go] First ever win of a computer against a pro 9P as black (game of Go, 9x9).

2009-10-30 Thread David Fotland
Many Faces caches both local tactical results (ladders, etc.) and life-and-death reading results. For tactics it records a "shadow". For life and death it saves the whole tree. The tactical caches are still active in the UCT-MC code, since reading tactics is part of move generation. The Life a...
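The "shadow" idea, caching a tactical result together with the set of points whose change would invalidate it, might look roughly like the sketch below. This is a guess at the mechanism from the description in the message, not Many Faces' actual data structures.

```cpp
// Hypothetical sketch of a tactical-result cache with a "shadow": the cached
// answer (e.g. "this ladder works") stays valid until a stone appears on or
// disappears from one of the points the reading actually touched.
#include <cstdint>
#include <iterator>
#include <unordered_map>
#include <vector>

struct TacticalResult {
    bool works;                      // e.g. the ladder is good for the attacker
    std::vector<int> shadow;         // board points the search depended on
};

class TacticalCache {
public:
    void store(uint64_t key, TacticalResult r) { cache_[key] = std::move(r); }

    const TacticalResult* lookup(uint64_t key) const {
        auto it = cache_.find(key);
        return it == cache_.end() ? nullptr : &it->second;
    }

    // Called whenever a move is played or a stone is captured at `point`:
    // drop every cached result whose shadow contains that point.
    void invalidate(int point) {
        for (auto it = cache_.begin(); it != cache_.end();) {
            bool hit = false;
            for (int p : it->second.shadow)
                if (p == point) { hit = true; break; }
            it = hit ? cache_.erase(it) : std::next(it);
        }
    }

private:
    std::unordered_map<uint64_t, TacticalResult> cache_;
};
```

The point of the shadow is that a result survives most moves elsewhere on the board, so the expensive reading is redone only when something it actually depends on changes.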

[computer-go] MPI vs Thread-safe

2009-10-30 Thread Brian Sheppard
>> Parallelization *cannot* provide super-linear speed-up.
>
> I don't see that at all.

This is standard computer science stuff, true of all parallel programs and not just Go players. No parallel program can be better than N times a serial version. The result follows from a simulation argument. Su...
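Since the snippet cuts off mid-argument, here is a paraphrase of the standard simulation argument it refers to (my wording, not the rest of the original message):

```latex
% One core can time-slice the work of N cores with at most constant overhead,
% so a serial simulation of the N-core search finishes in about N * T_N.
% The best serial program is no slower than that simulation, hence:
\[
  T_1^{\text{best}} \;\le\; N \cdot T_N
  \qquad\Longrightarrow\qquad
  \text{speedup} \;=\; \frac{T_1^{\text{best}}}{T_N} \;\le\; N .
\]
```

Any apparent super-linear speedup therefore has to come from the parallel version effectively searching a different, better-ordered tree rather than from the parallelism itself, which is the red flag raised about the Mango result elsewhere in this thread.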

Re: [computer-go] New list member

2009-10-30 Thread René van de Veerdonk
Welcome Aldric,

Not a frequent poster myself, here are two resources that you may find useful.

1) An extensive library of articles related to computer-go is collected at http://www.citeulike.org/group/5884/library. This list provides a wealth of articles tracing back many years and used to be ver...

[computer-go] MPI vs Thread-safe

2009-10-30 Thread Brian Sheppard
Back-of-envelope calculation: MFG processes 5K nodes/sec/core * 4 cores per process = 20K nodes/sec/process. Four processes make 80K nodes/sec. If you think for 30 seconds (pondering + move time), then you are at 2.4 million nodes. Figure about 25,000 nodes having 100 visits or more. UCT data is ro...
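The arithmetic above can be checked mechanically. The original message is cut off before it gives the size of the UCT data, so the 32 bytes per shared node below is purely an assumption for illustration; everything else is taken from the post.

```cpp
// Back-of-envelope check of the numbers in the message above. The bytes-per-
// node figure is ASSUMED (the original text is truncated); the rest is quoted.
#include <cstdio>

int main() {
    const double nodes_per_sec_core = 5000.0;            // MFG speed per core
    const double cores_per_process  = 4.0;
    const double processes          = 4.0;
    const double seconds            = 30.0;              // pondering + move time
    const double share_fraction     = 25000.0 / 2.4e6;   // ~1% of nodes reach 100 visits
    const double bytes_per_node     = 32.0;              // ASSUMED payload per node

    double nodes_per_sec = nodes_per_sec_core * cores_per_process * processes;
    double total_nodes   = nodes_per_sec * seconds;       // ~2.4 million
    double shared_nodes  = total_nodes * share_fraction;  // ~25,000
    double bytes_to_send = shared_nodes * bytes_per_node; // per synchronization

    std::printf("nodes/sec %.0f, total %.0f, shared %.0f, bytes %.0f\n",
                nodes_per_sec, total_nodes, shared_nodes, bytes_to_send);
    return 0;
}
```

Under that assumed payload the synchronization traffic per move stays well under a megabyte, which suggests raw bandwidth is not what limits the scaling discussed in this thread.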

Re: [computer-go] MPI vs Thread-safe

2009-10-30 Thread Petr Baudis
On Fri, Oct 30, 2009 at 10:50:05AM -0600, Brian Sheppard wrote:
> >> Parallelization *cannot* provide super-linear speed-up.
> >
> > I don't see that at all.
>
> This is standard computer science stuff, true of all parallel programs and
> not just Go players. No parallel program can be better than...

RE: [computer-go] MPI vs Thread-safe

2009-10-30 Thread David Fotland
In the MPI runs we use an 8-core node, so the playouts per node are higher. I don't ponder, since the program isn't scaling anyway. The number of nodes with high visits is smaller, and I only send nodes that changed since the last send. I do progressive unpruning, so most children have zero visits...
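The "only send nodes that changed since the last send" policy could be tracked with one extra counter per node, roughly as below; the layout is hypothetical, not Many Faces' code.

```cpp
// Hypothetical sketch of delta synchronization: remember how many visits a
// node had when it was last broadcast, and ship it again only if it has been
// visited since. Combined with a visit threshold this keeps messages small.
#include <cstdint>
#include <vector>

struct SyncNode {
    uint64_t hash;            // position identity shared across MPI ranks
    int visits;
    int wins;
    int visits_at_last_send;  // snapshot taken when the node was last sent
};

std::vector<SyncNode*> changed_since_last_send(std::vector<SyncNode*>& nodes,
                                               int min_visits) {
    std::vector<SyncNode*> out;
    for (SyncNode* n : nodes) {
        if (n->visits >= min_visits && n->visits != n->visits_at_last_send) {
            out.push_back(n);
            n->visits_at_last_send = n->visits;  // mark as sent in this round
        }
    }
    return out;
}
```

With progressive unpruning most children never reach the threshold at all, so the changed-node list stays short even late in the search.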