>I personally just use root parallelization in Pachi

I think this answers my question: each core in Pachi independently explores
its own tree, and the master thread merges the results, even though you
have shared memory on your machine.
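
Here is a minimal sketch of that scheme with POSIX threads, assuming a
hypothetical playout_from_move() that plays one random game after a given
root move and returns 1 for a win. It only shows the shape of root
parallelization (private statistics per worker, merged by the master), not
Pachi's actual code.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_MOVES   4      /* toy number of candidate root moves */
#define NUM_THREADS 4
#define PLAYOUTS    10000  /* playouts per worker thread */

typedef struct {
    long visits[NUM_MOVES];
    long wins[NUM_MOVES];
    unsigned seed;
} worker_stats;

/* Placeholder: a real engine would play a full random game here. */
static int playout_from_move(int move, unsigned *seed) {
    (void)move;
    return rand_r(seed) % 2;
}

/* Each worker searches on its own, touching only its private counters. */
static void *worker(void *arg) {
    worker_stats *s = arg;
    for (long i = 0; i < PLAYOUTS; i++) {
        int move = rand_r(&s->seed) % NUM_MOVES;  /* stand-in for tree descent */
        s->visits[move]++;
        s->wins[move] += playout_from_move(move, &s->seed);
    }
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_THREADS];
    worker_stats stats[NUM_THREADS] = {0};
    long visits[NUM_MOVES] = {0}, wins[NUM_MOVES] = {0};

    for (int t = 0; t < NUM_THREADS; t++) {
        stats[t].seed = 1234 + t;               /* independent RNG streams */
        pthread_create(&tid[t], NULL, worker, &stats[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++) {
        pthread_join(tid[t], NULL);
        for (int m = 0; m < NUM_MOVES; m++) {   /* the master merges the trees */
            visits[m] += stats[t].visits[m];
            wins[m]   += stats[t].wins[m];
        }
    }
    for (int m = 0; m < NUM_MOVES; m++)
        printf("move %d: %ld/%ld\n", m, wins[m], visits[m]);
    return 0;
}

Because each worker touches only its own counters, no locking is needed
until the final merge.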


>Have you read the Parallel Monte-Carlo Tree Search paper?

Yes, both the Mango team's work and Bruno Bouzy's pioneering work.


>It sums up the possibilities nicely.

Well, I have doubts. I am not saying anything negative regarding their work,
which I am confident is an accurate representation of the experimental data.
But there are many possibilities for parallelization that are not covered by
those papers.

For instance, the dichotomy between "global" and "local" mutexes is
artificial. You can take any number N of physical locks and multiplex the
locks for tree nodes onto those N. This gives a range of intermediate
algorithms that have less contention than "global" and need less lock
storage than "local."
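
As a concrete illustration, here is a small sketch of that multiplexing
(often called lock striping), assuming a tree node is identified by its
pointer; the names N_LOCKS, lock_for() and node_update() are mine, not
from any of the papers. Setting N_LOCKS to 1 recovers the "global" scheme,
and making it very large approaches the "local" one.

#include <pthread.h>
#include <stdint.h>

#define N_LOCKS 64   /* tunable: trades contention against lock storage */

static pthread_mutex_t stripe[N_LOCKS];

/* Call once before the search threads start. */
void stripe_init(void) {
    for (int i = 0; i < N_LOCKS; i++)
        pthread_mutex_init(&stripe[i], NULL);
}

/* Pick the physical mutex that guards a given tree node. */
static pthread_mutex_t *lock_for(const void *node) {
    uintptr_t h = (uintptr_t)node;
    h ^= h >> 7;                 /* cheap mixing of the pointer bits */
    return &stripe[h % N_LOCKS];
}

/* Example of an update protected by the node's stripe lock. */
void node_update(void *node, int won) {
    pthread_mutex_t *m = lock_for(node);
    pthread_mutex_lock(m);
    /* ... add `won` to the node's win count, bump its visit count ... */
    (void)won;
    pthread_mutex_unlock(m);
}

Two nodes that hash to the same stripe still contend with each other, but
with 64 stripes such collisions are far rarer than with a single global
lock.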

You also have possibilities for largely lockless thread safety. For
instance, the Intel architecture has atomic memory access instructions
(such as compare-and-swap) that allow data to be updated safely without
locks. Remi Coulom published a paper on this subject.
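
A minimal sketch of that idea using C11 atomics (which compile down to
those atomic x86 instructions) might look like the following; it shows
only the general idea of lock-free counter updates, not the specific
scheme in that paper.

#include <stdatomic.h>

typedef struct {
    atomic_long visits;
    atomic_long wins;
} node_stats;

/* Called concurrently from many search threads after a playout. */
void node_record(node_stats *n, int won) {
    atomic_fetch_add_explicit(&n->visits, 1, memory_order_relaxed);
    if (won)
        atomic_fetch_add_explicit(&n->wins, 1, memory_order_relaxed);
}

No thread ever blocks here; the worst case is some extra cache-line
traffic on hot nodes.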

I am not even sure that "leaf" parallelization is really ruled out. For
example, if a GPU-based implementation works, then leaf parallelization
must be reconsidered.
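
For reference, leaf parallelization just means running many playouts at
once from the same leaf and backing up their sum. A toy sketch with
OpenMP, where playout() is a hypothetical stand-in for a real random game,
would look like this; a GPU version would launch a kernel over the same
loop instead.

#include <stdlib.h>

/* Placeholder for a real playout from the leaf position; returns 1 on win. */
static int playout(const void *leaf_position, unsigned seed) {
    (void)leaf_position;
    return rand_r(&seed) % 2;
}

/* Total wins over k playouts run in parallel from one leaf. */
int leaf_playouts(const void *leaf_position, int k) {
    int wins = 0;
    #pragma omp parallel for reduction(+:wins)
    for (int i = 0; i < k; i++)
        wins += playout(leaf_position, (unsigned)i);
    return wins;
}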


> similar to what you probably mean by MPI, though without resyncs

MPI is the "Message Passing Interface," an industry-standard API for
high-performance computing.

It is used for sharing data among multiple processes (that is, no shared
memory). I recall that MoGo published that their massively scalable strategy
is based on this approach.
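
For concreteness, the basic mechanism is a collective operation that sums
statistics across processes. A minimal sketch (my own names, not MoGo's
code) that merges the visit counts of the root's children over all MPI
ranks could look like this:

#include <mpi.h>

#define NUM_MOVES 361   /* candidate moves at the root of a 19x19 game */

/* Every rank contributes its local counts and receives the global sum. */
void merge_root_stats(long local_visits[NUM_MOVES],
                      long global_visits[NUM_MOVES]) {
    MPI_Allreduce(local_visits, global_visits, NUM_MOVES,
                  MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
}

Each process keeps searching its own private tree and calls this every so
often, so only a small fixed-size array is exchanged rather than whole
trees.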


>confirming the paper's finding that the play improvement is
>larger than multiplying number of sequential playouts appropriately.

Well, this is another reason why I doubt the results from the Mango paper.
Parallelization *cannot* provide super-linear speed-up: a single thread
could always interleave the work of N parallel searchers in round-robin
fashion and reproduce their result in at most N times the time. So the
existence of super-linear speed-up proves that the underlying
single-threaded program is flawed, because it could be improved without
adding any hardware.

