>
>
> I think in correspondence chess humans still hold against computers
>
> Petri
>
Are such games sometimes organized? This is really impressive to me.
(maybe MCTS might win against alpha-beta in chess with huge time settings
:-) )
I can't recall any official challenges. I do remember some such statement in
some other challenge, but I couldn't find it with Google.
Human-computer chess challenges are not likely to happen anymore. What would
be the point for the human? Hydra could probably beat anyone. And as
processors get faster any of t
2009/10/30 Olivier Teytaud :
>> I think in correspondence chess humans still hold against computers
>>
>> Petri
>
> Are such games sometimes organized? This is really impressive to me.
Arno Nickel played three games with Hydra over a few months in 2005.
He won 2.5-0.5
http://en.wikipedia.org/wiki/Arno_Nickel
Thanks a lot for this information.
This is really very interesting and not widely known.
Maybe chess is less closed than I would have believed :-)
Olivier
>
>
> Arno Nickel played three games with Hydra over a few months in 2005.
> He won 2.5-0.5
>
> http://en.wikipedia.org/wiki/Arno_Nickel
>
> I
2009/10/30 terry mcintyre :
> This may be useful in computer Go. One of the reasons human pros do well is
> that they compute certain sub-problems once, and don't repeat the effort
> until something important changes. They know in an instant that certain
> positions are live or dead or seki; they k
>I personally just use root parallelization in Pachi
I think this answers my question; each core in Pachi independently explores
a tree, and the master thread merges the data. This is even though you have
shared memory on your machine.
>Have you read the Parallel Monte-Carlo Tree Search paper?
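As a rough sketch of root parallelization as described above (illustrative
Python, not Pachi's actual C implementation; the UCT descent and the playout
are stubbed out with random choices):

import multiprocessing
import random

def run_worker(seed, num_playouts, moves):
    # Build a private "tree": here just win/visit totals per root move.
    rng = random.Random(seed)
    stats = {m: [0, 0] for m in moves}        # move -> [wins, visits]
    for _ in range(num_playouts):
        m = rng.choice(moves)                 # stand-in for the UCT descent
        stats[m][0] += 1 if rng.random() < 0.5 else 0  # stand-in for a playout
        stats[m][1] += 1
    return stats

def root_parallel_search(moves, num_workers=4, playouts_per_worker=10000):
    # Each worker explores its own tree; the master merges root statistics.
    with multiprocessing.Pool(num_workers) as pool:
        results = pool.starmap(
            run_worker,
            [(seed, playouts_per_worker, moves) for seed in range(num_workers)])
    merged = {m: [0, 0] for m in moves}
    for stats in results:
        for m, (w, v) in stats.items():
            merged[m][0] += w
            merged[m][1] += v
    # Pick the most-visited move, as most MCTS programs do.
    return max(moves, key=lambda m: merged[m][1])

if __name__ == "__main__":
    print(root_parallel_search(["D4", "Q16", "C3", "K10"]))

The same merging step works whether the workers are separate processes on one
shared-memory machine or separate machines exchanging only root statistics.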
Sent from my iPhone
On Oct 30, 2009, at 9:53 AM, "Brian Sheppard" wrote:
confirming the paper's finding that the playing-strength improvement is larger
than that from an equivalent multiplication of sequential playouts.
Well, this is another reason why I doubt the results from the Mango
paper.
Paral
>I only share UCT wins and visits, and the MPI version only
>scales well to 4 nodes.
The scalability limit seems very low.
Just curious: what is the policy for deciding what to synchronize? I recall
that MoGo shared only the root node.
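A minimal sketch of root-only sharing in the MoGo style mentioned above (my
reconstruction, assuming mpi4py and that every rank enumerates the root's
children in the same order):

# Each rank keeps its own private tree; only the win/visit counts of the
# root's children are summed across ranks. A real program would exchange
# increments rather than totals to avoid double counting on repeated syncs.
from mpi4py import MPI
import numpy as np

def sync_root(root_children):
    # root_children: list of node objects with .wins and .visits fields.
    comm = MPI.COMM_WORLD
    local = np.array([[c.wins, c.visits] for c in root_children],
                     dtype=np.float64)
    total = np.empty_like(local)
    comm.Allreduce(local, total, op=MPI.SUM)   # element-wise sum over all ranks
    for c, (w, v) in zip(root_children, total):
        c.wins, c.visits = float(w), float(v)  # adopt the global sums locally

Sharing only the root keeps the message size independent of tree size, which
is presumably part of why it scales further than sharing deeper parts of the
tree.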
While re-reading the parallelization papers, I tried to formulate why I
thought that they couldn't be right. The issue with Mango reporting a
super-linear speed-up was an obvious red flag, but that doesn't mean that
their conclusions were wrong. It just means that Mango's exploration policy
needs t
On Fri, Oct 30, 2009 at 07:53:15AM -0600, Brian Sheppard wrote:
> >I personally just use root parallelization in Pachi
>
> I think this answers my question; each core in Pachi independently explores
> a tree, and the master thread merges the data. This is even though you have
> shared memory on yo
I share all UCT nodes with more than N visits, where N is currently 100, but
performance doesn't seem very sensitive to N.
Does MoGo share RAVE values as well over MPI?
I agree that low scaling is a problem, and I don't understand why.
It might be the MFGO bias. With low numbers of playouts M
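A sketch of the policy described above, sharing only UCT nodes with more than
N visits (hypothetical Node layout and move-path keys, not the actual MPI
code; a real program would exchange increments rather than raw totals):

VISIT_THRESHOLD = 100          # the N above

class Node:
    def __init__(self):
        self.wins = 0.0
        self.visits = 0
        self.children = {}     # move -> Node

def collect_shareable(node, path=(), out=None):
    # Gather (path, wins, visits) for heavily visited nodes. A child never
    # has more visits than its parent, so pruning the walk at light nodes
    # is safe.
    if out is None:
        out = []
    if node.visits > VISIT_THRESHOLD:
        out.append((path, node.wins, node.visits))
        for move, child in node.children.items():
            collect_shareable(child, path + (move,), out)
    return out

def merge_shared(root, shared):
    # Fold statistics received from another process into the local tree.
    for path, wins, visits in shared:
        node = root
        for move in path:
            node = node.children.setdefault(move, Node())
        node.wins += wins
        node.visits += visits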
Many Faces caches both local tactical results (ladders, etc.) and life-and-death
reading results. For tactics it records a "shadow". For life and death it
saves the whole tree.
The tactical caches are still active in the UCT-MC code since reading tactics
is part of move generation. The Life a
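An illustrative sketch of the "shadow" idea for a tactical cache (hypothetical
structures, not Many Faces' actual implementation): each cached result
remembers the set of points its search examined, and only a move inside that
set invalidates it.

class TacticalCache:
    def __init__(self):
        self._entries = {}   # key (e.g. a group id) -> (result, shadow points)

    def store(self, key, result, shadow):
        self._entries[key] = (result, frozenset(shadow))

    def lookup(self, key):
        entry = self._entries.get(key)
        return entry[0] if entry else None

    def on_move_played(self, point):
        # A move outside a result's shadow cannot change that result,
        # so drop only the entries whose reading touched this point.
        self._entries = {k: (res, shadow)
                         for k, (res, shadow) in self._entries.items()
                         if point not in shadow}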
>> Parallelization *cannot* provide super-linear speed-up.
>
>I don't see that at all.
This is standard computer science stuff, true of all parallel programs and
not just Go players. No parallel program can be better than N times a serial
version.
The result follows from a simulation argument. Su
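For reference, the simulation argument in one line (standard reasoning): a
single core can time-slice the work of N cores, so the best serial running
time T_1 satisfies

    T_1 <= N * T_N,   hence   speedup = T_1 / T_N <= N,

where T_N is the running time on N cores. Any apparent super-linear speed-up
therefore means the serial baseline was not the best possible serial
algorithm.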
Welcome Aldric,
I'm not a frequent poster myself, but here are two resources that you may find
useful.
1) an extensive library of articles related to computer-go is collected on
http://www.citeulike.org/group/5884/library
This list provides a wealth of articles tracing back many years and used to
be ver
Back of envelope calculation: MFG processes 5K nodes/sec/core * 4 cores per
process = 20K nodes/sec/process. Four processes makes 80K nodes/sec. If you
think for 30 seconds (pondering + move time) then you are at 2.4 million
nodes. Figure about 25,000 nodes having 100 visits or more. UCT data is
ro
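Spelling out the arithmetic above (the post's own figures, just multiplied
out):

# Back-of-envelope node counts for Many Faces over MPI, per the figures above.
nodes_per_sec_per_core = 5_000
cores_per_process      = 4
processes              = 4
think_time_sec         = 30                    # pondering + move time

nodes_per_sec = nodes_per_sec_per_core * cores_per_process * processes
total_nodes   = nodes_per_sec * think_time_sec
print(nodes_per_sec)    # 80000 nodes/sec
print(total_nodes)      # 2400000 nodes in 30 seconds
# Of these, roughly 25,000 nodes are estimated to reach 100 visits or more.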
On Fri, Oct 30, 2009 at 10:50:05AM -0600, Brian Sheppard wrote:
> >> Parallelization *cannot* provide super-linear speed-up.
> >
> >I don't see that at all.
>
> This is standard computer science stuff, true of all parallel programs and
> not just Go players. No parallel program can be better than
In the MPI runs we use an 8-core node, so the playouts per node are higher.
I don't ponder, since the program isn't scaling anyway.
The number of nodes with high visits is smaller, and I only send nodes that
changed since the last send.
I do progressive unpruning, so most children have zero visits
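A sketch of sending only the nodes that changed since the last send
(hypothetical layout mapping a node id to (wins, visits), not the poster's
actual MPI code):

def collect_deltas(stats, last_sent):
    # stats, last_sent: dicts mapping node id -> (wins, visits).
    # Only nodes whose visit count moved since the previous send are shipped,
    # and only the increments are sent, so repeated syncs never double count.
    deltas = {}
    for nid, (wins, visits) in stats.items():
        prev_w, prev_v = last_sent.get(nid, (0.0, 0))
        if visits != prev_v:
            deltas[nid] = (wins - prev_w, visits - prev_v)
            last_sent[nid] = (wins, visits)
    return deltas

def apply_deltas(stats, deltas):
    # Fold increments received from another process into the local statistics.
    for nid, (dw, dv) in deltas.items():
        wins, visits = stats.get(nid, (0.0, 0))
        stats[nid] = (wins + dw, visits + dv)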