In message
<8b686dd80902161848q5fe39a83o56a934c089692...@mail.gmail.com>, Eric
Boesch writes
An amateur 5D also beat Mogo with 3 handicap. I would love to see more
serious games between top programs and roughly evenly matched human
opponents.
Some are listed at http://www.computer-go.info/h-c
terry mcintyre wrote:
Does Fuego make use of multiple cores? Does it require some switch setting to
do so?
The number of threads is controlled by the GTP command "uct_param_search
number_threads". On Intel and AMD CPUs, you should also set
""uct_param_search lock_free 1", see
http://www
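For anyone who wants to try this, a minimal GTP snippet (sent to Fuego before genmove; the two commands are the ones quoted above, and the thread count of 4 is just an example):

# enable multi-threaded search plus the lock-free mode recommended for x86 CPUs
uct_param_search number_threads 4
uct_param_search lock_free 1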
On Feb 16, 2009, at 5:45 PM, Andy wrote:
See attached a copy of the .sgf. It was played privately on KGS so you
can't get it there directly. One of the admins cloned it and I saved
it off locally.
I changed the result to be B+4.5 instead of W+2.5.
Here is another copy of the game record, wit
At the moment I (and another member of my group) are doing research on
applying machine learning to constructing a static evaluator for Go
positions (generally by predicting the final ownership of each point
on the board and then using this to estimate a probability of
winning). We are looking for
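As a concrete illustration of the last step described above (turning predicted per-point ownership into a winning probability), here is a rough Python sketch; the logistic squashing and all of the names are my own assumptions, not the posters' actual method:

import math

def estimate_win_probability(ownership, komi=6.5, scale=2.0):
    """Estimate P(Black wins) from per-point ownership predictions.

    ownership -- floats in [0, 1], one per board point: the predicted
                 probability that Black ends up owning the point.
    scale     -- assumed temperature for the logistic squashing.
    """
    # Expected score for Black: each point contributes between -1 and +1.
    expected_score = sum(2.0 * p - 1.0 for p in ownership) - komi
    # Squash the expected margin into a probability of winning.
    return 1.0 / (1.0 + math.exp(-expected_score / scale))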
I'd be more than happy to work with you and the other members of your
group. I'm getting close to wrapping up a restructuring of my bot that
allows easily swapping out evaluation methods and search techniques.
As an example, here's the code that does a few basic MC searches:
static if (s
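(The snippet above is cut off in the archive. As a language-neutral illustration of the "swappable evaluator" idea, here is a small Python sketch; the Position interface (legal_moves, play) and the evaluator names are invented for the example and are not from the bot itself:)

class Evaluator:
    """Anything with evaluate(position) -> win probability for the side to move."""
    def evaluate(self, position):
        raise NotImplementedError

class TerritoryBalanceEvaluator(Evaluator):
    """Toy static evaluator; rough_territory_balance() is an assumed helper."""
    def evaluate(self, position):
        return position.rough_territory_balance()

def one_ply_search(position, evaluator):
    """Search code that knows nothing about Go beyond the two interfaces:
    pick the move whose resulting position is worst for the opponent."""
    return max(position.legal_moves(),
               key=lambda m: 1.0 - evaluator.evaluate(position.play(m)))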
While your goal is laudable, I'm afraid there is no such thing
as a "simple" tree search with a plug-in evaluator for Go. The
problem is that the move generator has to be very disciplined,
and the evaluator typically requires elaborate and expensive-to-maintain
data structures. It all tends to b
I am aware such a decoupled program might not exist, but I don't see
why one can't be created. When you say the "move generator has to be
very disciplined" what do you mean? Do you mean that the evaluator
might be used during move ordering somehow and that generating the
nodes to expand is tightl
>Do you mean that the evaluator might be used during move ordering somehow
>and that generating the nodes to expand is tightly coupled with the static
>evaluator?
That's the general idea.
No search program can afford to use a fan-out factor of 361. The information
about what to cut has to co
A simple alpha-beta searcher will only get a few plies deep on 19x19, so it won't
be very useful (unless your static evaluation function is so good that it
doesn't really need an alpha-beta searcher).
Dave
From: computer-go-boun...@computer-go.org on behalf of George Dahl
You're right of course. We have a (relatively fast) move pruning
algorithm that can order moves such that about 95% of the time, when
looking at pro games, the pro move will be in the first 50 in the
ordering. About 70% of the time the expert move will be in the top
10. So a few simple tricks li
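A sketch of how such figures can be measured, assuming you already have a predictor that returns legal moves ranked best-first (the predictor itself is not shown; all names are placeholders):

def prediction_rank_stats(predictor, labelled_positions, cutoffs=(10, 50)):
    """labelled_positions: (position, pro_move) pairs. Returns the fraction
    of pro moves the predictor places within each cutoff (top-10, top-50)."""
    hits = {c: 0 for c in cutoffs}
    total = 0
    for position, pro_move in labelled_positions:
        ranked = predictor(position)              # assumed: moves, best first
        rank = ranked.index(pro_move) + 1 if pro_move in ranked else None
        total += 1
        for c in cutoffs:
            if rank is not None and rank <= c:
                hits[c] += 1
    return {c: hits[c] / total for c in cutoffs}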
This is old and incomplete, but still is a starting point you might
find useful http://www.andromeda.com/people/ddyer/go/global-eval.html
General observations (from a weak player's point of view):
Go is played on a knife edge between life and death. The only evaluator
that matters is "is thi
On Tue, 2009-02-17 at 20:04 +0100, dave.de...@planet.nl wrote:
> A simple alpha-beta searcher will only get a few plies deep on 19x19, so
> it won't be very useful (unless your static evaluation function is so
> good that it doesn't really need an alpha-beta searcher)
I have to say that I believe this
I really don't like the idea of ranking moves and scoring based on the
distance to the top of a list for a pro move. This is worthless if we
ever want to surpass humans (although this isn't a concern now, it is
in principle) and we have no reason to believe a move isn't strong
just because a pro d
George Dahl wrote:
I guess another question is: what would you need to see a static
evaluator do to be so convinced it was useful that you then built a
bot around it? Would it need to win games all by itself with one ply
lookahead?
Here is one way to look at it: Since a search tends to se
On Tue, Feb 17, 2009 at 8:23 PM, George Dahl wrote:
> It is very hard for me to figure out how good a given evaluator is (if
> anyone has suggestions for this please let me know) without seeing it
> incorporated into a bot and looking at the bot's performance. There
> is a complicated trade off b
Michael Williams wrote:
As for the source of applicable positions, that's a bit harder, IMO. My
first thought was to use random positions since you don't want any bias,
but that will probably result in the evaluation of the position being
very near 0.5 much of the time. But I would still try
I agree with you, but I wouldn't qualify MC evaluation with MCTS as a static
evaluation function on top of a pure alpha-beta search to a fixed depth (I have
the impression that this is what George Dahl is talking about).
Dave
From: computer-go-boun...@computer-go
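For concreteness, a "static evaluation function on top of a pure alpha-beta search to a fixed depth" is roughly the following; a bare negamax sketch in Python, where the Position interface and evaluate() (scoring for the side to move) are assumed:

def negamax(position, depth, alpha, beta, evaluate):
    """Fixed-depth alpha-beta in negamax form; all Go knowledge lives in evaluate()."""
    if depth == 0:
        return evaluate(position)
    best = float('-inf')
    for move in position.legal_moves():           # assumed interface
        score = -negamax(position.play(move), depth - 1, -beta, -alpha, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                                  # beta cutoff
    return best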
Dave Dyer wrote:
> If you look at GnuGo or some other available program, I'm pretty sure
> you'll find a line of code where "the evaluator" is called, and you could
> replace it, but you'll find it's connected to a pile of spaghetti.
That would have to be some other available program. GNU Go does
I've been looking into CGT lately and I stumbled on some articles about
approximating strategies for determining the sum of subgames (Thermostrat,
MixedStrat, HotStrat etc.)
It is not clear to me why approximating strategies are needed. What is the
problem? Is Ko the problem? Is an exact computa
On Feb 17, 2009, at 12:55 PM, Dave Dyer wrote:
While your goal is laudable, I'm afraid there is no such thing
as a "simple" tree search with a plug-in evaluator for Go. The
problem is that the move generator has to be very disciplined,
and the evaluator typically requires elaborate and expens
On Feb 17, 2009, at 4:39 PM, wrote:
I've been looking into CGT lately and I stumbled on some articles
about approximating strategies for determining the sum of subgames
(Thermostrat, MixedStrat, HotStrat etc.)
Link?
It is not clear to me why approximating strategies are needed. What
i
From: Jason House
On Feb 17, 2009, at 4:39 PM, wrote:
I've been looking into CGT lately and I stumbled on some articles about
approximating strategies for determining the sum of subgames (Thermostrat,
MixedStrat, HotStrat etc.)
Link?
http://www.cs.rice.edu/
> I think it would be much more informative to compare evaluator A and
> evaluator B in the following way.
> Make a bot that searched to a fixed depth d before then calling a
> static evaluator (maybe this depth is 1 or 2 or something small). Try
> and determine the strength of a bot using A and a
Really? You think that doing 20-50 uniform random playouts and
estimating the win probability, when used as a leaf node evaluator in
tree search, will outperform anything else that uses same amount of
time? I must not understand you. What do you mean by static
evaluator? When I use the term, I
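For reference, the leaf evaluator under discussion is roughly the following sketch; uniform_random_playout() is an assumed routine that finishes the game with random legal moves and returns 1 if the side to move wins, 0 otherwise:

def playout_evaluator(position, playouts=50):
    """Estimate the win probability for the side to move by playing the
    position out to the end with uniformly random moves, 20-50 times."""
    wins = 0
    for _ in range(playouts):
        wins += position.uniform_random_playout()
    return wins / playouts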
First, my code is in no shape for sharing.
Some time back, I experimented with training a neural net to predict ownership
maps from 9x9 board positions. I wasn't looking for a static evaluator; I
wanted something to speed up my MCTS bot. I used my own engine to generate
ownership maps for train
> Really? You think that doing 20-50 uniform random playouts and
> estimating the win probability, when used as a leaf node evaluator in
> tree search, will outperform anything else that uses same amount of
> time?
Same amount of clock time for the whole game. E.g. if playing 20 random
playouts t
On Tue, Feb 17, 2009 at 8:35 PM, George Dahl wrote:
> Really? You think that doing 20-50 uniform random playouts and
> estimating the win probability, when used as a leaf node evaluator in
> tree search, will outperform anything else that uses same amount of
> time?
You'll probably find a variet
From: "dhillism...@netscape.net"
> Perhaps the biggest problem came from an unexpected quarter. MC playouts are
> very fast and neural nets are a bit slow. (I am talking about the forward
> pass, not the off-line training.) In the short time it took to feed a b
GPUs can speed up many types of neural networks by over a factor of 30.
- George
On Tue, Feb 17, 2009 at 8:35 PM, terry mcintyre wrote:
>
> From: "dhillism...@netscape.net"
>
>> Perhaps the biggest problem came from an unexpected quarter. MC playouts
>> are very
On Mon, Feb 16, 2009 at 7:45 PM, Andy wrote:
> See attached a copy of the .sgf. It was played privately on KGS so you
> can't get it there directly. One of the admins cloned it and I saved
> it off locally.
>
> I changed the result to be B+4.5 instead of W+2.5.
I forgot to make a disclaimer: I a
I think you mean Many Faces of Go, not Crazystone.
David
> -----Original Message-----
> From: computer-go-boun...@computer-go.org [mailto:computer-go-
> boun...@computer-go.org] On Behalf Of Andy
> Sent: Tuesday, February 17, 2009 10:08 PM
> To: computer-go
> Subject: Re: [computer-go] Congratula
It is very clear that nonuniform random playouts are a far better evaluator
than any reasonable static evaluation, given the same amount of time. Many
people (including myself) spent decades creating static evaluations, using
many techniques, and the best ones ended up with similar strength program
Many Faces of Go has a static position evaluator, but it's not spaghetti :)
It makes many passes over the board building up higher level features from
lower level ones, and it does local lookahead as part of feature evaluation,
so it has a lot of code, and is fairly slow.
David
> -Original Me
It's not true that MCTS only goes a few ply. In 19x19 games on 32 CPU
cores, searching about 3 million playouts per move, Many Faces of Go
typically goes over 15 ply in the PV in the UCT tree.
I agree that it is much easier to reliably prune bad moves in go than it is
in chess.
Many Faces (pre
One way to figure out how good your static evaluator is: have it do a
one-ply search, evaluate, and display the top 20 or so evaluations on a go
board. Ask a strong player to go through a pro game, showing your
evaluations at each move. He can tell you pretty quickly how bad your
evaluator
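A sketch of that test harness, assuming a Position interface as before and an evaluate() that scores positions from the point of view of the player who just moved (all names here are placeholders):

def show_top_evaluations(position, evaluate, n=20):
    """One-ply search: evaluate the position after every legal move and
    print the n best, so a strong player can compare them with the move
    actually played in the pro game."""
    scored = [(evaluate(position.play(move)), move)
              for move in position.legal_moves()]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    for score, move in scored[:n]:
        print(f"{move}  {score:+.3f}")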
Many Faces uses information from the static evaluator to order and prune
moves during move generation. For example if the evaluation finds a big
unsettled group, the move generator will favor eye making or escaping moves
for the big group.
David
> -----Original Message-----
> From: computer-go-b