Re: [computer-go] Super-duper computer

2008-09-28 Thread Claus Reinke
> If you're looking for spare processors, how about a "[EMAIL PROTECTED]"
> program for Go?-)

It appears that the Chess community has had such a project already:

ChessBrain: a Linux-Based Distributed Computing Experiment
http://www.linuxjournal.com/article/6929

ChessBrain II - A Hierarchical Infrastructure for Distributed
Inhomogeneous Speed-Critical Computation
IEEE CIG06, Reno NV, May 2006 (6 pages)
(IEEE Symposium on Computational Intelligence and Games)
http://chessbrain.net/docs/chessbrainII.pdf

old project site:
http://chessbrain.net/

From the ChessBrain II paper, it seems they considered Go, but
before the recent developments that made parallel processing
promising. The papers might also be interesting for their discussion
of parallel tree search and communication issues.

Claus

> Local versions of the top programs could offer to connect to their main
> incarnation's games, explaining internal state ("it is sure it will win",
> "it thinks that group is dead", ..) in exchange for borrowing processing
> resources. Or, instead of doing this on a per-program basis, there could
> be a standard protocol for donating processing power from machines whose
> users view a game online.
>
> That way, the more kibitzers a game attracts, the better the computer
> player plays; and if the game cannot hold an audience, the computer
> player might start to seem distracted, losing all those borrowed
> processors;-)
>
> Mogo might even find some related research at INRIA ([EMAIL PROTECTED]
> style (desktop) grid computing, ..), so perhaps there's scope for
> collaboration there?
>
> Claus
>
> Q: why do you search for extra-terrestrial intelligence?
> A: we've exhausted the local search space. 



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Using playouts for more than position evaluation?

2008-09-28 Thread Claus Reinke
From browsing Monte-Carlo Go papers (*), I get the impression that random
playouts are used mainly to approximate an evaluation function, determining
some value for board positions arising in more traditional tree search.

Is that correct? It seems somewhat wasteful to calculate all those possible
board positions and only take a single value before throwing them away.
Have there been any attempts to extract other information from the playouts?

For instance, if an intersection belongs to the same colour in all playouts,
chances are that it is fairly secure (that doesn't mean one shouldn't play
there; sacrifices there may have an impact on other intersections).

Or, if an intersection is black in all playouts won by black, and white in
all playouts won by white, chances are that it is fairly important to play
there (since playouts are random, there is no guarantee, but emphasizing
such intersections, and their ordering, in the top-level tree search seems
profitable).
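
To make that concrete, here is roughly the kind of bookkeeping I have in
mind, as a Python sketch (run_playout is a made-up helper; I assume every
intersection ends up owned by one colour):

    # Sketch only: accumulate per-intersection statistics over many
    # playouts. run_playout() is hypothetical; assume it returns
    # (owners, winner), where owners[i] is 'B' or 'W' for each
    # intersection i and winner is 'B' or 'W'.

    def playout_statistics(position, n_playouts, run_playout):
        n = len(position)                      # number of intersections
        black_owns = [0] * n                   # how often i ends up black
        black_when_black_wins = [0] * n
        white_when_white_wins = [0] * n
        black_wins = 0

        for _ in range(n_playouts):
            owners, winner = run_playout(position)
            if winner == 'B':
                black_wins += 1
            for i in range(n):
                if owners[i] == 'B':
                    black_owns[i] += 1
                    if winner == 'B':
                        black_when_black_wins[i] += 1
                elif winner == 'W':            # owners[i] == 'W'
                    white_when_white_wins[i] += 1

        # "secure": same colour in every playout
        secure = [i for i in range(n)
                  if black_owns[i] in (0, n_playouts)]
        # "critical": black whenever black wins, white whenever white wins
        # (degenerates if one side wins every playout)
        critical = [i for i in range(n)
                    if black_when_black_wins[i] == black_wins
                    and white_when_white_wins[i] == n_playouts - black_wins]
        return secure, critical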

Secondly, I have been surprised to see Go knowledge being applied to the
random playouts - doesn't that run the danger of blinding the evaluation
function to border cases? It would seem much safer to me to keep the
random playouts unbiased, but to extract information from them to guide
the top-level tree search. Even the playout termination criterion (not filling
eyes) has to be defined fairly carefully (and there have been variations),
to avoid blinding playouts against sacrifices.
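
For reference, the eye test I mean is itself only an approximation; a
common variant looks something like this sketch (the board access and the
neighbour helpers are made up):

    # Sketch of a common "pseudo-eye" test used to stop playouts from
    # filling eyes. neighbours() and diagonals() are hypothetical
    # helpers returning the orthogonally / diagonally adjacent points.

    def is_eye_like(board, point, colour, neighbours, diagonals):
        # every orthogonal neighbour must be a friendly stone
        if any(board[p] != colour for p in neighbours(point)):
            return False
        enemy_diags = sum(1 for p in diagonals(point)
                          if board[p] not in (colour, 'EMPTY'))
        on_edge = len(neighbours(point)) < 4
        # interior point: at most one enemy diagonal; edge/corner: none
        return enemy_diags == 0 if on_edge else enemy_diags <= 1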

Since most Go knowledge isn't absolute, but comes with caveats, it would
seem that any attempt to encode Go knowledge in the playouts is risky
(mind you, I'm a weak player, so I might be wrong;-). For instance, a
bamboo joint connects two strings, unless (insert various exceptions here),
so if you encode a bamboo joint as a firm connection, your playouts include
a systematic error. Shouldn't the same hold for nearly all Go "rules"?

Thirdly, I have been trying to understand why random playouts work
so well for evaluating a game in which there is sometimes a very narrow
path to victory. Naively, it would seem that if there was a position from
which exactly one sequence of moves led to a win, but starting on that
sequence would force the opponent to stay on it, then random playouts
would evaluate that position as lost, even if the forced sequence would
make it a win.

Is it the full search at the top of the tree that avoids this danger (every
starting move gets explored, and for the correct starting move, random
plays are even worse for the opponent than being forced, so the forcing
sequence will emerge, if slowly and not certainly)? If yes, that would
explain the "horizon effect", where Monte-Carlo programs with slightly
deeper non-random search fare better at judging positions and squash
their opponents even without other improvements.
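
If it helps to state the mechanism: with a bandit rule such as UCB1 (as
used in UCT) at the tree nodes, every candidate move keeps being sampled -
the exploration term grows without bound - so even a narrow winning line is
never starved completely. A minimal sketch, with the node layout assumed:

    import math

    # Minimal UCB1 selection at a tree node. Children are dicts with
    # 'wins' and 'visits'; this layout is an assumption, not any
    # particular program's code.

    def select_child(children, exploration=1.0):
        total = sum(c['visits'] for c in children)

        def ucb1(c):
            if c['visits'] == 0:
                return float('inf')        # try unvisited moves first
            mean = c['wins'] / c['visits']
            return mean + exploration * math.sqrt(math.log(total) / c['visits'])

        return max(children, key=ucb1)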

It might also explain why bots like Leela sometimes seem overconfident
of their positions, abandoning local fights before they are entirely stable.
Such overplay has traditionally been useful in playing against other bots,
even though it can be punished severely by strong human players. If the
opponent bot can't see the winning sequence, it may not continue the local
fight, and if it does continue the local fight with anything but the
optimal move, Leela tends to come back with strong answers, as if it
could suddenly see the danger. Either way tends to justify Leela's playing
elsewhere, if only against a bot opponent.

Of course, the second and third issues above are somewhat related:
if incorporating Go knowledge in the playouts is the only way to
avoid missing narrow paths to certain evaluations, one might have
to risk adding such knowledge, even if it boomerangs in other situations
(are ladders one such case, or are they better left to random evaluation?).

Ok, way too many questions already;-) I hope someone has some
answers, even if partial or consisting of references to more papers.

Claus

(*) btw, Computer Go related papers seem to be widely scattered -
is there a central bibliography that keeps track of papers and URLs?




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Results of recent Computer Go events

2008-09-28 Thread Nick Wedd
Does anyone have any information on the results of [the computer Go 
aspects of] these events?


Cotsen go tournament 2008
September 20 & 21
http://www.cotsengotournament.com/   treats it as being in the future

Jiuding Cup
September 22-26
http://219.142.86.87/English/index.asp  times out

World 9x9 Computer Go Championship
September 26 & 27
http://go.nutn.edu.tw/eng/main_eng.htm   treats it as in the future




Why do organisers of Go events held outside Europe so rarely publish the 
results?  Do they assume that no-one cares who won?  This isn't just 
computer Go; it applies to all Go events.


In Europe, even the smallest events, such as the "Cornish Open" with 24 
participants, produce results tables which are published promptly:  see 
http://www.britgo.org/results/2008/cornwall.html.  But the 2008 North 
American Go Congress, which must have had hundreds of participants, has 
never produced a full table of results.  I am sure a lot of people would 
be interested in a results table like this one for the 2008 European Go 
Congress: http://egc2008.eu/en/congress/scoreboard/index.php



Nick
--
Nick Wedd    [EMAIL PROTECTED]
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Using playouts for more than position evaluation?

2008-09-28 Thread dhillismail
I agree with much of what you say (to the degree that anyone needs to "agree" 
with questions).

The discussions on this list dealing with "ownership maps", RAVE and AMAF have 
to do with using additional information from the playouts.
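
Roughly, AMAF ("all moves as first") credits every move the winner played
anywhere in a playout as if it had been played first. A minimal sketch,
with playout() a made-up helper:

    from collections import defaultdict

    # Sketch of AMAF bookkeeping. playout() is hypothetical; assume it
    # returns (moves, winner), where moves is a list of (colour, point)
    # pairs in the order played.

    def amaf_values(position, to_move, n_playouts, playout):
        wins, tries = defaultdict(int), defaultdict(int)
        for _ in range(n_playouts):
            moves, winner = playout(position, to_move)
            seen = set()
            for colour, point in moves:
                if colour != to_move or point in seen:
                    continue               # credit first occurrence only
                seen.add(point)
                tries[point] += 1
                if winner == to_move:
                    wins[point] += 1
        return {p: wins[p] / tries[p] for p in tries}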

Playouts can't be "unbiased." Picking a move with uniform probability is a bias 
too, and not a good one.
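
One way to see it: a playout policy is just a probability distribution over
legal moves, and uniform is one particular choice of weights. A sketch (the
weight function is where any "knowledge" would go):

    import random

    # Uniform random and "heavy" playouts have the same shape: sample a
    # legal move under some weights. Uniform is the special case
    # weight(move) == 1.0 - a bias like any other.

    def sample_move(legal_moves, weight=lambda m: 1.0):
        return random.choices(legal_moves,
                              weights=[weight(m) for m in legal_moves],
                              k=1)[0]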

Computer go papers here: http://www.citeulike.org/group/5884/library

- Dave Hillis


RE: [computer-go] Results of recent Computer Go events

2008-09-28 Thread David Fotland
Many Faces of Go participated in the main Cotsen tournament, playing against
people, on a 2-core machine run by volunteer Terry McIntyre.  It lost 3
games to 3-kyu players, beat a 4 kyu, and beat a 5 kyu.

The Computer Olympiad in Beijing is being played now.  9x9 results are
up after each round here:

http://www.grappa.univ-lille3.fr/icga/tournament.php?id=180

Each round is 2 games, with 30 minutes per player.  After 2 rounds Mogo,
Leela, and Many Faces are undefeated.  Mogo and Many Faces played round 3
early, on KGS.  One game was scored by both programs as a win for Many
Faces, but the board has a seki, so the correct result is a win for Mogo.
I think the monthly KGS tournaments would give this win to Many Faces,
since both programs agreed on the final score, but I don't know yet what
the ruling will be here.

David


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Re: Results of recent Computer Go events

2008-09-28 Thread Hideki Kato
Nick Wedd: <[EMAIL PROTECTED]>:
>Does anyone have any information on the results of [the computer Go 
>aspects of] these events?
>
>Cotsen go tournament 2008
>September 20 & 21
>http://www.cotsengotournament.com/   treats it as being in the future
>
>Jiuding Cup
>September 22-26
>http://219.142.86.87/English/index.asp  times out
>
>World 9x9 Computer Go Championship
>September 26 & 27
>http://go.nutn.edu.tw/eng/main_eng.htm   treats it as in the future

Now we can see the results of all five rounds at
http://go.nutn.edu.tw/eng/main_eng.htm.
However, Fudo Go won against HappyGo in round 4.

Following is my private summary (SOS = sum of opponents' scores,
SoD = sum of defeated opponents' scores):

    Pos.  Program        Score  SOS  SoD
     1    MoGo             10
     2    Go Intellect      6    16   18
     3*   Jimmy             6    16   12
     4*   Erica             6    16   12
     5    Fudo Go           6    16    6
     6    CPS               6    10
     7    GoStar            4    20
     8    GoKing            4    18
     9    HappyGo           2
    10    ChangJung         1     0

See http://go.nutn.edu.tw/eng/rule_eng.htm for the rules.
*Jimmy won against Erica in round 5.

Hideki

--
[EMAIL PROTECTED] (Kato)
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Using playouts for more than position evaluation?

2008-09-28 Thread Peter Drake

On Sep 28, 2008, at 7:05 AM, Claus Reinke wrote:

> For instance, if an intersection belongs to the same colour in all playouts,
> chances are that it is fairly secure (that doesn't mean one shouldn't play
> there; sacrifices there may have an impact on other intersections).
>
> Or, if an intersection is black in all playouts won by black, and white in
> all playouts won by white, chances are that it is fairly important to play
> there (since playouts are random, there is no guarantee, but emphasizing
> such intersections, and their ordering, in the top-level tree search seems
> profitable).


We (the Orego team) have done some work along these lines this summer.
We're working on a paper.


> Secondly, I have been surprised to see Go knowledge being applied to the
> random playouts - doesn't that run the danger of blinding the evaluation
> function to border cases?


Yes, but you try every move in the actual search tree (unless you have a
VERY safe exclusion rule, such as "don't play on the extreme edge of the
board unless it's within 4 points Manhattan distance of an existing stone").



> Thirdly, I have been trying to understand why random playouts work
> so well for evaluating a game in which there is sometimes a very narrow
> path to victory. Naively, it would seem that if there was a position from
> which exactly one sequence of moves led to a win, but starting on that
> sequence would force the opponent to stay on it, then random playouts
> would evaluate that position as lost, even if the forced sequence would
> make it a win.


It's true, this is a problem; raw Monte Carlo fares poorly at reading ladders.


Peter Drake
http://www.lclark.edu/~drake/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/