Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Darren Cook
> take a look at some of the corner josekis, some of them has *many*
> variations (http://en.wikipedia.org/wiki/Taisha_joseki) and go for
> *many* moves (50+?). most humans can't choose the best variation that
> takes advantage of the stones in the adjacent corners ...

A couple of related comments. First, in the 2-day games the pros spend
almost all their thinking time in the opening, i.e. considering
different joseki and how they work together.  By the time they get into
the endgame they are playing almost all moves in under a minute (and
most of that time is spent checking their counting to see if they should
play the quiet move or if they need to complicate things).

Second, the very top pros all seem to say their endgame is the strongest
and most important part of their game. I.e. not the moves they spend
most time on. That always strikes me as strange :-).

The 1971 Honinbo Tournament book is good background on this topic.

Darren
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Magnus Persson

It is true that MC programs have a bias towards overconcentration. But...

1) Improvements to the simulations of an MC program, as implemented by MoGo and
Valkyria, do diminish the problem. In fact, most of the strength of these
programs comes from doing that. I think it is next to impossible to explicitly
program the concept of overconcentration, but fortunately you do not have to.
The problem goes away when you fix other, more immediate problems with the
simulations.

2) Deeper search makes these programs stronger. Valkyria always gets stronger
with increased search. Just playing MoGo at fixed time limits does not tell you
anything about what it could possibly find out. It is more informative to watch
the principal variation change over time. As the search goes deeper the
sequence gets better and better, and when you see that, you know that increased
thinking time will keep improving the principal variation. This does not mean
that it always plays the best move in the end, but it means that the algorithm
recursively improves the analysis all over the tree, and only a serious bug can
stop that from happening.

Note that MC programs do not have high-level concepts programmed into them.
Thus you will always be able to say "it played X because it does not understand
Y" whenever it plays a bad move, but this is wrong. Correct is: "it played X
because it did not search deep enough".

-Magnus

Quoting terry mcintyre <[EMAIL PROTECTED]>:


From: Don Dailey <[EMAIL PROTECTED]>

On Mon, 2007-01-22 at 03:43 +0100, alain Baeckeroot wrote:

The few games I played against mogobot on 19x19 show that it does not
"know" overconcentration. And I can safely bet that increasing
thinking time will not solve this,


By definition, a scalable program can solve all problems so your 
statement is incorrect.


Is Computer Go infinitely scalable? Is Mogobot, in particular, scalable?

Most programs reach scalability limits at some point. If the concept 
of "overconcentrated shape" is not inherent in a given program, it 
might take a vast number of simulations to compensate for that lack.


I suspect that monte carlo will need to be merged with some method of 
learning patterns and concepts to play higher-level Go in human-scale 
timeframes.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Sanghyeon Seo

2007/1/22, Darren Cook <[EMAIL PROTECTED]>:

A couple of related comments. First, in the 2-day games the pros spend
almost all their thinking time in the opening, i.e. considering
different joseki and how they work together.  By the time they get into
the endgame they are playing almost all moves in under a minute (and
most of that time is spent checking their counting to see if they should
play the quiet move or if they need to complicate things).

Second, the very top pros all seem to say their endgame is the strongest
and most important part of their game. I.e. not the moves they spend
most time on. That always strikes me as strange :-).


Note that professionals do not play a perfect endgame; actually they are
far from it. Bill Spight demonstrated this convincingly in his "Go Thermography"
paper analyzing the endgame of a game between Jiang 9p and Rui 9p. He found
five blunders in a late endgame sequence of about 50 moves, all worth less
than 4 points.

Also, post-mortem analysis of pro games published in go magazines
routinely finds game-result-changing improvements in the endgame.
And sometimes even amateur readers can understand them, once the
better sequence is found.

--
Seo Sanghyeon
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] an idea for a new measure of a computer go program's rank.

2007-01-22 Thread steve uurtamo
> Yes, we heard that argument for years in computer chess and it never
> happened. 
>
> Do you have some kind of basis for believing that?

i wouldn't argue that future algorithms can't be time-doubled beyond
the existing skill level of people, just that the current evidence is weak
that we already have such algorithms in hand.

s.




 

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Darren Cook
> Note that professionals do not play perfect endgame, ...

Enough, apparently, that it separates a world champion from a
run-of-the-mill 9-dan.

> Also, post-mortem analysis of pro games published in go magazines
> routinely finds some game-result changing improvements in the endgame.

Yes, though in the Honinbo book I mentioned there is an account of Ishida
losing one game by 0.5 points where, in the post-mortem, they decided he
had played at least the last 100 moves correctly (from my fuzzy memory;
I don't have the book to hand). However, he was probably the world's
strongest endgame player at the time, and at the peak of his concentration.

Darren

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Matt Gokey
Been following this thread pretty closely and thought I would jump in 
with a thought and try to find some common ground.  I think there is 
truth to be found in both sides of this argument.


Of course people improve with time and so do computers with certain 
algorithms.  The question is about the curve and whether there is a 
significant difference in this curve between chess and go.


Don believes there is probably no difference and states a rule: doubling 
thinking time = linear improvement in play.


What if we look at it mathematically by looking at the branching factor? 
Go’s branching factor is generally considered to be about an order of 
magnitude greater than chess – perhaps a bit less, right?  That means 
that after each ply go becomes another additional order of magnitude 
more complex.  Now of course, in practice it’s not so simple, as 
breaking the game into regions and doing local reading and global 
analysis reduces the complexity somewhat, but in general go explodes a 
lot faster than chess and this fact is commonly used as one of the 
reasons methods used for computer chess don’t work on go especially 
combined with the lack of reliable evaluation.


For the sake of argument if we assume that doubling thinking time allows 
one to double the number of positions and alternatives that can be 
analyzed, this doubling would seem to have lesser impact in go where the 
explosion is much quicker than in chess and thus the same relationship 
may not hold.  The improvement may not be linear or it may not hold for 
very long.  The point of diminishing returns for a human due to this 
could be much earlier in go than in chess.  As go players get better 
they must learn to “sense” the board based on years of experience 
combined with our evolutionary tuned super parallel visual pattern 
matcher.  This provides the player shortcuts that otherwise would be 
impossible (for humans) to read out.  Assuming enough processing power 
and memory this problem would not necessarily hold for computer-go.  By 
the way, I think some of this very same thing applies to both chess and 
go, just a matter of different degrees.


From my own experience with chess and go, I can say that I don’t feel 
overwhelmed when playing chess.  That is, I always feel like I can think 
and reason about the moves fairly deeply and use simple evaluation like 
piece counts, protection, mobility, etc. to decide between lines of 
play.  I may be entirely wrong but I feel like I can think about it 
anyway.  I’m not a real strong player, but I had a friend in high school 
and college whose Dad was a Grand Master.  His son was pretty good and 
we would play a lot of chess together.  Once in a while I would play 
against his Dad and usually get slaughtered.  One time he was doing one 
of those events where he would play 30 people at once.  I played and 
managed to keep him challenged well into the middle game.  I could tell 
he was worried.  On one trip around I still hadn’t made my move and was 
forced to make the best one I had.  It was a blunder but I didn’t see it 
yet.  Immediately he took advantage of it and I didn’t have a chance 
after that.  He confirmed this after the game was over and set the 
pieces up as they were at that point and showed me what I should have 
done (an amazing feat, I thought, given that he was playing 30 or more 
simultaneous games). Anyway, in this situation where I had a 
lot more time than he did I was able to challenge him and only after 
making a blunder was I in trouble.  So I see where Don is coming from 
with Chess.


Now with Go as a beginner still, on the other hand, I almost always felt 
and still feel quite overwhelmed without enough tools to understand how 
to plan and evaluate moves in all but the simplest isolated cases.  I 
know that giving me tons of extra time against a good player would not 
help.  The interactions between areas and the explosion of the game and 
lack of experience to be able to “sense” good shape and proper balance 
early enough to lead to life and territory just simply overwhelms me. 
The feeling is not as severe as it was when I first learned, but it is 
still there.  I wonder whether even for strong amateurs this is still 
the case, but just happens a bit deeper.  Is this the time limit that 
Ray talks about where any more time is not helpful?  It is that point 
when it becomes so terribly complex and overwhelming that no more 
thinking can help given your current ability to judge potential in the 
positions.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] Re: libgoboard v0.97 released

2007-01-22 Thread Łukasz Lew

Direct link:
http://www.mimuw.edu.pl/~lew/download.php?file_no=8

Łukasz

On 1/22/07, Łukasz Lew <[EMAIL PROTECTED]> wrote:

Hi,

A few interesting things have happened, so I decided to announce a new version:

  - bug-fix: komi was too big (by 1 point), so the program as white tended
    to lose by 0.5 points

  - improved portability (if just "make" is not enough for you, please tell me)

  - simple_playout::run is the simplest playout loop (80% performance).
    You can use it as a starting point/tutorial for your Monte-Carlo Go.
    (I dare to paste the code at the end of the message.)

  - more convenient printing functions

  - column coordinates around the board are now printed as letters (previously numbers)

  - faster RNG + more accurate performance measurement

  - performance:
    43 kpps on 1.4 GHz
    70 kpps on 2.2 GHz
    will anybody send me numbers for a 3.2 GHz machine?

  - no more darcs repository; a mercurial repository will be published soon.



Best Regards,
Łukasz Lew

 Code of simple playout loop ---


player::t run (board_t* board, player::t first_player) {

v::t  v;
bool  was_pass[player::cnt];
uint  move_no;
player::t act_player;

act_player = first_player;
player_for_each (pl)
  was_pass [pl] = false;
move_no = 0;

do {
  v = play_one (board, act_player);
  was_pass [act_player] = (v == v::pass);
  act_player = player::other (act_player);
  move_no++;

  if ((was_pass [player::black] & was_pass [player::white]) |
  (move_no > max_playout_length)) break;

} while (true);

return board->winner ();
}


v::t play_one (board_t* board, player::t player) {

v::t v;
uint start;

// find random place in vector of empty vertices
start = pm::rand_int (board->empty_v_cnt);

// search for a move in start ... board->empty_v_cnt-1 interval
for (uint ev_i = start; ev_i != board->empty_v_cnt; ev_i++) {
  v = board->empty_v [ev_i];
  if (!board->is_eyelike (player, v) &&
  board->play (player, v) != board_t::play_ss_suicide) return v;
}

// search for a move in 0 ... start interval
for (uint ev_i = 0; ev_i != start; ev_i++) {
  v = board->empty_v [ev_i];
  if (!board->is_eyelike (player, v) &&
  board->play (player, v) != board_t::play_ss_suicide) return v;
}

board->check_no_more_legal (player); // powerful check
return v::pass;
}
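
A rough sketch of how such a playout routine might be driven to estimate a
winning probability for a position follows. None of this is part of
libgoboard: the function name black_win_rate is made up, and it assumes
board_t can be copied by assignment (if it cannot, substitute whatever copy
or load call the library provides).

// Hypothetical driver, not libgoboard code.  Assumes the libgoboard
// headers are included, board_t is copyable, and simple_playout::run
// has the signature shown above.
float black_win_rate (const board_t& position, player::t to_move, uint n) {
  uint black_wins = 0;                  // deliberately zero-initialized
  for (uint i = 0; i < n; i++) {
    board_t scratch = position;         // fresh copy for every playout
    if (simple_playout::run (&scratch, to_move) == player::black)
      black_wins++;
  }
  return float (black_wins) / float (n);
}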

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread dave . devos

- Original Message -
From: Matt Gokey <[EMAIL PROTECTED]>
Date: Monday, January 22, 2007 6:27 pm
Subject: Re: [computer-go] an idea... computer go program's rank vs time

> Been following this thread pretty closely and thought I would jump in
> with a thought and try to find some common ground.  I think there is
> truth to be found in both sides of this argument.
> 
> Of course people improve with time and so do computers with certain
> algorithms.  The question is about the curve and whether there is a
> significant difference in this curve between chess and go.
> 
> Don believes there is probably no difference and states a rule: doubling
> thinking time = linear improvement in play.
> 
> What if we look at it mathematically by looking at the branching factor?
> Go’s branching factor is generally considered to be about an order of
> magnitude greater than chess – perhaps a bit less, right?  That means
> that after each ply go becomes another additional order of magnitude
> more complex.

Mathematically, a bigger branching factor does not matter. If the level
increase is proportional to the base-2 logarithm of time, then it is also
proportional to the base-10 logarithm of time. The only difference is the
proportionality factor.
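
In symbols, this is just the change-of-base identity

  \log_2 t \;=\; \frac{\log_{10} t}{\log_{10} 2},

so a level proportional to \log_2 t is also proportional to \log_{10} t;
only the constant changes (by a factor of 1/\log_{10} 2 \approx 3.32).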

Dave


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Don Dailey
Hi Matt,

What you wrote is well thought out.   I give some comments.


On Mon, 2007-01-22 at 11:27 -0600, Matt Gokey wrote:
> Been following this thread pretty closely and thought I would jump in 
> with a thought and try to find some common ground.  I think there is 
> truth to be found in both sides of this argument.
> 
> Of course people improve with time and so do computers with certain 
> algorithms.  The question is about the curve and whether there is a 
> significant difference in this curve between chess and go.
> 
> Don believes there is probably no difference and states a rule: doubling 
> thinking time = linear improvement in play.

To a point the improvement is linear - however there is a gradual curve to
this, so it's not completely linear.   The surprise in computer chess was
that it was difficult to detect that a doubling was NOT completely linear,
and only when computers really became strong did it start to become
clear there was a gradual tapering off.   

I believe the same curve exists in a properly scaled GO program but 
since the game is deeper I believe it will appear linear for a long
time to come, but in fact it's not strictly linear.

I would also like to point out that although I believe that each
doubling is roughly linear,  I'm not claiming you get the same 
improvement as in chess.   You might get more or you might get
less - actually the evidence is that you get more, perhaps because
the game is much longer and minor improvements over hundreds of
moves add up to a greater likelihood of a win.


> What if we look at it mathematically by looking at the branching factor? 
> Go’s branching factor is generally considered to be about an order of 
> magnitude greater than chess – perhaps a bit less, right?  That means 
> that after each ply go becomes another additional order of magnitude 
> more complex.  Now of course, in practice it’s not so simple, as 
> breaking the game into regions and doing local reading and global 
> analysis reduces the complexity somewhat, but in general go explodes a 
> lot faster than chess and this fact is commonly used as one of the 
> reasons methods used for computer chess don’t work on go especially 
> combined with the lack of reliable evaluation.

I think this is more about information, not brute force ply.  With twice
as much time you have twice as many thoughts, twice as much time to 
produce a better idea,  twice everything.   I also suspect it's about
playing the odds.  Give me twice as much time to think about something
interesting and challenging, and it "increases my chances" of having a
meaningful insight.  

It's always tempting to simply compare to depth or ply, but I don't think
it's about that.   When I talk about scalable algorithms, search instantly
comes to mind - but we can be open about this.   Who is to say some other
scalable algorithm will not come along?   Perhaps some iterative computation
that systematically produces (on average) a better and better move.   

I'm winging it here, but I can imagine some proof procedure, where you 
consider each move on the board and systematically prove that one move
cannot be as good as another - playing the elimination game.   Or perhaps
a similar iterative procedure that can do this in stages with probabilities.
(It's starting to sound suspiciously like UCT.)

But my point is that this is about information processing, not ply depth
although there may be a great deal of overlap.


> For the sake of argument if we assume that doubling thinking time allows 
> one to double the number of positions and alternatives that can be 
> analyzed, this doubling would seem to have lesser impact in go where the 
> explosion is much quicker than in chess and thus the same relationship 
> may not hold.  

It's not clear to me that the effective branching factor is that much
higher.   Right now it is for computers,  but for a good player most
of the garbage is pruned and the tree is very narrow.    But even if
I'm wrong about this,  you must remember that for ELO purposes it's 
more about how much information you can process relative to your opponent,
not how close you can get to perfect play.   Maybe that's why there is
so much confusion about this.  


> The improvement may not be linear or it may not hold for 
> very long.  The point of diminishing returns for a human due to this 
> could be much earlier in go than in chess.  

I actually believe the point of diminishing returns in GO to be much 
more extended than in chess because the game is deeper strategically.

> As go players get better 
> they must learn to “sense” the board based on years of experience 
> combined with our evolutionary tuned super parallel visual pattern 
> matcher.  This provides the player shortcuts that otherwise would be 
> impossible (for humans) to read out.  Assuming enough processing power 
> and memory this problem would not necessarily hold for computer-go.  By 
> the way, I think some of this very same thing applies to both che

Re: [computer-go] an idea for a new measure of a computer go program's rank.

2007-01-22 Thread Mark Boon


On 21-jan-07, at 19:27, Don Dailey wrote:


not considering biological factors
which would cut into this a bit.


There was a time when there were no time-limits in Go, which was  
abused by many players by turning a game into a stamina contest. I  
believe this practice was abandoned when someone collapsed at the  
board and died.


Mark

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Matt Gokey

[EMAIL PROTECTED] wrote:
What if we look at it mathematically by looking at the branching factor?
Go’s branching factor is generally considered to be about an order of
magnitude greater than chess – perhaps a bit less, right?  That means
that after each ply go becomes another additional order of magnitude
more complex.


Mathematically, a bigger branching factor does not matter. If the level
increase is proportional to the base-2 logarithm of time, then it is also
proportional to the base-10 logarithm of time. The only difference is the
proportionality factor.

I'm not sure I follow your logic here.  Sounds like you are using the
hypothesis to support itself in a circular argument.  Perhaps you could
elaborate.




___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


[computer-go] libgoboard v0.97 released

2007-01-22 Thread Łukasz Lew

Hi,

A few interesting things have happened, so I decided to announce a new version:

 - bug-fix: komi was too big (by 1 point), so the program as white tended
   to lose by 0.5 points

 - improved portability (if just "make" is not enough for you, please tell me)

 - simple_playout::run is the simplest playout loop (80% performance).
   You can use it as a starting point/tutorial for your Monte-Carlo Go.
   (I dare to paste the code at the end of the message.)

 - more convenient printing functions

 - column coordinates around the board are now printed as letters (previously numbers)

 - faster RNG + more accurate performance measurement

 - performance:
   43 kpps on 1.4 GHz
   70 kpps on 2.2 GHz
   will anybody send me numbers for a 3.2 GHz machine?

 - no more darcs repository; a mercurial repository will be published soon.



Best Regards,
Łukasz Lew

 Code of simple playout loop ---


player::t run (board_t* board, player::t first_player) {

   v::t  v;
   bool  was_pass[player::cnt];
   uint  move_no;
   player::t act_player;

   act_player = first_player;
   player_for_each (pl)
 was_pass [pl] = false;
   move_no = 0;

   do {
 v = play_one (board, act_player);
 was_pass [act_player] = (v == v::pass);
 act_player = player::other (act_player);
 move_no++;

 if ((was_pass [player::black] & was_pass [player::white]) |
 (move_no > max_playout_length)) break;

   } while (true);

   return board->winner ();
}


v::t play_one (board_t* board, player::t player) {

   v::t v;
   uint start;

   // find random place in vector of empty vertices
   start = pm::rand_int (board->empty_v_cnt);

   // search for a move in start ... board->empty_v_cnt-1 interval
   for (uint ev_i = start; ev_i != board->empty_v_cnt; ev_i++) {
 v = board->empty_v [ev_i];
 if (!board->is_eyelike (player, v) &&
 board->play (player, v) != board_t::play_ss_suicide) return v;
   }

   // search for a move in 0 ... start interval
   for (uint ev_i = 0; ev_i != start; ev_i++) {
 v = board->empty_v [ev_i];
 if (!board->is_eyelike (player, v) &&
 board->play (player, v) != board_t::play_ss_suicide) return v;
   }

   board->check_no_more_legal (player); // powerful check
   return v::pass;
}
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Nick Apperson

He is saying this (I think):

to read m moves deep with a branching factor of b you need to look at p
positions, where p is given by the following formula:

p = b^m   (actually slightly different, but this formula is close enough)

which is:

log(p) = m log(b)
m = log(p) / log(b)

We assume that a doubling in time should double the number of positions we
can look at, so:


m(with doubled time) = log(2p) / log(b)
m(with doubled time) = log(2) * log(p) / log(b)

So, as we can see we get a linear relationship no matter what b, the
branching factor, is.  However, the slope on the line changes accordingly.
We are also assuming that the number of moves deep that one reads is
proportional to playing strength.  This is in fact simply a way of defining
playing strength.  There are many scales that could be constructed where
this is not the case, however, this is a good one to pick from a theory
perspective because the formula holds with any game that has a branching
factor.  What you are talking about in terms of using "sense" is another
factor in strength that isn't really addressed by the above math (in my
opinion).  There are certainly ways in which one could say that by having a
better sense of the game one can effectively reduce the branching factor,
but one could also say that sense could be capable of restructuring the way
people read a situation so that it is possible to read much deeper than is
allowed by simple brute force search strategy.  That is an issue for
psychologists to address.  I am looking forward to good data from
experiments on this subject to answer some of these questions.  I think
until we have good data for go, we won't really know.  Just my opinion
(guess it is the scientist in me...)

- Nick


On 1/22/07, Matt Gokey <[EMAIL PROTECTED]> wrote:


[EMAIL PROTECTED] wrote:
>>What if we look at it mathematically by looking at the branching
>>factor?  Go's branching factor is generally considered to be about an
>>order of magnitude greater than chess – perhaps a bit less, right?
>>That means that after each ply go becomes another additional order of
>>magnitude more complex.
>
>
> Mathematically, a bigger branching factor does not matter. If the level
> increase is proportional to the base-2 logarithm of time, then it is also
> proportional to the base-10 logarithm of time. The only difference is the
> proportionality factor.
I'm not sure I follow your logic here.  Sounds like you are using the
hypothesis to support itself in a circular argument.  Perhaps you could
elaborate.



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Matt Gokey

Don Dailey wrote:

Thanks Don, but overall you may have missed my point.  I am not saying that 
human thinking time does not help in go as it does in chess, but rather that 
the relationship (the curve) between thinking time and strength may not 
be the same for chess and go, as I thought you had been asserting, and I was 
trying to offer some evidence for this, one part via logic and one part 
via my personal intuition.  I think there is some nugget of truth in the 
intuition of very strong go players with regard to this.  Personally I 
think the improvement given more time is less than in chess and the returns 
diminish more quickly.  I was not referring to computer go or any 
computer algorithms, though.



To a point the improvement is linear - however there is a gradual curve to
this, so it's not completely linear.   The surprise in computer chess was
that it was difficult to detect that a doubling was NOT completely linear,
and only when computers really became strong did it start to become
clear there was a gradual tapering off.

Are we talking about humans or computers here? Computer game play is 
entirely based on the algorithms and techniques in use.  In chess these 
are well established, scalable, and mature and so you can make 
generalizations. Not so in computer-go where there are so many wildly 
different techniques being used, some scalable to some degree or another 
and some not.




I believe the same curve exists in a properly scaled GO program but 
since the game is deeper I believe it will appear linear for a long
time to come, but in fact it's not strictly linear.

You may very well be right about computer-go with the proper algorithms 
since computers do not have the same limitations or abilities as the 
human mind. Whether it's linear with doubling depends on way too many 
unknown factors to even hazard a guess.



I would also like to point out that although I believe that each
doubling is roughly linear,  I'm not claiming you get the same 
improvement as in chess.   You might get more or you might get
less - actually the evidence is that you get more, perhaps because
the game is much longer and minor improvements over hundreds of
moves add up to a greater likelihood of a win.

Here is where we would differ - I think it is less and it tapers off 
faster than in chess (for humans).


What if we look at it mathematically by looking at the branching factor? 
Go’s branching factor is generally considered to be about an order of 
magnitude greater than chess – perhaps a bit less, right?  That means 
that after each ply go becomes another additional order of magnitude 
more complex.  Now of course, in practice it’s not so simple, as 
breaking the game into regions and doing local reading and global 
analysis reduces the complexity somewhat, but in general go explodes a 
lot faster than chess and this fact is commonly used as one of the 
reasons methods used for computer chess don’t work on go especially 
combined with the lack of reliable evaluation.


I think this is more about information, not brute force ply.  With twice
as much time you have twice as many thoughts, twice as much time to 
produce a better idea,  twice everything.   I also suspect it's about
playing the odds.  Give me twice as much time to think about something
interesting and challenging, and it "increases my chances" of having a
meaningful insight.  


It's always tempting to simply compare to depth or ply, but I don't think
it's about that.   When I talk about scalable algorithms, search instantly
comes to mind - but we can be open about this.   Who is to say some other
scalable algorithm will not come along?   Perhaps some iterative computation
that systematically produces (on average) a better and better move.

Commenting on the above two paragraphs: of course I oversimplify, as I
pointed out, to make the point about the complexity explosion due to
both the branching factor and the difficulty of evaluation - that is all.
Reality is much messier.  And again, I was not referring to computer-go
algorithms.



(snip)

But my point is that this is about information processing, not ply depth
although there may be a great deal of overlap.

Agreed

For the sake of argument if we assume that doubling thinking time allows 
one to double the number of positions and alternatives that can be 
analyzed, this doubling would seem to have lesser impact in go where the 
explosion is much quicker than in chess and thus the same relationship 
may not hold.  


It's not clear to me that the effective branching factor is that much
higher.   Right now it is for computers,  but for a good player most
of the garbage is pruned and the tree is very narrow.    But even if
I'm wrong about this,  you must remember that for ELO purposes it's 
more about how much information you can process relative to your opponent,
not how close you can get to perfect play.   Maybe that's why there is
so much confusion about this.  

It is much higher, but pattern matching based on past experie

Re: [computer-go] Re: libgoboard v0.97 released

2007-01-22 Thread Vlad Dumitrescu

Hello Lukasz,

On 1/22/07, Łukasz Lew <[EMAIL PROTECTED]> wrote:

> A few interesting things have happened, so I decided to announce a new version:


I have a few observations:

* in order to make libgoboard compile under cygwin I had to rename
"const float infinity", it wasn't used and there is a clash with
infinity() from math.h

* running playout_test_opt on Ubuntu Linux gives a weird result, looks
like an overflow of some kind: from 20 playouts, black wins 85732
and white wins 3213567648 or 3219491824 or some number in that
neighbourhood.
On Cygwin, the results are ok and always the same.

best regards,
Vlad
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

[computer-go] New ICGA web site

2007-01-22 Thread Nick Wedd
Some results of Computer Go (and other computer games) events have long 
been available on the old ICGA web site at 
http://www.cs.unimaas.nl/icga/ .  There is now a new ICGA web site at 
http://www.grappa.univ-lille3.fr/icga/ with fuller information on these 
events, including many game records that were not (so far as I know) 
previously available.  (Urban Hafner will be pleased to see that this 
new site includes RSS feeds.)


Nick
--
Nick Wedd    [EMAIL PROTECTED]
___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Matt Gokey

Nick Apperson wrote:


He is saying this (I think):

to read m moves deep with a branching factor of b you need to look at p 
positions, where p is given by the following formula:


p = b^m   (actually slightly different, but this formula is close enough)

which is:

log(p) = m log(b)
m = log(p) / log(b)

We assume that a doubling in time should double the number of positions 
we can look at, so:



m(with doubled time) = log(2p) / log(b)
m(with doubled time) = log(2) * log(p) / log(b)

Your math is wrong (I think).

The correct equivalency for the last line would be:
m(with doubled time) = (log(2) + log(p)) / log(b)
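
Following that algebra through, each doubling of p buys a fixed increment
of depth:

  m_{2p} - m_{p} \;=\; \frac{\log(2) + \log(p)}{\log(b)} - \frac{\log(p)}{\log(b)}
  \;=\; \frac{\log(2)}{\log(b)}

This increment does not depend on p, but it does shrink as the branching
factor b grows.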



So, as we can see we get a linear relationship no matter what b, the 
branching factor, is.  However, the slope on the line changes 
accordingly. 

But anyway that is not the exact relationship of interest here.  There
is another key variable: the number of positions the player can
effectively evaluate in one period of time, which is only a subset of the
p positions, not all of them, and is presumed to be roughly fixed at some
maximum value relative to the strength of the player.  Doubling this
number and comparing it to the number of possibilities at the next ply is
the relevant relationship, and that curve is very different depending on
the branching factor; it is not a linear relationship.


We are also assuming that the number of moves deep that 
one reads is proportional to playing strength.  This is in fact simply a 
way of defining playing strength.  There are many scales that could be 
constructed where this is not the case, however, this is a good one to 
pick from a theory perspective because the formula holds with any game 
that has a branching factor.  What you are talking about in terms of 
using "sense" is another factor is strength that isn't really addressed 
by the above math (in my opinion).  There are certainly ways in which 
one could say that by having a better sense of the game one can 
effectively reduce the branching factor, but one could also say that 
sense could be capable of restructuring the way people read a situation 
so that it is possible to read much deeper than is allowed by simple 
brute force search strategy.  That is an issue for psychologists to 
address.  I am looking forward to good data from experiments on this 
subject to answer some of these questions.  I think until we have good 
data for go, we won't really know.  Just my opinion (guess it is the 
scientist in me...)

Interesting thoughts.


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] Re: libgoboard v0.97 released

2007-01-22 Thread Łukasz Lew

On 1/22/07, Vlad Dumitrescu <[EMAIL PROTECTED]> wrote:

Hello Lukasz,

On 1/22/07, Łukasz Lew <[EMAIL PROTECTED]> wrote:
> > A few interesting things have happened, so I decided to announce a new version:

I have a few observations:

* in order to make libgoboard compile under cygwin I had to rename
"const float infinity", it wasn't used and there is a clash with
infinity() from math.h


Thanks, I will fix it.



* running playout_test_opt on Ubuntu Linux gives a weird result, looks
like an overflow of some kind: from 20 playouts, black wins 85732
and white wins 3213567648 or 3219491824 or some number in that
neighbourhood.


Which version?
What about engine_opt / engine_debug / playout_test_debug?

Maybe it is not initialized properly. Can you print the value before
any playout?


On Cygwin, the results are ok and always the same.


I will add randomization of seed.

Thanks for that,
Lukasz



best regards,
Vlad

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/

Re: [computer-go] Re: libgoboard v0.97 released

2007-01-22 Thread David Doshay
Randomization of the seed may not always be a good idea. For some experiments
it is better to know the starting seed and keep it the same; for others,
like play against humans, randomization is probably preferable.


I would suggest having a runtime flag that can be set either way.
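
A minimal sketch of that kind of switch, with made-up names (this is not
libgoboard's actual interface): take the seed from a "--seed N" argument
when one is given, so experiments are repeatable, and fall back to the
clock otherwise; the result would then be passed to whatever seeding
function the RNG exposes.

#include <cstdint>
#include <cstdlib>
#include <ctime>
#include <string>

// Hypothetical helper, not part of libgoboard.
uint64_t choose_seed (int argc, char** argv) {
  for (int i = 1; i + 1 < argc; i++)
    if (std::string (argv [i]) == "--seed")             // fixed, repeatable runs
      return std::strtoull (argv [i + 1], nullptr, 10);
  return static_cast<uint64_t> (std::time (nullptr));   // varies from run to run
}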

Cheers,
David



On 22, Jan 2007, at 2:01 PM, Łukasz Lew wrote:


On Cygwin, the results are ok and always the same.


I will add randomization of seed.



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] an idea... computer go program's rank vs time

2007-01-22 Thread Ray Tayek

At 09:27 AM 1/22/2007, you wrote:

...
Don believes there is probably no difference and 
states a rule: doubling thinking time = linear improvement in play.


i agree with this over some small range of powers of two.

..., as breaking the game into regions and doing 
local reading and global analysis reduces the 
complexity somewhat, but in general go explodes a lot faster than chess ...


... I can say that I don’t feel overwhelmed when playing chess.   ...
Now with Go as a beginner still, on the other 
hand, I almost always felt and still feel quite overwhelmed  ...


yes, i usually feel this way in tournament games. 
and again more time will help (for some small powers of 2).


i think more time works better because go has 
more battles going on at the same time.


... The interactions between areas and the 
explosion of the game and lack of experience to 
be able to “sense” good shape and proper 
balance early enough to lead to life and 
territory just simply overwhelms me. The feeling 
is not as severe as it was when I first learned, 
but it is still there.  I wonder whether even 
for strong amateurs this is still the case, but 
just happens a bit deeper.  Is this the time 
limit that Ray talks about where any more time is not helpful?  ...


if you are analyzing one battle, maybe more 
powers of two. if it's many battles, maybe fewer 
powers of two as you will hit your mental limit  sooner.


thanks

---
vice-chair http://ocjug.org/


___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/