Quoting Brian Sheppard <[email protected]>:

I think 9x9 Go, even though often compared to chess in complexity, is still
more complex than chess, and that the book will have a little less impact,
although still a lot.

My projection is the opposite: I think that 9x9 will be "played out" within
5 years. Not weakly solved, exactly, but close to it. Zen and CrazyStone
have the ability to start on that project already.

Is bonobot on CGOS in fact CrazyStone? (It would be nice to know what kind of hardware it runs on.)


My impression is that opening books are routinely worth a few hundred
rating points on 9x9 CGOS.



I would cite Valkyria, which has a version playing near the top of
the CGOS ladder most of the time. A comparable version was playing ~200
rating points lower within the last year, and I suspect that the opening book
knowledge that comes from its long-term memory is the dominant contributor.

I have run version 3.5.9, which has been stable and strong on CGOS for a long time, on an old single-core P4 computer using one thread.

This is from the current CGOS BayesElo output:

Rank  Name                    Elo    +    -    Games   Score   Oppo.
90    Valkyria3.5.9_P4Bx      2599   7    7    36195   79%     2131
115   Valkyria3.5.9_P4B       2559   23   23   2218    82%     2121
140   Valkyria3.5.9_P4        2505   30   29   1155    76%     2081
149   Valkyria3.5.9_P4_x      2499   31   30   1201    71%     2216

The first two versions have been playing recently; the other two played only at the beginning of this experiment.

A 'B' in the name means it used a manually edited book. An 'x' means it uses a permanent hash table. Book moves are played without search, so a deeper book also means fewer positions have to be stored in the hash table. The book is hardcoded in the program and contains about 125 positions where moves are proposed.
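The book-before-search scheme described above can be sketched roughly as follows. This is a minimal illustration in Python; the dictionary contents, key format, and function names are all made up for the example, not Valkyria's actual code:

```python
# Illustrative hardcoded opening book, consulted before any search.
# Board keys and moves are placeholders, not real Valkyria data.
OPENING_BOOK = {
    "empty_9x9": "E5",
    "E5-G5": "E3",
}

def choose_move(board_key, search_fn):
    """Play instantly from the book when possible; otherwise search.

    Skipping search on book moves also keeps those positions out of
    the permanent hash table, leaving room for deeper positions.
    """
    move = OPENING_BOOK.get(board_key)
    if move is not None:
        return move              # book hit: no search, no hash-table entry
    return search_fn(board_key)  # book miss: fall back to full search
```

The point of playing book moves without search is visible in the table above: a deeper book frees hash-table capacity for the positions that actually need it.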

Here the book and the hash table together seem to be worth about 100 Elo points, contributing roughly equally. It is a small book, but a lot of effort went into it.
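For a sense of what 100 Elo means in game terms, the standard logistic Elo model converts a rating difference into an expected score (CGOS uses BayesElo, which is close in spirit, so this is only an approximation):

```python
def expected_score(elo_diff):
    """Expected score of the higher-rated player under the
    standard logistic Elo model."""
    return 1.0 / (1.0 + 10 ** (-elo_diff / 400.0))

# A 100-point edge corresponds to scoring roughly 64% against the
# weaker version, and a 0-point difference to exactly 50%.
```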

But as you get deeper into the tree, adding a position to the book has less and less impact on playing strength.

I need to repeat this experiment with the latest book and hash tables. Note, however, that all versions of Valkyria with 4c in the name run on a modern machine that is almost 10 times faster, so their BayesElo ratings of 2700-2800 are probably not much due to the opening book.

On the other hand, the book is in some parts experimental, so I could probably take out some branching in it and win a little more.

I also cite the Little Golem server, which is dominated by programs that
have opening books.

For Valkyria I cannot use my book at 7.5 komi, because it would be wrong at 5.5 komi, where Black has the advantage rather than White. Therefore the LG games are based on pure search. I do take notes on every position searched and build a book from them, but with very few games against strong players this book is still very small compared to the CGOS book.

Based on the work of Mogo and Valkyria, I suspect that if you take a pretty
good player and create a feedback system then you get a great opening book.
With an effective branching factor of maybe 2 to 3, you can get pretty far
into the game.
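As a back-of-the-envelope check on that claim: with an effective branching factor b and a target depth of d plies, a book that covers every line it allows needs about 1 + b + b^2 + ... + b^(d-1) positions. A tiny sketch (the numbers are hypothetical, just to show the growth):

```python
def book_positions(branching, depth):
    """Approximate number of positions a book needs so that every
    line it allows is covered to `depth` plies, keeping `branching`
    candidate moves per position (geometric series 1 + b + ... + b^(d-1))."""
    return sum(branching ** d for d in range(depth))
```

With branching 2, a 10-ply book needs about a thousand positions; with branching 3 it already needs nearly 30,000. That is why a low effective branching factor lets a feedback-built book reach pretty far into the game.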

Still, bonobot has beaten Valkyria in 70-80% of the games against the latest version of my book, used by version 3.6.x. I will soon take a closer look at what happens, but to me it seems bonobot just plays whatever it "feels" like playing and still wins in the end. And it seems to get out of Valkyria's book quickly, because it often plays slightly unusual moves.

I think there is a lot of fine-tuning to be done in opening play. Sometimes several moves are playable with perfect play, but from a complexity point of view one move is pragmatically best. I think Valkyria's book avoids a lot of game-losing blunders, which is enough against weaker programs, but against really strong play it will give up the advantage (when playing White with 7.5 komi, for example) little by little, until finally the position gets so complicated that it cannot read out the tactics and loses.

Here is an example. I found a variation starting with 1.Be5 2.Wg5 3.Be3? 4.Wc6 5.Bc4 (B = Black, W = White).

3.Be3 was played a lot by MyGoFriend in the Computer Olympiad. Is it a good move? It is certainly playable. At that point in time it was only played by weaker programs on CGOS, but after the Olympiad many programs stronger than Valkyria started playing it as well. Initially it looked really strong, but after a while I started to figure out how to counter it as White (as well as how to play it effectively as Black), for example with 4.Wc6.

Now suddenly Fuego-1491-25t won 9 out of 10 games or so playing 5.Bc4, which I did not remember as ever having been played. In my book there was a sequence about 8 plies deep and some notes that it had been played a few times before.

To fix this part of the book (as White, Valkyria should win 70% or more, not just 10%), I looked at the games played, and both programs seemed to follow a slightly forced sequence 10 plies deep. I verified the moves yesterday and this morning using the iterative deepening search with unlimited time, and came down to a variation which should be strong. After that there was a ko fight, and Valkyria's evaluation was really good.

Finally I identified a position where Valkyria, given a long search, wanted to play a ko threat instead of defending a multi-step ko. The ko threat looked ugly, however, and Valkyria did not search the obvious reply to it. So I manually forced it to search that reply, and after a couple of minutes the evaluation started to go crazy. The iterative deepening search is usually quite stable; it goes up and down 7-8% all the time, but here the swings were 25%. In the end it looked certain that it would lose the fight.
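The kind of instability described here (normal oscillation of a few percent versus 25% swings) could in principle be flagged automatically by watching the spread of the root win-rate over recent iterations. A hypothetical sketch; the function names, window size, and threshold are mine, not anything in Valkyria:

```python
def max_swing(winrates, window=5):
    """Largest max-minus-min spread of the root win-rate over any
    `window` consecutive iterative-deepening iterations."""
    spreads = [
        max(winrates[i:i + window]) - min(winrates[i:i + window])
        for i in range(len(winrates) - window + 1)
    ]
    return max(spreads) if spreads else 0.0

def looks_unstable(winrates, threshold=0.15):
    """Flag a search whose evaluation swings far more than the usual
    few percent; such positions may deserve manual attention."""
    return max_swing(winrates) > threshold
```

A stable search like the one described (7-8% oscillation) stays below such a threshold, while 25% swings would trip it.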

This was evidently a position where the search in general fails. And it was the kind of vague position where there are no simple basic tactics one could fix in the playouts. Whatever went wrong probably happened another 10 plies deeper in the tree.

I then backed up to the ko threat again. The hash table apparently now had proper information from that search, so Valkyria switched its opinion completely and proposed the follow-up move to the ko threat first, which it claimed scored 70%. It is still searching this position at home...

OK, if you have bothered to follow this far: my point was not that you understand the details of this example. It is just to say that making a perfect book is not easy, and it is not a question of a couple of plies. Against Fuego and stronger programs, the opening book fights go deep! Basically, good games are not resolved tactically until almost the entire board is filled.

Valkyria is pretty strong given a lot of search time, but there are positions where 12 hours of search time PER MOVE is really necessary. And even then it may completely refuse to search an important move and draw very bad conclusions.

In the case above, it turned out that I myself may have found a way to refute Fuego's line, but I am not sure of it. An automatic system would have a hard time even finding the position it should search.

In short: most programs are full of holes, and building a strong opening book means one has to play other strong programs and discover those holes. And that is hard *manual* work.

Valkyria's book is apparently not strong enough to compete with the best search-only programs, for example. (But this may just be my incompetence.) (Or the abundance of CGOS games involving Valkyria, used for machine learning of patterns, makes those programs strong, implicitly using Valkyria's book against itself.)

Best
Magnus
_______________________________________________
Computer-go mailing list
[email protected]
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
