On 10/9/07, Andrés Domínguez <[EMAIL PROTECTED]> wrote:
> 2007/10/9, Eric Boesch <[EMAIL PROTECTED]>:
> > Naive null move is unhelpful because throughout much of a go game,
> > almost every move is better than passing,
>
> I think this is not the point of null move. Null move is "if pass is good
> enough for an alpha cut, then a real move will be _better_". It is not
> important whether pass is the worst move; what is important is that there
> is a better (>=) move than pass (no zugzwang). Then you bet on not
> searching as deep.
Sorry, that was sloppy writing in several places. I was not trying to argue that null-move pruning (NMP) would give the wrong answer, but that even if NMP performs as intended and horizon effects don't spoil its evaluation, it might not prune many moves. The hope is to prune variations that are bad and lopsided enough that, starting at some point, one side loses the equivalent of a whole move compared to more-or-less correct play by both sides. The fraction of variations that fit that description grows with the depth of the tree and with the variability of move quality, and both are likely to be lower in global go search than in global chess search. (As for local go search, as I already explained, I think that even if NMP is effective compared with bare-bones alpha-beta, it is still less effective than other approaches like lambda search.)

If, for both players, every move except pass is better than passing, then even when NMP works as intended, no moves will be pruned in the first two plies of the search (it takes at least two moves by the same player to fall a full move behind). If an average move is more than two-thirds as valuable as the best move -- which is usually true in go for, very roughly, the first 20 moves of a typical 19x19 game -- you have to go about six plies deep before you see many NMP cutoffs: even if white's sequence is below average and cumulatively a move worse than best, it may not lose a full move to black's imperfect responses, so only a minority of 6-ply sequences are eligible, and then you have to ask how many of those would have been cut off by ordinary alpha-beta anyhow (I would expect the sequences NMP might prune to be cut off by plain alpha-beta at a higher rate than more balanced sequences). You won't see NMP cutoffs at the bottom of the tree either, because by then it is too late to prune. If NMP doesn't prune much near the root or near the deepest nodes, and the tree is shallow because the branching factor and the cost of static evaluation leave no time to search deeply, then NMP isn't doing much, period. I think that is at least part of what has limited the benefits of null-move pruning for full-breadth global search in go. Selective global search allows deeper searches, but a good selector should already prune away most of the sequences NMP might otherwise have caught.

None of this is an argument that NMP would be literally useless, just that it is unlikely to yield a dramatic strength improvement. Even in chess, Chrilly Donninger put NMP at something like 100 Elo points. The only alpha-beta tweak that can add 400 Elo to a chess program on its own is transposition tables, and everybody already has those. That makes it hard to understand why non-go-programmers are sometimes so willing to believe that merely souping up an alpha-beta search could turn today's top go programs -- which I would put at roughly the go equivalent of class B at 19x19 -- into the go equivalent of grandmasters. A simple-but-fast full-breadth alpha-beta go program would have even further to go, because it would first need to become competitive with the top tier of existing programs, which no such program currently is. Either way, measured in human terms, the jump from the state of the art to world-champion-caliber play would be a far bigger leap than Deep Thought and Deep Blue ever made. (The "leap" to dan level, if gaining just two stones can be called that, surely requires only throwing a little more hardware at existing programs.)
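For concreteness, here is a minimal negamax-with-null-move sketch (Python). It is not taken from any particular program; evaluate(), null_move_ok(), and the pos methods (make_pass, make_move, legal_moves) are placeholders for whatever a real engine would provide, and R = 2 is just a common choice of reduction:

    INF = 10**9
    R = 2   # null-move depth reduction

    def negamax(pos, depth, alpha, beta):
        if depth <= 0:
            return evaluate(pos)                # static evaluation (placeholder)
        # Null-move pruning: let the side to move pass, then search the
        # opponent's reply R plies shallower than a normal child.  If even
        # that comes back >= beta, bet that some real move would too, and
        # cut off without searching any real moves.
        if depth > R and null_move_ok(pos):     # e.g. skip when passing is catastrophic
            score = -negamax(pos.make_pass(), depth - 1 - R, -beta, -beta + 1)
            if score >= beta:
                return score
        best = -INF
        for move in pos.legal_moves():
            score = -negamax(pos.make_move(move), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break                           # ordinary alpha-beta cutoff
        return best

The whole argument above is about how rarely that "score >= beta after a pass" test can succeed in go before the search is already several plies deep.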
(The "leap" to dan level, if gaining just two stones can be called that, surely requires only throwing a little more hardware at existing programs.) Okay, enough of that. If people aren't persuaded by other programmers' experience trying to map computer chess methods to computer go in a straightforward way, then they're not likely to be convinced by my hand-waving arguments either. [Regarding programmers' experience: when a top chess programmer (Chrilly) and a successful go programmer (Peter Woitke) collaborated on a chess-style go program, the result fell -- at last report, anyhow -- about 600 Elo points short of the top tier of programs at 9x9, and presumably much farther short at real go. (The 600 figure is derived from Chrilly's claims of a 60% success record against GnuGo, and GnuGo's placement nearly 700 Elo points behind Mogo on CGOS -- 9x9 is not GnuGo's long suit.) That should dispel any residual hopes that applying state-of-the-art chess-search techniques to a go program otherwise short on distinguishing ideas could yield a state-of-the-art go program, let alone something stronger. (I assume Suzie does have other distinguishing ideas, but surely it would only be weaker without them.)] _______________________________________________ computer-go mailing list computer-go@computer-go.org http://www.computer-go.org/mailman/listinfo/computer-go/