I think many of the programs have a mechanism for dealing with "slow" knowledge. For example, in Fuego you can call a knowledge function for each node that reaches some threshold T of playouts. The new technical challenge is dealing with the GPU. I know nothing about it myself, but from what I read it seems to work best in batch mode: you don't want to send single positions back and forth for GPU evaluation.
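To make the threshold idea concrete, here is a rough sketch (not Fuego's actual API; all names, and the toy "position" and knowledge function, are hypothetical): the expensive knowledge call is deferred until a node has accumulated T playouts, so only heavily visited nodes pay for it.

```python
# Hypothetical sketch: apply a "slow" knowledge function only once a
# node has accumulated T playouts.

T = 200  # visit threshold before calling the expensive knowledge function

class Node:
    def __init__(self, position):
        self.position = position          # toy "position": a list of legal moves
        self.visits = 0
        self.knowledge_applied = False
        self.prior = {}                   # move -> prior bias

def slow_knowledge(position):
    # Stand-in for an expensive evaluator (e.g. a large pattern system
    # or a neural network); here it just returns a uniform bias.
    return {move: 1.0 for move in position}

def on_playout(node):
    node.visits += 1
    if node.visits >= T and not node.knowledge_applied:
        node.prior = slow_knowledge(node.position)
        node.knowledge_applied = True

node = Node(position=["A1", "B2", "C3"])
for _ in range(T):
    on_playout(node)
print(node.knowledge_applied)  # -> True
```

The point of the threshold is simply amortization: the cost of the slow call is paid only for the small fraction of nodes the search actually concentrates on.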
My impression is that we will see a combination of both in the future: "normal", fast knowledge, which can be called as initialization in every node and can be learned by Rémi Coulom's method (e.g. Crazy Stone, Aya, Erica) or by Wistuba's (e.g. Fuego); and then, on top of that, a mechanism to improve the bias using the slower deep networks running on the GPU.

It would be wonderful if some of us could work on an open-source network evaluator to integrate with Fuego (or pachi or oakfoam). I know that Clark and Storkey are planning to open-source theirs, but not in the very near future. I do not know about the plans of the Google DeepMind group, but they do mention something about a strong Go program in their paper :)

Martin

> Thanks for sharing. I'm intrigued by your strategy for integrating
> with MCTS. It's clear that latency is a challenge for integration. Do
> you have any statistics on how many searches new nodes had been
> through by the time the predictor comes back with an estimation? Did
> you try any prefetching techniques? Because the CNN will guide much of
> the search at the frontier of the tree, prefetching should be
> tractable.
>
> Did you do any comparisons between your MCTS with and w/o CNN? That's
> the direction that many of us will be attempting over the next few
> months it seems :)
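The batch-mode point, and the prefetching question in the quote, could be sketched very roughly as a request queue that only calls the slow (GPU) evaluator once enough positions have accumulated. This is a hypothetical illustration, not any program's actual design; the class, function names, and toy "network" are all made up:

```python
# Hypothetical sketch of batched evaluation: instead of sending single
# positions to the GPU, queue them and evaluate once a batch fills up.

from collections import deque

BATCH_SIZE = 8

class BatchedEvaluator:
    def __init__(self, evaluate_batch):
        # evaluate_batch: list of positions -> list of evaluations,
        # standing in for one batched GPU call.
        self.evaluate_batch = evaluate_batch
        self.queue = deque()
        self.results = {}     # position id -> evaluation

    def request(self, pos_id, position):
        self.queue.append((pos_id, position))
        if len(self.queue) >= BATCH_SIZE:
            self.flush()

    def flush(self):
        if not self.queue:
            return
        ids, positions = zip(*self.queue)
        self.queue.clear()
        for pid, value in zip(ids, self.evaluate_batch(list(positions))):
            self.results[pid] = value

# Toy "network": scores each position string by its length.
evaluator = BatchedEvaluator(lambda batch: [len(p) for p in batch])
for i in range(10):
    evaluator.request(i, "x" * i)
evaluator.flush()                 # evaluate the remaining partial batch
print(len(evaluator.results))     # -> 10
```

In a real search the requests would come from newly expanded frontier nodes, and the results would arrive asynchronously, which is exactly why the latency and prefetching questions above matter: the node keeps accumulating playouts until its batch comes back.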
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go