> David, do I have this right? And is K > N? Or K >> N?

This is somewhat similar in MoGo, with N=5 and nearly K=10 (I say
"nearly" because we have two levels of go expertise, the second being
much more expensive and not yet operational, so we in fact have K1 and
K2).


> In Pebbles, BTW, the progressive widening policy is rudimentary, consisting
> mostly of distance and 3x3 pattern.

I think that "progressive widening" means that you add arms one at a
time (precisely, the n^{th} simulation chooses among the first f(n)
nodes as ranked by the heuristic, where the heuristic might be RAVE,
patterns, ...), whereas "progressive unpruning" means that you combine
the two values linearly (e.g. the UCT value and the heuristic value),
with the weight of the second one decreasing, e.g. linearly. For us
"progressive unpruning" works much better than "progressive widening",
which might add a child node even when the first nodes have a success
rate of 1. We in fact use a much more complicated formula than that,
but I don't know to what extent our terms are general.
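To make the distinction concrete, here is a minimal sketch of the two
selection schemes. The f(n) schedule, the linear decay constant, and
the child-dictionary layout are illustrative assumptions of mine, not
our actual formula (which, as said above, is more complicated):

```python
import math

def progressive_widening_choice(children, n_parent, c=1.4):
    """Progressive widening: the n-th simulation at a node only
    considers the first f(n) children, where children are pre-sorted
    by a prior heuristic (e.g. RAVE or 3x3 patterns).
    f(n) = sqrt(n) is one common schedule, assumed here."""
    f_n = max(1, int(math.sqrt(n_parent)))
    candidates = children[:f_n]

    def ucb(ch):
        if ch["visits"] == 0:
            return float("inf")  # always try an unvisited candidate first
        return ch["wins"] / ch["visits"] + c * math.sqrt(
            math.log(n_parent) / ch["visits"])

    return max(candidates, key=ucb)

def progressive_unpruning_choice(children, n_parent, c=1.4, w=50.0):
    """Progressive unpruning: every child is a candidate, but the UCT
    value is blended with the heuristic prior, whose weight decays
    (here linearly in the child's visit count, reaching 0 at w visits)
    as real simulation statistics accumulate."""
    def score(ch):
        if ch["visits"] == 0:
            return ch["heuristic"]  # unvisited: rely on the prior alone
        uct = ch["wins"] / ch["visits"] + c * math.sqrt(
            math.log(n_parent) / ch["visits"])
        alpha = max(0.0, 1.0 - ch["visits"] / w)  # decaying prior weight
        return (1.0 - alpha) * uct + alpha * ch["heuristic"]

    return max(children, key=score)
```

The failure mode mentioned above shows up directly in the first
function: even if the candidates in `children[:f_n]` all have success
rate 1, the schedule still grows f(n) with n and eventually adds the
next child, whereas the blended score in the second function keeps
favoring arms that are actually winning.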


> joseki) apply only to the UCT process. Does MoGo apply such knowledge in
> playouts?

No, not in the current version. We have an experimental version in
which the knowledge has a strong impact on the playouts, and it looks
nice when we apply MoGo to some particular situations, but on average
it makes MoGo weaker. This is very disappointing, because the approach
was very appealing and we believed in it when we saw that some typical
situations in which MoGo was weak were now handled correctly :-(

Best regards,
Olivier
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/