On Sep 22, 2008, at 7:59 AM, Magnus Persson <[EMAIL PROTECTED]> wrote:

In the case of ladders, the heavy playouts of Valkyria correctly prune ladder escapes for the losing side. But sometimes in the playouts the ladder is broken, and after that there is a chance the stones escape anyway. This means that almost whenever the escaping move is played, it is a good move! Thus AMAF assigns a very good score to this move....

My solution to this was simply to turn off AMAF evaluation for all shapes commonly misevaluated because of ladders. But I think this problem holds for many shapes in general. What makes ladders special is that the problem repeats itself, so the effect gets stronger, and thus more misleading, the larger the ladder gets.

I think a better solution would be to modify AMAF in some way to avoid these problems, or perhaps to change the playouts in a way that balances the problem. Does anyone know what to do about it, or have any ideas?

My RAVE formulation includes a per-move parameter for RAVE confidence. This allows heuristics to fix situations like above. Sadly, my bot isn't mature enough to take advantage yet :(

The concept I used for the derivation is simple. I treat everything as gaussian estimators. It's easy to find the max of the distribution. I then use the same trick as bayeselo to estimate variance. I then add a Gaussian noise term to represent RAVE bias.

The results of the math are most easily expressed in terms of inverse variance (iv = 1/variance):

Combined mean = sum( mean * iv ) / sum( iv )
Combined iv   = sum( iv )
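To make the combination concrete, here is a minimal sketch in Python. The variable names and numeric values are illustrative (not from the original post); the extra `rave_bias_var` term stands for the added Gaussian noise variance representing RAVE bias, as described above.

```python
def combine(mc_mean, mc_var, rave_mean, rave_var, rave_bias_var):
    """Precision-weighted combination of two Gaussian estimators.

    The RAVE estimate gets an extra bias variance added to it, since
    its bias is modeled as independent Gaussian noise (variances add).
    """
    iv_mc = 1.0 / mc_var
    iv_rave = 1.0 / (rave_var + rave_bias_var)

    combined_iv = iv_mc + iv_rave
    combined_mean = (mc_mean * iv_mc + rave_mean * iv_rave) / combined_iv
    return combined_mean, combined_iv

# Hypothetical numbers: a noisy Monte-Carlo estimate and a sharper but
# biased RAVE estimate. The result lands between the two means, pulled
# toward whichever estimator has the higher effective precision.
mean, iv = combine(mc_mean=0.55, mc_var=0.01,
                   rave_mean=0.70, rave_var=0.002,
                   rave_bias_var=0.01)
```

Note that adding `rave_bias_var` lowers the effective precision of the RAVE term, which is exactly how a per-move RAVE-confidence parameter can down-weight moves like the ladder escapes above.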

I'll try to do a real write-up if anyone is interested.
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/