>
> But, while that may be the case, perhaps we can say that they are
> hitting a wall in their observable playing strength against non-MCTS
> players (such as humans) at higher levels. In [2] I touched upon how the
> nature of the game changes at higher levels, and how scaling results
> obtained between weaker players may not apply at those higher levels. I
> was talking about pure random playouts in that article, but the
> systematic bias Olivier mentions can lead to the same problems as no
> bias at all...


I completely agree with this.
It's like a Monte-Carlo evaluation (in a non-MCTS framework):
parallelization reduces the variance, but not the bias. You still get
improvements, and you can believe that parallelization brings a lot,
but in fact it does not.
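A toy numerical sketch of this point (all numbers here are invented for
illustration): if the playout policy has a systematic bias, averaging more
simulations shrinks the variance as 1/sqrt(N), but the estimate converges to
the biased value, not the true one.

```python
import random

random.seed(0)

TRUE_VALUE = 0.5   # hypothetical true win rate of a position
BIAS = 0.1         # hypothetical systematic bias of the playout policy

def biased_playout():
    """One simulation: returns 1 (win) or 0 (loss). The playout policy
    is systematically biased, so the expected result is
    TRUE_VALUE + BIAS rather than TRUE_VALUE."""
    return 1 if random.random() < TRUE_VALUE + BIAS else 0

def estimate(n_playouts):
    """Monte-Carlo estimate: the average of n_playouts simulations."""
    return sum(biased_playout() for _ in range(n_playouts)) / n_playouts

# More playouts (e.g. via parallelization) shrink the variance...
for n in (100, 10_000, 1_000_000):
    err = estimate(n) - TRUE_VALUE
    print(f"{n:>9} playouts: error = {err:+.3f}")
# ...but every estimate converges to TRUE_VALUE + BIAS, so the error
# never drops much below the systematic bias of ~0.1.
```

No amount of extra computation changes the limit of the average; only a
different (less biased) playout policy would.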
We have spent a lot of energy trying to remove the bias with various
statistical tricks (combining MCTS with tactical search, or reweighting
simulations according to macroscopic information such as the size of
captures), but nothing works yet in the case of Go. For some other games
we have positive results, but as I'm not the main author I won't give too
much information on this. I hope we'll find a similar solution for Go,
but for the moment, in spite of many trials, it does not work, and I'm
not far from being tired of trying plenty of different implementations
of these ideas :-) ).
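The reweighting idea might look something like the following sketch. This is
not Olivier's actual implementation: the playout model, the capture feature,
and the weighting function are all hypothetical, made up just to show the
shape of the technique (down-weight simulations whose macroscopic features
suggest they are the biased ones).

```python
import random

random.seed(0)

def playout():
    """Hypothetical simulation: returns (result, n_captures). In this
    toy model, playouts with many captures over-report wins, i.e. they
    carry the bias."""
    n_captures = random.randint(0, 10)
    p_win = 0.5 + 0.03 * n_captures   # invented bias model
    return (1 if random.random() < p_win else 0), n_captures

def reweighted_estimate(n_playouts):
    """Weighted Monte-Carlo average that down-weights high-capture
    simulations (a hypothetical weighting scheme)."""
    num = den = 0.0
    for _ in range(n_playouts):
        result, n_captures = playout()
        w = 1.0 / (1.0 + n_captures)
        num += w * result
        den += w
    return num / den

plain = sum(playout()[0] for _ in range(100_000)) / 100_000
rew = reweighted_estimate(100_000)
print(f"plain average:      {plain:.3f}")   # pulled up by the capture bias
print(f"reweighted average: {rew:.3f}")     # closer to the unbiased 0.5
```

In this toy setting the reweighting helps because the feature (capture
count) is perfectly correlated with the bias; the message of the post is
precisely that in real Go no such clean correction has worked so far.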

Best regards,
Olivier
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
