MM and CLOP are completely different from each other.

MM is for supervised learning of a playing policy from game records. It will 
tune parameters (pattern weights) to match a given set of sample moves 
(typically from a collection of game records of strong players).
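To give a flavor of it, here is a toy Python sketch of the Minorization-Maximization update for a Bradley-Terry move model, in the simplified case where each candidate move is described by a single pattern (the real algorithm generalizes this to teams of features per move; all names here are illustrative):

```python
def mm_fit(positions, n_patterns, iters=50):
    """Minorization-Maximization for a Bradley-Terry move model.

    positions: list of (candidate_patterns, chosen_index), where
    candidate_patterns holds one pattern id per legal move and
    chosen_index points at the move actually played.
    Returns one strength (gamma) per pattern; the model says
    P(move) = gamma[pattern(move)] / sum of gamma over legal moves.
    """
    gamma = [1.0] * n_patterns
    # wins[p]: how often a move with pattern p was the chosen one
    wins = [0] * n_patterns
    for cands, chosen in positions:
        wins[cands[chosen]] += 1
    for _ in range(iters):
        denom = [0.0] * n_patterns
        for cands, _ in positions:
            e = sum(gamma[p] for p in cands)  # total strength in this position
            for p in cands:
                denom[p] += 1.0 / e
        # MM update: each step provably increases the likelihood
        gamma = [wins[p] / denom[p] if denom[p] > 0 else 0.0
                 for p in range(n_patterns)]
    return gamma
```

Patterns of frequently chosen moves end up with large gamma, so the fitted policy assigns them high probability.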

CLOP is black-box optimization, that is to say it can optimize the win rate of 
your program by letting it play games against a reference opponent. It will 
try plenty of different parameter values, play games with them, and estimate 
the best parameter values from those results.
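Here is the flavor of that loop in toy Python. This is only a sketch: real CLOP fits a quadratic logistic model and concentrates its samples around the current optimum estimate, while this version uses uniform sampling and plain least squares on the 0/1 outcomes; play_game and the parameter range are placeholders for your program:

```python
import random

def tune_win_rate(play_game, lo, hi, n_games=10000, seed=1):
    """Black-box tuning of one parameter by playing games.

    play_game(x) plays one game with parameter value x and returns
    True on a win. We fit a quadratic y ~ a*x^2 + b*x + c to the
    win/loss outcomes and return the vertex -b/(2a) as the
    estimated best parameter value.
    """
    rng = random.Random(seed)
    data = [(x, 1.0 if play_game(x) else 0.0)
            for x in (rng.uniform(lo, hi) for _ in range(n_games))]
    # Accumulate the normal equations for rows [x^2, x, 1].
    m = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0, 0.0, 0.0]
    for x, y in data:
        row = (x * x, x, 1.0)
        for i in range(3):
            rhs[i] += row[i] * y
            for j in range(3):
                m[i][j] += row[i] * row[j]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(m)
    coef = []
    for k in range(3):  # Cramer's rule for a, b, c
        mk = [[rhs[i] if j == k else m[i][j] for j in range(3)]
              for i in range(3)]
        coef.append(det3(mk) / d)
    a, b, _ = coef
    vertex = -b / (2.0 * a)
    return min(hi, max(lo, vertex))  # clamp into the search range
```

With a real program, play_game would launch a game against the reference opponent; to test the loop you can plug in a simulated win-rate curve with a known maximum.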

They both optimize something: MM optimizes how the playing policy matches some 
given sample moves, and CLOP optimizes the win rate against a reference 
opponent. But you cannot use one method to do the job of the other.

Rémi

On 11 févr. 2014, at 20:42, Peter Drake <[email protected]> wrote:

> A naive question:
> 
> In what situations is it better to use Coulom's Elo method vs his CLOP method 
> for setting parameters? It seems they are both techniques for optimizing a 
> high-dimensional, noisy function.
> 
> -- 
> Peter Drake
> https://sites.google.com/a/lclark.edu/drake/
> _______________________________________________
> Computer-go mailing list
> [email protected]
> http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
