Hi,

Minorization-maximization (MM) is a simple optimization method, and I agree that it is likely that more efficient algorithms can be applied.

Newton's method requires the inverse of the Hessian matrix (or, equivalently, solving a linear system with it). Actually computing the inverse has a cost that is cubic in the number of parameters, so it is not practical with tens of thousands of parameters.
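For concreteness, the full Newton update is (standard notation, nothing specific to this particular likelihood):

\[
\theta^{(t+1)} = \theta^{(t)} - \big(\nabla^2 f(\theta^{(t)})\big)^{-1} \nabla f(\theta^{(t)}),
\]

and inverting (or factoring) the n-by-n Hessian is an O(n^3) operation.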

Nevertheless, Newton's method can be applied to each parameter one by one. If I understand correctly, this is what Jason proposed. One step of Newton's method on a single parameter has a computational cost very similar to one MM update of that parameter, and it converges much faster (it should take only one or two iterations).
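To make this concrete, here is a rough sketch of such a one-parameter Newton step, assuming a plain Bradley-Terry likelihood over log-strengths; the names (r, games, newton_step_single) are only illustrative, not taken from Jason's code:

    import math

    def newton_step_single(r, i, games):
        # One Newton step on log-strength r[i], all other parameters held fixed.
        # games: list of (a, b, a_won) triples; r: list of log-strengths.
        grad = 0.0  # d(log-likelihood)/d r_i
        curv = 0.0  # -d2(log-likelihood)/d r_i^2 (always >= 0)
        for a, b, a_won in games:
            if i not in (a, b):
                continue
            opp = b if a == i else a
            i_won = a_won if a == i else not a_won
            p = 1.0 / (1.0 + math.exp(r[opp] - r[i]))  # P(i beats opp)
            grad += (1.0 if i_won else 0.0) - p
            curv += p * (1.0 - p)
        if curv > 0.0:
            r[i] += grad / curv  # r_i <- r_i - grad / (second derivative)
        return r

Sweeping such a step over all parameters a few times would play the same role as a few MM passes.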

Still, MM has the advantage that it can be applied in batch mode, updating all the parameters from a single pass over the data, thus saving a lot of computation.
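For comparison, here is a sketch of the classical batch MM update for the plain Bradley-Terry model (Hunter's formula: gamma_i <- W_i / sum over i's games of 1/(gamma_i + gamma_opponent)); one pass over the data updates every parameter at once:

    def mm_batch_update(gamma, games):
        # One batch MM update of all Bradley-Terry strengths (gamma[p] > 0).
        # games: list of (winner, loser) pairs.
        wins = {p: 0.0 for p in gamma}
        denom = {p: 0.0 for p in gamma}
        for winner, loser in games:
            wins[winner] += 1.0
            s = gamma[winner] + gamma[loser]
            # Each game contributes 1/(gamma_i + gamma_j) to both players' denominators.
            denom[winner] += 1.0 / s
            denom[loser] += 1.0 / s
        return {p: (wins[p] / denom[p] if denom[p] > 0.0 else gamma[p])
                for p in gamma}

Again, the names here are just for illustration.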

I cannot say for sure which approach would be more efficient.

Rémi
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
