On Dec 20, 2007 10:36 PM, Jason House <[EMAIL PROTECTED]> wrote:
>
> On Dec 20, 2007 5:39 PM, Álvaro Begué <[EMAIL PROTECTED]> wrote:
>
> > Over lunch I thought of another way of doing it that would be very
> > general and easy to implement. Basically, I can compute the
> > log-likelihood for a particular setting of the weights, and I can
> > compute the gradient and the Jacobian matrix (the derivative with
> > respect to each weight or pair of weights, respectively). Then I can
> > approximate the function I am minimizing using a paraboloid computed
> > from that data, compute the minimum of the paraboloid, and take that
> > point to be the new setting of the weights. Has anyone tried something
> > like this? It seems like the natural way to do it to me.
>
> My knowledge of the Jacobian matrix is limited.
> http://en.wikipedia.org/wiki/Jacobian
> seems to imply that your definition of the Jacobian matrix is unusual.
Oops! It's been too long since I learned these things. What I meant to say was the Hessian matrix ( http://en.wikipedia.org/wiki/Hessian_matrix ), which does contain the second derivatives. You are right that assuming the derivative changes linearly and using a paraboloid are exactly the same thing. So we are probably thinking of the same method, although I think I can code it very elegantly (maybe slowly, too?).

Álvaro.
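The update being described here is essentially a Newton step: fit a paraboloid to the negative log-likelihood from its gradient and Hessian at the current weights, then jump to the paraboloid's minimum. Below is a minimal sketch of that iteration, assuming a hypothetical neg_log_likelihood(w) that returns the value, gradient, and Hessian at w; none of these names come from the original post.

```python
import numpy as np

def newton_step(w, neg_log_likelihood):
    """One Newton iteration: build the quadratic (paraboloid) model from the
    gradient and Hessian at w, then move to the model's minimum."""
    value, grad, hess = neg_log_likelihood(w)
    # The paraboloid's minimum satisfies H * delta = -g.
    delta = np.linalg.solve(hess, -grad)
    return w + delta

def fit_weights(w0, neg_log_likelihood, iters=20, tol=1e-8):
    """Repeat Newton steps until the weights stop moving."""
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        w_new = newton_step(w, neg_log_likelihood)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```

Each iteration requires building the Hessian and doing one linear solve in the number of weights, which is where the "maybe slowly, too?" caveat would come from once there are many weights.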