There are books on this, can't repeat them here...

Roughly speaking, Fisher scoring is quadratically convergent, so it needs far
fewer iterations than gradient descent methods, whose convergence is generally
only linear, and sometimes very slowly so (typically in highly collinear
cases). I.e., it is a trade-off of extra work per iteration against more
iterations. Besides, glm() wants the information matrix for the
variance-covariance matrix of the estimates anyway.
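To make the "few iterations" point concrete, here is a minimal sketch of Fisher scoring (equivalently, IRLS) for a toy logistic regression. This is plain Python on made-up data, not R's actual glm.fit code, and the tolerance and iteration cap are arbitrary choices; the point is only that the update beta <- beta + I^{-1} U, using the information matrix I = X'WX, typically converges in a handful of steps, and I^{-1} is exactly the variance-covariance estimate glm() reports.

```python
import math

# Toy data: classes overlap, so the MLE is finite
# (perfect separation would make the estimates diverge).
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0,   0,   1,   0,   1,   1]

b0, b1 = 0.0, 0.0                       # start at beta = 0
for it in range(1, 26):
    mu = [1.0 / (1.0 + math.exp(-(b0 + b1 * xi))) for xi in x]
    w = [m * (1.0 - m) for m in mu]     # Fisher weights mu(1 - mu)
    # Score vector U = X'(y - mu)
    u0 = sum(yi - mi for yi, mi in zip(y, mu))
    u1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
    # Fisher information I = X'WX (2x2: intercept and slope)
    i00 = sum(w)
    i01 = sum(wi * xi for wi, xi in zip(w, x))
    i11 = sum(wi * xi * xi for wi, xi in zip(w, x))
    det = i00 * i11 - i01 * i01
    # Fisher-scoring step: beta <- beta + I^{-1} U
    d0 = (i11 * u0 - i01 * u1) / det
    d1 = (-i01 * u0 + i00 * u1) / det
    b0, b1 = b0 + d0, b1 + d1
    if max(abs(d0), abs(d1)) < 1e-8:
        break

print("converged in", it, "iterations; beta =", round(b0, 4), round(b1, 4))
```

Each iteration costs a weighted least-squares solve rather than a cheap coordinate update, but the iteration count stays small, which is the trade-off described above.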

-pd

On 03 Jul 2014, at 19:32 , Vijay goel <bgo...@gmail.com> wrote:

> The R base function glm() uses Fisher scoring for MLE, while glmnet uses
> coordinate descent to solve the same equation. Coordinate descent is more
> time-efficient than Fisher scoring, since Fisher scoring computes the
> second-derivative matrix and performs other matrix operations that make it
> expensive in space and time, while coordinate descent can do the same task
> in O(np) time.
> 
> Why does the R base function use Fisher scoring, and does this method have
> advantages over other optimization methods? How do coordinate descent and
> Fisher scoring compare? I am relatively new to this field, so any help or
> resources would be appreciated.
> 
> Regards
> Vij
> 
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd....@cbs.dk  Priv: pda...@gmail.com

