On May 31, 2013, at 17:10 , Stefano Sofia wrote:

> I find it difficult to understand why, in
> lm(log(Y) ~ X)
> Y is assumed lognormal.
> I know that if Y ~ N then Z=exp(Y) ~ LN, and that if Y ~ LN then Z=log(Y) ~ N.
> In
> lm(log(Y) ~ X)
> I assume Y ~ N(mu, sigma^2), and then exp(Y) would be distributed as LN, 
> not log(Y).
> Where is my mistake?

It is log(Y) that is assumed N(mu, sigma^2), and exp(log(Y)) is LN.  
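A quick simulation sketch may make this concrete (the data and coefficient values below are made up for illustration): if log(Y) is normal with mean beta0 + beta1*X, then Y itself is lognormal, and lm(log(Y) ~ X) recovers the coefficients on the log scale.

```r
## Simulated example: log(Y) ~ N(1 + 2*X, 0.5^2), so Y is lognormal
set.seed(1)
n <- 1000
X <- runif(n)
Y <- exp(1 + 2 * X + rnorm(n, sd = 0.5))

fit <- lm(log(Y) ~ X)
coef(fit)                 # estimates near (1, 2), on the log scale
qqnorm(residuals(fit))    # residuals of log(Y), approximately normal
```

The normality assumption in lm() applies to the response as entered, here log(Y), which is exactly why Y is said to be lognormal.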


> 
> Moreover, in
> glm(Y ~ X, family=gaussian(link=log))
> the regression is
> log(mu) = beta0 + beta1*X.
> In
> lm(log(Y) ~ X)
> the regression is
> exp(mu+(1/2)*sigma^2) = beta0 + beta1*X.
> Correct?

Probably not. (What is mu? If it is E(log(Y)), then it should just be 
mu = beta0 + beta1*X.)
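To illustrate the distinction with a simulated example (coefficient values are made up): lm(log(Y) ~ X) models E[log Y | X] = beta0 + beta1*X, so the mean on the original scale is E[Y | X] = exp(beta0 + beta1*X + sigma^2/2), whereas glm(Y ~ X, family = gaussian(link = log)) models log E[Y | X] = beta0 + beta1*X directly.

```r
## Lognormal data: log(Y) ~ N(1 + 2*X, 0.5^2)
set.seed(42)
n <- 5000
X <- runif(n)
Y <- exp(1 + 2 * X + rnorm(n, sd = 0.5))

f1 <- lm(log(Y) ~ X)                           # models E[log Y | X]
f2 <- glm(Y ~ X, family = gaussian(link = log)) # models log E[Y | X]
s2 <- summary(f1)$sigma^2

## Both should agree on E[Y | X = 0.5] = exp(1 + 2*0.5 + 0.5^2/2):
exp(coef(f1)[1] + coef(f1)[2] * 0.5 + s2 / 2)   # lm, with sigma^2/2 correction
predict(f2, newdata = data.frame(X = 0.5), type = "response")
```

Note that the glm intercept absorbs the sigma^2/2 term, so its coefficients differ from the lm ones even though both predict the same conditional mean.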

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd....@cbs.dk  Priv: pda...@gmail.com

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
