Hi Geoff - I just have a quick minute, so I'll hazard a response without
thinking about it too much :)

On 8/16/06, Geoffrey Poole < [EMAIL PROTECTED]> wrote:
>
>
> Doesn't sqrt(SSx) increase with n?  If so, won't the "standard error of
> the slope" decrease with increasing sample size??


Yes - the standard error of the slope will decrease with increasing sample
size, since SE(slope) = s/sqrt(SSx) (where s is the SE of the estimate) and
SSx grows as you add points.
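To make that concrete, here's a minimal simulation sketch (not from the thread; the true line y = 2 + 0.5x and the noise level are made up for illustration) showing the textbook slope SE, s/sqrt(SSx), shrinking as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_se(n):
    """Fit y = a + b*x by least squares and return the standard error of b."""
    x = rng.uniform(0, 10, n)
    y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)   # made-up true line + noise
    b, a = np.polyfit(x, y, 1)                    # slope, intercept
    resid = y - (a + b * x)
    s2 = np.sum(resid**2) / (n - 2)               # residual mean square
    ssx = np.sum((x - x.mean())**2)               # SSx grows with n
    return np.sqrt(s2 / ssx)                      # SE of the slope

print(slope_se(10), slope_se(1000))               # the second is much smaller
```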



> I realize SE of estimate and SE of slope do not represent the same
> thing, statistically, but by comparing the SE of estimate across
> regressions of the same X and Y variables from different environments,
> couldn't one assess the expected accuracy of resulting predictions
> across environments using data sets with different sample size?  I think
> this is what Sarah is looking for...


Well - (again, not having thought about this much!) if I wanted to assess
the "accuracy" of predictions, I would look at the prediction bands of the
regression lines.  But all of these things (SE of estimate, prediction
intervals, R^2, etc.) are related measures of the "accuracy" of a
regression.
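For reference, a sketch of the standard prediction interval for a new observation at x0 in simple linear regression (the example data and noise level are invented for illustration):

```python
import numpy as np
from scipy import stats

def prediction_interval(x, y, x0, alpha=0.05):
    """95% (by default) prediction interval for a new observation at x0,
    from a least-squares fit of y on x."""
    n = len(x)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s = np.sqrt(np.sum(resid**2) / (n - 2))           # SE of the estimate
    ssx = np.sum((x - x.mean())**2)
    se_pred = s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / ssx)
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    yhat = a + b * x0
    return yhat - t * se_pred, yhat + t * se_pred

# Example: true line y = 2 + 0.5*x with noise sd 0.5 (assumed values)
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.5, x.size)
lo, hi = prediction_interval(x, y, 5.0)
print(lo, hi)   # the band brackets the true value 4.5
```

Comparing the width of these bands across environments gets at the "expected accuracy of resulting predictions" more directly than the raw SE of estimate does, since the band also accounts for n and for how far x0 is from the mean of x.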


> I suspect that what Zar is referring to here
> > is that the standard error of the estimate is in the same units as the
> > dependent variable.  Hence, you can divide it by the mean to get a
> > "unitless" measure.
> >
> If your suspicion is true, why would Zar have continued on to say "...
> making the examination of [the SE of estimate] a poor method for
> comparing regressions" (page 335, fourth edition).  Why would a unit-ed
> (i.e. non-unitless) measure automatically be poor for comparing
> regressions?  The continuation of the statement would make a lot more
> sense to me if Zar really were talking about instances where SE of
> estimate were proportional to the magnitude of the dependent variable.

> I read Zar's comment "(a unitless measure)" (p335) as a reminder that
> you would want to correct for any effect of the magnitude of Y by
> dividing the SE of estimate (not residual variance) by the mean to avoid
> mixed units...
>
> Also, what would be the point of dividing by the mean Y if not to remove
> an effect of increasing magnitude of Y?  Is there another compelling
> reason to do this?


Well - the only reason I can think of is to avoid "mixed units" - as you
pointed out.  It's the same basic principle as using a coefficient of
variation.  Perhaps a better characterization of the relationship between
the SE of the estimate and the magnitude of Y is that "the SE of the
estimate TENDS to be proportional to the magnitude of the dependent
variable."  That is, although it is not necessarily so (adding a constant
to all values breaks the proportionality), observations with a larger mean
tend to have a larger variance than observations with a smaller mean, as in
your example of weights.
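A quick sketch of the CV-style normalization (the data here are invented): dividing the SE of the estimate by mean(Y) is invariant to rescaling Y (changing units), which is exactly what makes it comparable across regressions.

```python
import numpy as np

def relative_se_of_estimate(x, y):
    """SE of the estimate divided by mean(y): a unitless, CV-like measure
    for comparing regressions whose Y variables differ in magnitude."""
    n = len(x)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s = np.sqrt(np.sum(resid**2) / (n - 2))   # SE of the estimate (units of y)
    return s / np.mean(y)                     # unitless ratio

# Rescaling Y (e.g., grams -> kilograms) leaves the ratio unchanged,
# but adding a constant to Y shrinks it -- echoing the caveat above.
x = np.arange(10, dtype=float)
noise = np.array([0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2, 0.4, -0.1, -0.5])
y = 3.0 + 2.0 * x + noise
print(relative_se_of_estimate(x, y))
print(relative_se_of_estimate(x, 1000 * y))   # same ratio
```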



> I'd appreciate your thoughts...
>
> Thanks,
>
> -Geoff
>
