All,
        The situation Li presented seems similar in spirit to the
Kolmogorov-Smirnov test for normality, in which probability values greater
than your chosen alpha are interpreted as consistent with the hypothesized
distribution, a normal distribution in that specific case.  As such, Li's
premise would appear to be reasonable.  The rigor of that approach may be
another matter altogether, but I haven't done much reading in this
area and can't comment on that.
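[Editorial aside: the KS-for-normality reading above, a statistic small enough that p stays above alpha is taken as consistent with normality, can be sketched by hand. The sketch below is illustrative only; fitting the normal's mean and standard deviation from the sample makes this a Lilliefors-type test, so the plain KS critical value used here is an approximation.]

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def ks_normality_stat(sample):
    """One-sample Kolmogorov-Smirnov statistic against a normal
    distribution fitted to the sample.  Illustrative sketch only:
    estimating the parameters from the data means the standard KS
    critical values are only approximate (Lilliefors-type test)."""
    n = len(sample)
    dist = NormalDist(mean(sample), stdev(sample))
    xs = sorted(sample)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = dist.cdf(x)
        # Compare the fitted CDF to the empirical CDF just below
        # and just above each ordered observation.
        d = max(d, abs(cdf - i / n), abs(cdf - (i + 1) / n))
    return d

def ks_crit_05(n):
    """Asymptotic 5% critical value for the plain KS statistic."""
    return 1.36 / sqrt(n)
```

A sample whose KS statistic falls below `ks_crit_05(n)` would, on this reading, be treated as consistent with normality, which is the interpretation djg refers to above.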

-djg


David Gillett    [email protected]                                
Department of Biological Sciences  
Virginia Institute of Marine Science   
College of William and Mary
P.O. Box 1346
Gloucester Point, VA 23062 
 
(804) 684-7740 (phone)
(804) 684-7889 (fax) 

-----Original Message-----
From: Ecological Society of America: grants, jobs, news
[mailto:[email protected]] On Behalf Of Gavin Simpson
Sent: Sunday, February 07, 2010 2:21 PM
To: [email protected]
Subject: Re: [ECOLOG-L] Statistical test about equality

On Sat, 2010-02-06 at 21:44 -0800, Li An wrote:
> Dear Ecologers,
> 
> In testing ecological models, we often use a t-test as a way to compare 
> our model results with observed data. If they are close enough, we 
> gain more confidence in our model. However, in most traditional 
> situations, we take "no difference" as the null and regard it as the 
> default. This means that unless we find substantial evidence, we 
> retain the null hypothesis. For instance, we can use this type of test 
> to examine whether a drug has a noticeable effect.
> 
> In our model performance situation (testing observed data = predicted 
> numbers from a model, assuming data independence), I argue that we 
> should keep the alternative hypothesis as the default, making every 
> effort to find substantial evidence to support the null hypothesis (if 
> unable, we retain the alternative hypothesis related to inequality 
> between the model predictions and the data). In this case, we can 
> still use the traditional test statistics such as z or p values, but 
> interpret the results differently. Rather than using the criterion of 
> p > 0.05 (or z < 1.96, or t < some large number) to retain the null 
> hypothesis, we should use a stricter standard, e.g., p > a much larger 
> number (e.g., 0.9) or z < a much smaller number (e.g., 0.125), to 
> retain the null hypothesis about equality between the model 
> predictions and the data. This seems more of a philosophical issue. 
> Does this make sense?
> 
> Li

You might like to look at the field of equivalence testing. Some references
cited in the 'equivalence' package for R by Andrew Robinson are:

Robinson, A.P., and R.E. Froese. 2004. Model validation using equivalence
tests. Ecological Modelling 176, 349-358.

Wellek, S. 2003. Testing statistical hypotheses of equivalence. Chapman and
Hall/CRC. 284 pp.

Westlake, W.J. 1981. Response to T.B.L. Kirkwood: bioequivalence testing
- a need to rethink. Biometrics 37, 589-594.
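[Editorial aside: the equivalence-testing approach those references describe, where "the model and the data agree" is the alternative hypothesis one must earn, is usually run as two one-sided tests (TOST). The sketch below is a minimal large-sample version in Python, assuming paired observed/predicted values and a normal approximation; the R 'equivalence' package provides more careful t-based and bootstrap implementations, and the equivalence margin `delta` must be chosen on scientific grounds.]

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def tost_equivalence(observed, predicted, delta):
    """Two one-sided tests (TOST) for equivalence of paired
    observed vs. predicted values, large-sample z approximation.

    H0: |mean(observed - predicted)| >= delta  (not equivalent)
    H1: |mean(observed - predicted)| <  delta  (equivalent)

    Returns the TOST p-value (the larger of the two one-sided
    p-values); declare equivalence when it falls below alpha.
    """
    diffs = [o - p for o, p in zip(observed, predicted)]
    n = len(diffs)
    d_bar = mean(diffs)
    se = stdev(diffs) / sqrt(n)
    z = NormalDist()
    # One-sided test against the lower margin: H0 mean diff <= -delta.
    p_lower = 1.0 - z.cdf((d_bar + delta) / se)
    # One-sided test against the upper margin: H0 mean diff >= +delta.
    p_upper = z.cdf((d_bar - delta) / se)
    return max(p_lower, p_upper)
```

Note the reversal Li asks about happens in the hypotheses themselves, not in the threshold: inequality is the null, so a small TOST p-value is positive evidence of agreement, rather than a large conventional p-value being weak non-evidence of disagreement.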

HTH

G
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
 Dr. Gavin Simpson             [t] +44 (0)20 7679 0522
 ECRC, UCL Geography,          [f] +44 (0)20 7679 0565
 Pearson Building,             [e] gavin.simpsonATNOSPAMucl.ac.uk
 Gower Street, London          [w] http://www.ucl.ac.uk/~ucfagls/
 UK. WC1E 6BT.                 [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
