Bert,

I think that you are misunderstanding my point.  At least part of the blame 
for that is mine; I should have put more time into my post, but I had to catch 
my bus.

See inline below:

> -----Original Message-----
> From: Bert Gunter [mailto:gunter.ber...@gene.com]
> Sent: Tuesday, May 11, 2010 3:50 PM
> To: Greg Snow; 'Bak Kuss'; murdoch.dun...@gmail.com;
> jorism...@gmail.com
> Cc: R-help@r-project.org
> Subject: RE: [R] P values
> 
> Inline below.
> 
> -- Bert
> 
> 
> Bert Gunter
> Genentech Nonclinical Statistics
> 
> -----Original Message-----
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> project.org] On
> Behalf Of Greg Snow
> Sent: Tuesday, May 11, 2010 2:37 PM
> To: Bak Kuss; murdoch.dun...@gmail.com; jorism...@gmail.com
> Cc: R-help@r-project.org
> Subject: Re: [R] P values
> 
> Bak,
> 
> ...
> 
> "  Small p-values indicate a hypothesis and data that are not very
> consistent and for small enough p-values we would rather disbelieve the
> hypothesis (though it may still be theoretically possible). "
> 
> 
> This is false.
> 
> It is only true when

So it is not universally false.

>     the hypotheses are prespecified (not derived from
> looking at the data first), when there's only one being tested
> (not,say,
> 10,000), etc. etc.

I agree and have not advocated differently.  I reread my post and don't see 
where I implied that hypotheses come after looking at the data or doing 
multiple tests, but I could have been clearer in explicitly saying not to do 
those things.

Maybe this will clarify.  Consider a case where I want to test the null 
hypothesis mu = 0 against the alternative mu > 0.  Some (including myself) would 
say that the one-sided alternative means that the null is really mu <= 0.  I 
specify this null hypothesis and choose an alpha level before collecting the 
data.  Then, once I have my data, I do a single t-test (with the t-statistic 
computed using mu0 = 0) and the computed p-value is less than alpha.  Under 
standard statistical practice I reject the null hypothesis, which means not 
only that I don't believe the true mean is 0, but also that I don't believe 
the true mean is any of the infinite number of values below 0 either.  I only 
did one test, but I have rejected an infinite set of values as being 
inconsistent with the data.
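To make that concrete, here is a small R sketch of the scenario above; the data vector is made up purely for illustration:

```r
# Hypothetical illustration: a single pre-specified one-sided t-test.
# The data here are simulated for the example (true mean 0.8, not 0).
set.seed(42)
x <- rnorm(20, mean = 0.8, sd = 1)

# Test H0: mu <= 0 against H1: mu > 0, using mu0 = 0 and alpha = 0.05,
# both chosen before seeing the data.
res <- t.test(x, mu = 0, alternative = "greater")
res$p.value  # if this is below alpha, we reject mu = 0 and every mu < 0 as well
```

Rejecting here rules out the whole half-line mu <= 0 with one test, which is the point above.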

Now a t-test also assumes normality and iid sampling, and significant results 
could be due to these assumptions being violated.  Do I believe that any 
situation will ever give me exact normality and exact iid sampling?  No, but I 
do believe that there are studies close enough that the test will still be 
meaningful.
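One way to see what "close enough" can mean is a quick (made-up) simulation: generate clearly non-normal (exponential) data with true mean exactly equal to mu0, so that every rejection is a type I error, and check how far the observed error rate drifts from the nominal alpha:

```r
# Sketch: type I error rate of a one-sided t-test under non-normality.
# Data are exponential (skewed); the true mean equals mu0 = 1, so H0 holds
# and any rejection is a type I error.
set.seed(123)
alpha <- 0.05
rejections <- replicate(2000, {
  x <- rexp(30, rate = 1)  # true mean = 1, decidedly not normal
  t.test(x, mu = 1, alternative = "greater")$p.value < alpha
})
mean(rejections)  # observed type I error rate; not wildly far from alpha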


> 
> (Incidentally, "small enough" is not meaningful; making it so is
> usually
> impossible).

Again I think that there is a misunderstanding here, at least I hope so; 
otherwise, if your understanding matches mine, this implies that you only 
make decisions based on 100% confidence intervals/levels.

When I teach (or review) hypothesis testing I start with a demonstration: 3 
students draw from my deck of Cardboard Randomization Devices (CaRDs).  After 
the 1st student draws, I tell them to "show the card to the rest of the class, 
but don't let me see that it is the 6 of clubs", which usually gets a surprised 
reaction since their card is the 6 of clubs (except one time when I messed up); 
the other 2 students also end up drawing the 6 of clubs.  I then ask the class 
who believes that they just observed 3 completely random draws from a regular 
deck.  It is theoretically possible that 3 random draws from a regular deck 
will result in the 6 of clubs each time, but the probability of this happening 
is about 1 in 140,000, and the vast majority of my students would rather 
believe that I cheated than that I was that lucky (I suspect that the small 
minority either were not paying attention or also thought I cheated but were 
too timid to accuse me).  I agree with my students that 1/140,000 is "small 
enough"; is there anyone who thinks otherwise?
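The "about 1 in 140,000" figure follows from three independent draws, each with a 1-in-52 chance of being the 6 of clubs (assuming the card is replaced and the deck reshuffled between draws):

```r
# Probability that 3 independent draws from a shuffled 52-card deck
# (card replaced each time) are all the 6 of clubs
p <- (1/52)^3
p    # about 7.1e-06
1/p  # 140608, i.e. roughly 1 in 140,000
```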

> 
> IMHO far from being niggling details, these are the crux of the reason
> that
> the conventional use of P values to decide what is scientifically
> "valid"
> and what is not is a pox upon science. This is not an original view, of
> course.

Is it the P values themselves and their theory that are the pox?  Or is it 
people's misunderstanding and misuse of them that is the pox?  (I agree if it 
is the latter.)

My post was an attempt to help clarify Bak's overinterpretation of P values.

> Don't want to stir up a hornet's nest, so feel free to silently
> dismiss. But
> contrary views are welcome, as always.

Well I clearly did not silently dismiss, but I don't think my view is 
completely contrary.

> No more from me on this -- I promise!
> 
> -- Bert
> 


-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
