Dear all,

I'm performing a t-test on two samples drawn from normal distributions with
identical mean and standard deviation, and repeating this test a very large
number of times to describe a representative p-value distribution under the
null. As part of this, the program bins the p-values into 10 evenly spaced
bins between 0 and 1 and reports the number of observations in each bin.
What I have noticed is that even after 500,000 replications the count in my
lowest bin is consistently ~5% smaller than the counts in all the other
bins, which agree with each other to within about 1%. Is there any reason,
perhaps to do with random number generation in R or the nature of the
normal distribution simulated by the rnorm function, that could explain
this depletion?

Here are two key parts of my code to show what functions I'm working with:

# Calculating the p values (both groups drawn from the same distribution)
pscoresvector <- c()
i <- 0
while (i < numtests) {
  Group1 <- rnorm(6, -0.0065, 0.0837)
  Group2 <- rnorm(6, -0.0065, 0.0837)
  PV <- t.test(Group1, Group2)$p.value  # Welch two-sample t-test by default
  pscoresvector <- c(PV, pscoresvector) # note: growing a vector in a loop is slow
  i <- i + 1
}

# Binning the results into 10 equal-width bins on [0, 1]
bins <- seq(0, 1, by = 0.1)
freqtbl1 <- binning(pscoresvector, breaks = bins)  # binning() comes from an add-on package, not base R
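For reference, here is a minimal self-contained sketch of the same simulation using only base R, with replicate() in place of the while loop and table(cut(...)) in place of the add-on binning function; the value of numtests and the seed here are illustrative, not the ones from my actual run:

```r
# Same null simulation, vectorized and reproducible
set.seed(1)
numtests <- 10000
pscores <- replicate(numtests, {
  g1 <- rnorm(6, -0.0065, 0.0837)
  g2 <- rnorm(6, -0.0065, 0.0837)
  t.test(g1, g2)$p.value          # Welch two-sample t-test by default
})
# Base-R binning into 10 equal-width bins on (0, 1]
freqtbl <- table(cut(pscores, breaks = seq(0, 1, by = 0.1)))
```

Inspecting freqtbl (or barplot(freqtbl)) shows the same pattern I describe above.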

Thanks in advance for any insights,

Andrew

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.