Hi terdon,

Take a look at ?p.adjust and its argument n.  For example, you could adjust
the first B p-values of pv by using

p.adjust(pv[1:B], method = 'BH', n = B)

Then, you can continue processing the remaining subsets of pv and concatenate
the results.  A for() loop might be useful here.
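For reference, since BH only needs the ranks of the p-values, the full-vector
adjustment can also be computed directly with a couple of sorts and a
cumulative minimum -- essentially what p.adjust(method = "BH") does
internally -- which may be lighter than mt.rawp2adjp().  A minimal sketch
(using a small random vector as a stand-in for your 4e7 values):

```r
set.seed(1)
pv <- runif(1e5)    # stand-in for the real vector of p-values
n  <- length(pv)

o  <- order(pv, decreasing = TRUE)  # largest p-value first
ro <- order(o)                      # permutation to restore input order

# BH: multiply the i-th smallest p-value by n/i, then enforce
# monotonicity with a cumulative minimum from the top, and cap at 1
adj <- pmin(1, cummin(n / (n:1) * pv[o]))[ro]

# Should agree with the built-in implementation
stopifnot(all.equal(adj, p.adjust(pv, method = "BH")))
```

This keeps only a few full-length numeric vectors in memory at once.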

HTH,
Jorge


On Tue, Mar 8, 2011 at 8:49 AM, terdon <> wrote:

> Hello all,
>      I am calculating probabilities of association for terms in the
> GeneOntology database. I have ~4e7 probabilities and I would like to
> correct
> for multiple testing. At the moment I have something like the following:
>
> pv is a vector containing my 4e7 probabilities.
>
> To run the multiple testing corrections I do:
>
> mt <- mt.rawp2adjp(pv, proc="BH")
>
> Because of the size of the vector this takes a VERY long time and a LOT of
> memory. I cannot split the job into smaller ones since all values in the
> vector are required for the calculation.
>
> Can anyone think of a trick that would allow me to reduce the memory usage
> of this procedure? Or at least make it faster?
>
> Thanks
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Multiple-testing-corrections-on-very-large-vector-tp3341398p3341398.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>