Ding!
On Fri, Nov 16, 2012 at 11:42 PM, Ryan C. Thompson wrote:
> The difference is that in the parallel package, you use mclapply for
> multicore and parLapply for multi-machine parallelism. If you want to switch
> from one to the other, you have to change all your code that uses either
> funct
The difference is that in the parallel package, you use mclapply for
multicore and parLapply for multi-machine parallelism. If you want to
switch from one to the other, you have to change all your code that
uses either function to the other one. If you use llply(...,
.parallel=TRUE), then all y
On Fri, Nov 16, 2012 at 11:44 AM, Ryan C. Thompson wrote:
> You don't have to use foreach directly. I use foreach almost exclusively
> through the plyr package, which uses foreach internally to implement
> parallelism. Like you, I'm not particularly fond of the foreach syntax
> (though it has some
To be more specific, instead of:
library(parallel)
cl <- ... # Make a cluster
parLapply(cl, X, fun, ...)
you can do:
library(parallel)
library(doParallel)
library(plyr)
cl <- ...
registerDoParallel(cl)
llply(X, fun, ..., .parallel=TRUE)
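To make the backend-agnostic point concrete, here is a minimal, self-contained sketch (assuming doParallel and plyr are installed): the llply call is identical whether the registered backend is a local multicore pool or a multi-machine cluster; only the registration line differs.

```r
library(doParallel)
library(plyr)

# Register a local backend with 2 workers; to move to a multi-machine
# cluster, only this registration line would change -- the llply call
# below stays exactly the same.
registerDoParallel(cores = 2)

squares <- llply(1:4, function(x) x^2, .parallel = TRUE)
unlist(squares)  # 1 4 9 16
```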
On Fri 16 Nov 2012 11:44:06 AM PST, Ryan C. Thompson wrote:
You don't have to use foreach directly. I use foreach almost
exclusively through the plyr package, which uses foreach internally to
implement parallelism. Like you, I'm not particularly fond of the
foreach syntax (though it has some nice features that come in handy
sometimes).
The appeal of f
I'm not sure I understand the appeal of foreach. Why not do this within
the functional paradigm, i.e., parLapply?
Michael
On Fri, Nov 16, 2012 at 9:41 AM, Ryan C. Thompson wrote:
> You could write a %dopar% backend for the foreach package, which would
> allow any code using foreach (or plyr which
Oh wow, that reminds me -- I did send Hadley a patch to use plyr through
foreach a long, long time ago. I think it's been replaced with something a
lot better, but at least there's something I can be reasonably certain
will work...
On Fri, Nov 16, 2012 at 9:41 AM, Ryan C. Thompson wrote:
> You
You could write a %dopar% backend for the foreach package, which would
allow any code using foreach (or plyr which uses foreach) to parallelize
using your code.
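To illustrate that decoupling, a minimal sketch using the sequential backend that ships with foreach; any registered %dopar% backend, custom or not, would run the same loop unchanged:

```r
library(foreach)

# Code written against %dopar% is backend-agnostic: registerDoSEQ()
# runs it sequentially, while a custom backend registered via
# setDoPar() would parallelize this identical loop.
registerDoSEQ()

squares <- foreach(x = 1:4, .combine = c) %dopar% x^2
squares  # 1 4 9 16
```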
On a related note, it might be nice to add Bioconductor-compatible
versions of foreach and the plyr functions to BiocParallel if they
On Fri, Nov 16, 2012 at 10:33 AM, Hahne, Florian wrote:
> Sort of. My implementation assumes parLapply to be a generic function, and
> there is an object called SGEcluster, which in a way is equivalent to the
> 'cluster' class objects in the parallel package. Rather than providing a
> bunch of no
Sort of. My implementation assumes parLapply to be a generic function, and
there is an object called SGEcluster, which in a way is equivalent to the
'cluster' class objects in the parallel package. Rather than providing a bunch
of nodes to compute on, it contains the necessary information for Ba
Oh wait, correction. Not your fault. I thought you were calling FilterRules
directly. So ShortRead is using it internally in methods-SRFilter.R and
never imports it from IRanges. Looks like a simple fix in there.
Florian
--
On 11/16/12 12:40 PM, "Anita Lerch" wrote:
>Hi,
>
>I have two problems whe
This sounds very useful when mixing batch jobs with an interactive session.
In fact, it's something I was planning to do, since I noticed their
execution model is completely asynchronous. Is it actually a new cluster
backend for the parallel package?
Michael
On Fri, Nov 16, 2012 at 12:18 AM, Hahne, Florian wrote:
Hi Anita,
it seems that FilterRules is defined in the IRanges package. If you want
to use it you will have to explicitly import it. Can't quite see how this
is related to importing or not importing ShortRead. The only reason why
this works when you require ShortRead is because it depends on IRanges
Hi,
I have two problems when I import ShortRead into my packages.
1. I get the following warning when I load my package:
Warning message:
"replacing previous import ‘show’ when loading ‘ShortRead’"
--> It looks to me like a conflict between BiocGenerics and
ShortRead. How can I get
I am interested in all three, and for many of our large genomics
experiments 3) seems to be becoming more and more important. All large
centralized clusters seem to rely on scheduling systems these days.
--
On 11/15/12 7:53 PM, "Henrik Bengtsson" wrote:
>Is there any write up/discussion/plans o
I've hacked up some code that uses BatchJobs but makes it look like a
normal parLapply operation. Currently the main R process checks the
state of the queue at regular intervals and fetches results once a job has
finished. Seems to work quite nicely, although there certainly are more
elaborate
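The shape of such a wrapper might look like the following — a hypothetical sketch, not the actual implementation, assuming the BatchJobs API (makeRegistry, batchMap, submitJobs, waitForJobs, loadResults) and a configured scheduler backend:

```r
library(BatchJobs)

# Hypothetical parLapply-style front end over BatchJobs: one job per
# element of X, then block while polling the queue, then collect.
batchLapply <- function(X, fun, ...) {
  reg <- makeRegistry(id = "batchLapply", file.dir = tempfile())
  batchMap(reg, fun, X, more.args = list(...))
  submitJobs(reg)
  waitForJobs(reg)      # polls the scheduler at regular intervals
  loadResults(reg)      # returns a list of per-job results
}
```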