> On Aug 28, 2019, at 4:44 PM, James Spottiswoode wrote:
>
> Hi Bert,
>
> Thanks for your advice. Actually I've already done this and have checked out
> the doParallel and future packages. The trouble with doParallel is that it forks
> R processes which spend a lot of time loading data and packages.
Your first option is always to compute results serially. When the computation
time is long compared to session overhead and data I/O, you can consider
parallel computing. First lay out your independent computation work units as a
sequence, and then allocate segments of that sequence to parallel workers.
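A minimal sketch of that suggestion, using base R's parallel package (my choice of backend, not something Bert specified): lay out the work units as a vector, split it into one segment per worker, and process the segments in parallel. The squaring function is just a placeholder workload.

```r
library(parallel)

work_units <- 1:100                      # the independent computations, as a sequence
n_workers  <- 4

# Allocate contiguous segments of the sequence, one per worker
segments <- split(work_units, cut(seq_along(work_units), n_workers))

cl <- makeCluster(n_workers)
# Each worker processes its whole segment in one call, so per-task overhead
# is paid once per segment rather than once per work unit
results <- parLapply(cl, segments, function(seg) sapply(seg, function(x) x^2))
stopCluster(cl)

results <- unlist(results, use.names = FALSE)
```

Chunking like this is what keeps scheduling and communication overhead small relative to the actual computation time.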
I would suggest that you search on "parallel computing" at the
Rseek.org site. This brings up many relevant hits,
including, of course, the High Performance and Parallel Computing CRAN Task
View.
Cheers,
Bert
Bert Gunter
Hi All,
I have a piece of well-optimized R code for doing text analysis running
under Linux on an AWS instance. The code first loads a number of packages
and some needed data, and the actual analysis is done by a function called,
say, f(string). I would like to parallelize calling this function across
many input strings.
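One way to avoid the per-fork startup cost James describes is to create a persistent cluster once, load packages and data on each worker once, and then dispatch calls to f() repeatedly. This is a sketch only: the f() below and the commented package load are placeholders standing in for the real analysis setup.

```r
library(parallel)

cl <- makeCluster(4)

# Pay the startup cost once per worker, not once per call:
clusterEvalQ(cl, {
  # library(stringr)        # load whatever packages f() needs, once
  # my_data <- readRDS(...) # load needed data, once
  NULL
})

f <- function(string) toupper(string)   # stand-in for the real analysis function
clusterExport(cl, "f")

strings <- c("alpha", "beta", "gamma")
out <- parSapply(cl, strings, f)        # workers reuse their loaded state

stopCluster(cl)
```

The same "set up workers once, call many times" pattern is available in the future package via `plan(multisession)`, which keeps background R sessions alive between calls.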