There is always overhead in starting and stopping parallel processes, but your
description of the per-subject terms as "slow and complex" suggests to me that
this overhead is already a small price to pay.
mclapply tends to be good when you need to share a lot of the same data with
all the processes and have many processors with shared memory.
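Something along these lines (an untested sketch; loglik_subject(), subject_data
and theta are placeholders for your per-subject calculation, data and parameter
vector):

library(parallel)

negloglik <- function(theta, subject_data, cores = detectCores()) {
  ## each forked worker sees subject_data without copying it
  terms <- mclapply(subject_data,
                    function(d) loglik_subject(theta, d),
                    mc.cores = cores)
  -sum(unlist(terms))   # negative because optim() minimizes by default
}

Note that mclapply() works by forking, so it only parallelises on Unix-alikes
(Linux, macOS); on Windows you would need a cluster-based approach instead.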
I'm looking for advice on which of the parallel systems to use.
Context: I am maximizing a likelihood; each evaluation is a sum over a large
number of subjects (>5000), and each of those per-subject terms is slow and
complex.
If I were using optim the call would be

fit <- optim(initial.values, myfun)
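Roughly, myfun has this shape (loglik_subject() and subject_data are
placeholders for the real per-subject computation and data):

myfun <- function(theta) {
  terms <- vapply(subject_data,
                  function(d) loglik_subject(theta, d),
                  numeric(1))
  -sum(terms)   # return the negative log-likelihood, since optim() minimizes
}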