Hello group,

I am experimenting with parallel processing on my quad-core Win 7 32-bit
machine, and I am using these packages for the first time.

I can see all my processors running at full utilization when I use a
smaller dataset:

require(snow)
require(doSNOW)
require(foreach)
# change the 4 to however many cores/physical processors you have on your machine
cl.tmp <- makeCluster(rep("localhost", 4), type = "SOCK")
registerDoSNOW(cl.tmp)
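
As an aside, rather than hard-coding the 4 I could read the core count at
run time; a small sketch, assuming the parallel package and its
detectCores() are available in this R version:

# size the cluster from the machine rather than hard-coding the worker count
# (note: detectCores() may report logical rather than physical cores)
n.workers <- parallel::detectCores()
cl.tmp <- makeCluster(rep("localhost", n.workers), type = "SOCK")
registerDoSNOW(cl.tmp)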


optData.df <- head(pristine, 100000)

system.time(
  optData.df$impliedVol <-
    # .combine = "c" collapses the per-row results into a numeric vector
    # instead of the list that foreach returns by default
    foreach(i = 1:NROW(optData.df), .combine = "c",
            .packages = "RQuantLib") %dopar%
      with(optData.df[i, ],
           tryCatch(EuropeanOptionImpliedVolatility(type, value, underlying,
                                                    strike, dividendYield,
                                                    riskFreeRate, maturity,
                                                    volatility)$impliedVol,
                    error = function(ex) 0))
)

This works fine!
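
For completeness, the cluster can be shut down afterwards with snow's
stopCluster():

stopCluster(cl.tmp)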

PROBLEM: However, when I do the same with optData.df <- pristine, which
has about 3.8 million rows of options data, the cores do not seem to be
fully utilized (each appears to run at about 25%).

I also noticed a slight delay before the processing actually started with
the 100k dataset. With the full dataset, do I simply need to wait longer
for the initial allocations to be done?
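
In case it is a per-task overhead issue (3.8 million tiny tasks means a lot
of communication per row), here is the kind of chunked variant I have been
sketching, where each task handles a block of rows instead of a single row.
The helper names (n, blocks, idx) and the block count of 40 are arbitrary,
and I have not verified this on the full data:

# sketch: one task per block of rows instead of one task per row, to cut
# down on per-iteration communication overhead; block count (40) is arbitrary
# note: optData.df is still copied to the workers, which for 3.8 million
# rows takes a while on its own
n <- NROW(optData.df)
blocks <- split(seq_len(n), cut(seq_len(n), breaks = 40, labels = FALSE))

optData.df$impliedVol <-
  foreach(idx = blocks, .combine = "c", .packages = "RQuantLib") %dopar%
    sapply(idx, function(i)
      with(optData.df[i, ],
           tryCatch(EuropeanOptionImpliedVolatility(type, value, underlying,
                                                    strike, dividendYield,
                                                    riskFreeRate, maturity,
                                                    volatility)$impliedVol,
                    error = function(ex) 0)))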

Thank you.
