Hello Giovanni,

I don't know if my workflow would suit you, but I tend to want the
opposite when I launch a parallel process: I want to keep the
processes alive as long as possible. If the computation time is long, I
would not want to lose everything over one failing element.

library(parallel)

lapply..8 <- function(X, FUN, ...) {
    ## Number of processors (Linux-specific: counts entries in /proc/cpuinfo)
    max..cores <- as.numeric(system("grep ^processor /proc/cpuinfo 2>/dev/null | wc -l", TRUE))
    ## Forward ... to FUN and cap at 8 cores
    mclapply(X, FUN, ..., mc.cores = min(max..cores, 8))
}
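
As an aside, `/proc/cpuinfo` only exists on Linux; `parallel::detectCores()`
is the portable way to get the same number. A small sketch (the fallback to 1
is my own precaution, since `detectCores()` is documented to return NA when it
cannot determine the count):

```r
library(parallel)

## Portable core count; detectCores() can return NA, so fall back to 1.
max..cores <- detectCores()
if (is.na(max..cores)) max..cores <- 1

min(max..cores, 8)   # cap at 8, as lapply..8 does
```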

lapply_with_error..8 <- function(X, FUN, ...) {
    lapply..8(X, function(x, ...) {
        ## Catch errors per element so one failure does not kill the run;
        ## the condition object is printed and returned in place of a result.
        tryCatch(FUN(x, ...),
                 error = function(e) { print(e); e })
    }, ...)
}

res <- lapply_with_error..8(X = 1:12, FUN = function(x) { Sys.sleep(0.1); if (x == 4) stop() })

Then we can investigate the problem for the elements that generated
errors. Better still, we can anticipate the errors and avoid surprises
by _testing_ the function thoroughly before launching a long
process.
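
To find the failing elements afterwards, one way (a sketch, using base R's
`inherits()` on the condition objects that `tryCatch` returned; the small
sequential `res` below just mimics the shape of the parallel result):

```r
## Rebuild a small result list sequentially, same shape as 'res' above:
## successes are values, failures are the caught condition objects.
res <- lapply(1:6, function(x) {
    tryCatch({ if (x == 4) stop("problem at x = 4"); x^2 },
             error = function(e) e)
})

## Which elements are errors?
failed <- vapply(res, inherits, logical(1), what = "error")
which(failed)    # -> 4
res[!failed]     # keep only the successful results
```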

If you want the processes to fail fast, I fear that you are launching the
parallel process too soon, without having tested your function enough.

HTH,

Jeremie

______________________________________________
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.