Using "parallel" or similar will be the easiest and most efficient, but it does
require that you have control of the R code. There are different approaches
for working within a single node or across many nodes, with the latter having a
set-up and tear-down cost, so the workload within the loop must be worth that overhead.
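A minimal sketch of the two approaches (hostnames, core counts, and the toy workload are illustrative, not a recommendation):

```r
library(parallel)

## Single node: fork-based workers, essentially no set-up cost.
res1 <- mclapply(1:100, function(i) i^2, mc.cores = 4)

## Across nodes: a PSOCK cluster has the set-up/tear-down cost
## mentioned above, so each iteration must be heavy enough to pay for it.
cl <- makePSOCKcluster(c("node1", "node2"))
res2 <- parLapply(cl, 1:100, function(i) i^2)
stopCluster(cl)
```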
On 10/7/20 10:13 pm, David Bellot wrote:
NodeName=foobar01 CPUs=80 Boards=1 SocketsPerBoard=2 CoresPerSocket=20
ThreadsPerCore=2 RealMemory=257243 State=UNKNOWN
With this configuration Slurm is allocating a single physical core (with
2 thread units) per task. So you are using all (physical) cores.
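For reference, the arithmetic behind that node definition works out as follows (assuming Slurm's default behavior of scheduling whole physical cores):

```
2 sockets x 20 cores/socket x 2 threads/core = 80 CPUs (thread units)
1 task = 1 physical core = 2 thread units
=> 40 tasks fill all 40 physical cores of the node
```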
Indeed, it makes sense now. However, if I launch many R processes using the
"parallel" package, I can easily have all the "logical" cores running. In
the background, if I'm correct, R will "fork" and not create a thread. So
we have independent processes. On a 20-core CPU, for example, I have 40
"logical" cores.
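A sketch of what the "parallel" package does here (the mc.cores value is illustrative): mclapply() forks the current R process, so each worker is an independent OS process that the scheduler can place on any logical core.

```r
library(parallel)

# detectCores() counts logical cores by default, e.g. 40 on a
# 20-core CPU with hyper-threading enabled.
n_logical <- detectCores(logical = TRUE)

# Each forked worker is a separate process, not a thread;
# distinct PIDs in the result confirm this.
pids <- mclapply(seq_len(n_logical), function(i) Sys.getpid(),
                 mc.cores = n_logical)
```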