Hi,
I installed a cluster with 10 nodes and I'd like to try compiling a very
large code base using all the nodes. The context is as follows:
- my code base is in C++ and I compile with gcc;
- configuration is done with CMake;
- compilation is driven by ninja (a build tool similar to make).
Can I just srun ninja and [...] result, or should I rather launch 20 jobs
per node and have each job split in two internally (using "parallel" or
"future", for example)?
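For the one-job-per-node option, here is a minimal sketch of the kind of batch script I have in mind, assuming one task per 80-CPU node and letting ninja use every hardware thread Slurm allocates (the job name, source layout and build directory are placeholders, not anything from the actual setup):

#!/bin/bash
#SBATCH --job-name=build
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=80

# configure once with the Ninja generator, then run as many
# parallel compile jobs as Slurm allocated to this task
cmake -G Ninja -S . -B build
ninja -C build -j "${SLURM_CPUS_PER_TASK}"

As far as I know, ninja only spawns processes on the local machine, so a single build will not spread itself across the 10 nodes; spanning nodes would need something like distcc or icecream wrapped around gcc, otherwise one build job per node (or many smaller jobs per node) is the simpler route.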
On Thu, Oct 8, 2020 at 6:32 PM William Brown wrote:
> R is single threaded.
>
> On Thu, 8 Oct 2020, 07:44 Diego Zuccato wrote:
> [...] first server.
>
> Could you try SelectTypeParameters=CR_CPU instead of
> SelectTypeParameters=CR_Core?
>
> Best regards,
> Rodrigo.
>
> On Thu, Oct 8, 2020, 02:16 David Bellot wrote:
>
>> Hi,
>>
>> my Slurm cluster has a dozen machines configured as follows:
>>
>> NodeName=foobar01 CPUs=80 Boards=1 SocketsPerBoard=2 CoresPerSocket=20 ThreadsPerCore=2 RealMemory=257243 State=UNKNOWN
>>
>> and scheduling is:
>>
>> # SCHEDULING
>> SchedulerType=sched/backfill
>> SelectType=select/cons_tres
>> SelectTypeParameters=CR_Core
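For what it's worth, that NodeName line exposes 80 CPUs per node to Slurm (2 sockets x 20 cores x 2 threads), and if I understand Rodrigo's suggestion correctly it amounts to changing only the last line of the scheduling section, roughly like this (just a sketch of the relevant slurm.conf lines, not a complete config):

# SCHEDULING
SchedulerType=sched/backfill
SelectType=select/cons_tres
# CR_CPU lets Slurm allocate individual hardware threads as CPUs,
# whereas CR_Core allocates whole cores
SelectTypeParameters=CR_CPU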