On 7/30/19 6:03 PM, Brian Andrus wrote:
I think this may be more on how you are calling mpirun and the mapping
of processes.
With the "--exclusive" option, the processes are given access to all the
cores on each box, so mpirun has a choice. IIRC, the default is to pack
them by slot, so fill one node, then move to the next, whereas you want
them spread across the nodes according to --ntasks-per-node.
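For example (a sketch, assuming Open MPI's mpirun and a two-node
allocation; node/task counts and the binary name are placeholders):

  # request 2 whole nodes, but only 4 MPI ranks per node
  sbatch --exclusive --nodes=2 --ntasks-per-node=4 job.sh

  # inside job.sh: map ranks round-robin by node instead of packing by slot
  mpirun --map-by node -np 8 ./my_mpi_app
  # or let Slurm place the tasks itself
  srun --ntasks-per-node=4 ./my_mpi_app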
Hi,
Thanks, so I have to install and enable the client part of Slurm (slurmd)
on the manager host alongside slurmctld.
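For example (a sketch, assuming RPM-based packages and systemd; the
slurm-slurmd package name follows Daniel's reply below):

  # on the manager host, alongside slurmctld
  yum install slurm-slurmd
  systemctl enable --now slurmd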
Regards.
On Tue, Jul 30, 2019 at 2:23 PM, Daniel Letai wrote:
Yes, just add it to the Nodes= list of the partition.
You will have to install slurm-slurmd on it as well, and enable
and start it as on any compute node, or it will be DOWN.
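A minimal sketch of what that could look like in slurm.conf (hostnames,
CPU counts and the partition name are placeholders):

  NodeName=master      CPUs=8  State=UNKNOWN
  NodeName=node[01-04] CPUs=32 State=UNKNOWN
  PartitionName=extended Nodes=master,node[01-04] MaxTime=INFINITE State=UP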
HTH,
--Dani_L.
On 7/30/19 3:45 PM, wodel youchi wrote:
Hi,
I am a newbie in Slurm.
In all the examples I have seen, only compute nodes are used when
declaring the Partition.
My question is: can I use the manager, i.e. the SlurmctldHost (the master
host), as a compute node in an extended partition, for example?
If yes, how?
Regards.
Hi Everyone,
I've recently discovered that when an MPI job is submitted with the
--exclusive flag, Slurm fills up each node even if the --ntasks-per-node
flag is used to set how many MPI processes are scheduled on each node.
Without the --exclusive flag, Slurm works fine as expected.
Our system i
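A minimal sketch of the kind of batch script being described (node and
task counts are placeholders), with a quick check of where the launcher
actually places its processes:

  #!/bin/bash
  #SBATCH --exclusive
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=4
  # count processes per host to see whether they are packed or spread
  mpirun hostname | sort | uniq -c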