Hi all,

I'm seeing some odd behavior when using the --mem-per-gpu flag instead of
the --mem flag to request memory while also requesting all available CPUs
on a node (every node in this example has 32 CPUs):

$ srun --ntasks-per-node=8 --cpus-per-task=4 --gpus-per-node=gtx1080ti:1 --mem-per-gpu=1g --pty bash
srun: error: Unable to allocate resources: Requested node configuration is not available

$ srun --ntasks-per-node=8 --cpus-per-task=4 --gpus-per-node=gtx1080ti:1 --mem=1g --pty bash
srun: job 3479971 queued and waiting for resources
srun: job 3479971 has been allocated resources
$
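
If I'm doing the math right, both requests should come out to the same
per-node totals: 8 tasks x 4 CPUs per task = 32 CPUs (all of them), and
1 GPU x 1g with --mem-per-gpu should equal the flat 1g requested with
--mem, so I wouldn't expect the two to be treated differently.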

The nodes in this partition have a mix of gtx1080ti and rtx2080ti GPUs,
but any one node contains only a single GPU type. The same failure does
not occur when requesting (a node with) an rtx2080ti instead.
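
In case it helps, here's roughly how I've been comparing the configured
memory and GRES of the two node types (the node names below are
placeholders for the actual hosts):

$ scontrol show node gpu-gtx-01 | grep -E 'RealMemory|Gres|CfgTRES'
$ scontrol show node gpu-rtx-01 | grep -E 'RealMemory|Gres|CfgTRES'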

Is there something I'm missing that would cause the --mem-per-gpu flag
not to work in this example?

Thanks,
Matthew