Re: [slurm-users] Help with binding GPUs to sockets (NVlink, P2P)

2019-06-28 Thread Luis Altenkort
Hi, thanks for the answer. We actually had this set up correctly already; I had simply forgotten to add #SBATCH --sockets-per-node=1 to my script. Now --gpus-per-socket works! On 28.06.19 at 09:27, Daniel Vecerka wrote: Hi, I'm not sure how it works in 19.05, but with 18.x it's possible to specify…
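A minimal sketch of what the fixed batch script might contain; the job name, task layout and GPU count are placeholders, and the relevant lines are the two directives mentioned above (--gpus-per-socket needs a --sockets-per-node count to act on):

  #!/bin/bash
  #SBATCH --job-name=p2p-test        # placeholder job name
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --sockets-per-node=1       # confine the job to a single socket ...
  #SBATCH --gpus-per-socket=4        # ... and request the 4 GPUs attached to that socket

  srun nvidia-smi topo -m            # show which GPUs/NUMA domain were actually assigned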

Re: [slurm-users] Help with binding GPUs to sockets (NVlink, P2P)

2019-06-28 Thread Daniel Vecerka
Hi, I'm not sure how it works in 19.05, but with 18.x it's possible to specify CPU affinity in the file /etc/slurm/gres.conf:
Name=gpu Type=v100 File=/dev/nvidia0 CPUs=0-17,36-53
Name=gpu Type=v100 File=/dev/nvidia1 CPUs=0-17,36-53
Name=gpu Type=v100 File=/dev/nvidia2 CPUs=18-35,54-71
Name=g…
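The listing above is cut off by the archive. A full version for the 8-GPU layout asked about in the original question might look like the sketch below; the v100 type and the CPU ranges are placeholders, and the real core-to-socket mapping has to be read from the node itself (e.g. with nvidia-smi topo -m or lscpu):

  # /etc/slurm/gres.conf -- sketch only, CPU ranges are placeholders
  Name=gpu Type=v100 File=/dev/nvidia0 CPUs=0-17,36-53
  Name=gpu Type=v100 File=/dev/nvidia1 CPUs=0-17,36-53
  Name=gpu Type=v100 File=/dev/nvidia2 CPUs=0-17,36-53
  Name=gpu Type=v100 File=/dev/nvidia3 CPUs=0-17,36-53
  Name=gpu Type=v100 File=/dev/nvidia4 CPUs=18-35,54-71
  Name=gpu Type=v100 File=/dev/nvidia5 CPUs=18-35,54-71
  Name=gpu Type=v100 File=/dev/nvidia6 CPUs=18-35,54-71
  Name=gpu Type=v100 File=/dev/nvidia7 CPUs=18-35,54-71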

[slurm-users] Help with binding GPUs to sockets (NVlink, P2P)

2019-06-27 Thread Luis Altenkort
Hello everyone, I have several nodes with 2 sockets each and 4 GPUs per socket (i.e. 8 GPUs per node). I now want to tell SLURM that the GPUs with device IDs 0,1,2,3 are connected to socket 0 and the GPUs 4,5,6,7 are connected to socket 1. I want to do this in order to be able to use the new --gpus-per-socket option…
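For --gpus-per-socket to have anything to work with, the socket layout and GPU count also have to be declared in slurm.conf. A sketch of such a node definition, with hypothetical node names, core counts and memory (the real values can be read from slurmd -C on the node):

  # slurm.conf -- placeholders throughout
  GresTypes=gpu
  NodeName=gpunode[01-04] Sockets=2 CoresPerSocket=18 ThreadsPerCore=2 RealMemory=190000 Gres=gpu:v100:8 State=UNKNOWN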
