Is there any way to reserve some memory on the GPU nodes only for jobs in
the gpu partition, so that it can't be used by jobs in the longjobs
partition?
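I was imagining something along the lines of a per-partition memory cap
in slurm.conf (a sketch only; the node names and memory sizes below are
made up, not our real config), but I'm not sure it's the right approach:

NodeName=gpu[01-04] RealMemory=192000 Gres=gpu:v100:4
# Hypothetical: cap longjobs at 128 GB per node, keeping ~64 GB for gpu jobs
PartitionName=longjobs Nodes=gpu[01-04] MaxMemPerNode=128000
PartitionName=gpu Nodes=gpu[01-04] MaxMemPerNode=192000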
Thanks in advance, Daniel Vecerka, CTU Prague
Hi,
I'm not sure how it works in 19.05, but with 18.x it is possible to
specify the CPU affinity of each GPU in /etc/slurm/gres.conf:
Name=gpu Type=v100 File=/dev/nvidia0 CPUs=0-17,36-53
Name=gpu Type=v100 File=/dev/nvidia1 CPUs=0-17,36-53
Name=gpu Type=v100 File=/dev/nvidia2 CPUs=18-35,54-71
Name=gpu Type=v100 File=/dev/nvidia3 CPUs=18-35,54-71
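The CPUs= ranges should match each GPU's real NUMA affinity. On the node
itself you can check that with nvidia-smi (plain NVIDIA tooling, nothing
Slurm-specific):

nvidia-smi topo -m

The CPU Affinity column of that matrix is where ranges like 0-17,36-53
come from.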
Yes, I think that it didn't work, or I've just lost my mind due to
frustration.
Anyway, problem is solved.
Thanks, Daniel
On 23.05.2019 10:11, Daniel Vecerka wrote:
Jobs end up on the same GPU. If I run CUDA deviceQuery in the sbatch,
every job reports:
Device PCI Domain ID / Bus ID / location ID: 0 / 97 / 0
Device PCI Domain ID / Bus ID / location ID: 0 / 97 / 0
Device PCI Domain ID / Bus ID / location ID: 0 / 97 / 0
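For reference, this is roughly the batch script I test with (a minimal
sketch; the partition name and the path to deviceQuery are just examples):

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:v100:1
# Show which device Slurm assigned, then the PCI IDs deviceQuery reports
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
./deviceQuery | grep "location ID"

Four such jobs running concurrently should report four different bus IDs,
but they all print bus ID 97.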
Our /etc/slurm/gres.conf:
Name=gpu Type=v100 File=/dev/nvidia0 CPUs=0-17,36-53
Name=gpu Type=v100 File=/dev/nvidia1 CPUs=0-17,36-53
Name=gpu Type=v100 File=/dev/nvidia2 CPUs=18-35,54-71
Name=gpu Type=v100 File=/dev/nvidia3 CPUs=18-35,54-71
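For completeness, the matching node definition in slurm.conf looks
something like this (the node name here is just a placeholder):

GresTypes=gpu
NodeName=gpunode01 Gres=gpu:v100:4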
Any help appreciated.
Thanks, Daniel Vecerka, CTU Prague