Chris,
> We do have the issue where the four free cores are on one socket,
> rather than being equally distributed across the sockets. When I
> solicited advice from SchedMD for our config it seems they are
> doing some work in this area that may hopefully surface in the next
> major release (though ...)
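(Just to illustrate the socket-balancing point with made-up numbers, not Chris's actual layout: on a hypothetical two-socket node with 10 cores per socket and four GPUs, two attached to each socket, the gres.conf bindings could be chosen so the unbound cores end up split across the sockets rather than all on one:

# Hypothetical node: cores 0-9 on socket 0, cores 10-19 on socket 1,
# GPUs 0-1 attached to socket 0 and GPUs 2-3 to socket 1
# (check the real topology with "nvidia-smi topo -m").
NodeName=gpu-XX Name=gpu File=/dev/nvidia0 CPUs=0-3
NodeName=gpu-XX Name=gpu File=/dev/nvidia1 CPUs=4-7
NodeName=gpu-XX Name=gpu File=/dev/nvidia2 CPUs=10-13
NodeName=gpu-XX Name=gpu File=/dev/nvidia3 CPUs=14-17

That leaves cores 8-9 and 18-19 unbound, i.e. two free cores per socket instead of four on one; whether GPU jobs actually stay on their bound cores still depends on the select/affinity settings.)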
On 19/04/18 07:11, Barry Moore wrote:
My situation is similar. I have a GPU cluster with gres.conf entries
which look like:
NodeName=gpu-XX Name=gpu File=/dev/nvidia[0-1] CPUs=[0-5]
NodeName=gpu-XX Name=gpu File=/dev/nvidia[2-3] CPUs=[6-11]
However, as you can imagine, 8 cores sit idle on these nodes.
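One approach the slurm.conf man page documents for exactly this situation is to put the GPU nodes into two overlapping partitions and cap the CPU-only partition with MaxCPUsPerNode, so CPU-only jobs can soak up the cores that are not bound to any GPU. A minimal sketch, with placeholder node names, core counts and limits rather than anything taken from the configs in this thread:

# slurm.conf (sketch): 20-core GPU nodes sit in both partitions.
# Jobs in the "cpu" partition may use at most 8 CPUs per node, leaving
# the 12 CPUs listed in gres.conf available to jobs in the "gpu" partition.
NodeName=gpu-[01-04] CPUs=20 Gres=gpu:4 State=UNKNOWN
PartitionName=gpu Nodes=gpu-[01-04] Default=NO MaxTime=INFINITE State=UP
PartitionName=cpu Nodes=gpu-[01-04] Default=YES MaxTime=INFINITE State=UP MaxCPUsPerNode=8

Note that MaxCPUsPerNode only limits how many CPUs the cpu partition may take per node; which physical cores each job lands on is still decided by the select/affinity plugins, so treat this as a starting point rather than a guaranteed core-for-core split.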
Hello All,
I saw this post from 2014 and I was wondering if anyone had a good
solution. Post:
https://groups.google.com/forum/#!searchin/slurm-users/split$20cores$20partition%7Csort:date/slurm-users/R43s9MBPtZ8/fGkIvSVMdHUJ
My situation is similar. I have a GPU cluster with gres.conf entries which look like:
NodeName=gpu-XX Name=gpu File=/dev/nvidia[0-1] CPUs=[0-5]
NodeName=gpu-XX Name=gpu File=/dev/nvidia[2-3] CPUs=[6-11]
However, as you can imagine, 8 cores sit idle on these nodes.