> No, Slurm has to launch the batch script on compute node cores
> ... SNIP...
> Even with srun directly from a login node there's still processes that
> have to run on the compute node and those need at least a core
> (and some may need more, depending on the application).
Alright, understood.
On 6/21/24 3:50 am, Arnuld via slurm-users wrote:
I have 3500+ GPU cores available. You mean each GPU job requires at
least one CPU? Can't we run a job with just GPU without any CPUs?
No, Slurm has to launch the batch script on compute node cores, and it
then has the job of launching the user's processes.
Yes, the algorithm works like that: 1 CPU (core) per job (task).
As someone mentioned already, you need to set oversubscription to 10 on
the CPU cores in slurm.conf, meaning 10 jobs on each core in your case.
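A minimal sketch of what that could look like in slurm.conf (the partition and node names here are made up for illustration, not from this thread):

```
# Hypothetical partition definition in slurm.conf.
# OverSubscribe=FORCE:10 lets up to 10 jobs share each allocated core.
PartitionName=gpu Nodes=gpunode[1-4] OverSubscribe=FORCE:10 State=UP
```

Note that OverSubscribe interacts with the SelectType plugin and the rest of the configuration, so the exact sharing behaviour depends on the whole setup.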
Best,
Feng
On Fri, Jun 21, 2024 at 6:52 AM Arnuld via slurm-users
wrote:
>
> Every job will need at least 1 core just to run
> and if there are only 4 cores on the machine,
> one would expect a max of 4 jobs to run.
I have 3500+ GPU cores available. You mean each GPU job requires at least
one CPU? Can't we run a job with just GPU without any CPUs? This sbatch
script requests ...
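Arnuld's script is cut off above; a minimal sketch of a batch script that asks for a GPU (every directive and name here is illustrative, not his actual script) would be something like:

```shell
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1   # Slurm still allocates at least one core
#SBATCH --gres=gpu:1        # one GPU from the node's generic resources
./my_gpu_app                # placeholder for the real application
```

Even here, the job cannot be "GPU only": the batch script itself runs on a CPU core of the compute node.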
Arnuld,
You may be looking for the srun parameter (and matching configuration
option) "--oversubscribe", since the CPUs are the limiting factor now.
S. Zhang
On 2024/06/21 2:48, Brian Andrus via slurm-users wrote:
Well, if I am reading this right, it makes sense.
Every job will need at least 1 core just to run and if there are only 4
cores on the machine, one would expect a max of 4 jobs to run.
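Brian's arithmetic can be sketched in a few lines (the numbers are the ones from this thread, not anything Slurm-specific):

```shell
# Each job needs at least one core just to run its batch script, so
# without oversubscription the node's core count caps concurrent jobs.
cores_on_node=4
cores_per_job=1
echo $(( cores_on_node / cores_per_job ))   # max concurrent jobs

# With 10-way oversubscription, each core can host up to 10 jobs:
oversubscribe=10
echo $(( cores_on_node * oversubscribe / cores_per_job ))
```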
Brian Andrus
On 6/20/2024 5:24 AM, Arnuld via slurm-users wrote:
I have a machine with a quad-core CPU and a GPU ...