It may be difficult to narrow down the problem without knowing what commands 
you're running inside the salloc session. For example, if it's a pure OpenMP 
program, it can't use more than one node.
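
If it helps, here is a rough sketch of how to check this from inside the allocation (assuming the same 4-node request you show below; the output comments are only illustrative):

    salloc -N4 --mem 4000 -p active
    # A command typed at the salloc prompt runs only where that shell lives
    # (the login node or the first allocated node, depending on your config):
    hostname                 # prints a single hostname
    # srun launches one task per allocated node, so the work is spread out:
    srun -N4 -n4 hostname    # should print four different hostnames

If your program is started without srun (or is a single-process binary such as a pure OpenMP code), it will only ever run on one machine, even though squeue/scontrol show all four nodes as allocated.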
________________________________
From: Sundaram Kumaran via slurm-users <slurm-users@lists.schedmd.com>
Sent: Friday, August 9, 2024 7:10:16 AM
To: slurm-us...@schedmd.com <slurm-us...@schedmd.com>
Subject: [slurm-users] The issue in the distribution of job



Dear All,

May I have your suggestions on an issue I am facing?

When the job is launched using “salloc -N4 --mem 4000 -p active”, I find that it runs on only one compute node while the other three machines are free; the job is not distributed evenly.

I use squeue/scontrol to check the job distribution and they display the four machines, but when I check the respective machines I don't find the job running there; only one machine takes the whole load.

Is there an issue in my conf file, or is there something else that needs to be done? May I have your suggestions, please?



FYI, salloc -N4 --mem 4000 -p active

[screenshot: output of the salloc session]

While using top, I find that only Debussy is used heavily; I don't see my job distributed evenly across the nodes. May I have your guidance, please.





[screenshot: top output on the compute nodes]









Regards,

KumaranS




