Re: [slurm-users] some way to make oversubscribe jobs packed before spread

2018-08-08 Thread Douglas Jacobsen
One thing you could consider doing is setting a higher weight on the long nodes (cluster[37-100] in your example). This would cause jobs submitted to the batch partition to attempt to schedule on low-weight nodes first, then the higher-weight nodes. So "long" would only get used if a job requ…
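A minimal slurm.conf sketch of that idea (node ranges are from the example above; the Weight values and the batch partition's node list are assumptions):

    # Illustrative only -- existing NodeName lines would keep their real
    # CPU/memory attributes; only Weight= is added. Lower weight is
    # preferred, so batch drains cluster[1-36] before touching "long".
    NodeName=cluster[1-36]   Weight=1
    NodeName=cluster[37-100] Weight=10   # the "long" nodes
    PartitionName=batch Nodes=cluster[1-100] Default=YES
    PartitionName=long  Nodes=cluster[37-100]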

Re: [slurm-users] "Owner" field in scontrol show node?

2018-08-08 Thread Jeffrey T Frey
https://github.com/SchedMD/slurm/blob/master/src/slurmctld/read_config.c Line 2511 -- if the node has been scheduled exclusively, this field is set to the uid of the user whose job(s) occupy the node. > On Aug 8, 2018, at 18:14, Ryan Novosielski wrote: …
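A quick hedged way to see the behavior (node name from the thread; --exclusive=user is one path to exclusive scheduling, and the comments about output are illustrative, not verbatim):

    # Hold hal0097 exclusively with a dummy job, then inspect the field.
    $ sbatch --exclusive=user --nodelist=hal0097 --wrap="sleep 300"
    $ scontrol show node hal0097 | grep -o 'Owner=[^ ]*'
    # Shows Owner=N/A while the node is shared; once the job starts it
    # should report the submitting user's uid, per read_config.c above.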

[slurm-users] some way to make oversubscribe jobs packed before spread

2018-08-08 Thread Allan, Benjamin
I have an application group whose throughput would improve if we could configure jobs to run two to a node (each starting and finishing on its own schedule), packed by the scheduler rather than spread out and overlapped only once the partition is fully loaded with one job per node. The users' workflow…
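For context, the two-jobs-per-node half of this is typically expressed with partition-level OverSubscribe; a minimal sketch (partition and node names are assumptions about the poster's setup, not a tested recipe):

    # Illustrative only. FORCE:2 lets at most two jobs share each node's
    # resources, whether or not the jobs themselves ask to share.
    SelectType=select/cons_res
    SelectTypeParameters=CR_Core
    PartitionName=batch Nodes=cluster[1-100] OverSubscribe=FORCE:2 Default=YES

Getting the scheduler to fill a node's second slot before touching an idle node is the harder half; the node-weight approach in the reply above is one lever for that.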

[slurm-users] "Owner" field in scontrol show node?

2018-08-08 Thread Ryan Novosielski
Does anyone have any idea or a pointer to documentation about what the node “owner” field is in “scontrol show node <nodename>”, like the below (set out by *s)? [root@hal0099 ~]# scontrol show node hal0097 NodeName=hal0097 Arch=x86_64 CoresPerSocket=16 CPUAlloc=0 CPUErr=0 CPUTot=32 CPULoad=0.01 Avail…

Re: [slurm-users] Execute parallel commands on all nodes running jobs of a particular user

2018-08-08 Thread Bjørn-Helge Mevik
Ole Holm Nielsen writes: > Bjørn, that is a different task. I know, but related. Just meant as a tip for people who already use pdsh. -- Regards, Bjørn-Helge Mevik, dr. scient, Department for Research Computing, University of Oslo
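For readers finding this thread later, a minimal sketch of the pdsh pattern under discussion (the user name is a placeholder; assumes pdsh plus the standard Slurm client tools):

    # Run "uptime" on every node where user alice has running jobs.
    $ hosts=$(squeue -h -u alice -t R -o %N | paste -sd,)
    $ pdsh -w "$(scontrol show hostnames "$hosts" | sort -u | paste -sd,)" uptime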