Re: [slurm-users] Checking memory requirements in job_submit.lua

2019-01-15 Thread Bjørn-Helge Mevik
Prentice Bisbal writes:
> Sorry for the delayed reply. I didn't see this earlier (6 months
> ago!). I'm just seeing it now as I clean up my inbox.

That happens for me as well from time to time. :)

> 1. Yes, slurm.user_msg does actually print out a message to the user
> in this case.
>
> 2. I was running 17.11.4 or 17.11.5 at the time.

Re: [slurm-users] Checking memory requirements in job_submit.lua

2019-01-14 Thread Prentice Bisbal
Bjorn,

Sorry for the delayed reply. I didn't see this earlier (6 months ago!). I'm just seeing it now as I clean up my inbox.

1. Yes, slurm.user_msg does actually print out a message to the user in this case.

2. I was running 17.11.4 or 17.11.5 at the time. I've since upgraded to 18.08.
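
For reference, a minimal sketch of that behaviour (not code from the thread; the message texts are only illustrative): slurm.user_msg() is echoed back to the submitting sbatch/srun/salloc command, while slurm.log_info() only goes to the slurmctld log.

    function slurm_job_submit(job_desc, part_list, submit_uid)
        -- Shown on the submitting user's terminal (sbatch/srun/salloc).
        slurm.user_msg("NOTICE: job inspected by job_submit.lua")
        -- Written to the slurmctld log only; the user never sees this.
        slurm.log_info("slurm_job_submit: job from uid %u", submit_uid)
        return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
        return slurm.SUCCESS
    end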

Re: [slurm-users] Checking memory requirements in job_submit.lua

2018-06-18 Thread Bjørn-Helge Mevik
Prentice Bisbal writes:
> if job_desc.pn_min_mem > 65536 then
>     slurm.user_msg("NOTICE: Partition switched to mque due to memory requirements.")
>     job_desc.partition = 'mque'
>     job_desc.qos = 'mque'
>     return slurm.SUCCESS
> end

Somewhat off-topic, but: so, does slurm.user_msg() actually print out a message to the user in this case?

Re: [slurm-users] Checking memory requirements in job_submit.lua

2018-06-14 Thread Hendryk Bockelmann
Hi,

based on information given in job_submit_lua.c we decided not to use pn_min_memory any more. The comment in the source says:

/*
 * FIXME: Remove this in the future, lua can't handle 64bit
 * numbers!!!. Use min_mem_per_node|cpu instead.
 */

Instead we check in job_submit.lua for something like the sketch below.
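
A minimal sketch of that kind of check (the 64 GB threshold and the 'mque' partition/QOS are taken from the code quoted earlier in the thread, and the slurm.NO_VAL64 guard is an assumption about what the lua plugin exposes, so verify it against the job_submit_lua.c shipped with your Slurm version):

    function slurm_job_submit(job_desc, part_list, submit_uid)
        -- min_mem_per_node / min_mem_per_cpu are in MB; at most one of them
        -- is set, depending on whether the user asked for --mem or --mem-per-cpu.
        local mem = job_desc.min_mem_per_node
        if mem ~= nil and mem ~= slurm.NO_VAL64 and mem > 65536 then
            slurm.user_msg("NOTICE: Partition switched to mque due to memory requirements.")
            job_desc.partition = 'mque'
            job_desc.qos = 'mque'
        end
        return slurm.SUCCESS
    end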

Re: [slurm-users] Checking memory requirements in job_submit.lua

2018-06-14 Thread Prentice Bisbal
On 06/13/2018 01:59 PM, Prentice Bisbal wrote:

In my environment, we have several partitions that are 'general access', with each partition providing different hardware resources (IB, large mem, etc). Then there are other partitions that are for specific departments/projects. Most of this configuration is historical, and I can't just rearrange …

[slurm-users] Checking memory requirements in job_submit.lua

2018-06-13 Thread Prentice Bisbal
In my environment, we have several partitions that are 'general access', with each partition providing different hardware resources (IB, large mem, etc). Then there are other partitions that are for specific departments/projects. Most of this configuration is historical, and I can't just rearrange …
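
A rough illustration of routing work like that in job_submit.lua (the 'ib' partition name and the feature check are hypothetical, not the poster's actual configuration):

    function slurm_job_submit(job_desc, part_list, submit_uid)
        -- Send jobs that request an "ib" feature/constraint to a hypothetical
        -- InfiniBand partition; leave everything else untouched.
        if job_desc.features ~= nil and string.find(job_desc.features, "ib", 1, true) then
            job_desc.partition = 'ib'  -- hypothetical partition name
            slurm.user_msg("NOTICE: job routed to the 'ib' partition based on requested features.")
        end
        return slurm.SUCCESS
    end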