I ran again with time command in front of g09.
The console output is
Wed Mar 14 09:15:58 EDT 2018
real    32m14.136s
user    53m56.946s
sys     2m17.855s
Wed Mar 14 09:48:12 EDT 2018
So the wall clock time is 32 minutes roughly.
g09 says
Job cpu time: 0 days 0 hours 47 minutes 56.0 seconds.
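For what it's worth, the ratio of the two numbers above already hints at the answer. A rough sketch of the arithmetic, with the values copied from the output above:

```shell
# CPU time reported by g09: 47m 56s; wall time from `time`: 32m 14s
cpu_s=$((47 * 60 + 56))      # 2876 seconds of CPU time
wall_s=$((32 * 60 + 14))     # 1934 seconds of wall time
# ratio x100 in integer math: ~148, i.e. roughly 1.5 cores busy on average
echo $((100 * cpu_s / wall_s))
```

A ratio near 200 would mean two cores were fully used; ~150 suggests a second core was used for only part of the run.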
Hi fellow slurm users,
Today I noticed that scontrol returns 0 when it denies a drain request
because no reason was supplied.
This seems like wrong behavior to me: it should return 1 or some other
error code, so that scripts can tell that the node was not actually
drained.
Thanks,
Eli
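For reference, a minimal sketch of the pattern a script would want to rely on (node name is made up). As things stand, the `else` branch is never taken, because scontrol exits 0 even when it refuses the request:

```shell
# attempt to drain a node; State=DRAIN requires a Reason
if scontrol update NodeName=compute-0-1 State=DRAIN Reason="disk failure"; then
    echo "node drained"
else
    echo "drain request rejected (exit code $?)" >&2
fi
```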
Gaussian reports CPU time; sacct reports wall time here. Was Gaussian set up to
run with 2 CPU cores?
Best,
Shenglong
> On Mar 14, 2018, at 8:04 AM, Mahmood Naderan wrote:
>
> Hi,
> I see that slurm reports a 35 min duration for a completed job (g09) like this
>
> [mahmood@rocks7 ~]$ sacct -j
Hi,
I see that slurm reports a 35 min duration for a completed job (g09) like this
[mahmood@rocks7 ~]$ sacct -j 30 --format=start,end,elapsed,time
              Start                 End    Elapsed  Timelimit
------------------- ------------------- ---------- ----------
2018-03-14T06:07:17             2018-03
On Wednesday, 14 March 2018 9:39:50 PM AEDT Mahmood Naderan wrote:
> Is it possible to set a global memory limit for a user?
You can indeed, but you need the accounting database set up for that.
Setting limits is documented here:
https://slurm.schedmd.com/resource_limits.html
and the accounti
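As a concrete sketch (the user name is taken from the thread; adjust to taste), once the accounting database is active a per-user memory cap can be set with GrpTRES:

```shell
# cap the total memory of all of this user's running jobs at 12 GB
sacctmgr modify user mahmood set GrpTRES=mem=12G
# verify the association limit
sacctmgr show assoc user=mahmood format=user,grptres
```

Note that AccountingStorageEnforce in slurm.conf must include "limits" for the cap to actually be enforced.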
Hi,
Is it possible to set a global memory limit for a user? I want to set
an upper memory limit of 12GB for a user. He can submit more than one
job with different --mem sizes. However, the total mem size for that
user should not exceed 12GB. I see MaxMemPerCPU and MaxMemPerNode in
the manual, but
On Wednesday, 14 March 2018 9:14:45 PM AEDT Mahmood Naderan wrote:
> Thank you very much.
My pleasure, so glad it helped!
--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
On Wednesday, 7 March 2018 12:53:30 AM AEDT Ron Golberg wrote:
> I have many jobs divided into job arrays - which makes me cross Slurm's
> 67 million job ID limit.
This is a consequence of the addition of federation support, you can see why
here:
https://slurm.schedmd.com/SLUG17/FederatedSc
Jessica Nettelblad writes:
> Maybe look into if scontrol top is of any help. It was introduced in
> 16.05 and changed in 17.11 to take a job list. Note that NEWS and the
> man page have slightly different information on how to enable it; maybe
> the man page is only partially updated.
>
> We don't
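For anyone trying it, the basic shape is as follows (job IDs are made up; the feature has to be enabled by the admin first):

```shell
# slurm.conf (admin side): allow users to reorder their own pending jobs
# SchedulerParameters=enable_user_top

# move job 1234 to the top of the user's own pending jobs
scontrol top 1234
# 17.11 also accepts a comma-separated job list
scontrol top 1234,1237
```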
>If you put this in your script rather than the g09 command what does it say?
>
>ulimit -a
That was a very good hint. I first ran ssh to compute-0-1 and saw
"unlimited" value for "max memory size" and "virtual memory". Then I
submitted the job with --mem=2000M and put the command in the slurm
scri
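For the archive, the sketch being discussed looks roughly like this (the input file name is made up):

```shell
#!/bin/bash
#SBATCH --mem=2000M
#SBATCH --nodelist=compute-0-1

# print the limits the job step actually sees, before starting Gaussian
ulimit -a
g09 test.gjf
```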
On Wednesday, 14 March 2018 7:37:19 PM AEDT Mahmood Naderan wrote:
> I tried with --mem=2000M in the slurm script and put strace command in front
> of g09. Please see some last lines
Gaussian is trying to allocate more than 2GB of RAM in that case.
Unfortunately your strace doesn't show anything
You can use the "nice" feature to "rearrange" jobs.
/M
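A sketch of what that looks like in practice (job ID and script name are made up): users can lower the priority of their own jobs with a positive nice value, which effectively lets a later job overtake an earlier one; negative values require admin rights.

```shell
# submit with a lowered priority
sbatch --nice=100 job.sh
# or push an already-queued job down so a sibling job runs first
scontrol update JobId=1234 Nice=100
```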
On 2018-03-14 10:07, Loris Bennett wrote:
Hi,
I seem to remember reading something about users being able to change
the priorities within a group of their own jobs. So if a user suddenly
decided that a job submitted later that a simi
Hi,
I seem to remember reading something about users being able to change
the priorities within a group of their own jobs. So if a user suddenly
decided that a job submitted later should run before a similar job
submitted earlier, he/she would be able to switch the priorities.
Is it just wishful thinking on
Hi again
I tried with --mem=2000M in the slurm script and put strace command in
front of g09. Please see some last lines
fstat(3, {st_mode=S_IFREG|0664, st_size=0, ...}) = 0
fstat(0, {st_mode=S_IFREG|0664, st_size=6542, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -