puzzled two ways: why not use the numeric jobid,
I don't know the job id before the actual job submission. Hence I would like
to get some kind of placeholder, and `scommit` the job later with the actual
resource requirements as comments in a usual jobscript.
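Roughly, the workflow I have in mind would look like this (sblank and
scommit being hypothetical commands, of course, not existing Slurm tools):

    jobid=$(sblank)                      # reserve a placeholder job id; nothing is submitted yet
    mkdir "/scratch/ws-${jobid}"         # create the workspace, named after the numeric id
    scommit --jobid="${jobid}" job.sh    # later: submit the actual jobscript under that id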
OK, in that case, you can make
> On 07.05.2019 at 16:00, Mark Hahn wrote:
>
>> Some cluster sites need the creation of a workspace for the job in a
>> scratch area before the actual job submission, and on the other hand they
>> don't accept all characters in the name of the workspace. Hence the plain
>> job name often can't be used here.
Hi all,
We had to restart the slurmdbd service on one of our clusters running Slurm
17.11.7 yesterday, since folks were experiencing errors with job scheduling
and with running 'sacct':
$ sacct -X -p -o
jobid,jobname,user,partition%-30,nodelist,alloccpus,reqmem,cputime,qos,state,exitcode,All
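For reference, the restart itself (a sketch, assuming slurmdbd is managed
by systemd on your nodes):

    systemctl restart slurmdbd
    sacctmgr show cluster    # quick check that slurmdbd is answering again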
Barbara,
Thank you for the info. Will add it to the config.
--
SS
From: slurm-users On Behalf Of Barbara
Krašovec
Sent: Tuesday, May 7, 2019 8:53 AM
To: slurm-users@lists.schedmd.com
Subject: Re: [slurm-users] Slurm resource limits on jobs
Resources are limited with cgroups in SLURM. Check the documentation:
https://slurm.schedmd.com/cgroups.html
On Tue, May 7, 2019 at 10:03 AM Mark Hahn wrote:
>
> > Some cluster sites need the creation of a workspace for the job in a
> > scratch area before the actual job submission, and on the other hand they
> > don't accept all characters in the name of the workspace. Hence the plain
> > job name often can't be used here.
Some cluster sites need the creation of a workspace for the job in a
scratch area before the actual job submission, and on the other hand they
don't accept all characters in the name of the workspace. Hence the plain
job name often can't be used here.
puzzled two ways: why not use the numeric jobid,
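The usual pattern would be to create the workspace from inside the
jobscript, where the numeric id is already available (a sketch; SLURM_JOB_ID
is set by Slurm in the job's environment):

    #!/bin/bash
    #SBATCH --job-name=whatever-you-like
    # SLURM_JOB_ID is purely numeric, so it is always a safe workspace name
    ws="/scratch/ws-${SLURM_JOB_ID}"
    mkdir -p "$ws"
    cd "$ws" || exit 1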
Resources are limited with cgroups in SLURM. Check the documentation:
https://slurm.schedmd.com/cgroups.html
You simply specify ProctrackType=proctrack/cgroup and/or
TaskPlugin=task/cgroup in slurm.conf, and then configure which resources
are limited, and by how much, in cgroup.conf:
https://slurm.schedmd.com/cgroup.conf.html
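For example, a minimal sketch (the limits themselves are illustrative,
pick values for your site):

    # slurm.conf
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup

    # cgroup.conf
    CgroupAutomount=yes
    ConstrainCores=yes        # confine tasks to their allocated cores
    ConstrainRAMSpace=yes     # enforce the job's memory request
    AllowedRAMSpace=100       # percent of the allocation a job may use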
On 07/05/2019 13.47, David Baker wrote:
> We are experiencing quite a number of database failures.
> [root@blue51 slurm]# less slurmdbd.log-20190506.gz | grep failed
> [2019-05-05T04:00:05.603] error: mysql_query failed: 1213 Deadlock found when
> trying to get lock; try restarting transaction
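One commonly suggested mitigation for these deadlocks (per the Slurm
accounting documentation; assuming MariaDB/MySQL with InnoDB, and the
values below are illustrative) is to give InnoDB a larger buffer pool and
a longer lock wait timeout in my.cnf:

    [mysqld]
    innodb_buffer_pool_size=1024M    # larger than the small default
    innodb_log_file_size=64M
    innodb_lock_wait_timeout=900     # seconds before a waiting transaction gives up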
Hello,
We are experiencing quite a number of database failures. We saw an outright
failure a short while ago where we had to restart the MariaDB database and the
slurmdbd process. After restarting, the database appears to be working well;
however, over the last few days I have noticed quite a number of errors in the
slurmdbd logs.
Hi,
Some cluster sites need the creation of a workspace for the job in a scratch
area before the actual job submission, and on the other hand they don't accept
all characters in the name of the workspace. Hence the plain job name often
can't be used here.
sblank (with a possible array size given) will then return a job id to be
used as a placeholder.