'/sys/fs/cgroup/cpuset/'
I've added `cgroup_enable=memory swapaccount=1` to the kernel command line, but that doesn't help.
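For cross-checking after a reboot, these usually show whether the memory controller really came up (a quick sketch, assuming a cgroup v1 layout):
$ cat /proc/cmdline          # should now contain cgroup_enable=memory swapaccount=1
$ grep memory /proc/cgroups  # last column ("enabled") should be 1
$ ls /sys/fs/cgroup/memory/  # the memory hierarchy should be mounted here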
I've seen people having these kinds of problems, but no one seems to be able to solve them and keep the cgroups.
Thanks a lot,
Arthur
--
find it in the
docs.
Kind regards,
Heitor
--
ing jobs per user for all users (regardless of the user)?
There is QoS in Slurm, but it always seems to be attached to a user or an account, not to a partition.
What would be the best thing to do here?
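One approach that might fit (a sketch only, with made-up QoS, partition and node names, and an arbitrary limit): create a QoS that carries a per-user limit and attach it to the partition as its partition QoS, so the limit applies to every user submitting there:
$ sacctmgr add qos perpartition
$ sacctmgr modify qos perpartition set MaxJobsPerUser=10
# slurm.conf
PartitionName=normal Nodes=node[01-10] QOS=perpartition
A QoS set on the partition this way is applied in addition to the job's own QoS, as far as I understand.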
Thanks in advance,
Christine Leroy
--
didn't reveal anything useful for me, so my search tangents and parts of the Slurm source only gave me some directions. I'm guessing Slurm only knows cgroup v1, so it fails when it tries to interact with cgroup v2. Am I correct, or am I barking up the wrong tree?
Thanks for your feedback in advance.
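A quick way to see which hierarchy a node actually runs (just a sketch):
$ stat -fc %T /sys/fs/cgroup/
cgroup2fs    # unified v2 hierarchy; "tmpfs" would indicate the v1 layout
If it is indeed v2: older Slurm releases only ship a cgroup v1 plugin, and booting the nodes with systemd.unified_cgroup_hierarchy=0 (forcing the legacy layout) is a common workaround until an upgrade.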
the issue was resolved, and multiple jobs were successfully allocated to the same node and ran concurrently.
Does anyone know why such behavior is seen? Why does including memory as a consumable resource lead to node-exclusive behavior?
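One common cause (an assumption here, since the full slurm.conf is not visible): with CR_*_Memory and no default memory limit configured, a job that does not explicitly request memory is allocated the node's entire memory, so nothing else fits on the node. Setting a default usually restores node sharing, e.g.:
# slurm.conf (sketch; the value is a placeholder)
DefMemPerCPU=2000    # MB granted per allocated CPU when a job requests no memory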
Thanks,
Dura
ske
HPC Systems Engineer
Research Data Services
P: (858) 246-5593
--
DB.
The script does not touch any FairShare values for associations or QoS entitlements; those are set manually.
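For reference, the manual part typically looks like this (account and QoS names below are hypothetical):
$ sacctmgr modify account physics set fairshare=100
$ sacctmgr modify qos long set GrpTRES=cpu=256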
I have to mention that our script is heavily influenced by Ole Holm
Nielsen's work. So, thanks a lot, Ole! :-)
Cheers,
Christoph
--
Dr. Christoph Brüning
Universität Würzburg
Rechenzentrum
Am Hubland
D-97074 Würzburg
Tel.: +49 931 31-80499
ve written:
$ scontrol show config | grep NEXT_JOB_ID
NEXT_JOB_ID = 2488059
The next jobid is presumably in the Slurm database.
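In case the underlying question is how to influence the numbering: the relevant slurm.conf parameters (values below are arbitrary) are FirstJobId, which sets where the counter starts, and MaxJobId, after which it wraps around:
FirstJobId=1000000
MaxJobId=67043328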
/Ole
--
Christoph
On 20/05/2020 12.00, Christoph Brüning wrote:
Dear all,
we set up a floating partition as described in SLURM's QoS documentation to allow for jobs with a longer-than-usual walltime on part of our cluster: a QoS with GrpCPUs and GrpNodes limits attached to the longer-walltime
ed to "N/A".
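For illustration, a floating-partition setup of that kind usually boils down to something like the following sketch (QoS, partition and node names as well as the limits are placeholders; GrpTRES=cpu=...,node=... is the current spelling of the GrpCPUs/GrpNodes limits):
$ sacctmgr add qos longrun
$ sacctmgr modify qos longrun set GrpTRES=cpu=256,node=4 MaxWall=14-00:00:00
# slurm.conf
PartitionName=long Nodes=node[01-20] MaxTime=14-00:00:00 QOS=longrun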
Did any of you observe this or similar behaviour?
FWIW, we are running SLURM 17.11 on Debian; an upgrade to 19.05 is scheduled in the next couple of weeks.
Best,
Christoph
--
Can Slurm be used to schedule containers?
If someone has any experience using Docker in HPC clusters, please let me know.
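In case it helps: a common pattern (assuming Apptainer/Singularity is available on the nodes, which is an assumption on my part) is to let Slurm schedule a perfectly ordinary batch job and start the container inside it, so the runtime only provides the userland:
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00
# image.sif and my_app are placeholders
srun apptainer exec image.sif ./my_app
Docker itself is usually avoided on shared clusters because its daemon runs as root, which is the main reason the rootless runtimes are preferred.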
Regards,
Mahmood
--
SelectTypeParameters=CR_CPU_Memory
TaskPlugin=task/cgroup
ProctrackType=proctrack/cgroup
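For what it is worth, task/cgroup is usually paired with a cgroup.conf along these lines (a sketch; whether this is related to the problem is unclear from the excerpt above):
# cgroup.conf
ConstrainCores=yes
ConstrainRAMSpace=yes
ConstrainSwapSpace=yes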
I would be grateful for any idea.
Best regards,
René
--
the user's historical usage, which I guess is ultimately what you want.
---
Sam Gallop
-----Original Message-----
From: slurm-users On Behalf Of
Christoph Brüning
Sent: 12 June 2019 10:58
To: slurm-users@lists.schedmd.com
Subject: [slurm-users] Rename account or move user from one account to another
underlying MariaDB, it does
not exactly appear to be a convenient or elegant solution...
Best,
Christoph
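For completeness, the pure sacctmgr route (user and account names below are placeholders) is to create the association under the new account and delete the old one, instead of touching the MariaDB tables:
$ sacctmgr add user name=alice account=newaccount
$ sacctmgr delete user name=alice account=oldaccount
The historical usage stays attached to the old association, though, which is exactly the limitation discussed above.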
--
x 2231
Brookings, SD 57007
Phone: 605-688-5767
www.sdstate.edu
--
ed for scheduling so we don't overcommit space).
Our epilog then cleans up both the per-job temporary space and the per-job /dev/shm at the end.
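An epilog along those lines can be quite small; the sketch below assumes (hypothetically) that a matching prolog created per-job directories under /local/scratch and /dev/shm:
#!/bin/bash
# Epilog sketch: remove the per-job scratch and /dev/shm directories
rm -rf "/local/scratch/job_${SLURM_JOB_ID}" "/dev/shm/job_${SLURM_JOB_ID}"
SLURM_JOB_ID is set in the epilog's environment, so the script needs nothing beyond that.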
All the best,
Chris
--