Hi all,
I'm answering my own question: the memory leak happened when the slurm.conf
file was not the same on all the nodes.
Sorry for the noise,
Have a good day,
Christine
From: LEROY Christine 208562 via slurm-users
Sent: Wednesday, 26 June 2024 16:56
To: slurm-users@lists.schedmd.com
Cc: B
Hello Marco, all,
As I mentioned in another mail: in the 23.* versions AllowGroups is not OK:
there is a memory leak (has anybody observed it? Could you check?).
Thanks
Christine
From: Marko Markoc via slurm-users
Sent: Monday, 1 July 2024 18:35
To: daijiangkui...@gmail.com
Cc: slurm-user
Hello Everyone,
We tried several versions of Slurm 23.x (up to the latest one, 23.11.8), and
we observe a memory leak in slurmctld, as well as occasionally poor response
times.
It seems to be caused by a slurm.conf configuration in which a partition uses
the AllowGroups option.
Is it a known problem?
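For reference, a minimal sketch of the kind of partition definition in
question; node names, group names and limits below are only placeholders:

PartitionName=compute Nodes=node[01-32] AllowGroups=grp_phys,grp_chem MaxTime=24:00:00 State=UP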
Hello all,
Is there an environment variable in Slurm that tells the commands where
slurm.conf is?
We would like to have, on the same client node, two possible submission
setups addressing two different clusters.
Thanks in advance,
Christine
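For context, the Slurm client commands honour the SLURM_CONF environment
variable when locating slurm.conf, so a rough sketch of such a setup (the
paths are only illustrative) could be:

export SLURM_CONF=/etc/slurm/clusterA/slurm.conf
sbatch jobA.sh
# or, per command, for the second cluster:
SLURM_CONF=/etc/slurm/clusterB/slurm.conf sbatch jobB.sh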
user and administration friendliness, while keeping all the great things that
it could do.
On Fri, May 5, 2023 at 7:08 AM LEROY Christine 208562
<christine.ler...@cea.fr> wrote:
Hello Everyone,
We would like to improve our visibility on our cluster usage.
We have Ganglia and we currently use sacct, but I was wondering whether there
is a recommended web tool that provides both monitoring and accounting
(user- and admin-friendly)?
Thanks in advance
Christine
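On the accounting side, the command-line reports that a web tool would
ideally cover can be sketched with sreport (dates and counts below are just
examples):

sreport cluster utilization start=2024-06-01 end=2024-07-01
sreport user topusage start=2024-06-01 end=2024-07-01 TopCount=10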
By default, removing partitions, or removing nodes from partitions, might
cause the jobs in the relevant partitions to be killed.
HTH,
On Mon, Nov 29, 2021 at 6:46 PM LEROY Christine 208562
<christine.ler...@cea.fr> wrote:
Hello all,
I made some modifications in my slurm.conf and then restarted slurmctld on
the master and slurmd on the nodes.
During this process I lost some jobs (*); curiously, all these jobs were on
Ubuntu nodes.
These jobs were OK in terms of consumed resources (**).
Any idea what co
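As a point of comparison, for plain slurm.conf parameter changes the
configuration can usually be re-read without a full daemon restart; a rough
sketch of the two approaches (to be checked against the release notes of the
version in use):

scontrol reconfigure          # ask the running daemons to re-read slurm.conf
systemctl restart slurmctld   # full restart on the controller
systemctl restart slurmd      # full restart on each compute node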
Hello all,
For some software we still need the SL6 OS: is there a configure option to
build the wrapper tools (openlava and torque)?
Thanks in advance,
Christine
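One possible lead, assuming the wrappers still live under contribs/ in the
Slurm source tree (this would need to be double-checked on an SL6 build
host):

./configure
make
make -C contribs/torque      # qsub/qstat-style wrapper scripts
make -C contribs/openlava    # bsub/bjobs-style wrapper scripts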
Hello,
It doesn't seem possible to have a partition named "default"; could you
confirm?
Thanks in advance,
Christine
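For what it is worth, "DEFAULT" is treated as a keyword in slurm.conf, which
would explain this: a PartitionName=DEFAULT line only sets default values for
the partition lines that follow it and does not create a partition, for
example (names and limits are placeholders):

PartitionName=DEFAULT MaxTime=24:00:00 State=UP
PartitionName=batch Nodes=node[01-16] Default=YES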
know if this is an intended shorthand or something; at least, I am not aware
of any documentation on this. I had expected it to fail if it's not a valid
option for QoS.
Best
Christoph
On 18/03/2021 13.27, LEROY Christine 208562 wrote:
> Hi all,
>
> I've finally configured a
# sacctmgr show qos lsmgui format=MaxJobs,MaxJobsPU
MaxJobs MaxJobsPU
------- ---------
      1         1
# sacctmgr modify qos lsmgui set MaxJobs=2
# sacctmgr show qos lsmgui format=MaxJobs,MaxJobsPU
MaxJobs MaxJobsPU
------- ---------
      2         2
-----Original Message-----
From: slurm-users On behalf of LEROY Christine 208562
Hello,
I'd like to reproduce a configuration we had with Torque on queues/partitions
(see the sketch after this message):
• how to set a maximum number of running jobs on a queue?
• and a maximum number of running jobs per user, applied to all users
(whatever the user is)?
There is QoS in Slurm, but it seems always a
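To make the question concrete, a rough sketch of one way this is often
expressed, via a QoS attached to the partition (names and numbers are made
up, and limit enforcement typically also requires AccountingStorageEnforce
to include qos/limits):

sacctmgr add qos lsmqos
sacctmgr modify qos lsmqos set GrpJobs=100 MaxJobsPerUser=5
# then in slurm.conf:
PartitionName=lsm Nodes=node[01-08] QOS=lsmqos

GrpJobs here would cap the number of running jobs in the whole partition, and
MaxJobsPerUser the number of running jobs per user.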