No issue.
In fact that is the default/normal.
The 'slurm' user gets created with a shell when you install the RPMs.
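If you want to double-check on your own box, something along the lines of
  getent passwd slurm
should show the account and the shell it was created with.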
Brian Andrus
On 3/9/2021 6:24 AM, Sajesh Singh wrote:
I am looking to enable the cloud scheduling feature of Slurm and was
wondering if there are any issues with changing the
For the first, does MaxJobs not do that? For the second, you can set
MaxJobsPerUser. That's what we do here for our test partition: we set a
limit of 5 jobs per user running at any given time.
You can then tie the QoS to a specific partition using the QoS option in
the partition config in slurm.conf.
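As a rough, untested sketch of what that can look like (the QoS name, limit and
node names here are just examples; you also need AccountingStorageEnforce to
include qos/limits for it to be enforced):

  sacctmgr add qos testqos
  sacctmgr modify qos testqos set MaxJobsPerUser=5

  # slurm.conf
  PartitionName=test Nodes=node[01-10] QOS=testqos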
I am looking to enable the cloud scheduling feature of Slurm and was wondering
if there are any issues with changing the user that slurm runs as to have a
login shell. The reason for doing this is that the ResumeProgram and
SuspendProgram scripts need to run as the SlurmUser.
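For context, a minimal sketch of the slurm.conf pieces involved (paths, timings
and node names below are made up, not our real config):

  SlurmUser=slurm
  ResumeProgram=/usr/local/sbin/slurm_resume.sh
  SuspendProgram=/usr/local/sbin/slurm_suspend.sh
  SuspendTime=300
  ResumeTimeout=600
  NodeName=cloud[001-010] State=CLOUD CPUs=4 RealMemory=16000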
Thank you,
Sajesh
For those who are interested:
* https://bugs.schedmd.com/show_bug.cgi?id=11044
On 09/03/2021 14:21, Bas van der Vlies wrote:
I have found the problem and will submit a patch. If we find a partition
where a job can run but all nodes are busy, save this state and return
this when all partitions a
I have found the problem and will submit a patch. If we find a partition
where a job can run but all nodes are busy, save this state and return
this when all partitions are checked and the job cannot run in any of them.
I do not know if this is the right approach.
regards
On 09/03/2021 09:45, Bas van der Vl
Then I have good news for you! There is the --delimiter option:
https://slurm.schedmd.com/sacct.html#OPT_delimiter=
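For example (untested, pick whatever fields you actually need):

  sacct -j <jobid> --parsable2 --delimiter=',' \
        --format=JobID,JobName,State,Elapsed,MaxRSS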
Best,
Marcus
On 09.03.21 12:10, Reuti wrote:
Hi:
On 09.03.2021 at 08:19, Bjørn-Helge Mevik wrote:
"xiaojingh...@163.com" writes:
I am doing a parsing job on slurm fields.
Hi:
> On 09.03.2021 at 08:19, Bjørn-Helge Mevik wrote:
>
> "xiaojingh...@163.com" writes:
>
>> I am doing a parsing job on slurm fields. Sometimes when one field is
>> too long, slurm will limit the length with a “+”.
>
> You don't say which slurm command you are trying to parse the output
>
Hello,
I’d like to reproduce a configuration we had with torque on queues/partitions:
• how to set a maximum number of running jobs on a queue?
• and a maximum number of running jobs per user, for all users
(regardless of the user)?
There is a QoS with slurm but it seems always a
Hi,
I need your help.
I have users that need an interactive shell on a compute node with the
possibility of running programs with a graphical user interface directly on the
compute node.
Looking for information I have found the xalloc command, but it must be a
wrapper because it isn't installed i
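In case it is useful: assuming Slurm was built with X11 support and X11
forwarding is enabled on the cluster, an interactive shell that can run GUI
programs is usually just

  srun --pty --x11 bash -i

run from an ssh -X/-Y session on the login node; exact options may differ per
Slurm version.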
Hi Prentice,
Answers inline
On 08/03/2021 22:02, Prentice Bisbal wrote:
Rather than specifying the processor types as GRES, I would recommend
defining them as features of the nodes and letting the users specify the
features as constraints to their jobs. Since the newer processors are
backwards
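A rough sketch of the idea, with made-up node and feature names:

  # slurm.conf
  NodeName=node[01-04] Feature=broadwell,avx2
  NodeName=node[05-08] Feature=skylake,avx2,avx512

  # job submission
  sbatch --constraint=avx2 job.sh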
Hi guys,
I would like to calculate the CPU efficiency and Memory efficiency of slurm
jobs.
I am having difficulty calculating the real “memory” a job uses.
According to slurm, “MaxRSS” means "Maximum resident set size of all tasks in
job”. If so, how can I get the memory used by a single job?
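In case it helps, a starting point could be something like (job id is a
placeholder):

  sacct -j <jobid> --units=M \
        --format=JobID,JobName,AllocCPUS,ReqMem,MaxRSS,Elapsed,TotalCPU

MaxRSS is reported per task/step, so look at the step lines; the seff <jobid>
script from the Slurm contribs, if it is installed on your cluster, also
reports CPU and memory efficiency per job.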
Hello Ward,
as a variant on what has already been suggested we also have the CPU type as a
feature:
Feature=E5v1,AVX
Feature=E5v1,AVX
Feature=E5v3,AVX,AVX2
Feature=S6g1,AVX,AVX2,AVX512
This allows people who want the same architecture and not just the same
instruction set for a multi-node
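For illustration, with the feature names above a multi-node job that wants a
single generation could ask for it explicitly, e.g.:

  sbatch -N 4 --constraint=E5v3 job.sh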
Hi Prentice,
On 8/03/2021 22:02, Prentice Bisbal wrote:
> I have a very heterogeneous cluster with several different generations of
> AMD and Intel processors, we use this method quite effectively.
Could you elaborate a bit more on how you manage that? Do you force your
users to pick a feature? W