> At most *MAX_TASKS_PER_NODE* tasks are permitted to execute per
> node. NOTE: *MAX_TASKS_PER_NODE* is defined in the file /slurm.h/
> and is not a variable; it is set at Slurm build time.
>
> I have used this successfully to run more jobs than there are CPUs/cores available.
>
> -e.
>
>
>
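For reference, a rough sketch of where that limit lives and how one might check it on an installed system; the header path, the 512 default and the partition settings below are assumptions for illustration, not details taken from this thread:

    # MAX_TASKS_PER_NODE is a compile-time constant in slurm/slurm.h,
    # typically defined as:
    #   #define MAX_TASKS_PER_NODE 512
    grep "MAX_TASKS_PER_NODE" /usr/include/slurm/slurm.h

    # Running more simultaneous jobs than cores usually also requires
    # oversubscription on the partition, e.g. in slurm.conf:
    #   PartitionName=ddos Nodes=node[1-3] OverSubscribe=FORCE:4 State=UP
    scontrol show partition | grep -i oversubscribe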
Hello,
I am in the process of setting up our SLURM environment. We want to use
SLURM during our DDoS exercises for dispatching DDoS attack scripts. We
need a lot of jobs running in parallel on a total of 3 nodes. I can't get it
to run more than 128 jobs simultaneously. There are 128 CPUs in the
compute cluster.
> We see the same on power8 and we have been ignoring these
> messages for quite a while now.
> I'm not sure what impact it has on the scheduler or the jobs, but we
> generally don't play with the frequency anyway.
>
>
>> On Wed, Jun 23, 2021 at 7:16 PM Karl Lovink wrote:
>> Hello,
>
Hello,
I have compiled version 20.11.7 for an IBM Power9 system running
Ubuntu 18.04. I have slurmd running, but a predominant error keeps
appearing in slurmd.log. I have already done some research but I cannot
find a solution.
The error is:
[2021-06-23T18:02:01.550] error: all available frequencies not s
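That message comes from slurmd probing the kernel's cpufreq interface when it starts. A quick way to see what the node actually exposes is to read the standard sysfs files below; whether they exist depends on the cpufreq driver in use, so treat this as a diagnostic sketch rather than a fix:

    # slurmd builds its frequency table from the cpufreq entries in sysfs;
    # missing or sparse entries here are what trigger the error above.
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

If jobs never request a specific --cpu-freq, the message can most likely be ignored, which matches the reply quoted above.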
Hi,
We are using Splunk for monitoring our systems and networks. We would
like to monitor Slurm with Splunk as well.
Has anyone done an integration between Slurm and Splunk and is willing
to share the code, searches and dashboards with us?
Sincerely yours,
Karl
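Absent a ready-made integration, one minimal starting point is to push accounting records into Splunk's HTTP Event Collector. Everything below (the HEC host, token, index-free payload and the sacct field list) is a hypothetical sketch, not an existing integration:

    # Pull recently finished jobs from the Slurm accounting database and
    # forward each record to a Splunk HTTP Event Collector endpoint.
    sacct --allusers --starttime now-10minutes --parsable2 --noheader \
          --format=JobID,User,Partition,State,Elapsed,AllocCPUS,NodeList |
    while IFS='|' read -r jobid user part state elapsed cpus nodes; do
      curl -s https://splunk.example.com:8088/services/collector/event \
           -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
           -d "{\"event\": {\"jobid\": \"$jobid\", \"user\": \"$user\", \"partition\": \"$part\", \"state\": \"$state\", \"elapsed\": \"$elapsed\", \"cpus\": \"$cpus\", \"nodes\": \"$nodes\"}}"
    done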
Hello,
I'm trying to configure slurmrestd but I haven't been very successful
so far. The plan is to have Splunk query the endpoints. To test
the communication with slurmrestd I am currently using curl. However, I
always get "Authentication failure" back.
> include {{ _sysconf_dir }}/slurm.conf
> AuthType=auth/jwt
> Run like this: slurmrestd -f /etc/slurm/slurmrestd.conf 0.0.0.0:6820
>
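With AuthType=auth/jwt, curl needs a token and the matching slurmrestd headers, roughly like this (the host, the 6820 port from the command above and the v0.0.36 API version are placeholders to adjust for the local setup):

    # Generate a JWT for the calling user and pass it to slurmrestd;
    # scontrol token prints SLURM_JWT=<token>, ready for export.
    unset SLURM_JWT
    export $(scontrol token)
    curl -s http://localhost:6820/slurm/v0.0.36/diag \
         -H "X-SLURM-USER-NAME: $USER" \
         -H "X-SLURM-USER-TOKEN: $SLURM_JWT"

Without those two headers the REST daemon rejects the request, which could explain the "Authentication failure" seen with plain curl.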
> -----Original Message-----
> From: slurm-users On Behalf Of Karl
> Lovink
> Sent: Friday, January 8, 2021 11:04 AM
> To: slurm-us...@