[slurm-users] /usr/lib64/slurm/prep_script.so: undefined symbol: run_script

2021-07-12 Thread Braulio Solano Rojas
Greetings, I would like to install SLURM on Clear Linux because of its good benchmarks.  I have followed the tutorial at https://docs.01.org/clearlinux/latest/tutorials/hpc.html . When I got to the step of the section "Create slurm.con
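The "Create slurm.conf" step in that tutorial amounts to writing a minimal configuration file. A rough sketch is below; the cluster name, hostnames, and node hardware are placeholders, not values from the original message:

```
# Minimal slurm.conf sketch -- hostnames and hardware are hypothetical
ClusterName=cluster
SlurmctldHost=head-node
ProctrackType=proctrack/cgroup
StateSaveLocation=/var/spool/slurmctld
NodeName=compute[01-02] CPUs=4 State=UNKNOWN
PartitionName=debug Nodes=ALL Default=YES MaxTime=INFINITE State=UP
```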

Re: [slurm-users] Priority Access to GPU?

2021-07-12 Thread Fulcomer, Samuel
Jason, I've just been working through a similar scenario to handle access to our 3090 nodes that have been purchased by researchers. I suggest putting the node into an additional partition, and then adding a QOS for the lab group that has grptres=gres/gpu=1,cpu=M,mem=N (where cpu and mem are whateve
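Samuel's suggestion might look roughly like the sketch below. The QOS name, user, node name, and limits are all hypothetical, and the exact sacctmgr syntax should be checked against the sacctmgr documentation before use:

```
# Create a QOS with a group TRES cap, then grant it to a lab user
# (names and limits are made up for illustration)
sacctmgr add qos lab_prio
sacctmgr modify qos lab_prio set GrpTRES=gres/gpu=1,cpu=16,mem=64G
sacctmgr modify user alice set qos+=lab_prio

# Additional partition over the same node, in slurm.conf
PartitionName=lab3090 Nodes=gpu-node01 AllowQos=lab_prio PriorityTier=10
```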

Re: [slurm-users] Assigning two "cores" when I'm only requesting one.

2021-07-12 Thread Rodrigo Santibáñez
I use ThreadsPerCore=1 in the node definitions. $ srun --cpus-per-task 1 --pty python3 Python 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.cpu_count() 16 >>> os.sched_getaffinity(0) {0} On
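The node definition Rodrigo refers to is the NodeName line in slurm.conf. A sketch, with a hypothetical node name and core counts, for a dual-socket hyperthreaded machine:

```
# ThreadsPerCore=1 makes Slurm schedule one task per physical core
# rather than per hardware thread (values here are hypothetical)
NodeName=node01 Sockets=2 CoresPerSocket=8 ThreadsPerCore=1 State=UNKNOWN
```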

[slurm-users] Priority Access to GPU?

2021-07-12 Thread Jason Simms
Dear all, I feel like I've attempted to track this down before but have never fully understood how to accomplish this. I have a GPU node with three GPU cards, one of which was purchased by a user. I want to provide priority access for that user to the card, while still allowing it to be used by t

[slurm-users] Assigning two "cores" when I'm only requesting one.

2021-07-12 Thread Luis R. Torres
Hi Folks, I'm trying to run one task on one "core"; however, when I test the affinity, the system gives me "two". I'm assuming the two are threads, since the system is a dual-socket system. Is there anything in the configuration that I can change to have a single core or thread assigned to a singl
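The affinity check used in this thread can be run as a standalone script. Inside a Slurm task it reports the CPUs of the allocation; outside of Slurm it simply reports every CPU visible to the process:

```python
import os

# Number of logical CPUs on the machine (cores x hardware threads)
print(os.cpu_count())

# CPUs this process may actually run on; under Slurm this reflects
# the allocation, e.g. {0} for one core, or {0, 8} when both
# hyperthreads of a core are bound to the task
print(sorted(os.sched_getaffinity(0)))
```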

Re: [slurm-users] Minimum requirements for Slurm daemons?

2021-07-12 Thread Ole Holm Nielsen
On 12-07-2021 20:17, Heitor wrote: Hello, I'm trying to find the minimum requirements (mainly CPU and RAM) for the slurmctld, slurmdbd, and slurmrestd daemons, but I did not find it in the docs. Maybe I missed some page? SchedMD recommends that the slurmctld server should have only a few, but

[slurm-users] Minimum requirements for Slurm daemons?

2021-07-12 Thread Heitor
Hello, I'm trying to find the minimum requirements (mainly CPU and RAM) for the slurmctld, slurmdbd, and slurmrestd daemons, but I did not find them in the docs. Maybe I missed some page? Right now, we are allocating one entire physical machine to each of those daemons, but that sounds like overkill.

[slurm-users] Configless Slurm: DNS SRV record does not work without FQDN on EL8 systems

2021-07-12 Thread Ole Holm Nielsen
With Configless Slurm you can use a DNS SRV record to point to your slurmctld server. We're in the process of testing various CentOS 8 (EL8) alternatives (AlmaLinux, RockyLinux, CentOS 8 Stream), and I've found a strange behavior on all EL8 systems: On CentOS 7.9 compute nodes and servers the
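For reference, the configless setup keys off a DNS SRV record along these lines (the zone and host names are placeholders; 6817 is the default slurmctld port):

```
; BIND zone fragment -- example.com and ctl-host are hypothetical
_slurmctld._tcp.example.com. 3600 IN SRV 10 0 6817 ctl-host.example.com.
```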

Re: [slurm-users] Users logged out when jobs die or complete

2021-07-12 Thread Andrea Carotti
Dear Chris, thanks for the suggestions. I'm running CentOS Stream 8.4. I've done a couple of tests: 1) I've modified the line as suggested, to ProctrackType=proctrack/linuxproc, then restarted slurmctld and the nodes' slurmd (hope that's enough), but it didn't change the behaviour. 2) I've
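For anyone following along, the change Andrea describes is this single slurm.conf line (both slurmctld and every node's slurmd need a restart after editing it):

```
# Track job processes via the process table / session instead of cgroups
ProctrackType=proctrack/linuxproc
```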