Hi, I'm relatively new to Slurm; I've mostly been using Sun Grid Engine (SGE). I have a cluster of 3 machines, each with 8 cores. In SGE I allocate the PE slots per machine, so if I submit 24 jobs, all 24 run at once (each job uses 1 core). However, when I submit jobs in Slurm through sbatch, I can only get 3 jobs to run at a time, even when I define cpus_per_task. I was told to use OpenMPI for this. I'm not familiar with OpenMPI, so I did an apt install of libopenmpi-dev.
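For reference, this is roughly what my Slurm submission script looks like (a rough sketch; the job name, the script it calls, and the filename argument are just placeholders):

    #!/bin/bash
    #SBATCH --job-name=myjob         # placeholder job name
    #SBATCH --ntasks=1               # one task per job
    #SBATCH --cpus-per-task=1        # each job should only need 1 core

    # the actual work; takes an input filename as the first argument
    ./process_file.sh "$1"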
Do I have to loop through my job submissions with mpirun and then run sbatch outside of that? Again, I'm still new to this. With SGE it was pretty straightforward: all I had to do was loop through my files and run qsub -N {name of job} script.sh {filename}. I'm not sure how I would do the same thing here.
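For comparison, the SGE loop I've been using looks roughly like the first snippet below, and the second is my naive translation to sbatch (the file glob, job names, and script name are placeholders):

    # What I do today with SGE: one single-core job per input file
    for f in data/*.txt; do
        qsub -N "job_$(basename "$f")" script.sh "$f"
    done

    # My straight translation to Slurm, which only runs 3 jobs at a time
    for f in data/*.txt; do
        sbatch --job-name="job_$(basename "$f")" script.sh "$f"
    done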