The output is certainly not enough to judge, but my first guess would be that your MPI (which one is it, by the way?) does not support the PMI flavor that is enabled in Slurm. Note also that Slurm now supports three ways of doing PMI, and from the info you have provided it is not clear which one you are using. To judge with a reasonable level of confidence, the following info is needed:
* which MPI implementation is used;
* what version;
* how it was configured.
If you do not pass the "--mpi" option to srun, you are using PMI1, which is implemented in the Slurm core library. There are other implementations available:
* the PMI2 plugin ("srun --mpi=pmi2")
* the PMIx plugin ("srun --mpi=pmix")
Those are more performant options but need some additional work: for the PMI2 plugin you need to install the library from contribs/pmi2, and for PMIx you need to build Slurm with --with-pmix=<path-to-pmix>.
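As a quick sanity check (a sketch assuming a reasonably recent Slurm; the benchmark binary name "bt.C.121" and the PMIx install prefix below are just illustrative), you can list the PMI plugins your installation was actually built with and re-run the job against each one:

    # List the MPI/PMI plugin types available in this Slurm installation
    srun --mpi=list

    # Show the cluster-wide default that applies when --mpi is omitted
    scontrol show config | grep MpiDefault

    # Re-run the benchmark selecting a plugin explicitly; the
    # application should then report all 121 active processes
    srun --mpi=pmi2 -n 121 ./bt.C.121
    srun --mpi=pmix -n 121 ./bt.C.121

    # Building Slurm with PMIx support (example install prefix):
    ./configure --with-pmix=/usr/local/pmix

If "pmi2" or "pmix" does not appear in the "srun --mpi=list" output, the corresponding plugin was never built, which would be consistent with every rank coming up as a singleton under srun.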
2018-04-13 8:33 GMT-07:00 Mahmood Naderan <mahmood...@gmail.com>:

> I tried with one of the NAS benchmarks (BT) with 121 threads since the
> number of cores should be square. With srun, I get
>
> WARNING: compiled for 121 processes
> Number of active processes: 1
>
> 0 1 408 408 408
> Problem size too big for compiled array sizes
>
> However, with mpirun, it seems to be fine
>
> Number of active processes: 121
>
> Time step 1
> Time step 20
>
> Regards,
> Mahmood
>
> On Fri, Apr 13, 2018 at 5:46 PM, Chris Samuel <ch...@csamuel.org> wrote:
> > On 13/4/18 7:19 pm, Mahmood Naderan wrote:
> >
> >> I see some old posts on the web about performance comparison of srun vs.
> >> mpirun. Is that still an issue?
> >
> > Just running an MPI hello world program is not going to test that.
> >
> > You need to run an actual application that is doing a lot of
> > computation and communications instead. Something like NAMD
> > or a synthetic benchmark like HPL.
> >
> > All the best,
> > Chris
> > --
> > Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

--
Best regards,
Artem Y. Polyakov