Mahmoud,

By default 1 slot = 1 core, which is why you need --oversubscribe or --use-hwthread-cpus to run 16 MPI tasks.
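For example (keeping your "lammps ..." command line as a placeholder for whatever arguments you actually pass):

    mpirun --use-hwthread-cpus -np 16 lammps ...
    mpirun --oversubscribe -np 16 lammps ...

The first treats each hardware thread as a slot, so the 16 ranks are still bound; the second simply allows more ranks than slots, but binding is then disabled, as explained in the quoted message below.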
It seems your LAMMPS job benefits from hyper-threading. Some applications behave like this, and it is not odd a priori.

Cheers,

Gilles

On Wednesday, June 6, 2018, r...@open-mpi.org <r...@open-mpi.org> wrote:
> I'm not entirely sure what you are asking here. If you use --oversubscribe,
> we do not bind your processes and you suffer some performance penalty for it.
> If you want to run one process per hardware thread and retain binding, then
> do not use --oversubscribe and instead use --use-hwthread-cpus.
>
> On Jun 6, 2018, at 3:06 AM, Mahmood Naderan <mahmood...@gmail.com> wrote:
>
> Hi,
> On a Ryzen 1800X, which has 8 cores and 16 threads, when I run
> "mpirun -np 16 lammps ..." I get an error that there are not enough slots.
> It seems that the --oversubscribe option will fix that.
>
> The odd thing is that "mpirun -np 8 lammps" takes about 46 minutes to
> complete the job, while "mpirun --oversubscribe -np 16 lammps" takes
> about 39 minutes.
>
> I want to be sure that with "-np 16" the processes use the logical cores.
> Is that confirmed with --oversubscribe?
>
> Regards,
> Mahmood
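As for confirming where the 16 ranks actually land: one way is mpirun's --report-bindings option, which prints each process's binding when the job starts, for example (again with "lammps ..." standing in for your real command line, and assuming a reasonably recent Open MPI; the exact output format varies between versions):

    mpirun --report-bindings --use-hwthread-cpus -np 16 lammps ...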
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users