On 3/27/19 8:39 AM, Mahmood Naderan wrote:

> mpirun pw.x -i mos2.rlx.in

You will need to read the documentation for this:

https://slurm.schedmd.com/heterogeneous_jobs.html

Especially note both of these:

IMPORTANT: The ability to execute a single application across more than one job allocation does not work with all MPI implementations or Slurm MPI plugins. Slurm's ability to execute such an application can be disabled on the entire cluster by adding "disable_hetero_steps" to Slurm's SchedulerParameters configuration parameter.

IMPORTANT: While the srun command can be used to launch heterogeneous job steps, mpirun would require substantial modification to support heterogeneous applications. We are aware of no such mpirun development efforts at this time.

So at the very least you'll need to use srun, not mpirun, and confirm that the MPI you are using supports this Slurm feature.
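
For what it's worth, here's a rough sketch of what a heterogeneous batch job launched with srun might look like. The partition names and task counts are made up, and the exact keywords depend on your Slurm version (older releases use "packjob" / --pack-group instead of "hetjob" / --het-group), so treat it as a starting point and check it against the page above:

  #!/bin/bash
  # First heterogeneous component
  #SBATCH --partition=big_mem --ntasks=4
  # Separator between components ("packjob" on older Slurm releases)
  #SBATCH hetjob
  # Second heterogeneous component
  #SBATCH --partition=compute --ntasks=16

  # Launch a single pw.x across both components with srun (not mpirun),
  # assuming your MPI and the cluster's Slurm config allow it.
  srun --het-group=0,1 pw.x -i mos2.rlx.in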

> Also, the partition names are weird. We have these entries:

Partitions are defined by the systems administrators; you'd need to speak with them about their reasoning for those.
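
If you just want to see how those partitions are configured (nodes, limits, allowed accounts and so on), something like the following will show you, though only the admins can tell you why they're set up that way (the sinfo format string here is just an example):

  scontrol show partition
  sinfo -o "%P %l %D %N"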

All the best,
Chris
--
  Chris Samuel  :  http://www.csamuel.org/  :  Berkeley, CA, USA
