I've been watching this exchange with interest, because it is the
closest I have seen to what I want, but I want something slightly
different: 2 processes per node, with the first one bound to one core,
and the second bound to all the rest, with no use of hyperthreads.
Would this be
--map-by ppr
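Or would a rankfile be the cleaner way to express it? Something along
these lines is what I have in mind (untested; node01/node02 are
placeholders, and I am assuming the 128-core nodes discussed below):

rank 0=node01 slot=0
rank 1=node01 slot=1-127
rank 2=node02 slot=0
rank 3=node02 slot=1-127

launched with something like "mpirun -np 4 --rankfile myrankfile ./app".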
The only way I know of to do what you want is
--map-by ppr:32:socket --bind-to core --cpu-list 0,2,4,6,...
where you list out the exact cpus you want to use.
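For the 128-core nodes in question, the expanded version might look
something like this (the -np value, the two-node assumption, and ./your_app
are placeholders; GNU seq is just a shortcut for typing out the 64 even
core numbers):

mpirun -np 128 --map-by ppr:32:socket --bind-to core \
    --cpu-list $(seq -s, 0 2 126) ./your_app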
On Feb 28, 2021, at 9:58 AM, Luis Cebamanos via users <users@lists.open-mpi.org> wrote:
I could do --map-by ppr:32:socket:PE=1 --bind-to core (output below), but
I cannot see a way of mapping to every other core (0, 2, 4, ...).
[epsilon110:1489563] MCW rank 0 bound to socket 0[core 0[hwt 0-1]]:
[BB/../../.. ...] (remainder of the binding map truncated)
Hi Ralph,
The "slot=N" directive saids to "put this proc on core N". In your file,
you stipulate that
>
> rank 0 is to be placed solely on core 0
> rank 1 is to be placed solely on core 2
> etc.
>
That is exactly what I want to achieve, but from the mpirun cmd instead of
using a rankfile, and I a
Did you read the documentation on rankfile? The "slot=N" directive says to
"put this proc on core N". In your file, you stipulate that
rank 0 is to be placed solely on core 0
rank 1 is to be placed solely on core 2
etc.
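Concretely, that rankfile amounts to entries like the following, with
"node01" standing in for your actual hostname:

rank 0=node01 slot=0
rank 1=node01 slot=2
rank 2=node01 slot=4
rank 3=node01 slot=6
...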
That is not what you asked for in your mpirun cmd. You asked that each proc
Hi Ralph,
Thanks for this; however, --map-by ppr:32:socket:PE=2 --bind-to core
reports the same binding as --map-by ppr:32:socket:PE=4 --bind-to
hwthread:
[epsilon104:2861230] MCW rank 0 bound to socket 0[core 0[hwt 0-1]],
socket 0[core 1[hwt 0-1]]: [BB/BB/../../.. ...] (remainder of the binding map truncated)
Your command line is incorrect:
--map-by ppr:32:socket:PE=4 --bind-to hwthread
should be
--map-by ppr:32:socket:PE=2 --bind-to core
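i.e. the full invocation would look something like this (the process count
and ./your_app are placeholders for whatever your job actually runs;
--report-bindings is only there so you can check the result):

mpirun -np 128 --map-by ppr:32:socket:PE=2 --bind-to core \
    --report-bindings ./your_app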
On Feb 28, 2021, at 5:57 AM, Luis Cebamanos via users <users@lists.open-mpi.org> wrote:
I should have said, "I would like to run 128 MPI processes on 2 nodes" and
not 64 like I initially said...
On Sat, 27 Feb 2021, 15:03 Luis Cebamanos wrote:
> Hello OMPI users,
>
> On 128-core nodes, 2 sockets x 64 cores/socket (2 hwthreads/core), I am
> trying to match the behavior of running