I’m afraid none of the current options is going to do that right now. I’ll put
a note on my to-do list to look at this, but I can’t promise when I’ll get to
it.
> On Apr 24, 2017, at 3:13 AM, Heinz-Ado Arnolds wrote:
>
> Dear Ralph,
>
> thanks for this new hint. Unfortunately I don't see h
Dear Ralph,
thanks for this new hint. Unfortunately I don't see how that would fulfill all
my requirements:
I'd like to have 8 Open MPI jobs on 2 nodes -> 4 Open MPI jobs per node -> 2 per
socket, each executing one OpenMP job with 5 threads:
mpirun -np 8 --map-by ppr:4:node:pe=5 ...
How can I
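For reference, a sketch of the full launch this implies, with the OpenMP variables
forwarded explicitly (the binary name ./hybrid_app is only a placeholder):
export OMP_NUM_THREADS=5
export OMP_PROC_BIND=true
mpirun -np 8 --map-by ppr:4:node:pe=5 -x OMP_NUM_THREADS -x OMP_PROC_BIND ./hybrid_app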
Sorry for the delayed response. I'm glad that option solved the problem. We'll have
to look at that configure option - shouldn't be too hard.
As for the mapping you requested - no problem! Here’s the cmd line:
mpirun --map-by ppr:1:core --bind-to hwthread
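Adding -report-bindings lets you verify the resulting placement; a hedged example,
with ./a.out as a placeholder binary:
mpirun --map-by ppr:1:core --bind-to hwthread -report-bindings ./a.out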
Ralph
> On Apr 19, 2017, at 2:51 AM, Heinz
Dear Ralph,
thanks a lot for this valuable advice. Binding now works as expected!
Since adding the ":pe=" option, I'm getting warnings:
WARNING: a request was made to bind a process. While the system
supports binding the process itself, at least one node does NOT
support binding memory to
You can always specify a particular number of cpus to use for each process by
adding it to the map-by directive:
mpirun -np 8 --map-by ppr:2:socket:pe=5 --use-hwthread-cpus -report-bindings
--mca plm_rsh_agent "qrsh" ./myid
would map 2 processes to each socket, binding each process to 5 HTs on
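If the goal is whole cores rather than single hardware threads, a hedged variant
(dropping --use-hwthread-cpus, so that pe= should count cores instead of HTs) would
be:
mpirun -np 8 --map-by ppr:2:socket:pe=5 -report-bindings --mca plm_rsh_agent "qrsh" ./myid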
On 13.04.2017 15:20, gil...@rist.or.jp wrote:
...
> in your second case, there are 2 things
> - MPI binds to socket, that is why two MPI tasks are assigned the same
> hyperthreads
> - the GNU OpenMP runtime looks unable to figure out that 2 processes use the
> same cores, and hence ends up binding
>
Dear Gilles,
thanks a lot for your response!
1. You're right, my mistake: I forgot to "export" OMP_PROC_BIND in my
job script. Now this example is working nearly as expected:
[pascal-1-07:25617] MCW rank 0 bound to socket 0[core 0[hwt 0-1]], socket
0[core 1[hwt 0-1]], socket 0[core
Heinz-Ado,
it seems the OpenMP runtime did *not* bind the OMP threads at all, even though
that was requested, and the root cause could be that the OMP_PROC_BIND
environment variable was not propagated.
Can you try
mpirun -x OMP_PROC_BIND ...
and see if it helps?
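Note that -x also accepts an explicit value, so a hedged alternative that does not
rely on the local shell environment is:
mpirun -x OMP_PROC_BIND=true ...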
Cheers,
On 4/13/2017 12:23 AM, Heinz-Ado Arno
Dear Gilles,
thanks for your answer.
- compiler: gcc-6.3.0
- OpenMP environment vars: OMP_PROC_BIND=true, GOMP_CPU_AFFINITY not set
- the hyperthread a given OpenMP thread is on: it's printed in the output below
as a 3-digit number after the first ",", obtained via sched_getcpu() in the
OpenMP test code
That should be a two-step tango:
- Open MPI binds an MPI task to a socket
- the OpenMP runtime binds the OpenMP threads to cores (or hyperthreads) inside
the socket assigned by Open MPI
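For the second step, the knobs are the standard OpenMP environment variables; a
hedged example of typical settings for the GNU runtime (values only illustrative):
export OMP_PROC_BIND=close
export OMP_PLACES=cores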
Which compiler are you using?
Do you set any environment variables to direct OpenMP to bind threads?
Also, how do
Open MPI isn’t doing anything wrong - it is doing exactly what it should, and
exactly what you are asking it to do. The problem you are having is that OpenMP
isn’t placing the threads exactly where you would like inside the process-level
“envelope” that Open MPI has bound the entire process to.
Dear rhc,
to make it clearer what I am trying to achieve, I have collected some examples for
several combinations of command line options. It would be great if you could find
time to look at these below. The most promising one is example "4".
I'd like to have 4 MPI jobs, each starting 1 OpenMP job with 10 threads,
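On 2 nodes with 2 x 10-core sockets, a hedged, untested sketch of a mapping aimed
at that layout (./myid as a placeholder) might be:
mpirun -np 4 --map-by ppr:1:socket:pe=10 -report-bindings ./myid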
I’m not entirely sure I understand your reference to “real cores”. When we bind
you to a core, we bind you to all the HT’s that comprise that core. So, yes,
with HT enabled, the binding report will list things by HT, but you’ll always
be bound to the full core if you tell us bind-to core
The de
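A quick hedged way to see the difference in the binding report (./a.out is a
placeholder):
mpirun -np 4 --bind-to core -report-bindings ./a.out
mpirun -np 4 --bind-to hwthread -report-bindings ./a.out
The first should report each rank bound to both HTs of a core, the second to a
single HT.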
Dear OpenMPI users & developers,
I'm trying to distribute my jobs (with SGE) to a machine with a certain number
of nodes, each node having 2 sockets, each socket having 10 cores & 10
hyperthreads. I'd like to use only the real cores, no hyperthreading.
lscpu -a -e
CPU NODE SOCKET CORE L1d:L1i:L2
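For a node like this (2 sockets x 10 cores, 2 HTs per core), a hedged example of a
pure-MPI launch kept on physical cores, with ./a.out as a placeholder (by default
Open MPI counts cores, not hardware threads, as the cpus it binds to):
mpirun -np 20 --map-by core --bind-to core -report-bindings ./a.out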