Re: [OMPI users] Groups and Communicators

2017-07-27 Thread Diego Avesani
Dear George, Dear all, A question regarding program design: the draft that I have sent to you has to be done many, many times. Does the splitting procedure ensure efficiency? I will try, at least, to create groups and split them. I am a beginner in the MPI groups environment. Really, really th…

Re: [OMPI users] MPI_IN_PLACE

2017-07-27 Thread Gilles Gouaillardet
Volker, since you are only using include 'mpif.h', a workaround is to edit your /.../share/openmpi/mpifort-wrapper-data.txt and simply remove '-lmpi_usempif08 -lmpi_usempi_ignore_tkr'. Cheers, Gilles On 7/27/2017 3:28 PM, Volker Blum wrote: Thanks! If you wish, please also keep me posted…
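The workaround above amounts to deleting two Fortran-module libraries from the mpifort wrapper's link line. A minimal, illustrative Python sketch of that edit is below; it operates on a sample string rather than the real wrapper-data file, and the sample line's contents are an assumption, not a copy of any actual file.

```python
import re

def strip_usempi_flags(text):
    """Remove the two Fortran-module libraries named in the workaround."""
    for flag in ("-lmpi_usempif08", "-lmpi_usempi_ignore_tkr"):
        text = text.replace(flag, "")
    # collapse the double spaces left behind by the removals
    return re.sub(r"[ \t]{2,}", " ", text)

# hypothetical wrapper-data line, for illustration only
sample = "libs=-lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi"
print(strip_usempi_flags(sample))
```

In practice one would edit the installed mpifort-wrapper-data.txt directly (after making a backup), as Gilles suggests.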

Re: [OMPI users] MPI_IN_PLACE

2017-07-27 Thread Volker Blum
Dear Gilles, Thank you! Indeed, this appears to address the issue in my test. I have yet to run a full regression test on our actual code to ensure that there are no other side effects. I don’t expect any, though. ** Interestingly, removing '-lmpi_usempif08 -lmpi_usempi_ignore_tkr' actually f…

Re: [OMPI users] MPI_IN_PLACE

2017-07-27 Thread Volker Blum
Update: > I have yet to run a full regression test on our actual code to ensure that > there are no other side effects. I don’t expect any, though. Indeed. Fixes all regressions that I had observed. Best wishes Volker > On Jul 27, 2017, at 11:47 AM, Volker Blum wrote: > > Dear Gilles, > > T…

Re: [OMPI users] Groups and Communicators

2017-07-27 Thread Diego Avesani
Dear George, Dear all, I have tried to create a simple example. In particular, I would like to use 16 CPUs and to create four groups according to rank, and then a communicator between the masters of each group. I have tried to follow the first part of this example…
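The layout Diego describes (16 ranks, four groups, plus a communicator over the group masters) is usually expressed with two MPI_Comm_split calls, one color per group and one color selecting the masters. Since MPI itself cannot run here, the sketch below only simulates the color/key arithmetic each rank would compute; the choice of the lowest rank in each group as "master" is an assumption.

```python
# Simulate the MPI_Comm_split colours for 16 ranks split into 4 groups.
WORLD_SIZE, N_GROUPS = 16, 4
GROUP_SIZE = WORLD_SIZE // N_GROUPS  # 4 ranks per group

def split_colors(rank):
    """Return (group_color, master_color) for a given world rank.

    group_color feeds the first MPI_Comm_split (one sub-communicator
    per group); master_color feeds the second split, where None stands
    in for MPI_UNDEFINED (rank does not join the masters communicator).
    """
    group_color = rank // GROUP_SIZE
    is_master = (rank % GROUP_SIZE == 0)   # assumed: lowest rank is master
    master_color = 0 if is_master else None
    return group_color, master_color

for rank in range(WORLD_SIZE):
    print(rank, split_colors(rank))
```

In real MPI code each rank passes its own colors to MPI_Comm_split on MPI_COMM_WORLD, using its original rank as the key to preserve ordering.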

Re: [OMPI users] Groups and Communicators

2017-07-27 Thread George Bosilca
This looks good. If performance is critical you can speed up the entire process by using MPI_Comm_create_group instead of the second MPI_COMM_SPLIT. MPI_Comm_create_group is collective only over the resulting communicator and not over the source communicator, so its cost is only dependent on th…
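George's point is that the masters' ranks are known in advance, so only those ranks need to participate in building the masters communicator. The sketch below just computes that rank list for the 16-rank/4-group example (assuming, as before, that the lowest rank of each group is the master); in real code these ranks would be wrapped in an MPI group and passed to MPI_Comm_create_group.

```python
WORLD_SIZE, N_GROUPS = 16, 4
GROUP_SIZE = WORLD_SIZE // N_GROUPS

# Ranks that would form the masters communicator.
masters = [g * GROUP_SIZE for g in range(N_GROUPS)]
# Only these four ranks would call MPI_Comm_create_group; the other
# twelve skip the call entirely. With MPI_Comm_split, by contrast,
# all 16 ranks of the source communicator must participate.
print(masters)
```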

Re: [OMPI users] Questions about integration with resource distribution systems

2017-07-27 Thread Dave Love
"r...@open-mpi.org" writes: > Oh no, that's not right. Mpirun launches daemons using qrsh and those > daemons spawn the app's procs. SGE has no visibility of the app at all Oh no, that's not right. The whole point of tight integration with remote startup using qrsh is to report resource usage a

Re: [OMPI users] NUMA interaction with Open MPI

2017-07-27 Thread Dave Love
Gilles Gouaillardet writes: > Adam, > > keep in mind that by default, recent Open MPI bind MPI tasks > - to cores if -np 2 > - to NUMA domain otherwise Not according to ompi_info from the latest release; it says socket. > (which is a socket in most cases, unless > you are running on a Xeon Phi)
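The default binding policy under discussion is a simple rule of thumb: bind to core for two or fewer processes, otherwise to a larger locality domain (NUMA per Gilles, socket per ompi_info in Dave's build — the thread itself leaves this unresolved). A tiny illustrative encoding of the rule as Gilles states it, not an authoritative description of any Open MPI version:

```python
def default_binding(n_procs):
    """Default bind-to policy as described in this thread (illustrative):
    core for -np 2 or fewer, otherwise the NUMA domain (which ompi_info
    may report as 'socket' depending on version and hardware)."""
    return "core" if n_procs <= 2 else "numa"

print(default_binding(2), default_binding(16))
```

The actual behaviour on a given system is best checked empirically with mpirun's --report-bindings option.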

Re: [OMPI users] NUMA interaction with Open MPI

2017-07-27 Thread Gilles Gouaillardet
Dave, On 7/28/2017 12:54 AM, Dave Love wrote: Gilles Gouaillardet writes: Adam, keep in mind that by default, recent Open MPI bind MPI tasks - to cores if -np 2 - to NUMA domain otherwise Not according to ompi_info from the latest release; it says socket. thanks, i will double check that.