[OMPI users] How to get a verbose compilation?
How do I get the build system to echo the commands it is issuing? My Fortran compiler is throwing an error on one file, and I need to see the full compiler command line with all options in order to debug. Thanks
Re: [OMPI users] How to get a verbose compilation?
On Sat, Aug 5, 2017 at 3:09 PM, Jeff Hammond wrote:
> make V=1

Thank you!
[OMPI users] MPI_Neighbor_alltoallv questions
I'm looking at replacing/modernizing an old application-specific MPI layer that was written around the turn of the century (that sounds odd). A major part of it is a mesh subdomain "halo exchange" for domain decomposition. When I dug into the implementation I was a little surprised to see that it used point-to-point communication with Isend/Recv, issuing the sends in rank-randomized order (for better performance?), rather than Alltoallv, which I think would have been a more straightforward alternative (but my MPI understanding is limited). Some questions:

1) It seems that defining a virtual topology and using Neighbor_alltoallv is now a perfect match for this problem. Is there any reason today not to prefer this over individual send/recv?

2) I'm baffled about what I'm supposed to do with the possible reordering of ranks that MPI_Dist_graph_create_adjacent does. I understand the benefit of matching the communication pattern between ranks to the underlying hardware topology; however, the processes are already pinned (?) to specific cores, so I'm not sure what the relevance of the assigned rank is -- it's just a label, no? Or am I expected to migrate my data; e.g., if old-comm rank p becomes new-comm rank q, am I supposed to migrate the data from old rank p to old rank q before using the new communicator?

Thanks!
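For reference, a minimal sketch (in C, with heavy hedging) of what the Neighbor_alltoallv approach described in question 1 could look like. The neighbor list, counts, displacements, and buffers are placeholders for whatever the application's mesh decomposition provides, and in a real code the graph communicator would typically be created once and reused rather than rebuilt for every exchange:

#include <mpi.h>

/* Hedged sketch: halo exchange via a distributed graph communicator.
 * nnbr/nbr and the count/displacement arrays are hypothetical names for
 * whatever the application's decomposition provides. */
void halo_exchange(MPI_Comm comm, int nnbr, const int nbr[],
                   const int sendcounts[], const int sdispls[],
                   const double *sendbuf,
                   const int recvcounts[], const int rdispls[],
                   double *recvbuf)
{
    MPI_Comm graph_comm;

    /* Symmetric halo pattern: each neighbor is both a source and a
     * destination, so the same list is passed twice.  reorder = 0 keeps
     * the original rank numbering. */
    MPI_Dist_graph_create_adjacent(comm,
                                   nnbr, nbr, MPI_UNWEIGHTED,
                                   nnbr, nbr, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, /* reorder = */ 0,
                                   &graph_comm);

    /* Counts and displacements are ordered to match the neighbor list
     * given at communicator creation. */
    MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                           recvbuf, recvcounts, rdispls, MPI_DOUBLE,
                           graph_comm);

    MPI_Comm_free(&graph_comm);
}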
[OMPI users] Where can a graph communicator be used?
I've been successful at using MPI_Dist_graph_create_adjacent to create a new communicator with a graph topology and using it with MPI_Neighbor_alltoallv. But I have a few questions:

1. Where can I use this communicator? Can it be used with the usual stuff like MPI_Allgather, or do I need to hang onto the original communicator (MPI_COMM_WORLD actually) for that purpose?

2. It turns out that my graph sometimes isn't symmetric (but I think I understood that is okay). I usually just need to send data in one direction, but occasionally it needs to go in the reverse direction. Am I right that I need a second graph communicator, built with the reverse edges, to use with MPI_Neighbor_alltoallv for that communication? My testing seems to indicate so, but I'm not absolutely certain.

3. Is there any real advantage to using the non-symmetric graph, or should I just symmetrize it and use the one communicator for both directions?

Thanks for your help!
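Regarding question 2, a minimal sketch of building both a forward and a reversed graph communicator from the same edge lists, assuming a reversed graph is in fact what is needed. The names nsrc/src and ndst/dst are hypothetical stand-ins for the application's (possibly non-symmetric) neighbor lists:

#include <mpi.h>

/* Hedged sketch: a second graph communicator with the edges reversed,
 * for the occasional "backwards" exchange. */
void make_graph_comms(int nsrc, const int src[], int ndst, const int dst[],
                      MPI_Comm *fwd_comm, MPI_Comm *rev_comm)
{
    /* Forward graph: this rank receives from src[] and sends to dst[]. */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   nsrc, src, MPI_UNWEIGHTED,
                                   ndst, dst, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, fwd_comm);

    /* Reverse graph: swap the two lists, so MPI_Neighbor_alltoallv on this
     * communicator moves data along the same edges in the opposite
     * direction. */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   ndst, dst, MPI_UNWEIGHTED,
                                   nsrc, src, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, rev_comm);
}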
Re: [OMPI users] Where can a graph communicator be used?
On Mon, Feb 14, 2022 at 9:01 PM George Bosilca wrote:
> On Mon, Feb 14, 2022 at 6:33 PM Neil Carlson via users <users@lists.open-mpi.org> wrote:
>> 1. Where can I use this communicator? Can it be used with the usual stuff like MPI_Allgather, or do I need to hang onto the original communicator (MPI_COMM_WORLD actually) for that purpose?
>
> Anywhere a communicator is used. You just have to be careful and understand what is the scope of the communication you use them with.

Ah! I was thinking that this graph topology information might only be relevant to the MPI_Neighbor collectives. But would it be proper then to think of a communicator as having an implicit totally-connected graph topology that is replaced by this one? If so, would Bcast, for example, only send from the root rank to those ranks it was a source for in the graph topology? Or would Gather on a rank only receive values from those ranks that were a source for it? What would the difference then be between Alltoallv, say, and Neighbor_alltoallv?
Re: [OMPI users] Where can a graph communicator be used?
That clears up my uncertainty then. Thanks!

On Tue, Feb 15, 2022 at 9:03 AM George Bosilca wrote:
> Sorry, I should have been more precise in my answer. Topology information is only used during neighborhood communications via the specialized API; in all other cases the communicator would behave as a normal, fully connected, communicator.
>
> George.
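To make the answer concrete, a small hedged illustration in C. Here graph_comm stands for a communicator created with MPI_Dist_graph_create_adjacent; an ordinary collective such as MPI_Allgather involves every rank of graph_comm, exactly as it would on MPI_COMM_WORLD, and only the MPI_Neighbor_* routines consult the attached topology:

#include <stdlib.h>
#include <mpi.h>

/* Hedged illustration: graph_comm is assumed to carry a graph topology,
 * which ordinary collectives simply ignore. */
void allgather_ranks(MPI_Comm graph_comm)
{
    int rank, nprocs;
    MPI_Comm_rank(graph_comm, &rank);
    MPI_Comm_size(graph_comm, &nprocs);

    int *gathered = malloc(nprocs * sizeof *gathered);

    /* Collects one value from every rank in graph_comm, just as it would
     * on MPI_COMM_WORLD; the graph edges play no role here. */
    MPI_Allgather(&rank, 1, MPI_INT, gathered, 1, MPI_INT, graph_comm);

    free(gathered);
}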