> On Nov 6, 2017, at 7:46 AM, Florian Lindner <mailingli...@xgm.de> wrote:
>
> On 05.11.2017 at 20:57, r...@open-mpi.org wrote:
>>
>>> On Nov 5, 2017, at 6:48 AM, Florian Lindner <mailingli...@xgm.de> wrote:
>>>
>>> On 04.11.2017 at 00:05, r...@open-mpi.org wrote:
>>>> Yeah, there isn’t any way that is going to work in the 2.x series. I’m not sure it was ever fixed, but you might try the latest 3.0, the 3.1rc, and even master.
>>>>
>>>> The only methods that are known to work are:
>>>>
>>>> * connecting processes within the same mpirun - e.g., using comm_spawn
>>>
>>> That is not an option for our application.
>>>
>>>> * connecting processes across different mpiruns, with the ompi-server daemon as the rendezvous point
>>>>
>>>> The old command line method (i.e., what you are trying to use) hasn’t been much on the radar. I don’t know if someone else has picked it up or not...
>>>
>>> What do you mean by "the old command line method"?
>>>
>>> Isn't the ompi-server just another means of exchanging port names, i.e. the same thing I do using files?
>>
>> No, it isn’t - there is a handshake that ompi-server facilitates.
>>
>>> In my understanding, using Publish_name and Lookup_name or exchanging the information using files (or command line or stdin) shouldn't have any impact on the connection (Connect / Accept) itself.
>>
>> Depends on the implementation underneath connect/accept.
>>
>> The initial MPI standard authors had fixed in their minds that the connect/accept handshake would take place over a TCP socket, and so no intermediate rendezvous broker was involved. That isn’t how we’ve chosen to implement it this time around, and so you do need the intermediary. If/when some developer wants to add another method, they are welcome to do so - but the general opinion was that the broker requirement was fine.
>
> Ok.
> Just to make sure I understood correctly:
>
> The MPI Ports functionality (chapter 10.4 of MPI 3.1), mainly consisting of MPI_Open_port, MPI_Comm_accept and MPI_Comm_connect, is not usable without running an ompi-server as a third process?
Yes, that’s correct. The reason for moving in that direction is that the resource managers, as they continue to integrate PMIx into them, are going to be providing that third party. This will make connect/accept much easier to use, and a great deal more scalable. See https://github.com/pmix/RFCs/blob/master/RFC0003.md for an explanation.

> Thanks again,
> Florian
> _______________________________________________
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
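To make the ompi-server rendezvous concrete, here is a minimal sketch of the connect/accept pattern discussed above. The MPI calls are standard (chapter 10.4 of MPI 3.1); the `ompi-server --report-uri` and `mpirun --ompi-server file:...` command lines in the comments follow Open MPI's documented usage, but exact flags may vary by release. The service name "my_service" and the file name "uri.txt" are arbitrary examples, not names from this thread.

```c
/* Sketch: two independent mpiruns rendezvous through ompi-server.
 * Assumed launch sequence (Open MPI; flags may differ by release):
 *   ompi-server --report-uri uri.txt
 *   mpirun -n 1 --ompi-server file:uri.txt ./a.out server
 *   mpirun -n 1 --ompi-server file:uri.txt ./a.out client
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm intercomm;

    MPI_Init(&argc, &argv);

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        /* Server side: open a port and publish it under a well-known
         * service name; ompi-server acts as the name broker that
         * facilitates the handshake. */
        MPI_Open_port(MPI_INFO_NULL, port_name);
        MPI_Publish_name("my_service", MPI_INFO_NULL, port_name);
        MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF,
                        &intercomm);
        MPI_Unpublish_name("my_service", MPI_INFO_NULL, port_name);
        MPI_Close_port(port_name);
    } else {
        /* Client side: resolve the port through the same broker,
         * then connect. */
        MPI_Lookup_name("my_service", MPI_INFO_NULL, port_name);
        MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF,
                         &intercomm);
    }

    /* intercomm now spans both mpiruns; use it, then tear it down. */
    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}
```

Note that exchanging the port string through a file instead of MPI_Publish_name/MPI_Lookup_name is exactly the approach that fails here: per the discussion above, Open MPI's connect/accept implementation needs the ompi-server intermediary for the handshake, not merely for name exchange.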