At the moment, this is still in "design" - all I've done so far is breadboard a connection that effectively places them in the same comm_world. Each process in both jobs is given the complete comm info for all of the processes. The details of the MPI interface, however, remain to be determined by my MPI colleagues.

The mode isn't documented yet since it isn't available in any of the current tarballs or releases. I'm working on the infrastructure to allow multiple applications to collaborate across multiple clusters on a separate development branch - this capability is buried in that work.

I'm sure documentation will follow as soon as we get a little further toward completing the implementation.
Ralph


Chris Gottbrath wrote:
Ralph, 

Interesting. How would the two jobs be 'connected' in this
scheme? 

Would they share a single MPI_COMM_WORLD, or would they both
be created with an intercommunicator to the other job's
MPI_COMM_WORLD?

If so, how would that intercommunicator be obtained in each
program?

Is this mode documented anywhere?

Cheers,
Chris

--
Chris Gottbrath
Partner Technologies Engineer    Etnus, LLC
chris.gottbr...@etnus.com        http://www.etnus.com/
Voice: 508-652-7700 x7735        Fax: 508-652-7787

---------- Forwarded message ----------
Date: Mon, 27 Mar 2006 06:44:04 -0700
From: Ralph Castain <r...@lanl.gov>
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] How to establish communication between two separate
    COM WORLD

Actually, in a not-too-distant future release there will be an option to mpirun called "--connect"
that will allow you to specify that the new job is to be connected to an earlier job. The
run-time environment will then spawn the new job and exchange all required communication information
between the two jobs for you. You could therefore accomplish your desired operation with:

  
  nohup mpirun --np xx app1
      (system returns job number to you)

  mpirun --np yy --connect job1 app2
      (system starts app2 and connects it to job1)

Should be a little more transparent. No specific coding for making the connection would be required
in your application itself.

Ralph


Jean Latour wrote:
      Hello,

      It seems to me there is only one way to establish communication between
      two MPI_COMM_WORLDs: use MPI_Open_port to obtain a port name (an
      IP + port address), and then MPI_Comm_connect / MPI_Comm_accept.

      To ease communicating the port name, MPI_Publish_name / MPI_Lookup_name
      can also be used, with the constraint that the "publish" must be done
      before the "lookup", so some synchronization between the processes is
      needed anyway.

      Simple examples can be found in the MPI handbook "Using MPI-2"
      by William Gropp et al.
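
      A minimal sketch of that approach (my own illustration, not from the
      handbook; the service name "app1-port" is an arbitrary choice that both
      programs must agree on):

          /* accept.c -- run this side first */
          #include <mpi.h>
          int main(int argc, char **argv)
          {
              char port[MPI_MAX_PORT_NAME];
              MPI_Comm inter;
              MPI_Init(&argc, &argv);
              /* The system chooses the address; we just publish it. */
              MPI_Open_port(MPI_INFO_NULL, port);
              MPI_Publish_name("app1-port", MPI_INFO_NULL, port);
              /* Blocks until the other job connects; 'inter' is an
                 intercommunicator spanning both MPI_COMM_WORLDs. */
              MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
              /* ... exchange messages over 'inter' ... */
              MPI_Unpublish_name("app1-port", MPI_INFO_NULL, port);
              MPI_Close_port(port);
              MPI_Comm_disconnect(&inter);
              MPI_Finalize();
              return 0;
          }

          /* connect.c -- run this side after the name is published */
          #include <mpi.h>
          int main(int argc, char **argv)
          {
              char port[MPI_MAX_PORT_NAME];
              MPI_Comm inter;
              MPI_Init(&argc, &argv);
              /* Retrieve the published port name, then connect to it. */
              MPI_Lookup_name("app1-port", MPI_INFO_NULL, port);
              MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
              /* ... exchange messages over 'inter' ... */
              MPI_Comm_disconnect(&inter);
              MPI_Finalize();
              return 0;
          }

      Note the accept side must be started first, since the lookup fails if
      the name has not yet been published.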

      Best Regards,
      Jean

      Ali Eghlima wrote:



            Hello,

            I have read the MPI-2 documents as well as the FAQ, and I am
            confused about the best way to establish communication between two
            MPI_COMM_WORLDs that have been created by two mpiexec calls on the
            same node.

            mpiexec -conf config1
                 This starts 20 processes on 7 nodes

            mpiexec -conf config2
                 This starts 18 processes on 5 nodes

            I would appreciate any comments or a pointer to a document or example.

            Thanks

            Ali,






  
