Correct, I am using 1.2.6 and not running a persistent daemon (thank you for
the link, Pak, by the way).
However, in the Client/Server test thread I posted a complete example where
I tried to run both applications through the same mpirun command and still
hit an internal error:
http://www.open-mpi.org/community/lists/users/2008/04/5537.php
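
In case it's useful, this is roughly how I launched both sides under one
mpirun, with Open MPI's colon-separated MPMD syntax (the ./server executable
name is just what I call it locally):

/usr/local/bin/mpirun -np 2 ./server myfriend : -np 2 ./client myfriend

And here is a minimal sketch of the matching server side. It publishes one
name per rank with the standard MPI_Open_port/MPI_Publish_name/
MPI_Comm_accept sequence, mirroring the "%s-%d" service names the client
code below constructs:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char* argv[])
{
   int rank;
   float data[100];
   char reply[12] = "hello world";   /* 11 chars + terminating NUL */
   char port[MPI_MAX_PORT_NAME];
   char service[64];
   MPI_Comm intercomm;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);

   if( argc < 2 ) {
     fprintf(stderr, "Require service name\n");
     MPI_Finalize();
     exit(-1);
   }

   /* One service name per rank, matching the client's "%s-%d" format */
   sprintf(service, "%s-%d", argv[1], rank);

   /* Open a port and publish it under the agreed service name */
   MPI_Open_port(MPI_INFO_NULL, port);
   MPI_Publish_name(service, MPI_INFO_NULL, port);
   printf("Server %d published service %s on port %s\n", rank, service, port);

   /* Block until the matching client rank connects */
   MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm);
   printf("Server %d accepted a connection\n", rank);

   /* Mirror the client: receive its 100 floats, send back a short string */
   MPI_Recv(data, 100, MPI_FLOAT, 0, MPI_ANY_TAG, intercomm, &status);
   MPI_Send(reply, 12, MPI_CHAR, 0, 0, intercomm);

   MPI_Unpublish_name(service, MPI_INFO_NULL, port);
   MPI_Close_port(port);
   MPI_Comm_disconnect(&intercomm);
   MPI_Finalize();
   return 0;
}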

On Mon, May 5, 2008 at 10:18 AM, Ralph Castain <r...@lanl.gov> wrote:

> I assume you are using some variant of OMPI 1.2?
>
> When last I checked, which admittedly was quite a while ago, this worked
> on the 1.2.x series. However, I note something here that may be a
> problem. In the 1.2.x series, we do not have a global publish/lookup
> service - the application doing the publish must be launched by the same
> mpirun as the application doing the lookup.
>
> The code below only does the lookup, and appears to be asking that you
> provide some server name. I assume you are somehow looking up the name of
> the mpirun that launched the application that did the publish, and hoping
> the two will cross-connect? Unfortunately, I don't believe the 1.2.x code
> is smart enough to figure out how to do that.
>
> This is cleaned up in the upcoming 1.3 release and should work much more
> smoothly.
>
> Ralph
>
>
>
> On 4/27/08 6:58 PM, "Alberto Giannetti" <albertogianne...@gmail.com>
> wrote:
>
> > I am having an error using MPI_Lookup_name. The same program works
> > fine when using MPICH:
> >
> >
> > /usr/local/bin/mpiexec -np 2 ./client myfriend
> > Processor 0 (662, Sender) initialized
> > Processor 0 looking for service myfriend-0
> > Processor 1 (664, Sender) initialized
> > Processor 1 looking for service myfriend-1
> > [local:00662] *** An error occurred in MPI_Lookup_name
> > [local:00662] *** on communicator MPI_COMM_WORLD
> > [local:00662] *** MPI_ERR_NAME: invalid name argument
> > [local:00662] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > [local:00664] *** An error occurred in MPI_Lookup_name
> > [local:00664] *** on communicator MPI_COMM_WORLD
> > [local:00664] *** MPI_ERR_NAME: invalid name argument
> > [local:00664] *** MPI_ERRORS_ARE_FATAL (goodbye)
> >
> >
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <unistd.h>  /* sleep() */
> > #include <mpi.h>
> >
> > int main(int argc, char* argv[])
> > {
> >    int rank, i;
> >    float data[100];
> >    char cdata[64];
> >    char myport[MPI_MAX_PORT_NAME];
> >    char myservice[64];
> >    MPI_Comm intercomm;
> >    MPI_Status status;
> >    int intercomm_size;
> >
> >    MPI_Init(&argc, &argv);
> >    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> >    printf("Processor %d (%d, Sender) initialized\n", rank, getpid());
> >
> >    if( argc < 2 ) {
> >      fprintf(stderr, "Require server name\n");
> >      MPI_Finalize();
> >      exit(-1);
> >    }
> >
> >    for( i = 0; i < 100; i++ )
> >      data[i] = i;
> >
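> >    /* Build the per-rank service name, e.g. "myfriend-0" */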
> >    sprintf(myservice, "%s-%d", argv[1], rank);
> >    printf("Processor %d looking for service %s\n", rank, myservice);
> >    MPI_Lookup_name(myservice, MPI_INFO_NULL, myport);
> >    printf("Processor %d found port %s looking for service %s\n",
> > rank, myport, myservice);
> >
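> >    /* Retry until the server side accepts the connection */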
> >    while( 1 ) {
> >      printf("Processor %d connecting to '%s'\n", rank, myport);
> >      if( MPI_Comm_connect(myport, MPI_INFO_NULL, 0, MPI_COMM_SELF,
> >                           &intercomm) == MPI_SUCCESS )
> >        break;
> >      sleep(1);
> >    }
> >    printf("Processor %d connected\n", rank);
> >
> >    MPI_Comm_remote_size(intercomm, &intercomm_size);
> >    printf("Processor %d remote comm size is %d\n", rank, intercomm_size);
> >
> >    printf("Processor %d sending data through intercomm to rank 0...
> > \n", rank);
> >    MPI_Send(data, 100, MPI_FLOAT, 0, rank, intercomm);
> >    printf("Processor %d data sent!\n", rank);
> >    MPI_Recv(cdata, 12, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
> >             intercomm, &status);
> >    printf("Processor %d received string data '%s' from rank %d, tag %d\n",
> >           rank, cdata, status.MPI_SOURCE, status.MPI_TAG);
> >
> >    sleep(5);
> >
> >    printf("Processor %d disconnecting communicator\n", rank);
> >    MPI_Comm_disconnect(&intercomm);
> >    printf("Processor %d finalizing\n", rank);
> >
> >    MPI_Finalize();
> >    printf("Processor %d Goodbye!\n", rank);
> >    return 0;
> > }
> >