I'm testing the "hello world" program from Pacheco's book [2] on parallel programming:
 
#include <stdio.h>
#include <string.h>
#include <mpi.h>
const int MAX_STRING = 100;
int main(void)
{
    char greeting[MAX_STRING];
    int  comm_sz;
    int  my_rank;
    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    if(my_rank != 0)
    {
        sprintf(greeting, "Greetings from process %d of %d!\n", my_rank, comm_sz);
        MPI_Send(greeting, strlen(greeting)+1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
    else
    {
        printf("Greetings from process %d of %d!\n", my_rank, comm_sz);
        for (int q = 1; q < comm_sz; q++)
        {
            MPI_Recv(greeting, MAX_STRING, MPI_CHAR, q, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%s\n", greeting);
        }
    }
    MPI_Finalize();
    return 0;
}
 
I run it on two computers running Solaris x86 (Open MPI installed from the package in [1]), using the command:
solaris@solaris:~/mpi$ mpirun -H localhost,192.168.0.106 a.out
 
The messages are printed:
Greetings from process 0 of 2!
Greetings from process 1 of 2!
Reading the mpirun manual, I discovered that every process's stdout is redirected to the node where rank 0 runs. So it seems to be working nicely, but the messages below are also shown:
librdmacm: couldn't read ABI version.
librdmacm: assuming: 4
CMA: unable to get RDMA device list
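
From what I can tell these warnings come from the RDMA/InfiniBand support libraries probing for hardware that isn't present. A sketch, assuming the openib BTL is the source and plain TCP is sufficient for my two machines, of an MCA parameter file that would exclude it (the `^` prefix means "all components except"):

```
# ~/.openmpi/mca-params.conf
# exclude the openib BTL so Open MPI doesn't probe for RDMA devices
btl = ^openib
```

The same thing could presumably be done per run with `mpirun --mca btl ^openib ...`, but I'm not sure this is the right fix.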
 
I wonder what these messages mean and whether I need to fix anything.
 
Thank you!
 
[1] http://www.oracle.com/technetwork/articles/servers-storage-dev/011-161-ompt-sol11-1441028.html
[2] http://www.cs.usfca.edu/~peter/
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
