My ROCKS cluster includes one frontend and two compute nodes. In my program I use the Open MPI API, in particular MPI_Send and MPI_Recv. When I run the program with 3 processes, one process sends a message and another receives it. Here is some of the code:
    int *a = (int *)malloc(sizeof(int) * number);
    MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);
  
    int *b = (int *)malloc(sizeof(int) * number);
    MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
  
When number is less than 10000, it runs fast, but when number is more than 15000, it runs slowly.
  
Why? Is it because of the Open MPI API, or is it some other problem?
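For completeness, here is a minimal self-contained version of that pattern. This is only a sketch: the rank checks, the fixed value of number, and the MPI_Init/MPI_Finalize scaffolding are my additions, since the real program is not shown in full.

    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank;
        int number = 20000;   /* assumed size; >15000 is the reported slow case */
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* rank 0 sends `number` ints to rank 1 with tag 1;
               buffer contents are irrelevant for the timing test */
            int *a = (int *)malloc(sizeof(int) * number);
            MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);
            free(a);
        } else if (rank == 1) {
            /* rank 1 receives from rank 0, accepting any tag */
            int *b = (int *)malloc(sizeof(int) * number);
            MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            free(b);
        }

        MPI_Finalize();
        return 0;
    }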

------------------ Original Message ------------------
From: "Ralph Castain" <r...@open-mpi.org>
Sent: Tuesday, December 3, 2013, 1:39
To: "Open MPI Users" <us...@open-mpi.org>
Subject: Re: [OMPI users] can you help me please? thanks

 

 
 
 

 On Mon, Dec 2, 2013 at 9:23 PM, ???? <781578...@qq.com> wrote:
A simple program on my 4-node ROCKS cluster runs fine with this command:
/opt/openmpi/bin/mpirun -np 4 -machinefile machines ./sort_mpi6
 

Another, bigger program runs fine on the head node alone with this command:
 
cd ./sphere; /opt/openmpi/bin/mpirun -np 4 ../bin/sort_mpi6
 
But with the command:
 
cd /sphere; /opt/openmpi/bin/mpirun -np 4 -machinefile ../machines ../bin/sort_mpi6
 
it gives this output:
 
../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open shared object file: No such file or directory
../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open shared object file: No such file or directory
../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open shared object file: No such file or directory
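
(Note: the following is only a guess, not confirmed from the output above. The error pattern suggests libgdal.so.1 is found on the head node but not when processes are launched on the compute nodes, e.g. because LD_LIBRARY_PATH is set only in the interactive shell on the head node. If that is the cause, Open MPI's mpirun can forward the variable to the remote processes with -x, for example:

cd /sphere; /opt/openmpi/bin/mpirun -x LD_LIBRARY_PATH -np 4 -machinefile ../machines ../bin/sort_mpi6

This assumes the directory containing libgdal.so.1 is in LD_LIBRARY_PATH on the head node, and that the library is installed at the same path on the compute nodes.)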
  
