You are running 15000 ranks on two nodes?? My best guess is that you are
swapping like crazy because your memory footprint exceeds the available
physical memory.
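
For reference, here is a minimal, self-contained sketch of the send/receive
pattern from your snippet (the buffer length of 15000 ints and the two rank
roles are assumptions based on your description, not your actual code).
Timing this by itself would show whether the slowdown comes from the MPI
calls or from the rest of your program:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    int number = 15000;   /* assumed buffer length, taken from your mail */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* rank 0 sends `number` ints to rank 1 (contents left
           uninitialized, since only the transfer time matters here) */
        int *a = malloc(sizeof(int) * number);
        MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);
        free(a);
    } else if (rank == 1) {
        /* rank 1 receives the message from rank 0 */
        int *b = malloc(sizeof(int) * number);
        MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        free(b);
    }

    MPI_Finalize();
    return 0;
}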



On Thu, Dec 5, 2013 at 1:04 AM, 胡杨 <781578...@qq.com> wrote:

> My ROCKS cluster includes one frontend and two compute nodes. In my
> program I use Open MPI calls such as MPI_Send and MPI_Recv. When I run
> the program with 3 processes, one process sends a message and another
> receives it. Here is some of the code:
> int *a = (int *)malloc(sizeof(int) * number);
> MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);
>
> int *b = (int *)malloc(sizeof(int) * number);
> MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
>
> When number is less than 10000 it runs fast,
> but when number is more than 15000 it runs slowly.
>
> Why? Is it because of the Open MPI API, or is it some other problem?
> ------------------ Original Message ------------------
> *From:* "Ralph Castain" <r...@open-mpi.org>
> *Sent:* Tuesday, December 3, 2013, 1:39 PM
> *To:* "Open MPI Users" <us...@open-mpi.org>
> *Subject:* Re: [OMPI users] can you help me please? thanks
>
>
>
>
>
> On Mon, Dec 2, 2013 at 9:23 PM, 胡杨 <781578...@qq.com> wrote:
>
>> A simple program on my 4-node ROCKS cluster runs fine with the command:
>>
>> /opt/openmpi/bin/mpirun -np 4 -machinefile machines ./sort_mpi6
>>
>> Another, bigger program runs fine on the head node only, with the command:
>>
>> cd ./sphere; /opt/openmpi/bin/mpirun -np 4 ../bin/sort_mpi6
>>
>> But with the command:
>>
>> cd /sphere; /opt/openmpi/bin/mpirun -np 4 -machinefile ../machines
>> ../bin/sort_mpi6
>>
>> It gives this output:
>>
>> ../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open shared object file: No such file or directory
>> ../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open shared object file: No such file or directory
>> ../bin/sort_mpi6: error while loading shared libraries: libgdal.so.1: cannot open shared object file: No such file or directory
>>
>>
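
A note on the libgdal.so.1 errors above: they mean the dynamic linker on the
compute nodes cannot find that library at run time, even though the head node
can. Assuming libgdal is actually installed on (or NFS-visible to) the compute
nodes, one common approach is to export the library search path to the remote
ranks, along these lines (the exact paths here are only an example):

cd ./sphere; /opt/openmpi/bin/mpirun -np 4 -machinefile ../machines -x LD_LIBRARY_PATH ../bin/sort_mpi6

The -x option tells Open MPI's mpirun to forward the named environment
variable, so LD_LIBRARY_PATH has to resolve libgdal.so.1 on every node, not
just on the head node.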
