On Dec 5, 2013, at 4:04 AM, ???? <781578...@qq.com> wrote:

> My ROCKS cluster includes one frontend and two compute nodes. In my program
> I use the Open MPI API, such as MPI_Send and MPI_Recv, and I run the
> program with 3 processes. One process sends a message and another receives
> it. Here is some of the code:
>
>     int *a = (int *) malloc(sizeof(int) * number);
>     MPI_Send(a, number, MPI_INT, 1, 1, MPI_COMM_WORLD);
>
>     int *b = (int *) malloc(sizeof(int) * number);
>     MPI_Recv(b, number, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
>
> When number is less than 10000, it runs fast, but when number is more
> than 15000, it runs slowly.
Can you precisely define "fast" and "slowly"? What is the network that MPI is using -- TCP over 1Gb Ethernet? I'm guessing that Open MPI is changing protocols (from an eager send to a rendezvous send) between these two sizes, but without more information, it's hard to say.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
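[If the eager/rendezvous crossover is indeed the cause, one way to check is to look at Open MPI's MCA eager-limit parameters and experiment with raising the one for the transport in use. A minimal sketch, assuming the TCP BTL is in use and that `./my_program` stands in for the poster's executable:]

```shell
# Show the eager limits currently in effect -- the message size at which
# Open MPI switches from the eager protocol to the rendezvous protocol:
ompi_info --all | grep eager_limit

# Experiment: raise the TCP BTL eager limit (value in bytes; 64 KiB is
# just an example) so that somewhat larger messages are still sent eagerly:
mpirun --mca btl_tcp_eager_limit 65536 -np 3 ./my_program
```

[Note that raising the eager limit trades memory for latency on the receiver side, so it is a diagnostic knob here, not necessarily the right permanent fix.]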