Hi Bowen,
Thanks very much. I checked my writev system call; I had thought it was the
one causing all these problems :)
Best Regards
Xianjun Meng
2010/12/8 Bowen Zhou
On 12/05/2010 10:13 PM,
hi,
I ran into a problem recently while testing the MPI_Send and MPI_Recv
functions. When I run the following code, the processes hang and I
see no data transmission on my network at all.
BTW: I ran this test on two X86-64 computers with 16GB of memory and
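The test code itself is truncated in the archive. A minimal sketch of the
kind of program being described, assuming names like Gsize and buf and a
2GB payload (these are reconstructions, not the poster's actual code):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    long long Gsize = 2LL * 1024 * 1024 * 1024;  /* 2GB payload */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(Gsize);
    if (buf == NULL) {
        fprintf(stderr, "rank %d: malloc failed\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* The count argument is a C int: casting 2^31 wraps to a negative
       value, which is the overflow discussed later in this thread.
       Also, with more than two ranks, every rank > 1 blocks in MPI_Recv
       with no matching send, the deadlock Gus points out below. */
    if (rank == 0)
        MPI_Send(buf, (int)Gsize, MPI_CHAR, 1, 99, MPI_COMM_WORLD);
    else
        MPI_Recv(buf, (int)Gsize, MPI_CHAR, 0, 99, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    free(buf);
    MPI_Finalize();
    return 0;
}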
Hi Gus Correa
First of all, thanks for your suggestions.
1) The malloc function does return a non-NULL pointer.
2) I haven't tried the MPI_Isend function; actually, the function I really
need is MPI_Allgatherv(). When I used it, I found this function
didn't work when the data >= 2GB, the
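A common workaround for counts above INT_MAX (not one proposed in this
thread) is to aggregate elements into a larger derived datatype so that the
int counts and displacements stay small. A sketch, assuming the payload
divides evenly into 1MB chunks:

#include <mpi.h>

#define CHUNK (1024 * 1024)  /* chars per derived element (assumption) */

/* All counts and displacements are in CHUNK-sized units, so they stay
   far below INT_MAX even when the total payload exceeds 2GB. */
void big_allgatherv(char *sendbuf, int nchunks, char *recvbuf,
                    int *recvcounts, int *displs, MPI_Comm comm)
{
    MPI_Datatype chunk_t;
    MPI_Type_contiguous(CHUNK, MPI_CHAR, &chunk_t);
    MPI_Type_commit(&chunk_t);
    MPI_Allgatherv(sendbuf, nchunks, chunk_t,
                   recvbuf, recvcounts, displs, chunk_t, comm);
    MPI_Type_free(&chunk_t);
}

A real version would need an extra step for a remainder that does not fill
a whole chunk.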
Hi Xianjun
Suggestions/Questions:
1) Did you check if malloc returns a non-NULL pointer?
Your program is assuming this, but it may not be true,
and in this case the problem is not with MPI.
You can print a message and call MPI_Abort if it doesn't.
2) Have you tried MPI_Isend/MPI_Irecv?
Or perha
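A sketch of the nonblocking variant being suggested, assuming buf, count,
and rank are set up as in the test program above (nonblocking calls by
themselves do not lift the int-count limit; they only rule out simple
blocking deadlocks):

#include <mpi.h>

void exchange_nonblocking(char *buf, int count, int rank)
{
    MPI_Request req;
    if (rank == 0)
        MPI_Isend(buf, count, MPI_CHAR, 1, 99, MPI_COMM_WORLD, &req);
    else if (rank == 1)
        MPI_Irecv(buf, count, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &req);
    else
        return;  /* ranks > 1 take no part in this exchange */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}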
Hi
> Are you running on two processes (mpiexec -n 2)?
Yes.
> Have you tried to print Gsize?
Yes. I have checked my code several times, and I thought the error came
from Open MPI. :)
The command line I used:
"mpirun -hostfile ./Serverlist -np 2 ./test". The "Serverlist" file includes
several computers
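For reference, an Open MPI hostfile along those lines might look like this
(hostnames and slot counts are made up):

node01 slots=8
node02 slots=8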
Hi Xianjun
Are you running on two processes (mpiexec -n 2)?
I think this code will deadlock for more than two processes.
The MPI_Recv won't have a matching send for rank > 1.
Also, this is a C issue, not an MPI one,
but you may be wrapping around into negative numbers.
Have you tried to print Gsize?
It is probably
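To make the wraparound concrete, a standalone illustration (assuming Gsize
is computed in int arithmetic in the original code):

#include <stdio.h>

int main(void)
{
    /* Signed int overflow: 2^31 exceeds INT_MAX; on x86-64 this
       typically wraps to -2147483648 (strictly, undefined behavior). */
    int Gsize = 2 * 1024 * 1024 * 1024;
    printf("Gsize = %d\n", Gsize);

    long long safe = 2LL * 1024 * 1024 * 1024;  /* 64-bit arithmetic */
    printf("safe  = %lld\n", safe);
    return 0;
}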
Hi,
What interconnect and command line do you use? For the InfiniBand openib
component there is a known issue with large transfers (2GB):
https://svn.open-mpi.org/trac/ompi/ticket/2623
Try disabling memory pinning:
http://www.open-mpi.org/faq/?category=openfabrics#large-message-leave-pinned
regards
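Per that FAQ entry, leave-pinned can be turned off via an MCA parameter on
the command line quoted earlier in the thread, e.g.:

mpirun --mca mpi_leave_pinned 0 -hostfile ./Serverlist -np 2 ./test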
hi,
On my computers (X86-64), sizeof(int) = 4, but
sizeof(long) = sizeof(double) = sizeof(size_t) = 8. When I checked my mpi.h
file, I found that the definition related to sizeof(int) is correct. Meanwhile, I
think the mpi.h file was generated according to my compute environment when
I compiled the