Well, mpich2 and mvapich2 are working smoothly for my app. mpich2 over
GigE is also giving ~2X the performance of openmpi in the cases where
openmpi works. After the paper deadline, I'll attempt to package up
a simple test case and send it to the list.
Thanks!
-Mike
Mike Houston wrote:
Sadly, I've just hit this problem again, so I'll have to find another
MPI implementation, as I have a paper deadline quickly approaching.
I'm using single threads now, but I had very similar issues when using
multiple threads, issuing send/recv on one thread and waiting on a
posted MPI_Recv on another.
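For reference, the threaded pattern looked roughly like this (a
simplified sketch, not the real code; assumes exactly two ranks and an
MPI_THREAD_MULTIPLE build):

#include <mpi.h>
#include <pthread.h>

static int peer;                    /* rank of the other process */

static void *comm_thread(void *arg)
{
    int out = 1, in;
    (void)arg;
    /* independent send/recv traffic on the second thread */
    MPI_Send(&out, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(&in, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* this send matches the receive the main thread posted */
    MPI_Send(&out, 1, MPI_INT, peer, 1, MPI_COMM_WORLD);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank, posted;
    pthread_t tid;
    MPI_Request req;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;

    /* main thread posts a receive up front ... */
    MPI_Irecv(&posted, 1, MPI_INT, peer, 1, MPI_COMM_WORLD, &req);

    pthread_create(&tid, NULL, comm_thread, NULL);

    /* ... and blocks on it while the other thread does send/recv */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    pthread_join(tid, NULL);
    MPI_Finalize();
    return 0;
}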
Brian Barrett wrote:
Mike -
In Open MPI 1.2, one-sided is implemented over point-to-point, so I
would expect it to be slower. This may or may not be addressed in a
future version of Open MPI (I would guess so, but don't want to
commit to it). Were you using multiple threads? If so, how?
On the good news front, I've managed to get a working solution, but I'm
not sure how I got there. I built a test case that looked like a nice
simple version of what I was trying to do and it worked, so I moved the
test code into my implementation and, lo and behold, it works. I must
have been doing something a...
On Mar 20, 2007, at 3:15 PM, Mike Houston wrote:
If I only do gets/puts, things seem to be working correctly with version
1.2. However, if I have a posted Irecv on the target node and issue an
MPI_Get against that target, MPI_Test on the posted Irecv causes a segfault:
[expose:21249] *** Process received signal ***
[expose:21249] Signal: Segmentation fault (11)
Anyone have suggestions?
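The core of it boils down to something like this (a stripped-down
sketch of the pattern, not the actual app code; run with two ranks):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, winbuf[1] = { 42 }, rbuf[1], gbuf[1], flag = 0;
    MPI_Win win;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* expose a one-int window on every rank */
    MPI_Win_create(winbuf, sizeof(winbuf), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    if (rank == 1)  /* target posts a receive that isn't matched yet */
        MPI_Irecv(rbuf, 1, MPI_INT, 0, 99, MPI_COMM_WORLD, &req);

    MPI_Win_fence(0, win);
    if (rank == 0)  /* origin issues an MPI_Get against rank 1 */
        MPI_Get(gbuf, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);

    if (rank == 1)  /* the MPI_Test on the posted Irecv (the call that
                       was segfaulting here) */
        MPI_Test(&req, &flag, MPI_STATUS_IGNORE);

    if (rank == 0)  /* finally satisfy the posted receive */
        MPI_Send(winbuf, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);
    if (rank == 1 && !flag)
        MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}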