On 02/29/2012 03:15 PM, Jeffrey Squyres wrote:
> On Feb 29, 2012, at 2:57 PM, Jingcha Joba wrote:
>
>> So if I understand correctly, if a message is smaller than some threshold
>> it will use the MPI way (non-RDMA, two-way communication), and if it's
>> larger it would use the OpenFabrics path, using ibverbs (and the OFED
>> stack) instead of MPI's stack?
On Mar 1, 2012, at 1:17 AM, Jingcha Joba wrote:
> Aah...
> So when Open MPI is compiled with OFED and run on InfiniBand/RoCE devices,
> would MPI simply direct point-to-point calls to OFED and do them the OFED
> way?
I'm not quite sure how to parse that. :-)
The openib BTL uses…
I would just ignore these tests:
1. The use of MPI one-sided functionality is extremely rare out in the real
world.
2. Brian said there were probably bugs in Open MPI's implementation of the MPI
one-sided functionality itself, and he's in the middle of re-writing the
one-sided functionality anyway.
Well, as Jeff says, it looks like it's to do with the one-sided communication.
But the reason I said that was because of something I experienced a couple of
months ago: when I had a Myri-10G and an Intel gigabit Ethernet card lying
around, I wanted to test kernel bypass using the open-mx stack, and I ran
the OSU benchmarks…
Aah...
So when Open MPI is compiled with OFED and run on InfiniBand/RoCE devices,
would MPI simply direct point-to-point calls to OFED and do them the OFED
way?
>
> More specifically: all things being equal, you don't care which is used.
> You just want your message to get to the receiver…
Hi,
I tried executing those tests with other devices, like tcp instead
of ib, with the same Open MPI 1.4.3. That went fine, although it took longer
to execute; when I tried to execute the same tests on the customized OFED,
the tests hang at the same message size.
Can you please tell me what could…
On Feb 29, 2012, at 2:57 PM, Jingcha Joba wrote:
> So if I understand correctly, if a message is smaller than some threshold
> it will use the MPI way (non-RDMA, two-way communication), and if it's
> larger it would use the OpenFabrics path, using ibverbs (and the OFED
> stack) instead of MPI's stack?
So if I understand correctly, if a message is smaller than some threshold
it will use the MPI way (non-RDMA, two-way communication), and if it's
larger it would use the OpenFabrics path, using ibverbs (and the OFED stack)
instead of MPI's stack?
If so, could that be the reason why MPI_Put "hangs"?
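A minimal sketch of such a test (not from the thread; the file name and the
size argument are made up for illustration): it creates an RMA window on
rank 1 and does an MPI_Put of a caller-chosen number of bytes inside a fence
epoch, so running it with two ranks at sizes below and above the suspected
threshold shows whether a hang really tracks the message size.

    /* one_sided_put.c -- illustrative sketch, not the poster's benchmark.
     * Build: mpicc one_sided_put.c -o one_sided_put
     * Run:   mpirun -np 2 ./one_sided_put <bytes>
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank;
        int nbytes = (argc > 1) ? atoi(argv[1]) : 1048576;   /* default 1 MiB */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char *buf = calloc(nbytes, 1);
        MPI_Win win;

        /* Rank 1 exposes its buffer as the target window; rank 0 exposes nothing. */
        MPI_Win_create(buf, (rank == 1) ? nbytes : 0, 1,
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);                 /* open the access/exposure epoch */
        if (rank == 0)
            MPI_Put(buf, nbytes, MPI_BYTE, 1, 0, nbytes, MPI_BYTE, win);
        MPI_Win_fence(0, win);                 /* close the epoch; the put is complete */

        if (rank == 0)
            printf("MPI_Put of %d bytes completed\n", nbytes);

        MPI_Win_free(&win);
        free(buf);
        MPI_Finalize();
        return 0;
    }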
On Feb 29, 2012, at 2:30 PM, Jingcha Joba wrote:
> Squyres,
> I thought RDMA read and write are implemented as one-sided communication
> using get and put respectively.
> Is it not so?
Yes and no.
Keep in mind the difference between two things here:
- An underlying transport's one-sided capabilities…
Squyres,
I thought RDMA read and write are implemented as one-sided communication
using get and put respectively.
Is it not so?
On Wed, Feb 29, 2012 at 10:49 AM, Jeffrey Squyres wrote:
> FWIW, if Brian says that our one-sided stuff is a bit buggy, I believe him
> (because he wrote it). :-)
>
> The fact is that the MPI-2 one-sided stuff is extremely complicated and…
FWIW, if Brian says that our one-sided stuff is a bit buggy, I believe him
(because he wrote it). :-)
The fact is that the MPI-2 one-sided stuff is extremely complicated and
somewhat open to interpretation. In practice, I haven't seen the MPI-2
one-sided stuff used much in the wild. The MPI-…
When I ran my OSU tests, I was able to get numbers out of all the
tests except latency_mt (which was expected, as I didn't compile Open MPI
with multi-threaded support).
A good way to know whether the problem is with Open MPI or with your custom
OFED stack would be to use some other device, like tcp instead of ib…
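As a concrete illustration of that isolation test (a sketch only: the
benchmark path and process count are placeholders), the transport can be
forced from the mpirun command line with the btl MCA parameter:

    mpirun --mca btl tcp,self,sm -np 2 ./osu_latency      # TCP path only
    mpirun --mca btl openib,self,sm -np 2 ./osu_latency   # InfiniBand (openib) path

If the tcp,self,sm run completes at every message size while the openib run
still hangs at the same size, that points at the verbs/OFED side rather than
at the benchmark itself.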
I'm pretty sure that they are correct. Our one-sided implementation is
buggier than I'd like (indeed, I'm in the process of rewriting most of it
as part of Open MPI's support for MPI-3's revised RDMA), so it's likely
that the bugs are in Open MPI's one-sided support. Can you try a more
recent release?