Cc: OpenMPI Users
Sent: Monday, 10 September 2012 9:11 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
Randolph,
So what you're saying in short, leaving all the numbers aside, is the following:
In your particular application, on your particular setup, with this particular [...]
Yevgeny Kliteynik
> *To:* Randolph Pullen
> *Cc:* OpenMPI Users
> *Sent:* Sunday, 9 September 2012 6:18 PM
> *Subject:* Re: [OMPI users] Infiniband performance Problem and stalling
>
> Randolph,
>
See my comments inline...
From: Yevgeny Kliteynik
To: Randolph Pullen
Cc: OpenMPI Users
Sent: Sunday, 9 September 2012 6:18 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
Randolph,
On 9/7/2012 7:43 AM, Randolph Pullen wrote:
> Yevgeny,
> The ibstat results:
> CA 'mthca0'
> CA type: MT25208 (MT23108 compat mode)
What you have is an InfiniHost III HCA, which is a 4x SDR card.
This card has a theoretical peak of 10 Gb/s signalling, which after IB bit coding comes to roughly 1 GB/s of data.
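A quick sanity check of those numbers, assuming the usual 8b/10b encoding on SDR links:

    4x SDR        : 4 lanes x 2.5 Gb/s = 10 Gb/s signalling rate
    8b/10b coding : 10 Gb/s x 8/10     =  8 Gb/s of payload bits
    in bytes      :  8 Gb/s / 8        =  1 GB/s peak, before any protocol overhead

So even a perfect benchmark should not be expected to exceed about 1 GB/s on this card.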
> *From:* Yevgeny Kliteynik
> *To:* Randolph Pullen ; Open MPI Users
>
> *Sent:* Sunday, 2 September 2012 10:54 PM
> *Subject:* Re: [OMPI users] Infiniband performance Problem and stalling
>
> Randolph,
Sent: [...] 2012 6:03 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
On 9/3/2012 4:14 AM, Randolph Pullen wrote:
> No RoCE, Just native IB with TCP over the top.
Sorry, I'm confused - still not clear what a "Melanox III HCA 10G card" is.
Could you run "ibstat" and post the results?
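For reference, checking the card type and link rate is just the following (the exact output shape can differ a little between OFED releases; the CA lines below mirror what was later posted in this thread, the port lines are a typical example):

    $ ibstat
    CA 'mthca0'
        CA type: MT25208 (MT23108 compat mode)
        ...
        Port 1:
            State: Active
            Rate: 10

"Rate" is the signalling rate in Gb/s, so 10 corresponds to a 4x SDR link.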
---
Sent: Sunday, 2 September 2012 10:54 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
Randolph,
Some clarification on the setup:
"Melanox III HCA 10G cards" - are those ConnectX 3 cards configured to Ethernet?
That is, when you're using openib BTL, you mean RoCE, right?
Also, have you had a chance to try some newer OMPI release?
Any 1.6.x would do.
-- YK
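For anyone following along, "using the openib BTL" versus plain TCP is just a matter of which transports mpirun is told to use; a minimal sketch (the hostfile and benchmark binary names are placeholders):

    # verbs transport - native InfiniBand, or RoCE if the port is configured for Ethernet
    $ mpirun --mca btl openib,sm,self -hostfile hosts ./osu_bw

    # plain TCP - on an IB fabric this ends up on the IPoIB interface (ib0) if that is the routed one
    $ mpirun --mca btl tcp,sm,self -hostfile hosts ./osu_bw

Comparing the two runs is a quick way to confirm whether the verbs path is actually being used.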
[...] 64K and force short messages. Then the openib times are
the same as TCP and no faster.
I'm still at a loss as to why...
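Forcing "short" (eager) messages on the openib BTL is normally a matter of its eager-limit parameters; a sketch of how one might inspect and override them, with 65536 standing in for the 64K mentioned above:

    # show the eager/rendezvous thresholds this OMPI build uses for openib
    $ ompi_info --param btl openib | grep eager

    # raise the eager limit so messages up to 64K are sent eagerly
    $ mpirun --mca btl_openib_eager_limit 65536 \
             --mca btl_openib_rndv_eager_limit 65536 ./my_app

(./my_app is a placeholder; whether raising the limit actually helps depends on why the rendezvous path was slow in the first place.)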
From: Paul Kapinos
To: Randolph Pullen ; Open MPI Users
Sent: Tuesday, 28 August 2012 6:13 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
Randolph,
after reading this:
On 08/28/12 04:26, Randolph Pullen wrote:
> - On occasions it seems to stall indefinitely, waiting on a single receive.
... I would make a blind guess: are you aware of the IB card parameters for
registered memory?
http://www.open-mpi.org/faq/?category=openfabrics#
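The FAQ entry being pointed at is largely about how much memory the HCA is allowed to register. The first thing one would typically check, as a sketch, is the locked-memory limit in the environment where the ranks actually run:

    # must be "unlimited" (or at least very large) for the openib BTL to register buffers
    $ ulimit -l

    # check it on the compute nodes, under the same launcher the job uses, not just on the head node
    $ mpirun -hostfile hosts bash -c 'ulimit -l'

Per that FAQ, a too-small registered-memory limit can show up as warnings at startup, poor bandwidth, or hangs, which would fit both the "same as TCP" numbers and the stalls described in this thread.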