Hi,
I usually use an InfiniBand network, where openmpi-1.7.3 and 1.6.5 work fine.
The other day, I had a chance to use a TCP network (1GbE), and I noticed that
my application with openmpi-1.7.3 was quite a bit slower than with openmpi-1.6.5.
So I ran the OSU MPI Bandwidth Test v3.1.1, as shown below, which shows b
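For anyone who wants to rerun such a comparison without installing the OSU
suite, here is a minimal C sketch in the spirit of osu_bw (this is not the
OSU code; the message size, window depth, and iteration count are arbitrary
illustrative choices):

/* Minimal streaming-bandwidth sketch (not the OSU benchmark).
 * Build: mpicc -o bw bw.c
 * Force TCP: mpirun -np 2 --mca btl tcp,self ./bw
 * All sizes below are arbitrary. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE (1 << 20)   /* 1 MiB per message */
#define WINDOW   64          /* messages in flight per iteration */
#define ITERS    20

int main(int argc, char **argv)
{
    int rank;
    MPI_Request req[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char *buf = malloc((size_t)MSG_SIZE * WINDOW);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            for (int w = 0; w < WINDOW; w++)
                MPI_Isend(buf + (size_t)w * MSG_SIZE, MSG_SIZE, MPI_CHAR,
                          1, 0, MPI_COMM_WORLD, &req[w]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            /* zero-byte ack keeps sender and receiver in lock step */
            MPI_Recv(NULL, 0, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            for (int w = 0; w < WINDOW; w++)
                MPI_Irecv(buf + (size_t)w * MSG_SIZE, MSG_SIZE, MPI_CHAR,
                          0, 0, MPI_COMM_WORLD, &req[w]);
            MPI_Waitall(WINDOW, req, MPI_STATUSES_IGNORE);
            MPI_Send(NULL, 0, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        double secs = MPI_Wtime() - t0;
        printf("%.2f MB/s\n",
               (double)MSG_SIZE * WINDOW * ITERS / secs / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}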
On Dec 16, 2013, at 2:24 PM, Gus Correa wrote:
> A question, for the benefit of OMPI 1.6.5 users (stable-version die-hards
> like us here).
> When fixes like Ake's are applied to a stable version,
> do they make it to the (1.6.5) tarball or to some other code base?
They are currently going into
Has anyone tried to use openmpi 1.7.3 with the latest CentOS kernel
(well, nearly latest: 2.6.32-431.el6.x86_64), and especially with InfiniBand?
I'm seeing lots of weird slowdowns, especially when using InfiniBand,
but even when running with "--mca btl self,sm" (it's much worse with
IB, though).
Noted. Thanks, all, for the tips!
On 16-Dec-2013 2:36 pm, "Jeff Squyres (jsquyres)"
wrote:
> Everything that Brian said, plus: note that the MCA param that Christoph
> mentioned is specifically for the "sm" (shared memory) transport. Each
> transport has its own set of MCA params (e.g., mca_btl_tcp_eager_limit, and
> friends).
Everything that Brian said, plus: note that the MCA param that Christoph
mentioned is specifically for the "sm" (shared memory) transport. Each
transport has its own set of MCA params (e.g., mca_btl_tcp_eager_limit, and
friends).
On Dec 16, 2013, at 3:19 PM, "Barrett, Brian W" wrote:
> Siddhartha -
Siddhartha -
Christoph mentioned how to change the cross-over for shared memory, but it's
really per-transport (so you'd have to change it for your off-node transport as
well). That's all in the FAQ you mentioned, so hopefully you can take it from
there. Note that, in general, moving the eage
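To make the per-transport point concrete, here is a hedged sketch of setting
the eager limit for both the shared-memory and TCP transports. It relies on
Open MPI's OMPI_MCA_* environment-variable mechanism; the 8192-byte value is
purely illustrative, not a recommendation:

/* Sketch: per-transport eager limits via OMPI_MCA_* environment variables.
 * Equivalently, on the command line:
 *   mpirun --mca btl_sm_eager_limit 8192 --mca btl_tcp_eager_limit 8192 ...
 * The value 8192 is arbitrary. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* Set before MPI_Init, which is when MCA parameters are read. */
    setenv("OMPI_MCA_btl_sm_eager_limit",  "8192", 1);  /* on-node (sm)   */
    setenv("OMPI_MCA_btl_tcp_eager_limit", "8192", 1);  /* off-node (tcp) */

    MPI_Init(&argc, &argv);
    /* ... application code: roughly, messages below the limit go eagerly
       on the corresponding transport, larger ones use rendezvous ... */
    MPI_Finalize();
    return 0;
}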
Hi Jeff
A question, for the benefit of OMPI 1.6.5 users (stable-version die-hards
like us here).
When fixes like Ake's are applied to a stable version,
do they make it to the (1.6.5) tarball or to some other code base?
How innocuous would it be not to apply the typo fix
caught by Ake, and
Fixed -- thanks!
(I confirmed that it's not an issue in the 1.7 series, too)
On Dec 16, 2013, at 1:36 PM, Ake Sandgren wrote:
> Hi!
>
> Not sure if this has been caught already or not, but there is a typo in
> opal/memoryhooks/memory.h in 1.6.5.
>
> #ifndef OPAL_MEMORY_MEMORY_H
> #define OPAl_MEMORY_MEMORY_H
Hi!
Not sure if this has been caught already or not, but there is a typo in
opal/memoryhooks/memory.h in 1.6.5.
#ifndef OPAL_MEMORY_MEMORY_H
#define OPAl_MEMORY_MEMORY_H
Note the lower case "l" in the define.
/Åke S.
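For clarity, the guard as it should read after the fix:

/* opal/memoryhooks/memory.h -- corrected include guard */
#ifndef OPAL_MEMORY_MEMORY_H
#define OPAL_MEMORY_MEMORY_H
/* ... header body ... */
#endif /* OPAL_MEMORY_MEMORY_H */

With the mismatched "OPAl" spelling, the macro tested by #ifndef is never
actually defined, so the guard never fires and a double inclusion of the
header would not be caught.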
Thanks Christoph.
I should have looked into the FAQ section on MCA parameter settings at:
http://www.open-mpi.org/faq/?category=tuning#available-mca-params
Thanks again,
-- Siddhartha
On 16 December 2013 02:41, Christoph Niethammer wrote:
> Hi Siddhartha,
>
> MPI_Send/Recv in Open MPI implements both protocols and chooses which one
> to use based on the message size.
Hi Siddhartha,
MPI_Send/Recv in Open MPI implements both protocols and chooses which one to
use based on the message size.
You can use the MCA parameter "btl_sm_eager_limit" to modify the behaviour.
Here is the corresponding info obtained from the ompi_info tool:
"btl_sm_eager_limit" (current valu