Hi,
I am now OK with the env. var. It is pretty simple to set and to read
back in the code that packs the messages.
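For example, something like this (an untested sketch; the 4096-byte
fallback is only the usual default on my builds, check ompi_info for
yours):

#include <stdlib.h>

/* Read btl_sm_eager_limit back from the environment so the packing
 * code can size its buffers to stay on the eager path. Untested
 * sketch; 4096 is only assumed as the usual default. */
static long eager_limit_bytes(void)
{
    const char *s = getenv("OMPI_MCA_btl_sm_eager_limit");
    return (s != NULL) ? strtol(s, NULL, 10) : 4096;
}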
About tests: the results depend so much on the cluster, on Open MPI
itself and on the model that benchmarking is not an industrial way of
tuning the computation. But the env. var. is a good workaround.
Thanks again
Jeff Squyres wrote:
On Dec 16, 2010, at 5:14 AM, Mathieu Gontier wrote:
> We have led some tests and the option btl_sm_eager_limit has a positive
> consequence on the performance. Eugene, thank you for your links.
Good!
Just be aware of the tradeoff you're making: space for time.
> Now, to offer good support to our users, we would like to get the
> value of this parameter at runtime.
Does the env. var. work to override it:
export OMPI_MCA_btl_sm_eager_limit=40960
If so, I can deal with it.
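And if it can be set from inside the application, maybe something like
this (untested sketch; I am assuming the OMPI_MCA_* variables are read
from the environment during MPI_Init, so setting one before the call
behaves like exporting it before mpirun):

#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    /* Assumption: Open MPI picks OMPI_MCA_* up from the environment
     * during MPI_Init; untested. */
    setenv("OMPI_MCA_btl_sm_eager_limit", "40960", 1);
    MPI_Init(&argc, &argv);
    /* ... computation ... */
    MPI_Finalize();
    return 0;
}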
On 12/16/2010 11:14 AM, Mathieu Gontier wrote:
Hi all,
We have led some tests and the option btl_sm_eager_limit has a positive
consequence on the performance. Eugene, thank you for your links.
Now, to offer good support to our users, we would like to get the
value of this parameter at runtime. I am aware I can have the value
runn[...]
Mathieu Gontier wrote:
Nevertheless, one can observe some differences between MPICH and
Open MPI, from 25% to 100%, depending on the options we are using in
our software. Tests are run on a single SGI node on 6 or 12
processes, and thus I am focused on the sm option.
Is it possible to narr[...]
Hi,
A small update.
My colleague made a mistake and there is no arithmetic performance
issue. Sorry for bothering you.
Nevertheless, one can observe some differences between MPICH and
Open MPI, from 25% to 100%, depending on the options we are using in our
software. Tests are run on a single SGI node on 6 or 12 processes, and
thus I am focused on the sm option.
On 2010-12-03, at 8:46AM, Jeff Squyres (jsquyres) wrote:
> Another option to try is to install the Open-MX drivers on your system and run
> Open MPI with MX support. This should be much better perf than TCP.
We've tried this on a big GigE cluster (in fact, Brice Goglin was playing with
it on o[...]
Yes, we have never really optimized Open MPI for TCP. That is changing soon,
hopefully.
Regardless, what is the communication pattern of your app? Are you sending a
lot of data frequently? Even the MPICH perf difference is surprising - it
suggests a lot of data xfer, potentially with small messages [...]
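If it does turn out to be many small messages, aggregating them into
fewer, larger sends usually helps. A rough, untested sketch of the idea
(hypothetical sizes; the receiver would MPI_Recv with MPI_PACKED and
MPI_Unpack correspondingly):

#include <mpi.h>

/* Untested sketch: pack several small payloads into one buffer and
 * send once, instead of issuing one MPI_Send per payload. */
void send_aggregated(const double *chunks, int nchunks, int chunk_len,
                     int dest, MPI_Comm comm)
{
    char buf[65536];              /* hypothetical aggregation buffer */
    int pos = 0;
    for (int i = 0; i < nchunks; i++)
        MPI_Pack(chunks + (size_t)i * chunk_len, chunk_len, MPI_DOUBLE,
                 buf, (int)sizeof(buf), &pos, comm);
    MPI_Send(buf, pos, MPI_PACKED, dest, 0 /* tag */, comm);
}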
Dear Open MPI users,
I am dealing with an arithmetic problem. In fact, I have two variants of
my code: one in single precision, one in double precision. When I
compare the two executables built with MPICH, one can observe an
expected difference of performance: 115.7 sec in single precision
against [...]
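Note, in connection with the eager limit discussed above, that the same
element count is twice as many bytes in double precision, so a message
that fits under btl_sm_eager_limit in single precision can cross it in
double precision and switch to the slower rendezvous path. A small
illustration (the element count is hypothetical and 4096 bytes is only
the usual default for btl_sm_eager_limit; check ompi_info --param btl sm
for your build):

#include <stdio.h>

int main(void)
{
    const size_t count = 1024;  /* elements per message (hypothetical) */
    const size_t eager = 4096;  /* assumed btl_sm_eager_limit, bytes */
    size_t s = count * sizeof(float);
    size_t d = count * sizeof(double);
    printf("single: %zu bytes -> %s\n", s, s <= eager ? "eager" : "rendezvous");
    printf("double: %zu bytes -> %s\n", d, d <= eager ? "eager" : "rendezvous");
    return 0;
}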