I've looked in more detail at the current two MPI_Alltoallv algorithms
and wanted to raise a couple of ideas.

Firstly, the new default "pairwise" algorithm:
* There is no optimisation for sparse/empty messages, compared to the old
  basic "linear" algorithm (a rough sketch of what I mean is below).
* The attached "pairwise-nop" patch adds [...]
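To make the sparse/empty-message point above concrete, here is a rough
sketch of the kind of check I have in mind. It is written against plain
MPI rather than Open MPI's internal coll framework, the function name is
made up, and it is not the attached patch; it also leans on the usual
MPI_Alltoallv requirement that my send count for a peer equals that
peer's receive count for me, so both ends skip the same transfers:

/* Sketch only -- not the attached patch: a pairwise MPI_Alltoallv-style
 * exchange that simply does not post sends/receives whose count is zero. */
#include <mpi.h>

static int pairwise_alltoallv_skip_empty(
    void *sbuf, const int *scounts, const int *sdispls, MPI_Datatype sdtype,
    void *rbuf, const int *rcounts, const int *rdispls, MPI_Datatype rdtype,
    MPI_Comm comm)
{
    int rank, size, step, nreq;
    MPI_Aint slb, sext, rlb, rext;
    MPI_Request req[2];

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    MPI_Type_get_extent(sdtype, &slb, &sext);
    MPI_Type_get_extent(rdtype, &rlb, &rext);

    for (step = 0; step < size; step++) {
        int sendto   = (rank + step) % size;            /* pairwise schedule */
        int recvfrom = (rank - step + size) % size;
        nreq = 0;

        /* The "nop" idea: empty transfers are simply not posted. */
        if (rcounts[recvfrom] > 0)
            MPI_Irecv((char *)rbuf + (MPI_Aint)rdispls[recvfrom] * rext,
                      rcounts[recvfrom], rdtype, recvfrom, 0, comm, &req[nreq++]);
        if (scounts[sendto] > 0)
            MPI_Isend((char *)sbuf + (MPI_Aint)sdispls[sendto] * sext,
                      scounts[sendto], sdtype, sendto, 0, comm, &req[nreq++]);

        if (nreq > 0)
            MPI_Waitall(nreq, req, MPI_STATUSES_IGNORE);
    }
    return MPI_SUCCESS;
}

With a sparse exchange matrix most steps then cost nothing at all, instead
of size-1 mostly-empty handshakes per rank, which is where I suspect much
of the slowdown over 1Gb ethernet comes from.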
> [...] program launch by supplying appropriate MCA parameters to orterun
> (a.k.a. mpirun and mpiexec).
>
> There is also a largely undocumented feature of the "tuned" collective
> component where a dynamic rules file can be supplied [...]
>
> Kind regards,
> Hristo
> --
> Hristo Iliev, Ph.D. -- High Performance Computing
> RWTH Aachen University, Center for Computing and Communication
> Rechen- und Kommunikationszentrum der RWTH Aachen
> Seffenter Weg 23, D-52074 Aachen (Germany)
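(An aside on the rules file mentioned above, with the caveat that I
haven't used it myself and am quoting parameter names from memory rather
than from the source: I believe it is enabled by pointing the tuned
component at a file, something like

mpiexec --mca coll_tuned_use_dynamic_rules 1 \
        --mca coll_tuned_dynamic_rules_filename /path/to/rules.conf ...   # untested, path is a placeholder

so the exact spelling is worth double-checking on your own installation.)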
>>>>>> [...] algorithm performs better.
>>>>>>
>>>>>> You can switch back to the basic linear algorithm by providing the
>>>>>> following MCA parameters:
>>>>>>
>>>>>> mpiexec --mca coll_tuned_use_dynamic_rules 1 --mca [...]
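For anyone finding this in the archives: if I remember the tuned
component's parameter names correctly, the per-collective selector is
coll_tuned_alltoallv_algorithm, i.e. something like

mpiexec --mca coll_tuned_use_dynamic_rules 1 \
        --mca coll_tuned_alltoallv_algorithm 1 ...   # name/value from memory

with 1 selecting the basic linear implementation and 2 the pairwise one.
"ompi_info --param coll tuned" lists the authoritative names and values
for a given build, so please treat this as a pointer rather than gospel.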
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> On Behalf Of Number Cruncher
> Sent: Wednesday, December 19, 2012 5:31 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] MPI_Alltoallv performance regression 1.6.0 to
> 1.6.1
>
> On 19/12/12 11:08, Paul Kapinos wrote:
> > Did you *really* [...]
>
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
> On Behalf Of Number Cruncher
> Sent: Thursday, November 15, 2012 5:37 PM
> To: Open MPI Users
> Subject: [OMPI users] MPI_Alltoallv performance regression 1.6.0 to 1.6.1
>
> I've noticed a very significant (100%) slow down for MPI_Alltoallv calls
> as of version 1.6.1.
> * This is most noticeable for high-frequency exchanges over 1Gb ethernet
> where process-to-process message sizes are fairly small (e.g. 100kbyte)
> and much of the exchange matrix is sparse.
> * 1.6.1 releas[...]
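To spell out what "much of the exchange matrix is sparse" means in code
terms: the count arrays handed to MPI_Alltoallv are mostly zero, i.e.
each rank really exchanges data with only a few peers. A purely
illustrative shape (not code from the application above; the names and
the symmetric-neighbour assumption are mine):

/* Illustrative only: an Alltoallv call where each rank talks to a handful
 * of neighbours, so most entries of the count arrays are zero.  The send
 * and receive buffers are assumed to hold nneigh * msglen doubles. */
#include <mpi.h>
#include <stdlib.h>

void sparse_exchange(MPI_Comm comm, double *sendbuf, double *recvbuf,
                     const int *neighbours, int nneigh, int msglen)
{
    int i, p, size;
    int *scounts, *sdispls, *rcounts, *rdispls;

    MPI_Comm_size(comm, &size);
    scounts = calloc(size, sizeof(int));    /* all entries zero by default */
    sdispls = calloc(size, sizeof(int));
    rcounts = calloc(size, sizeof(int));
    rdispls = calloc(size, sizeof(int));

    for (i = 0; i < nneigh; i++) {
        p = neighbours[i];
        scounts[p] = rcounts[p] = msglen;   /* only a few non-zero entries */
        sdispls[p] = rdispls[p] = i * msglen;
    }

    MPI_Alltoallv(sendbuf, scounts, sdispls, MPI_DOUBLE,
                  recvbuf, rcounts, rdispls, MPI_DOUBLE, comm);

    free(scounts); free(sdispls); free(rcounts); free(rdispls);
}

With only a few non-zero entries per row, almost every pairwise step has
nothing to exchange, which is exactly the case the skip-empty idea at the
top of this mail is aimed at.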