Re: [OMPI users] Stable and performant openMPI version for Ubuntu20.04 ?

2021-03-08 Thread Raut, S Biplab via users
Any suggestions or hints on the performance anomalies observed by me?

By the way, it would be good to know whether there is any mechanism/tool for comparing performance between two Open MPI versions (say, an older 3.1.1 version versus the new stable 4.1.0 release)?
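
To make the question concrete: the only approach I can think of is to install both versions under separate prefixes and run the same pinned benchmark against each, switching only the environment. A rough sketch, where the install paths, the make invocation, and the benchmark name are hypothetical placeholders:

for OMPI in /opt/openmpi-3.1.1 /opt/openmpi-4.1.0; do
  (
    export PATH="$OMPI/bin:$PATH"
    export LD_LIBRARY_PATH="$OMPI/lib:$LD_LIBRARY_PATH"
    # rebuild the benchmark against this MPI, then run it with identical pinning
    make clean && make CC="$OMPI/bin/mpicc"
    mpirun --bind-to core -np 128 ./mpi_benchmark > "result-$(basename "$OMPI").log"
  )
done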

With Regards,
S. Biplab Raut

From: users  On Behalf Of Raut, S Biplab via 
users
Sent: Sunday, March 7, 2021 5:37 PM
To: Gilles Gouaillardet 
Cc: Raut, S Biplab ; Open MPI Users 

Subject: Re: [OMPI users] Stable and performant openMPI version for Ubuntu20.04 
?

Dear Gilles,
Thank you. Please check my replies inline.

First you need to make sure your app is correctly pinned in order to measure optimal and stable performance.
If this is flat MPI,
mpirun --bind-to core ...
will do the trick.
On a single node with 128 cores, I use the options below to properly bind/rank the MPI processes:
mpirun --map-by core --rank-by core --bind-to core -np 128  
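
To double-check that the ranks really land on the intended cores, the same command can also be run with the binding report enabled (here ./a.out is just a placeholder for the actual benchmark binary):

mpirun --map-by core --rank-by core --bind-to core --report-bindings -np 128 ./a.out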


Best/not best is imho a pretty unusable metric.
You first have to check how stable the performance is (by running each app a few times), and then share normalized performance
(for example, best = 1, not best = 0.xy)
0.98 is ok-ish, 0.80 is not
The test bench simulation is run for more than 3000 iterations to minimize run-to-run variation, and the best/max GFLOPS is selected out of all the runs.
This is a standalone benchmark that we usually run. Finally, the evaluated source code is used in various HPC/scientific codes like NAMD, GROMACS, QE, and VASP.
I am sorry that I cannot provide the absolute performance numbers, since I would need official approvals.
But please be assured that “best” vs “not-best” is qualified based on an actual gap (at least 10% or more), not on run-to-run variation.


With Regards,
S. Biplab Raut

From: Gilles Gouaillardet <gilles.gouaillar...@gmail.com>
Sent: Sunday, March 7, 2021 4:56 PM
To: Raut, S Biplab <biplab.r...@amd.com>
Subject: Re: [OMPI users] Stable and performant openMPI version for Ubuntu20.04 ?

Hi,


First you need to make sure your app is correctly pinned in order to measure optimal and stable performance.
If this is flat MPI,
mpirun --bind-to core ...
will do the trick.

Best/not best is imho a pretty unusable metric.
You first have to check how stable the performance is (by running each app a few times), and then share normalized performance
(for example, best = 1, not best = 0.xy)
0.98 is ok-ish, 0.80 is not
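
As a purely made-up illustration of that normalization: if the fastest configuration reached 500 GFLOPS and another reached 460 GFLOPS, you would report 1.0 and 460/500 = 0.92 (ok-ish); a configuration at 400 GFLOPS would normalize to 0.80 and would not be.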

Cheers,

Gilles

On Sun, Mar 7, 2021 at 4:30 AM Raut, S Biplab <biplab.r...@amd.com> wrote:

Dear Gilles and Nathan,

  Thank you for your suggestions.

I have experimented with them and have a few questions for you.



My application is the open-source FFTW library and its MPI test bench; it makes use of a distributed/MPI global transpose for a given MPI input problem.

I ran the program on 5 different combinations of OS and Open MPI version/flags. Please see below a few sample cases for which performance varies a lot across these 5 combinations.



Single-node, 128 MPI ranks: GFLOPS comparison (absolute numbers not provided).

Configurations compared:
  A: Ubuntu 19.04 + openMPI 3.1.1
  B: Ubuntu 20.04 + openMPI 4.1.0
  C: Ubuntu 20.04 + openMPI 4.1.0, run with --mca pml ob1 --mca btl vader,self,
  D: Ubuntu 20.04 + openMPI 4.1.0 + xpmem
  E: Ubuntu 20.04 + openMPI 4.1.0 + xpmem, run with --mca pml ob1 --mca btl vader,self,

Double-precision complex 1D size |    A     |    B     |    C     |    D     |    E     | Comments
390625                           |   Best   | Not-best | Not-best | Not-best | Not-best | A is best
2097152                          | Not-best | Not-best | Not-best | Not-best |   Best   | E is best
4194304                          | Not-best | Not-best | Not-best |   Best   | Not-best | D is best
6400                             | Not-best |   Best   | Not-best | Not-best | Not-best | B is best

My questions are:

  1.  I was using openMPI 3.1.1 on Ubuntu 19.04 without “xpmem” and without the runtime mca vader option, so why is the plain/stock openMPI 4.1.0 on Ubuntu 20.04 not giving the best performance?
  2.  In most of the cases, using the “xpmem” library gives the best performance, but in a few cases “Ubuntu 19.04 + openMPI 3.1.1” is best. How do I decide which version to use universally?
  3.  I am getting a runtime warning for “xpmem”, as shown below:

WARNING: Could not generate an xpmem segment id for this process’ address space.

The vader shared memory BTL will fall back on another single-copy mechanism if one is available. This may result in lower performance.

How to resolve this issue?
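
In case it helps narrow this down, my understanding is that this warning appears when vader cannot create an xpmem segment at run time, for example because the xpmem kernel module is not loaded or /dev/xpmem is not accessible to the user. A rough checklist I could go through (./a.out is a placeholder for the benchmark binary):

# is the xpmem kernel module loaded, and is the device node accessible?
lsmod | grep xpmem
ls -l /dev/xpmem

# request xpmem explicitly as vader's single-copy mechanism, so the chosen path is unambiguous
mpirun --mca btl_vader_single_copy_mechanism xpmem --bind-to core -np 128 ./a.out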



With Regards,

S. Biplab Raut



-----Original Message-----
From: users <users-boun...@lists.open-mpi.org> On Behalf Of Gilles Gouaillardet via users
Sent: Friday, March 5, 2021 5:58 AM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Gilles Gouaillardet <gilles.gou

Re: [OMPI users] config: gfortran: "could not run a simple Fortran program"

2021-03-08 Thread Jeff Squyres (jsquyres) via users
What is the exact configure line you used to build Open MPI?  You don't want to 
put CC and CXX in a single quoted token.  For example, do this:

./configure CC=gcc CXX=g++ ...

Don't do this (which is what your previous mail implied you might be doing...?):

./configure "CC=gcc CXX=g++" ...



On Mar 7, 2021, at 9:59 PM, Anthony Rollett via users <users@lists.open-mpi.org> wrote:

I am embarrassed to admit that I really did have a problem with compiling a 
simple Fortran program – because of the upgrade to Catalina!
When I would try to compile w/ gfortran, I would get errors such as “cannot 
find -System”.
I finally found this website which provided a solution (although I edited my 
.bash_profile to make the changes permanent):
https://stackoverflow.com/questions/58278260/cant-compile-a-c-program-on-a-mac-after-upgrading-to-catalina-10-15
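
For reference, the workaround discussed there is roughly of this form (a sketch, not the exact lines from my .bash_profile; the SDK path depends on the Xcode/Command Line Tools installation):

# point the toolchain at the macOS SDK so the system libraries are found again
export SDKROOT=$(xcrun --show-sdk-path)
export LIBRARY_PATH=$LIBRARY_PATH:$SDKROOT/usr/lib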

This allowed gfortran to compile (and run) and then I was able to configure 
openmpi (4.1.0).
I am still trying to configure openmpi with gcc and g++ (as opposed to using 
clang and c++).

Thanks to all
Tony Rollett



On Mar 7, 2021, at 8:00 PM, Gilles Gouaillardet via users <users@lists.open-mpi.org> wrote:

Anthony,

Did you make sure you can compile a simple fortran program with
gfortran? and gcc?
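
For example, a trivial sanity check along these lines (file names are arbitrary):

cat > conftest.f90 <<'EOF'
program conftest
  print *, 'fortran ok'
end program conftest
EOF
gfortran conftest.f90 -o conftest_f && ./conftest_f

cat > conftest.c <<'EOF'
#include <stdio.h>
int main(void) { printf("c ok\n"); return 0; }
EOF
gcc conftest.c -o conftest_c && ./conftest_c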

Please compress and attach both openmpi-config.out and config.log, so
we can diagnose the issue.

Cheers,

Gilles

On Mon, Mar 8, 2021 at 6:48 AM Anthony Rollett via users <users@lists.open-mpi.org> wrote:

I am trying to configure v 4.1 with the following, which fails as noted in the 
Subject line.

./configure --prefix=/Users/Shared/openmpi410 \
FC=gfortran CC=clang CXX=c++ --disable-static \
2>&1 | tee openmpi-config.out

On a 2019 MacBook Pro with 10.15 (but I had the same problem with 10.14).
Gfortran (and gcc) is from High Performance Computing for OSX

Any clues will be gratefully received! And I apologize if this is a solved 
problem ...
Many thanks, Tony Rollett
PS.  If I try “CC=gcc CXX=g++” then it fails at the C compilation stage.



--
Jeff Squyres
jsquy...@cisco.com