On Wed, 19 May 2021 15:53:50 +0200
Pavel Mezentsev via users <users@lists.open-mpi.org> wrote:

> It took some time but my colleague was able to build OpenMPI and get
> it working with OmniPath, however the performance is quite
> disappointing. The configuration line used was the
> following: ./configure --prefix=$INSTALL_PATH
> --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu
> --enable-shared --with-hwloc=$EBROOTHWLOC --with-psm2
> --with-ofi=$EBROOTLIBFABRIC --with-libevent=$EBROOTLIBEVENT
> --without-orte --disable-oshmem --with-gpfs --with-slurm
> --with-pmix=external --with-libevent=external --with-ompi-pmix-rte
> 
> /usr/bin/srun --cpu-bind=none --mpi=pspmix --ntasks-per-node 1 -n 2
> xenv -L Architecture/KNL -L GCC -L OpenMPI env
> OMPI_MCA_btl_base_verbose="99" OMPI_MCA_mtl_base_verbose="99" numactl
> --physcpubind=1 ./osu_bw ...
...
> # OSU MPI Bandwidth Test v5.7
> # Size      Bandwidth (MB/s)
> 1                       0.05
> 2                       0.10
> 4                       0.20
...
> 2097152               226.93
> 4194304               227.62

I don't know for sure what bandwidth to expect on KNL with a single
rank (at least several GB/s, I would think), but on non-KNL nodes
Open MPI reaches wire speed on Omni-Path (about 11 GB/s) in the above
benchmark.

Note that ib_write_bw runs over Verbs (the same path as the Open MPI
openib btl). Verbs is not fast on Omni-Path, and especially not on KNL.

If at all possible I would suggest using PSM2 directly and disabling
all the OFI components. That may not be possible with this specific
Open MPI version, though (I have only used PSM2 with other versions).

/Peter K

> If I test directly with `ib_write_bw` I get
> #bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]    MsgRate[Mpps]
> Conflicting CPU frequency values detected: 1498.727000 != 1559.017000. CPU Frequency is not max.
>  65536      5000             2421.04            2064.33             0.033029
