Configuring OpenMPI 4.1.0 with GCC 10.2.0 on
an Intel(R) Xeon(R) CPU E5-2620 v3, a Haswell processor
that supports AVX2 but not AVX512, resulted in
checking for AVX512 support (no additional flags)... no
checking for AVX512 support (with -march=skylake-avx512)... yes
in "configure" output, and in co...
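For reference, that configure check appears to be a compiler-capability probe: it asks whether the compiler can emit AVX-512 code once -march=skylake-avx512 is added, independent of what the build host's CPU can execute. A hypothetical stand-in for such a probe (not Open MPI's actual configure test) would be something like:

    /* avx512_probe.c -- hypothetical stand-in for the configure probe:
       can the *compiler* handle AVX-512 intrinsics with the extra flag?
       gcc -march=skylake-avx512 -c avx512_probe.c   -> compiles
       gcc -c avx512_probe.c                         -> typically fails  */
    #include <immintrin.h>

    __m512d probe(__m512d a, __m512d b)
    {
        /* Needs AVX-512F support from the compiler, not from the CPU it is built on. */
        return _mm512_add_pd(a, b);
    }

So the "yes" above only says the compiler can generate such code, not that the Haswell host can run it.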
> > ...try to the latest nightly snapshot for the v4.1.x branch.
> >
> > Cheers,
> >
> > Gilles
> >
> > On Wed, Feb 10, 2021 at 10:43 PM Max R. Dechantsreiter via users wrote:
> >>
> >> Configuring OpenMPI 4.1.0 with GCC 10.2.0 ...
> > > ...are only compiled and never executed)
> > >
> > > Then at *runtime*, Open MPI detects the *CPU* capabilities.
> > > In your case, it should not invoke the functions containing AVX512 code.
> > >
> > > That being said, several changes were made to ...
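To illustrate the compile-time versus run-time distinction described above, a minimal sketch of runtime dispatch (using GCC's __builtin_cpu_supports and a target attribute; this is only an illustration, not Open MPI's actual dispatch code):

    #include <stdio.h>

    /* Present in the binary regardless of the build host's CPU... */
    __attribute__((target("avx512f")))
    static void sum_avx512(const float *a, const float *b, float *c, int n)
    {
        for (int i = 0; i < n; i++)   /* compiler may emit AVX-512 instructions here */
            c[i] = a[i] + b[i];
    }

    static void sum_generic(const float *a, const float *b, float *c, int n)
    {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        float a[8] = {1.0f}, b[8] = {2.0f}, c[8];

        /* ...but only invoked if the CPU running the program supports it. */
        if (__builtin_cpu_supports("avx512f"))
            sum_avx512(a, b, c, 8);
        else
            sum_generic(a, b, c, 8);  /* the path taken on a Haswell CPU */

        printf("c[0] = %f\n", c[0]);
        return 0;
    }

On the E5-2620 v3 the AVX-512 path would be present in the binary but never taken, which matches what Gilles describes above.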
> >> ...:48AM, Jeff Squyres (jsquyres) via users wrote:
> >>> I think Max did try the latest 4.1 nightly build (from an off-list
> >>> email), and his problem still persisted.
> >>>
> >>> Max: can you describe exactly how Open MPI failed? A...
Is there any reason not to configure with
--enable-mpi-threads --enable-progress-threads
(for instance, a performance penalty if an application
is not using MPI_THREAD_MULTIPLE)?
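For context, the thread level an application actually gets is negotiated at MPI_Init_thread() time; a minimal sketch of that check (standard MPI calls, nothing Open MPI specific):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Request full multithreading; the library reports what it can grant. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE)
            printf("MPI_THREAD_MULTIPLE not available (provided = %d)\n", provided);

        MPI_Finalize();
        return 0;
    }

An application that never requests more than MPI_THREAD_SINGLE or MPI_THREAD_FUNNELED is the case the question is really about.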
On a VPS I tested my build of hwloc-2.9.2 by running lstopo:
./lstopo
hwloc: Topology became empty, aborting!
Segmentation fault
On a GCP n1-standard-2 instance, a similar build (GCC 12.2 vs. 13.2) seemed to work:
./lstopo
hwloc/nvml: Failed to initialize with nvmlInit(): Driver Not Loaded
Machine (7430MB ...
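In case it helps isolate where the VPS topology comes up empty, the same discovery can be exercised against the hwloc C API directly (roughly what lstopo does before rendering; build against the same hwloc-2.9.2, e.g. with pkg-config --cflags --libs hwloc):

    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topology;

        if (hwloc_topology_init(&topology) != 0) {
            fprintf(stderr, "hwloc_topology_init failed\n");
            return 1;
        }
        if (hwloc_topology_load(topology) != 0) {
            /* On the failing VPS, this load (or an empty object count
               below) is where the problem should first show up. */
            fprintf(stderr, "hwloc_topology_load failed\n");
            hwloc_topology_destroy(topology);
            return 1;
        }

        printf("PUs:   %d\n", hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU));
        printf("Cores: %d\n", hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE));

        hwloc_topology_destroy(topology);
        return 0;
    }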
> ...in https://github.com/open-mpi/hwloc/issues/new
>
> Brice
>
> On 01/08/2023 at 16:17, Max R. Dechantsreiter via users wrote:
> > On a VPS I tested my build of hwloc-2.9.2 by running lstopo:
> >
> > ./lstopo
> > hwloc: Topology became empty, aborting!