Hello Reuti,

thank you very much for your interest and the information. 

I found out what is causing this: it is the --enable-static option to
configure.

With --enable-static, a "ldd *so*|grep libstdc++" in Open MPI's lib
directory gives
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002b2b42b28000)
for all .so files in the case of 2.0.2, and a mixture of the system and
the gcc 6.3 libstdc++ in the case of 1.10.6.

Without --enable-static, a "ldd *so*|grep libstdc++" shows that for
2.0.2 the libstdc++ is not referenced at all in the Open MPI libraries,
and for 1.10.6 only the gcc 6.3 provided libstdc++ is referenced.
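For reference, the check above can be scripted roughly as follows. This is just a sketch, not part of the original report; the directory argument is a placeholder for the Open MPI lib directory, and it simply reports which libstdc++ (if any) each shared object resolves to:

```shell
# Sketch: report the libstdc++ each shared library in a directory resolves to.
# Pass the Open MPI lib directory as the first argument (defaults to ".").
libdir="${1:-.}"
count=0
for so in "$libdir"/*.so*; do
    [ -e "$so" ] || continue
    count=$((count + 1))
    # grep exits non-zero when there is no match; dep is then empty
    dep=$(ldd "$so" 2>/dev/null | grep 'libstdc++')
    if [ -n "$dep" ]; then
        printf '%s:%s\n' "$so" "$dep"
    else
        printf '%s: no libstdc++ reference\n' "$so"
    fi
done
echo "checked $count shared object(s) under $libdir"
```

Run against the 2.0.2 and 1.10.6 installations in turn, this makes the difference between the --enable-static and plain builds easy to compare side by side.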

Note: I apparently do not have a libmpi_cxx.so.20, only the mpicxx
wrapper; I suppose this is because I do not use the --enable-mpi-cxx
configure option. For 1.10.6 (where --enable-mpi-cxx is the default) I
have a libmpi_cxx.so.1.1.3, which with --enable-static references
libstdc++.so.6 => /usr/lib64/libstdc++.so.6
and without --enable-static references
libstdc++.so.6 => /cluster/comp/gcc/6.3.0/lib/../lib64/libstdc++.so.6

Testing further by building mpi4py (which is pure C, by the way), I see
that the libstdc++ is again not referenced at all if --enable-static was
not used during the openmpi 2.0.2 configure step. If --enable-static had
been used during the openmpi 2.0.2 configure, the mpi4py libraries
contain references to the libstdc++ libraries from the system. Not using
--enable-static during the openmpi 1.10.6 configure yields an mpi4py
library that references only the gcc 6.3 libstdc++ (and not a mixture,
as with --enable-static).
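The mpi4py check can be done the same way. A sketch (not from the original thread): locate mpi4py's compiled extension module via the interpreter and run ldd on it; the interpreter name python3 and the MPI*.so glob are assumptions that may need adjusting per site:

```shell
# Sketch: find mpi4py's compiled extension and inspect its libstdc++ dependency.
ext=$(python3 -c 'import mpi4py, glob, os; \
print(glob.glob(os.path.join(os.path.dirname(mpi4py.__file__), "MPI*.so"))[0])' 2>/dev/null)
if [ -n "$ext" ]; then
    # show which libstdc++ the extension resolves to, if any
    msg=$(ldd "$ext" | grep 'libstdc++' || echo "no libstdc++ reference in $ext")
else
    msg="mpi4py not found in this python environment"
fi
echo "$msg"
```

Comparing this output between installations built with and without --enable-static reproduces the observations above.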

So the initial question is explained then.

However, is this the expected behaviour with the --enable-static
configure option?

Best Regards

Christof

On Fri, Apr 07, 2017 at 10:38:00PM +0200, Reuti wrote:
> 
> Hi,
> 
> Am 07.04.2017 um 19:11 schrieb Christof Koehler:
> 
> > […]
> > 
> > On top, all OpenMPI libraries when checked with ldd (gcc 6.3 module
> > still loaded) reference the /usr/lib64/libstdc++.so.6 and not
> > /cluster/comp/gcc/6.3.0/lib64/libstdc++.so.6 which leads to the idea
> > that the OpenMPI installation might be the reason we have the
> > /usr/lib64/libstdc++.so.6 dependency in the mpi4py libraries as well.
> > 
> > What we would like to have is that libstdc++.so.6 resolves to the
> > libstdc++ provided by the gcc 6.3 compiler for the mpi4py, which would be 
> > available in its installation directory, i.e. 
> > /cluster/comp/gcc/6.3.0/lib64/libstdc++.so.6.
> > 
> > So, am I missing options in my OpenMPI build? Should I explicitly do a
> > ./configure CC=/cluster/comp/gcc/6.3.0/bin/gcc 
> > CXX=/cluster/comp/gcc/6.3.0/bin/g++ ...
> > or similar ? Am I building it correctly with a gcc contained in a
> > separate module anyway ? Or do we have a problem with our ld configuration ?
> 
> I have a default GCC 4.7.2 in the system, and I just prepend PATH and 
> LD_LIBRARY_PATH with the paths to my private GCC 6.2.0 compilation in my home 
> directory by a plain `export`. I can spot:
> 
> $ ldd libmpi_cxx.so.20
> …
>       libstdc++.so.6 => 
> /home/reuti/local/gcc-6.2.0/lib64/../lib64/libstdc++.so.6 (0x00007f184d2e2000)
> 
> So this looks fine (although /lib64/../lib64/ looks nasty). In the library, 
> the RPATH and RUNPATH are set:
> 
> $ readelf -a libmpi_cxx.so.20
> …
>  0x000000000000000f (RPATH)              Library rpath: 
> [/home/reuti/local/openmpi-2.1.0_gcc-6.2.0_shared/lib64:/home/reuti/local/gcc-6.2.0/lib64/../lib64]
>  0x000000000000001d (RUNPATH)            Library runpath: 
> [/home/reuti/local/openmpi-2.1.0_gcc-6.2.0_shared/lib64:/home/reuti/local/gcc-6.2.0/lib64/../lib64]
> 
> Can you check the order in your PATH and LD_LIBRARY_PATH – are they as 
> expected when loading the module?
> 
> - -- Reuti

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
