Thanks Tim. I'm compiling source units and linking them into an executable. Or perhaps you are talking about how OpenMPI itself is built? Excuse my ignorance...
The source code units are compiled like this:

  /usr/mpi/intel/openmpi-1.4.3/bin/mpif90 -D_GNU_SOURCE -traceback -align -pad -xHost -falign-functions -fpconstant -O2 -I. -I/usr/mpi/intel/openmpi-1.4.3/include -c ../code/src/main/main.f90

The link step is like this:

  /usr/mpi/intel/openmpi-1.4.3/bin/mpif90 -D_GNU_SOURCE -traceback -align -pad -xHost -falign-functions -fpconstant -static-intel -o ../bin/<name> <some libraries> -lstdc++

OpenMPI itself was configured like this:

  ./configure --prefix=/release/cfd/openmpi-intel --without-tm --without-sge --without-lsf --without-psm --without-portals --without-gm --without-elan --without-mx --without-slurm --without-loadleveler --enable-mpirun-prefix-by-default --enable-contrib-no-build=vt --enable-mca-no-build=maffinity --disable-per-user-config-files --disable-io-romio --with-mpi-f90-size=small --enable-static --disable-shared CXX=/appserv/intel/Compiler/11.1/072/bin/intel64/icpc CC=/appserv/intel/Compiler/11.1/072/bin/intel64/icc 'CFLAGS= -O2' 'CXXFLAGS= -O2' F77=/appserv/intel/Compiler/11.1/072/bin/intel64/ifort 'FFLAGS=-D_GNU_SOURCE -traceback -O2' FC=/appserv/intel/Compiler/11.1/072/bin/intel64/ifort 'FCFLAGS=-D_GNU_SOURCE -traceback -O2' 'LDFLAGS= -static-intel'

ldd output on the final executable gives:

  linux-vdso.so.1 => (0x00007fffb77e7000)
  libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002b2e2b652000)
  libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00002b2e2b95e000)
  libdl.so.2 => /lib64/libdl.so.2 (0x00002b2e2bb6d000)
  libnsl.so.1 => /lib64/libnsl.so.1 (0x00002b2e2bd72000)
  libutil.so.1 => /lib64/libutil.so.1 (0x00002b2e2bf8a000)
  libm.so.6 => /lib64/libm.so.6 (0x00002b2e2c18d000)
  libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b2e2c3e4000)
  libc.so.6 => /lib64/libc.so.6 (0x00002b2e2c600000)
  libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b2e2c959000)
  /lib64/ld-linux-x86-64.so.2 (0x00002b2e2b433000)

Do you see anything that suggests I should have been compiling the application and/or OpenMPI with -fPIC?

Thanks
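In case it helps narrow things down, here is a rough check I could run to see whether the static Open MPI library contains non-PIC objects (a sketch only; the library path is assumed from the configure prefix above, so substitute the lib directory of whichever install actually gets linked, and a nonzero count only hints at, rather than proves, non-PIC code):

  # Count absolute 32-bit relocations (R_X86_64_32 / R_X86_64_32S) across the
  # archive's objects. PIC code on x86-64 normally uses PC-relative and
  # GOT-based relocations instead, so a large count suggests the objects
  # were built without -fPIC.
  readelf -r /release/cfd/openmpi-intel/lib/libmpi.a | grep -c 'R_X86_64_32'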
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Tim Prince
Sent: Wednesday, September 21, 2011 10:53 AM
To: us...@open-mpi.org
Subject: EXTERNAL: Re: [OMPI users] Question about compiling with fPIC

On 9/21/2011 11:44 AM, Blosch, Edwin L wrote:
> Follow-up to a mislabeled thread: "How could OpenMPI (or MVAPICH) affect floating-point results?"
>
> I have found a solution to my problem, but I would like to understand the underlying issue better.
>
> To rehash: an Intel-compiled executable linked with MVAPICH runs fine; linked with OpenMPI, it fails. The earliest symptom I could see was a strange difference in the numerical values of quantities that should be unaffected by MPI calls. Tim's advice guided me to suspect memory corruption. Eugene's advice guided me to explore the detailed differences in compilation.
>
> I observed that the MVAPICH mpif90 wrapper adds -fPIC.
>
> I tried adding -fPIC and -mcmodel=medium to the compilation of the OpenMPI-linked executable. Now it works fine. I haven't tried without -mcmodel=medium, but my guess is that -fPIC did the trick.
>
> Does anyone know why compiling with -fPIC has helped? Does it suggest an application problem or an OpenMPI problem?
>
> To note: this is an Infiniband-based cluster. The application does pretty basic MPI-1 operations: send, recv, bcast, reduce, allreduce, gather, isend, irecv, waitall. There is one task that uses iprobe with MPI_ANY_TAG, but this task is only involved in certain cases (including this one). Conversely, cases that do not call iprobe have not yet been observed to crash. I am deducing that this function is the problem.

If you are making a .so, the included .o files should be built with -fPIC or similar. Ideally, the configure and build tools would enforce this.

--
Tim Prince
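A minimal illustration of Tim's point, using a toy shared library (foo.c is a hypothetical source file; gcc is shown, but the Intel compilers accept the same flags):

  # Objects destined for a shared library are built position-independent:
  gcc -fPIC -c foo.c
  gcc -shared -o libfoo.so foo.o
  # Without -fPIC, the link step on x86-64 typically fails with an error
  # like: "relocation R_X86_64_32 against `<symbol>' can not be used when
  # making a shared object; recompile with -fPIC"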