I assume your first issue is happening because you configured hwloc with CUDA support, which creates a dependency on libcudart.so. I am not sure why that would break the Open MPI build, though. Can you send me how you configured hwloc?
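One quick way to confirm that dependency (an untested sketch; the hwloc install path is copied from your configure line, and the library name assumes a standard shared hwloc build):

  # List the dynamic dependencies of the self-built hwloc and filter for the CUDA runtime
  $ ldd /usr/local/Cluster-Users/fs395/hwlock-1.8.1/gcc-4.4.7_cuda-6.0RC/lib/libhwloc.so | grep cudart

If that prints a libcudart.so entry, or shows it as "not found", then the CUDA dependency is being dragged into the Open MPI link line through hwloc.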
I am not sure I understand the second issue. Open MPI puts everything in lib, even when you are building for 64 bits, so all of these are fine:

-I/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC/lib
-Wl,-rpath -Wl,/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC/lib
-L/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC/lib

(A sketch of a possible workaround for the -lcudart link failure follows below, after the quoted message.)

Rolf

>-----Original Message-----
>From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Filippo Spiga
>Sent: Friday, February 14, 2014 9:44 AM
>To: us...@open-mpi.org
>Subject: [OMPI users] Configure issue with/without HWLOC when PGI used and CUDA support enabled
>
>Dear Open MPI developers,
>
>I just want to point out a weird behavior of the configure procedure that I discovered. I am compiling Open MPI 1.7.4 with CUDA support (CUDA 6.0 RC) and PGI 14.1.
>
>If I explicitly compile against a self-compiled version of HWLOC (1.8.1) using this configure line:
>
>../configure CC=pgcc CXX=pgCC FC=pgf90 F90=pgf90 --prefix=/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC --enable-mpirun-prefix-by-default --with-fca=$FCA_DIR --with-mxm=$MXM_DIR --with-knem=$KNEM_DIR --with-hwloc=/usr/local/Cluster-Users/fs395/hwlock-1.8.1/gcc-4.4.7_cuda-6.0RC --with-slurm=/usr/local/Cluster-Apps/slurm --with-cuda=/usr/local/Cluster-Users/fs395/cuda/6.0-RC
>
>then make fails, telling me that it cannot find "-lcudart".
>
>If I compile without HWLOC using this configure line:
>
>../configure CC=pgcc CXX=pgCC FC=pgf90 F90=pgf90 --prefix=/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC --enable-mpirun-prefix-by-default --with-fca=$FCA_DIR --with-mxm=$MXM_DIR --with-knem=$KNEM_DIR --with-slurm=/usr/local/Cluster-Apps/slurm --with-cuda=/usr/local/Cluster-Users/fs395/cuda/6.0-RC
>
>then make succeeds and Open MPI is compiled properly.
>
>$ mpif90 -show
>pgf90 -I/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC/include -I/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC/lib -Wl,-rpath -Wl,/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC/lib -L/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC/lib -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi
>
>$ ompi_info --all | grep btl_openib_have_cuda_gdr
>          MCA btl: informational "btl_openib_have_cuda_gdr" (current value: "true", data source: default, level: 5 tuner/detail, type: bool)
>
>I wonder why configure picks up lib instead of lib64. I will test the build using real codes.
>
>Cheers,
>Filippo
>
>--
>Mr. Filippo SPIGA, M.Sc.
>http://www.linkedin.com/in/filippospiga ~ skype: filippo.spiga
>
><Nobody will drive us out of Cantor's paradise.> ~ David Hilbert
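If the underlying problem turns out to be that the CUDA runtime lives in lib64 while the hwloc-enabled build only searches lib, one possible workaround is to hand configure the 64-bit runtime directory explicitly via LDFLAGS. This is an untested sketch: the lib64 location is an assumption (64-bit CUDA toolkits usually keep libcudart.so there), and all the paths are copied from your original configure line:

  # Same configure line as before, plus an explicit linker search path so
  # the hwloc-induced -lcudart can be resolved from the 64-bit CUDA tree
  ../configure CC=pgcc CXX=pgCC FC=pgf90 F90=pgf90 \
    LDFLAGS="-L/usr/local/Cluster-Users/fs395/cuda/6.0-RC/lib64" \
    --prefix=/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.1_cuda-6.0RC \
    --enable-mpirun-prefix-by-default \
    --with-fca=$FCA_DIR --with-mxm=$MXM_DIR --with-knem=$KNEM_DIR \
    --with-hwloc=/usr/local/Cluster-Users/fs395/hwlock-1.8.1/gcc-4.4.7_cuda-6.0RC \
    --with-slurm=/usr/local/Cluster-Apps/slurm \
    --with-cuda=/usr/local/Cluster-Users/fs395/cuda/6.0-RC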