Ralph -- it seems to be picking up "-pthread" from libslurm.la (i.e., outside
of the OMPI tree), which pgcc doesn't seem to like.
Another solution might be to (temporarily?) remove the "-pthread" from
libslurm.la (which is a plain-text file that you can edit). Then OMPI shouldn't
pick up that flag.
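For the record, a minimal sketch of that edit, demonstrated on a scratch copy since the real path to libslurm.la depends on your Slurm install prefix:

```shell
# libslurm.la is plain text; strip the bare -pthread flag with sed.
# This demo edits a scratch copy -- point "la" at your real file instead.
la=$(mktemp)
printf 'dependency_libs=" -pthread -lpthread"\n' > "$la"
cp "$la" "$la.bak"                # keep a backup so the edit is reversible
sed -i 's/ -pthread / /g' "$la"   # remove only the standalone flag
cat "$la"                         # dependency_libs=" -lpthread"
```

Note the pattern only removes the standalone `-pthread` switch; the `-lpthread` library reference is left alone.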
If you are running on a Slurm-managed cluster, it won't be happy without
configuring --with-slurm - you won't see the allocation, for one.
Is it just the --with-slurm option that causes the problem? In other words, if
you remove the rest of those options (starting with --with-hcoll and going down tha
Hi Jeff, Hi Ake,
removing --with-slurm and keeping --with-hcoll seems to work. The error
disappears at compile time; I have not yet tried to run a job. I can copy
config.log and the make.log if needed.
Cheers,
F
On Mar 11, 2014, at 4:48 PM, Jeff Squyres (jsquyres) wrote:
On Mar 11, 2014, at 11:22 AM, Åke Sandgren wrote:
>>> ../configure CC=pgcc CXX=pgCC FC=pgf90 F90=pgf90
>>> --prefix=/usr/local/Cluster-Users/fs395/openmpi-1.7.4/pgi-14.2_cuda-6.0RC
>>> --enable-mpirun-prefix-by-default --with-hcoll=$HCOLL_DIR
>>> --with-fca=$FCA_DIR --with-mxm=$MXM_DIR --wit
On 03/11/2014 04:12 PM, Jeff Squyres (jsquyres) wrote:
I don't see the config.log and make.log attached - can you send all the info
requested here (including config.log and config.out):
http://www.open-mpi.org/community/help/
Can you also send "make V=1" output as well?
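(As a sketch of what that produces: with V=1 the silent build rules are disabled, so the log contains the verbatim compile/link lines, and a stray -pthread is easy to grep for. Toy Makefile standing in for the real OMPI tree:)

```shell
# Scratch demo: "make V=1" prints the full compiler/linker invocations
# into make.out instead of the terse "  CC  foo.lo" summaries.
dir=$(mktemp -d) && cd "$dir"
printf 'all:\n\t@echo pgcc -pthread -c foo.c\n' > Makefile
make V=1 2>&1 | tee make.out
grep -n -- '-pthread' make.out    # pinpoint where the flag sneaks in
```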
On Feb 25, 2014, at 6:22 PM, Filippo Spiga wrote:
Dear Ralph,
I still need a workaround to compile using PGI and --with-hcoll. I tried a
nightly snapshot last week; I will try the latest one again, and if something
changes I will let you know.
Regards,
Filippo
On Feb 26, 2014, at 6:16 PM, Ralph Castain wrote:
Perhaps you could try the nightly 1.7.5 tarball? I believe some PGI fixes may
have gone in there.
On Feb 25, 2014, at 3:22 PM, Filippo Spiga wrote:
Dear all,
I came across another small issue while I was compiling Open MPI 1.7.4 using
PGI 14.2 and building the support for Mellanox Hierarchical Collectives
(--with-hcoll). Here is how I configure Open MPI:
export MXM_DIR=/opt/mellanox/mxm
export KNEM_DIR=$(find /opt -maxdepth 1 -type d -name