le) to
underlying Open MPI's C functionality
Fort mpi_f08 subarrays: no
Java bindings: no
Wrapper compiler rpath: runpath
C compiler: nvc
C compiler absolute:
/stage/opt/NV_hpc_sdk/Linux_x86_64/21.9/compilers/bin/nvc
C compiler family n
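Output like the above comes from ompi_info; the compiler-related lines can be pulled out with, for example:

    ompi_info | grep -i compiler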
f that as did the FC='nvfortran -fPIC' (which is
kludgey).
-Ray Muno
On 9/30/21 8:13 AM, Gilles Gouaillardet via users wrote:
Ray,
there is a typo, the configure option is
--enable-mca-no-build=op-avx
Cheers,
Gilles
- Original Message -
Added -*-enable-mca-no-build=op-avx
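For reference, a complete configure invocation using the corrected option might look like the following; the compiler names match the NVIDIA HPC SDK, but the install prefix is illustrative:

    ./configure CC=nvc CXX=nvc++ FC=nvfortran \
        --enable-mca-no-build=op-avx \
        --prefix=/opt/openmpi/4.1.1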
Which version of the nVidia HPC-SDK are you using?
-fPIC in place, then remake
and see if that also causes the link error to go away, that would be a good start.
Hope this helps, -- bennet
On Wed, Sep 29, 2021 at 12:29 PM Ray Muno via users <users@lists.open-mpi.org> wrote:
I did try that and it fails at the same place.
indicate any major changes.
-Ray Muno
On 9/29/21 10:54 AM, Jing Gong wrote:
Hi,
Before Nvidia persons look into details, probably you can try to add the flag "-fPIC" to the
nvhpc compiler, like CC="nvc -fPIC", which at least worked
'-fPIC' to the CFLAGS, CXXFLAGS, FCFLAGS (maybe not needed for all of
those)."
Tried adding these, still fails at the same place.
--
Ray Muno
IT Systems Administrator
e-mail: m...@umn.edu
University of Minnesota
Aerospace Engineering and Mechanics
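The two -fPIC placements discussed above differ only in where the flag is attached; roughly (other options elided):

    # variant 1: fold -fPIC into the compiler command itself
    ./configure CC='nvc -fPIC' CXX='nvc++ -fPIC' FC='nvfortran -fPIC' ...

    # variant 2: pass -fPIC through the flag variables instead
    ./configure CC=nvc CXX=nvc++ FC=nvfortran \
        CFLAGS=-fPIC CXXFLAGS=-fPIC FCFLAGS=-fPIC ...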
Thanks, I looked through previous emails here in the user list. I guess I need to subscribe to the
Developers list.
-Ray Muno
On 9/29/21 9:58 AM, Jeff Squyres (jsquyres) wrote:
Ray --
Looks like this is a dup of https://github.com/open-mpi/ompi/issues/8919
mpi/fortran/use-mpi-f08'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/project/muno/OpenMPI/BUILD/4.1.1/ROME/NV-HPC/21.7/ompi'
make: *** [all-recursive] Error 1
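When make only reports all-recursive errors like these, re-running make in the subdirectory named in the log usually surfaces the real compile error; V=1 disables automake's silent rules (the path below is assembled from the log lines above):

    cd /project/muno/OpenMPI/BUILD/4.1.1/ROME/NV-HPC/21.7/ompi/mpi/fortran/use-mpi-f08
    make V=1 2>&1 | tee fail.log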
--
Ray Muno
IT Systems Administrator
e-mail: m...@umn.edu
University of Minnesota
Aerospace Engineering and Mechanics
users using GCC, PGI, Intel and AOCC compilers with this config. PGI was the only one that
was a challenge to build due to conflicts with HCOLL.
-Ray Muno
On 2/7/20 10:04 AM, Michael Di Domenico via users wrote:
i haven't compiled openmpi in a while, but i'm in the process of
upg
I opened a case with pgroup support regarding this.
We are also using Slurm along with HCOLL.
-Ray Muno
On 1/26/20 5:52 AM, Åke Sandgren via users wrote:
Note that when built against SLURM it will pick up pthread from
libslurm.la too.
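One way to see what libtool will pull in from libslurm is the dependency_libs line in the .la file (the install path here is a guess):

    grep dependency_libs /usr/lib64/libslurm.la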
On 1/26/20 4:37 AM, Gilles Gouaillardet via users wrote
e takes care of
the issue.
Thank you...
--
Ray Muno
University of Minnesota
or a different version of Open MPI? (ignored)
------
--
Ray Muno
University of Minnesota
As a follow up, the problem was with host name resolution. The error was
introduced with a change to the Rocks environment, which broke reverse
lookups for host names.
--
Ray Muno
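A quick sanity check that forward and reverse lookups agree on a node (the hostname is taken from the error below; the address is an example):

    getent hosts compute-6-25.local   # forward: name -> address
    getent hosts 192.168.6.25         # reverse: address should map back to the same name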
gine PE job
[compute-6-25.local:10810] ERROR: The daemon exited unexpectedly with
status 1.
Establishing /usr/bin/ssh session to host compute-6-25.local ...
--
Ray Muno
Ray Muno wrote:
> Tha give me
How about "That gives me"
>
> PMGR_COLLECTIVE ERROR: unitialized MPI task: Missing required
> environment variable: MPIRUN_RANK
> PMGR_COLLECTIVE ERROR: PMGR_COLLECTIVE ERROR: unitialized MPI task:
> Missing required environment variable: MPIRUN_RANK
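This particular error typically means the executable was built against a different MPI than the mpirun launching it (the PMGR_COLLECTIVE startup belongs to MVAPICH's mpirun_rsh); checking the linked libraries can confirm which MPI the binary expects (binary name is an example):

    ldd ./a.out | grep -i mpi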
Ray Muno wrote:
> We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily).
> Scheduling is done through SGE. MPI communication is over InfiniBand.
>
We also have OpenMPI 1.3 installed and receive similar errors.
--
Ray Muno
University of Minnesota
S setup that may have caused this but
I have not found where the actual problem lies.
--
Ray Muno
University of Minnesota
.local - daemon did not report back when launched
========
--
Ray Muno
University of Minnesota
John Hearns wrote:
2008/11/19 Ray Muno
Thought I would revisit this one.
We are still having issues with this. It is not clear to me what is leaving
the user files behind in /dev/shm.
This is not something users are doing directly, they are just compiling
their code directly with mpif90
, new jobs do not launch.
--
Ray Muno
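A cleanup along these lines could be hung off a scheduler epilog; a minimal sketch, where JOB_USER stands for however the scheduler exposes the job owner and the one-hour age cutoff is arbitrary:

    # remove the finished job owner's stale files from /dev/shm
    find /dev/shm -user "$JOB_USER" -type f -mmin +60 -delete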
ORTE daemon that is silently launched in v1.2 jobs should ensure that
files under /tmp/openmpi-sessions-@ are removed.
On Nov 10, 2008, at 2:14 PM, Ray Muno wrote:
Brock Palen wrote:
on most systems /dev/shm is limited to half the physical ram. Was some
user filling up /dev/shm so there
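Checking the size and current usage of /dev/shm, and growing it temporarily, looks like this (the 75% figure is just an example):

    df -h /dev/shm
    sudo mount -o remount,size=75% /dev/shm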
Jeff Squyres wrote:
See
http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0.
OK, that tells me lots of things ;-)
Should I be running configure with --with-wrapper-cflags,
--with-wrapper-fflags etc,
set to include
-i_dynamic
--
Ray Muno
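Baking the flag into the wrappers at configure time, per the FAQ above, would look roughly like (other options elided):

    ./configure --with-wrapper-cflags=-i_dynamic \
        --with-wrapper-fflags=-i_dynamic ...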
Steve Jones wrote:
Are you adding -i_dynamic to base flags, or something different?
Steve
I brought this up to see if something should be changed with the install.
For now, I am leaving that to users.
--
Ray Muno
Seems strange that OpenMPI built without these being set at all. I could
also compile test codes with the compilers, just not with mpicc and mpif90.
-Ray Muno
Ray Muno wrote:
I updated the LD_LIBRARY_PATH to point to the directories that contain
the installed copies of libimf.so. (this is not something I have not had
to do for other compiler/OpenMpi combinations)
How about...
(this is not something I have had to do for other compiler/OpenMpi combinations)
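Pointing the runtime linker at the compiler's runtime by hand looks like this (the Intel library path is illustrative):

    export LD_LIBRARY_PATH=/opt/intel/fce/10.1/lib:$LD_LIBRARY_PATH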
Is there something I should be doing at OpenMPI configure time to take
care of these issues?
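One configure-time alternative is to burn an rpath to the compiler runtime into the wrappers, for example (path again illustrative):

    ./configure --with-wrapper-ldflags='-Wl,-rpath,/opt/intel/fce/10.1/lib' ...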
--
Ray Muno
University of Minnesota
Aerospace Engineering
well.
--
Ray Muno
trying to determine why they are left behind.
--
Ray Muno
University of Minnesota
Aerospace Engineering and Mechanics
files, they can run.
--
Ray Muno
University of Minnesota
Aerospace Engineering and Mechanics
PICH2.
>
> This benchmark was run on an AMD dual-core, dual-Opteron node. Both were
> compiled with default configurations.
>
> The job is run on 2 nodes - 8 cores.
>
> OpenMPI - 25 m 39 s.
> MPICH2 - 15 m 53 s.
>
> Any comments ..?
>
> Thanks,
> Sangamesh
>
-Ray Muno
Aerospace Engineering.
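For OpenMPI-vs-MPICH2 gaps like this on multi-core nodes, one commonly suggested first step in the 1.2 series was enabling processor affinity (MCA parameter name from that series):

    mpirun --mca mpi_paffinity_alone 1 -np 8 ./benchmark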
> Unfortunately I haven't seen the above issue, so I don't
> have a workaround to propose. There are some issues that
> have been fixed with GCC-style inline assembly in the latest
> Sun Studio Express build. Could you try it out?
>
> http://developers.sun.com/sunstudio/