-lexp-EMIN ) )
^
make[2]: *** [HPL_dlamch.o] Error 1
make[2]: Leaving directory `/home/centos/hpl-2.3/src/auxil/impetus03'
make[1]: *** [build_src] Error 2
make[1]: Leaving directory `/home/centos/hpl-2.3'
make: *** [build] Error 2
I don't understand the nature of the problem or why it works with the old
OMPI version and not with the new. Any help or pointers would be appreciated.
Thanks.
AFernandez
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
e and beeps when the older didn't.
Thanks again,
AFernandez
Indeed, you cannot use "try" as a variable name in C++ because it is a
reserved keyword: https://en.cppreference.com/w/cpp/keyword.
As already suggested, use a C compiler, or you can replace "try" with "xtry" or
some other name that is not a C++ keyword.
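Since HPL is plain C, a small standalone illustration may make the failure mode clearer. This is not HPL's actual source (the function and variable names below are invented for the example); it only shows why the same code is accepted by a C compiler, rejected by a C++ one, and fixed by the rename.

/* Illustration only -- not HPL's actual source; names are invented.
 * "try" is a legal identifier in C but a reserved keyword in C++, so the
 * same file compiles with mpicc/gcc and fails if the HPL Make.<arch>
 * settings end up pointing at a C++ compiler instead. */
#include <stdio.h>

static double halve_down(double value, int lexp, int emin)
{
    double try = value;          /* fine in C, a syntax error in C++ */
    while (lexp-- > emin)
        try /= 2.0;
    return try;
}

int main(void)
{
    /* Renaming the variable, e.g. "try" -> "xtry", keeps the code valid
     * for both C and C++ compilers. */
    printf("%g\n", halve_down(1.0, 4, 0));
    return 0;
}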
Hello,
I'm performing some tests with OMPI v4. The initial configuration used one
Ethernet port (10 Gbps), but I have added a second one (with the same
characteristics). The documentation mentions that the OMPI installation will
try to use as much network capacity as available. However, my tests show
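A quick way to check whether the second 10 Gbps port actually contributes is to time a large transfer directly and compare the result against a single link. Below is a generic MPI ping-pong sketch in C; the file name bw.c and the host names in the launch line are placeholders, not the poster's setup.

/* bw.c -- generic MPI ping-pong sketch; the file name and the host names in
 * the suggested launch line are placeholders, not the poster's setup:
 *   mpirun -np 2 -H nodeA,nodeB ./bw
 * Compare the printed rate with what one 10 Gbps link can deliver. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void round_trip(int rank, char *buf, int nbytes)
{
    if (rank == 0) {
        MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
}

int main(int argc, char **argv)
{
    const int nbytes = 64 * 1024 * 1024;   /* 64 MiB per message */
    const int iters  = 20;
    int rank, size;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    buf = malloc(nbytes);
    memset(buf, 0, nbytes);

    round_trip(rank, buf, nbytes);          /* warm-up, not timed */

    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        round_trip(rank, buf, nbytes);
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        /* Each round trip moves nbytes in each direction. */
        double gbits = 2.0 * nbytes * iters * 8.0 / 1e9;
        printf("%.2f Gbit/s effective over %d round trips\n",
               gbits / elapsed, iters);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

If the printed rate never climbs much above what one 10 Gbps port can carry, Open MPI is most likely driving only one of the interfaces for that host pair.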
agree with the lack of GPU use. If I try with '--mca btl smcuda', it makes
no difference. I have also tried telling the program to use host and
device memory (e.g. mpirun -np 2 ./osu_latency D H), but I get the same
result. I am probably missing something but am not sure where else to look.
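One thing worth ruling out is whether the Open MPI build itself is CUDA-aware. Open MPI ships an extension header, mpi-ext.h, that exposes a compile-time macro and a runtime query for this; the short probe below assumes an Open MPI installation where that extension is present (cuda_check.c is just a placeholder name).

/* cuda_check.c -- minimal probe for a CUDA-aware Open MPI build (the file
 * name is a placeholder). Uses Open MPI's mpi-ext.h extension header; other
 * MPI implementations may not provide it. */
#include <stdio.h>
#include <mpi.h>
#include <mpi-ext.h>   /* defines MPIX_CUDA_AWARE_SUPPORT on Open MPI */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

#if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
    printf("Compile time: library was built with CUDA-aware support.\n");
#else
    printf("Compile time: CUDA-aware support absent or undetermined.\n");
#endif

#if defined(MPIX_CUDA_AWARE_SUPPORT)
    /* Runtime answer reflects the library actually loaded at run time. */
    printf("Run time: CUDA-aware support is %savailable.\n",
           MPIX_Query_cuda_support() ? "" : "NOT ");
#endif

    MPI_Finalize();
    return 0;
}

If the runtime query reports no support, the problem is in the build rather than in the mca settings passed to mpirun.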
tRDMA path and gdrcopy is used.
On Fri, Sep 6, 2019 at 7:36 AM Arturo Fernandez via users
<users@lists.open-mpi.org> wrote:
Josh,
Thank you. Yes, I built UCX with CUDA and gdrcopy support. I also had to
disable numa (--disable-numa) as requested during the installation.
Please disregard my previous question, as the PMIX error was triggered by
something else (not sure why ompi_info wasn't outputting any PMIX components,
but now it does).
On Nov 23, 2021, at 6:01 PM, Arturo Fernandez via users wrote:
>Hello,
>This is kind of an odd issue as it had not h
Hello,
I upgraded one of the systems to v5.0.1 and have compiled everything
exactly as dozens of previous times with v4. I wasn't expecting any issue
(and the compilations didn't report anything out of the ordinary), but running
several apps has resulted in error messages such as:
Backtrace for this er
a better stack trace. Also, valgrind may
help pin down the problem by telling you which memory block is being free'd
here.
Thanks
Joseph
On 1/30/24 07:41, afernandez via users wrote:
Hello,
I upgraded one of the systems to v5.0.1 and have compiled everything
exactly as dozens of previous times with v4. I wasn't expecting any issue
(and the compilations didn't report anything ou
o, are you able to craft a reproducer that causes the crash?
How many nodes and MPI tasks are needed in order to evidence the crash?
Cheers,
Gilles
On Wed, Jan 31, 2024 at 10:09 PM afernandez via users <users@lists.open-mpi.org> wrote:
Hello Joseph,
Sorry for the delay but I didn't know if I was missing something yeste