So, are you able to craft a reproducer that causes the crash?
How many nodes and MPI tasks are needed in order to evidence the crash?
Cheers,
Gilles
On Wed, Jan 31, 2024 at 10:09 PM afernandez via users <users@lists.open-mpi.org> wrote:
Hello Joseph,
Sorry for the delay but I didn't know if I was missing something yesterday
a better stack trace. Also, valgrind may
help pin down the problem by telling you which memory block is being free'd
here.
Thanks
Joseph
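For reference, a common way to follow Joseph's suggestion is to launch each rank under valgrind via mpirun. This is only a sketch: "./my_app" is a placeholder for the actual application binary, and the rank count is arbitrary.

```shell
# Run 2 MPI ranks, each under valgrind, to find which block is being freed
# incorrectly (./my_app is a placeholder for your binary)
mpirun -np 2 valgrind --leak-check=full --track-origins=yes ./my_app
```

The `--track-origins=yes` flag slows things down but reports where the offending memory was originally allocated, which usually makes an invalid-free traceable.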
On 1/30/24 07:41, afernandez via users wrote:
Hello,
I upgraded one of the systems to v5.0.1 and have compiled everything
exactly as in dozens of previous builds with v4. I wasn't expecting any issue
(and the compilations didn't report anything out of the ordinary), but running
several apps has resulted in error messages such as:
Backtrace for this error:
Please disregard my previous question, as the PMIX error was triggered by
something else (not sure why ompi_info wasn't outputting any PMIX components
before, but now it does).
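For anyone hitting something similar, a quick way to see which PMIx-related components and parameters a given Open MPI build reports is to filter the ompi_info output (the command is standard; the exact output depends on the build):

```shell
# List PMIx-related components/parameters known to this Open MPI installation
ompi_info --parsable --all | grep -i pmix
```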
On Nov 23, 2021, at 6:01 PM, Arturo Fernandez via users wrote:
>Hello,
>This is kind of an odd issue as it had not h
AFernandez
Joshua Ladd wrote:
Did you build UCX with CUDA support (--with-cuda)?
Josh
On Thu, Sep 5, 2019 at 8:45 PM AFernandez via users <users@lists.open-mpi.org> wrote:
Hello OpenMPI Team,
I'm trying to use CUDA-aware OpenMPI but the system simply ignores the GPU
and the code runs on the CPUs. I've tried different software but will focus
on the OSU benchmarks (collective and pt2pt communications). Let me provide
some data about the configuration of the system:
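Following up on Joshua's question: before benchmarking, it is worth confirming that the Open MPI build itself was configured with CUDA support. Open MPI exposes this as a build-time flag queryable through ompi_info:

```shell
# Prints "true" if this Open MPI build was configured with --with-cuda
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
```

If this reports false, the GPU path is silently unavailable and traffic falls back to host memory, which matches the "code runs on the CPUs" symptom.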
Hello,
I'm performing some tests with OMPIv4. The initial configuration used one
Ethernet port (10 Gbps), but I have added a second one with the same
characteristics. The documentation mentions that the OMPI installation will
try to use as much network capacity as is available. However, my tests show
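In case it helps with multi-rail TCP testing: the interfaces Open MPI's TCP BTL stripes across can be pinned down explicitly with the btl_tcp_if_include MCA parameter. This is a sketch; the interface names and the "./osu_bw" benchmark binary are examples for your setup:

```shell
# Restrict the TCP BTL to the two 10 Gbps interfaces so both are used
# (eth0/eth1 and ./osu_bw are placeholders for your environment)
mpirun --mca btl_tcp_if_include eth0,eth1 -np 2 ./osu_bw
```

Comparing bandwidth with one interface listed versus two is a direct way to verify whether the second link is actually being exercised.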