Sangam,

The issue should have been fixed in Open MPI 5.0.6.

Anyway, are you certain Open MPI is not GPU-aware, and that it is not cmake/GROMACS that failed to detect it?
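You can check what the library itself reports with "ompi_info --parsable --all | grep mpi_built_with_cuda_support:value", or from a short C program using Open MPI's CUDA extension. A minimal sketch, assuming mpicc here is your Open MPI 5.0.7 wrapper compiler:

    /* check_cuda_aware.c - ask Open MPI whether it is CUDA-aware.
     * Build: mpicc check_cuda_aware.c -o check_cuda_aware
     * Run:   mpirun -n 1 ./check_cuda_aware
     */
    #include <stdio.h>
    #include <mpi.h>
    #include <mpi-ext.h>   /* Open MPI extensions; defines MPIX_CUDA_AWARE_SUPPORT */

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);
    #if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
        /* Support was compiled in; ask the library at run time as well. */
        printf("run-time CUDA-aware support: %d\n", MPIX_Query_cuda_support());
    #else
        printf("this Open MPI build has no CUDA-aware support compiled in\n");
    #endif
        MPI_Finalize();
        return 0;
    }

If that prints 1, the library is CUDA-aware and the problem is on the GROMACS/cmake detection side.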
What if you "configure" GROMACS with cmake -DGMX_FORCE_GPU_AWARE_MPI=ON ...

If the problem persists, please open an issue at
https://github.com/open-mpi/ompi/issues and do provide the required
information.

Cheers,

Gilles

On Sun, Mar 30, 2025 at 12:08 AM Sangam B <forum....@gmail.com> wrote:
> Hi,
>
> OpenMPI 5.0.5 and 5.0.6 fail with the following error during the "make"
> stage of the build procedure:
>
> In file included from ../../../../../../ompi/mca/mtl/ofi/mtl_ofi.h:51,
>                  from ../../../../../../ompi/mca/mtl/ofi/mtl_ofi.c:13:
> ../../../../../../ompi/mca/mtl/ofi/mtl_ofi.h: In function ‘ompi_mtl_ofi_context_progress’:
> ../../../../../../ompi/mca/mtl/ofi/mtl_ofi_request.h:19:5: warning: implicit declaration of function ‘container_of’ [-Wimplicit-function-declaration]
>    19 | container_of((_ptr_ctx), struct ompi_mtl_ofi_request_t, ctx)
>       | ^~~~~~~~~~~~
> ../../../../../../ompi/mca/mtl/ofi/mtl_ofi.h:152:27: note: in expansion of macro ‘TO_OFI_REQ’
>   152 | ofi_req = TO_OFI_REQ(ompi_mtl_ofi_wc[i].op_context);
>       |           ^~~~~~~~~~
> ../../../../../../ompi/mca/mtl/ofi/mtl_ofi_request.h:19:30: error: expected expression before ‘struct’
>    19 | container_of((_ptr_ctx), struct ompi_mtl_ofi_request_t, ctx)
>       |                          ^~~~~~
> ../../../../../../ompi/mca/mtl/ofi/mtl_ofi.h:152:27: note: in expansion of macro ‘TO_OFI_REQ’
>   152 | ofi_req = TO_OFI_REQ(ompi_mtl_ofi_wc[i].op_context);
>       |           ^~~~~~~~~~
> ../../../../../../ompi/mca/mtl/ofi/mtl_ofi_request.h:19:30: error: expected expression before ‘struct’
>    19 | container_of((_ptr_ctx), struct ompi_mtl_ofi_request_t, ctx)
>       |                          ^~~~~~
> ../../../../../../ompi/mca/mtl/ofi/mtl_ofi.h:200:19: note: in expansion of macro ‘TO_OFI_REQ’
>   200 | ofi_req = TO_OFI_REQ(error.op_context);
>       |           ^~~~~~~~~~
> make[2]: *** [Makefile:1603: mtl_ofi.lo] Error 1
>
> OpenMPI 5.0.7 gets past this error, but it is not able to build CUDA
> [GPUDirect] & OFI support:
>
> GROMACS complains that it is not able to detect CUDA-aware MPI:
>
> GPU-aware MPI was not detected, will not use direct GPU communication.
> Check the GROMACS install guide for recommendations for GPU-aware support.
> If you are certain about GPU-aware support in your MPI library, you can
> force its use by setting the GMX_FORCE_GPU_AWARE_MPI environment variable.
>
> OpenMPI is configured like this:
>
> '--disable-opencl' '--with-slurm' '--without-lsf' '--without-opencl'
> '--with-cuda=/opt/nvidia/hpc_sdk/Linux_x86_64/25.1/cuda/12.6'
> '--without-rocm'
> '--with-knem=/opt/knem-1.1.4.90mlnx3'
> '--with-xpmem=/sw/openmpi/5.0.7/g133cu126_ubu2404/xpmem/2.7.3/'
> '--with-xpmem-libdir=/sw/openmpi/5.0.7/g133cu126_ubu2404/xpmem/2.7.3//lib'
> '--with-ofi=/sw/openmpi/5.0.7/g133cu126_ubu2404/ofi/2.0.0/c126g25xu118'
> '--with-ofi-libdir=/sw/openmpi/5.0.7/g133cu126_ubu2404/ofi/2.0.0/c126g25xu118/lib'
> '--enable-mca-no-build=btl-usnic'
>
> Can somebody help me build a successful CUDA-aware Open MPI here?
>
> Thanks
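P.S. For context on the 5.0.5/5.0.6 compile failure quoted above: container_of is the classic C idiom for recovering a pointer to an enclosing struct from a pointer to one of its members, and the "implicit declaration" warning means the header that defines it was not pulled in. Purely as an illustration of the pattern (Open MPI's own definition lives in its internal headers and may differ), the usual form is:

    /* Classic container_of idiom: step back from a member pointer to the
     * struct that contains it. Illustrative sketch only, not Open MPI's
     * exact definition. */
    #include <stddef.h>   /* offsetof */

    #define container_of(ptr, type, member) \
        ((type *) ((char *) (ptr) - offsetof(type, member)))

Once the macro is undeclared, the compiler parses container_of(...) as a plain function call, so the "struct ompi_mtl_ofi_request_t" argument is exactly where the "expected expression before ‘struct’" error comes from at each TO_OFI_REQ expansion site.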