Hi,

I am trying to build Quantum ESPRESSO 7.4 with GPU support, using NVHPC 23.9 
with CUDA 12.2 and CMake 3.29.6 to configure the code. The machine is running 
AlmaLinux.

Most of the configuration goes through without problems, but when CMake 
reaches LAXlib it fails:


-- Looking for cusolverDnZhegvdx
-- Looking for cusolverDnZhegvdx - not found
CMake Error at LAXlib/CMakeLists.txt:32 (message):
  The version of CUDAToolkit chosen by the PGI/NVHPC compiler internally
  doesn't contain cusolverDnZhegvdx.  cuSOLVER features used by LAXLib are
  only supported since CUDAToolkit 10.1 release.  Use a newer compiler or
  select a newer CUDAToolkit internal to the PGI/NVHPC compiler.

The CUDA toolkit I am using is clearly newer than 10.1, and I have checked 
that the cuSOLVER library shipped with NVHPC exists and exports 
cusolverDnZhegvdx. Nevertheless, the LAXlib check fails to find it. Note that 
LAXlib/CMakeLists.txt still uses check_function_exists, which the CMake 
documentation now discourages in favor of check_symbol_exists (although it 
should still work in this case).
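For reference, the library check I did was along these lines (the math_libs 
path assumes the default NVHPC 23.9 install location; adjust it to your 
setup):

# list the dynamic symbols exported by the cuSOLVER library bundled with NVHPC
nm -D /opt/nvidia/hpc_sdk/Linux_x86_64/23.9/math_libs/12.2/lib64/libcusolver.so | \
    grep -w cusolverDnZhegvdx
# a "T" entry in the output means the function is defined and exported

The CMake command is: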


cmake -DCMAKE_INSTALL_PREFIX=$INSTALL_DIR \
      -DCMAKE_PREFIX_PATH=$SCRATCH_DIR \
      -DCMAKE_BUILD_TYPE=RELWITHDEBINFO \
      -DCMAKE_C_COMPILER=$G_CC \
      -DCMAKE_CXX_COMPILER=$G_CXX \
      -DCMAKE_Fortran_COMPILER=$G_FC \
      -DCMAKE_Fortran_COMPILER_ID=NVHPC \
      -DCMAKE_Fortran_COMPILER_VERSION=23.9 \
      -DNVFORTRAN_CUDA_VERSION=12.2 \
      -DOpenACC_C_FLAGS="-acc=gpu" \
      -DMPI_C_COMPILER=$M_CC \
      -DMPI_CXX_COMPILER=$M_CXX \
      -DMPI_Fortran_COMPILER=$M_FC \
      -DMPIEXEC_EXECUTABLE=$M_EXE \
      -DQE_ENABLE_PLUGINS="gipaw" \
      -DQE_ENABLE_LIBXC=ON \
      -DLIBXC_ROOT=$SCRATCH_DIR/libxc-6.1.0 \
      -DQE_ENABLE_HDF5=OFF \
      -DQE_ENABLE_FOX=ON \
      -DQE_ENABLE_CUDA=ON \
      -DQE_FFTW_VENDOR=FFTW3 \
      -DFFTW3_LIBRARIES="<path>/x86_64-linux/lib/libcufftw.so:<path>/x86_64-linux/lib/libcufft.so" \
      -DFFTW3_INCLUDE_DIRS=<path>/include/cufftw.h \
      -DBLA_VENDOR=NVHPC \
      -H. -Bbuild

In addition:

G_CC  = nvcc
G_CXX = nvc++
G_FC  = nvfortran
M_CC  = mpicc
M_CXX = mpicxx
M_FC  = mpif90
M_EXE = mpiexec

The M_* wrappers come from an OpenMPI 5.0.3 installation. Does anyone have 
any insight into how to get around this issue?
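In case it helps, the check can be reproduced in isolation with a small 
standalone probe along these lines; this is my sketch of what I believe 
LAXlib/CMakeLists.txt does, not the actual file:

mkdir -p /tmp/cusolver-probe && cd /tmp/cusolver-probe
cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.20)
project(probe LANGUAGES C)
# locate the CUDA toolkit the same way a CMake-based build would
find_package(CUDAToolkit REQUIRED)
include(CheckFunctionExists)
# link the check program against cuSOLVER, as the LAXlib check presumably does
set(CMAKE_REQUIRED_LIBRARIES CUDA::cusolver)
check_function_exists(cusolverDnZhegvdx HAVE_ZHEGVDX)
message(STATUS "cusolverDnZhegvdx found: ${HAVE_ZHEGVDX}")
EOF
# use the same C compiler setting as in my full configure
cmake -DCMAKE_C_COMPILER=$G_CC -B build .

With CMake 3.29 the compile and link output of such checks is recorded in 
build/CMakeFiles/CMakeConfigureLog.yaml, which should show the exact link 
line that fails.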

Thanks in advance,

     Hubertus


Hubertus van Dam (he/him/his)
Universität Duisburg-Essen
Zentrum für Informations- und Mediendienste (ZIM)
Room SH 209
HPC Consultant
hubertus.van...@uni-due.de
www.linkedin.com/in/HuubVanDam
orcid.org/0000-0002-0876-3294
