Ok.

I went back to look at your logs.

1. Errors in config.log are normal and expected; you can ignore them.  The 
configure script is probing your system, so finding things that *don't* work is 
part of that probing.

2. The error you're seeing is because the Verbs network API has moved on since 
Open MPI v1.10.  I.e., the Verbs library in RHEL 7.6 is a significantly later 
version than existed in the Open MPI v1.10 timeframe, and it's just not 
compatible.  Hence, the "openib" BTL plugin is failing to compile.  You can 
avoid building the openib BTL plugin by passing 
--enable-mca-no-build=btl-openib on the configure command line, but you'll 
lose network performance if you have an InfiniBand, iWARP, or RoCE-based 
network.  If you're already just running over TCP, you won't notice any 
difference.
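
For example, here is a minimal sketch adapted from your own configure line 
(the prefix below is just a placeholder; keep whatever prefix and compiler 
settings you normally use):

  ./configure --prefix=$HOME/LIBRARIES/openmpi \
      --enable-mca-no-build=btl-openib \
      --enable-static --enable-mpi-thread-multiple --without-usnic \
      --enable-mpi-cxx \
      CC=gcc CXX=g++ FC=gfortran FCFLAGS=-m64 F77=gfortran FFLAGS=-m64
  make all
  make install

The --enable-mca-no-build option just tells configure to skip building the 
listed plugin(s); everything else builds as before.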

-----

All that being said, I'd be very surprised if WRF only supports Open MPI v1.10; 
the v1.10 series dates back to 2015.  That's ancient in terms of computer 
software -- it would be amazing if WRF didn't support later versions of Open 
MPI.



On Oct 30, 2019, at 8:47 PM, Qianjin Zheng <qianjin.zh...@hotmail.com> wrote:

Hi Jeff,

Thank you for your quick reply.
I think my WRF-Chem model needs the Open MPI v1.10.1 library. Recently, my 
server was upgraded, including its Open MPI installation. After that, I can no 
longer run my model under the newer version of Open MPI. Thus, I tried to 
install Open MPI v1.10.1 by myself.

Regards,
Qianjin
________________________________
From: Jeff Squyres (jsquyres) <jsquy...@cisco.com>
Sent: Wednesday, October 30, 2019 7:40 PM
To: Open MPI User's List <users@lists.open-mpi.org>
Cc: Qianjin Zheng <qianjin.zh...@hotmail.com>
Subject: Re: [OMPI users] Configure Error for installation of openmpi-1.10.1

v1.10.x is pretty ancient.  Is there any chance you can update to 4.0.2?  
That's the latest version (and it has significantly better MPI_THREAD_MULTIPLE 
support).
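
If you do move to 4.0.2, the build procedure is essentially the same one you 
already used; here is a minimal sketch (the download URL follows the usual 
Open MPI release layout, and the prefix is just a placeholder; adjust both for 
your setup):

  wget https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.2.tar.gz
  tar xzf openmpi-4.0.2.tar.gz
  cd openmpi-4.0.2
  ./configure --prefix=$HOME/LIBRARIES/openmpi-4.0.2 CC=gcc CXX=g++ FC=gfortran
  make -j 4 all
  make install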


On Oct 30, 2019, at 8:36 PM, Qianjin Zheng via users 
<users@lists.open-mpi.org> wrote:

I tried to install openmpi-1.10.1 but I am not able to configure and install it.

Here is how I tried to install Open MPI:
./configure --prefix=$PATH/LIBRARIES/openmpi --enable-static 
--enable-mpi-thread-multiple --without-usnic --enable-mpi-cxx CC=gcc CXX=g++ 
FC=gfortran FCFLAGS=-m64 F77=gfortran FFLAGS=-m64
make all
make install

My Operating system/version:
3.10.0-957.21.2.el7.x86_64

Computer hardware:
NAME="Red Hat Enterprise Linux Server" VERSION="7.6 (Maipo)" ID="rhel" 
ID_LIKE="fedora" VARIANT="Server" VARIANT_ID="server" VERSION_ID="7.6" 
PRETTY_NAME="Red Hat Enterprise Linux" ANSI_COLOR="0;31" 
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.6:GA:server" 
HOME_URL="https://www.redhat.com/"; 
BUG_REPORT_URL="https://bugzilla.redhat.com/"; REDHAT_BUGZILLA_PRODUCT="Red Hat 
Enterprise Linux 7" REDHAT_BUGZILLA_PRODUCT_VERSION=7.6 
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" 
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"

When I configure, the first error I got is as below:
gcc: error: unrecognized command line option '-V'
gcc: fatal error: no input files
compilation terminated.
configure:5981: $? = 1
configure:5970: gcc -qversion >&5
gcc: error: unrecognized command line option '-qversion'; did you mean '--version'?
gcc: fatal error: no input files
compilation terminated.
Other error messages are as follows:
conftest.c:10:10: fatal error: ac_nonexistent.h: No such file or directory
 #include <ac_nonexistent.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated.
configure:6497: $? = 1
configure: failed program was:
| /* confdefs.h */


When I looked at the config.log file, it was not clear to me. I have googled 
it but did not find useful information. Would anyone please kindly give me 
some suggestions on this issue? I attach some files used for the compilation.
The configure log: https://www.dropbox.com/s/5zgrh80yuut1y0e/config.log?dl=0
The make log: https://www.dropbox.com/s/2qoznk7s8cp8txz/make_log.txt?dl=0


Thanks again and best regards,
Qianjin


--
Jeff Squyres
jsquy...@cisco.com


--
Jeff Squyres
jsquy...@cisco.com
