PS: As I mentioned in my first message, disabling
shared libraries may also be a source of headaches.
Why do you need Open MPI built as purely static libraries?
What if you try to build Open MPI again using
the configuration defaults (enable shared, disable static)?
Make sure you clean up the old build first (make distclean, or just
start fresh from the OMPI tarball).
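
For example, a rough sketch of a rebuild with the defaults (the prefix
is the one from your earlier message; adjust the paths to your tree):

  cd /path/to/openmpi-1.6.5          # your build directory
  make distclean                     # or unpack a fresh tarball instead
  ./configure FC=gfortran \
    --prefix=/home/damiano/fortran/openmpi-1.6.5/installation \
    --enable-mpi-f90 --with-mpi-f90-size=medium
  make -j4 all
  make install

Note there is no --disable-shared / --enable-static here, so you get
the default shared-library build.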



On 09/30/2013 03:44 PM, Gus Correa wrote:
Hi Damiano


Perhaps OpenFOAM has something funny in its Makefiles?

Make sure you set the PATH and LD_LIBRARY_PATH right.

A suggestion. Try compiling something VERY SIMPLE with
mpif90.
Say:
my_test.f90:

program my_test
print *, 'This is my test.'
end program my_test

$ /path/to/mpif90 -o my_test my_test.f90

If you want to add MPI_Init, MPI_Finalize, etc. to the code, fine,
but a plain serial program is OK; this is just to check whether the
mpif90 wrapper works.
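
If you do want to exercise the MPI library itself, a minimal sketch
(the file name my_mpi_test.f90 is just an example) could be:

  program my_mpi_test
    use mpi                 ! requires the f90 bindings you enabled
    implicit none
    integer :: ierr, rank, nprocs
    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    print *, 'Hello from rank', rank, 'of', nprocs
    call MPI_Finalize(ierr)
  end program my_mpi_test

  $ /path/to/mpif90 -o my_mpi_test my_mpi_test.f90
  $ /path/to/mpirun -np 2 ./my_mpi_test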

If this works, OpenFOAM is to blame, not Open MPI.
My guess is still that something is messy with the environment
variables and how they are passed to OpenFOAM (which I don't
know and don't use, sorry).

My two cents,
Gus Correa


On 09/30/2013 01:48 PM, Damiano Natali wrote:
Hi Gus, first of all thank you very much for your help. I really
appreciate it!

You are right: I have OpenFOAM installed, so 'which mpif90' points to
another installation that probably wasn't built with f90 bindings.
However, when I compile my f90 code I use the absolute path.

Even when I am in the bin directory of my Open MPI installation,
./ompi_info says that the f90 bindings are OK, but ./mpif90 complains
about missing f90 support.

I suspect there must be another issue.

Thanks again,
Damiano


2013/9/30 Gus Correa <g...@ldeo.columbia.edu>

Hi Damiano

Did you setup your PATH and LD_LIBRARY_PATH
to point to your OMPI installation?
I.e. to:
/home/damiano/fortran/openmpi-1.6.5/installation/bin
and
/home/damiano/fortran/openmpi-1.6.5/installation/lib

Some OS distributions, commercial compilers, and other software
come with "extra" OMPI installations, which can sit ahead of
yours in your PATH.
"which mpif90" will tell you which one you are actually using.

For what it is worth, disabling shared libraries
at configure time may be challenging.

I hope this helps,
Gus Correa


On 09/30/2013 11:58 AM, Damiano Natali wrote:

Dear list,

I'm trying to install Open MPI on a 64-bit OpenSUSE Linux machine
with the following lines

./configure FC=gfortran
--prefix=/home/damiano/fortran/openmpi-1.6.5/installation/
--disable-shared --enable-static --with-mpi-f90-size=medium
--enable-mpi-f90 cflags=-m64 cxxflags=-m64 fflags=-m64 fcflags=-m64
make -j4 all
make install

Everything goes nicely and I end up with an installation folder
containing a bin subfolder. However, when I try to launch the mpif90
compiler wrapper, the error

--------------------------------------------------------------------------
Unfortunately, this installation of Open MPI was not compiled with
Fortran 90 support. As such, the mpif90 compiler is non-functional.
--------------------------------------------------------------------------


is still there. The output of ompi_info is

Configured architecture: x86_64-unknown-linux-gnu
Configure host: caillou.dicat.unige.it
Configured by: damiano
Configured on: Mon Sep 30 17:17:39 CEST 2013
Configure host: caillou.dicat.unige.it
Built by: damiano
Built on: Mon Sep 30 17:26:12 CEST 2013
Built host: caillou.dicat.unige.it

C bindings: yes
C++ bindings: yes
Fortran77 bindings: yes (all)
Fortran90 bindings: yes
Fortran90 bindings size: medium
C compiler: gcc
C compiler absolute: /usr/bin/gcc
C compiler family name: GNU
C compiler version: 4.7.1
C++ compiler: g++
C++ compiler absolute: /usr/bin/g++
Fortran77 compiler: gfortran
Fortran77 compiler abs: /usr/bin/gfortran
Fortran90 compiler: gfortran
Fortran90 compiler abs: /usr/bin/gfortran
C profiling: yes
C++ profiling: yes
Fortran77 profiling: yes
Fortran90 profiling: yes
C++ exceptions: no
Thread support: posix (MPI_THREAD_MULTIPLE: no, progress: no)
Sparse Groups: no
Internal debug support: no
MPI interface warnings: no
MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
libltdl support: yes
Heterogeneous support: no
mpirun default --prefix: no
MPI I/O support: yes
MPI_WTIME support: gettimeofday
Symbol vis. support: yes
Host topology support: yes
MPI extensions: affinity example
FT Checkpoint support: no (checkpoint thread: no)
VampirTrace support: yes
MPI_MAX_PROCESSOR_NAME: 256
MPI_MAX_ERROR_STRING: 256
MPI_MAX_OBJECT_NAME: 64
MPI_MAX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
MPI_MAX_PORT_NAME: 1024
MPI_MAX_DATAREP_STRING: 128
MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.6.4)
MCA memory: linux (MCA v2.0, API v2.0, Component v1.6.4)
MCA paffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.6.4)
MCA carto: file (MCA v2.0, API v2.0, Component v1.6.4)
MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.6.4)
MCA shmem: posix (MCA v2.0, API v2.0, Component v1.6.4)
MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.6.4)
MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.6.4)
MCA maffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
MCA timer: linux (MCA v2.0, API v2.0, Component v1.6.4)
MCA installdirs: env (MCA v2.0, API v2.0, Component v1.6.4)
MCA installdirs: config (MCA v2.0, API v2.0, Component v1.6.4)
MCA sysinfo: linux (MCA v2.0, API v2.0, Component v1.6.4)
MCA hwloc: hwloc132 (MCA v2.0, API v2.0, Component v1.6.4)
MCA dpm: orte (MCA v2.0, API v2.0, Component v1.6.4)
MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.6.4)
MCA allocator: basic (MCA v2.0, API v2.0, Component v1.6.4)
MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.6.4)
MCA coll: basic (MCA v2.0, API v2.0, Component v1.6.4)
MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.6.4)
MCA coll: inter (MCA v2.0, API v2.0, Component v1.6.4)
MCA coll: self (MCA v2.0, API v2.0, Component v1.6.4)
MCA coll: sm (MCA v2.0, API v2.0, Component v1.6.4)
MCA coll: sync (MCA v2.0, API v2.0, Component v1.6.4)
MCA coll: tuned (MCA v2.0, API v2.0, Component v1.6.4)
MCA io: romio (MCA v2.0, API v2.0, Component v1.6.4)
MCA mpool: fake (MCA v2.0, API v2.0, Component v1.6.4)
MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.6.4)
MCA mpool: sm (MCA v2.0, API v2.0, Component v1.6.4)
MCA pml: bfo (MCA v2.0, API v2.0, Component v1.6.4)
MCA pml: csum (MCA v2.0, API v2.0, Component v1.6.4)
MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.6.4)
MCA pml: v (MCA v2.0, API v2.0, Component v1.6.4)
MCA bml: r2 (MCA v2.0, API v2.0, Component v1.6.4)
MCA rcache: vma (MCA v2.0, API v2.0, Component v1.6.4)
MCA btl: self (MCA v2.0, API v2.0, Component v1.6.4)
MCA btl: sm (MCA v2.0, API v2.0, Component v1.6.4)
MCA btl: tcp (MCA v2.0, API v2.0, Component v1.6.4)
MCA topo: unity (MCA v2.0, API v2.0, Component v1.6.4)
MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.6.4)
MCA osc: rdma (MCA v2.0, API v2.0, Component v1.6.4)
MCA iof: hnp (MCA v2.0, API v2.0, Component v1.6.4)
MCA iof: orted (MCA v2.0, API v2.0, Component v1.6.4)
MCA iof: tool (MCA v2.0, API v2.0, Component v1.6.4)
MCA oob: tcp (MCA v2.0, API v2.0, Component v1.6.4)
MCA odls: default (MCA v2.0, API v2.0, Component v1.6.4)
MCA ras: cm (MCA v2.0, API v2.0, Component v1.6.4)
MCA ras: loadleveler (MCA v2.0, API v2.0, Component v1.6.4)
MCA ras: slurm (MCA v2.0, API v2.0, Component v1.6.4)
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.3)
MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.6.4)
MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.6.4)
MCA rmaps: resilient (MCA v2.0, API v2.0, Component v1.6.4)
MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.6.4)
MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.6.4)
MCA rmaps: topo (MCA v2.0, API v2.0, Component v1.6.4)
MCA rml: oob (MCA v2.0, API v2.0, Component v1.6.4)
MCA routed: binomial (MCA v2.0, API v2.0, Component v1.6.4)
MCA routed: cm (MCA v2.0, API v2.0, Component v1.6.4)
MCA routed: direct (MCA v2.0, API v2.0, Component v1.6.4)
MCA routed: linear (MCA v2.0, API v2.0, Component v1.6.4)
MCA routed: radix (MCA v2.0, API v2.0, Component v1.6.4)
MCA routed: slave (MCA v2.0, API v2.0, Component v1.6.4)
MCA plm: rsh (MCA v2.0, API v2.0, Component v1.6.4)
MCA plm: slurm (MCA v2.0, API v2.0, Component v1.6.4)
MCA filem: rsh (MCA v2.0, API v2.0, Component v1.6.4)
MCA errmgr: default (MCA v2.0, API v2.0, Component v1.6.4)
MCA ess: env (MCA v2.0, API v2.0, Component v1.6.4)
MCA ess: hnp (MCA v2.0, API v2.0, Component v1.6.4)
MCA ess: singleton (MCA v2.0, API v2.0, Component v1.6.4)
MCA ess: slave (MCA v2.0, API v2.0, Component v1.6.4)
MCA ess: slurm (MCA v2.0, API v2.0, Component v1.6.4)
MCA ess: slurmd (MCA v2.0, API v2.0, Component v1.6.4)
MCA ess: tool (MCA v2.0, API v2.0, Component v1.6.4)
MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.6.4)
MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.6.4)
MCA grpcomm: hier (MCA v2.0, API v2.0, Component v1.6.4)
MCA notifier: command (MCA v2.0, API v1.0, Component v1.6.4)
MCA notifier: syslog (MCA v2.0, API v1.0, Component v1.6.4)

As far as I can tell, the f90 bindings seem to be configured
properly. What can be wrong?

Thank you for your attention,
Damiano

--
Damiano Natali
mail damiano.nat...@gmail.com
skype damiano.natali










--
Damiano Natali
mail damiano.nat...@gmail.com
skype damiano.natali





_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
