Hi Gustavo,

Here is the output of '/opt/openmpi/intel/bin/mpif90 -showme':
barells@ip-10-17-153-123:~> /opt/openmpi/intel/bin/mpif90 -showme
gfortran -I/usr/lib64/mpi/gcc/openmpi/include -pthread
-I/usr/lib64/mpi/gcc/openmpi/lib64 -L/usr/lib64/mpi/gcc/openmpi/lib64
-lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export-dynamic
-lnsl -lutil -lm -ldl

This points to gfortran.
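
In case it is useful, here is how I am double-checking which back-end compiler each wrapper calls (just a quick sketch; --showme:command should print only the underlying compiler, and the ompi_info grep is the same check you suggested):

# which wrapper is first in my PATH
which mpif90
# what the ifort-prefixed wrapper actually invokes
/opt/openmpi/intel/bin/mpif90 --showme:command
# compilers recorded at configure time for that install
/opt/openmpi/intel/bin/ompi_info | grep -i 'compiler'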

I do see what you are saying about the 1.4.2 and 1.4.4 components.
I'm not sure why that is, but there seems to be some conflict with the
OpenMPI that was already installed before I recently built 1.4.4 and
tried to install it with ifort.
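
I will try a fresh rebuild along the lines you describe. Roughly this is what I have in mind (a sketch only: the ifortvars.sh path is my guess based on where ifort lives here, and the new --prefix is just an example of a different install location):

# set up the Intel compiler environment first, so configure finds ifort
source /opt/intel/fce/9.1.040/bin/ifortvars.sh   # path assumed from my ifort location

# unpack a freshly downloaded tarball in a brand new directory
tar xzf openmpi-1.4.4.tar.gz
cd openmpi-1.4.4

# if re-running after a failed build or a compiler change, clean leftovers first:
# make distclean

./configure --prefix=/opt/openmpi/intel-1.4.4 CC=gcc CXX=g++ F77=ifort FC=ifort
make && make install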


On Wed, Dec 14, 2011 at 12:43 PM, Gustavo Correa <g...@ldeo.columbia.edu> wrote:

> How about the output of this?
>
> /opt/openmpi/intel/bin/mpif90 -showme
>
> Anyway, something seems to be wrong with your OpenMPI installation.
> Just read the output of your ompi_info in your email below.
> You will see that the OpenMPI version is 1.4.4.
> However, most components are version 1.4.2.
> Do you agree?
>
> I would download the OpenMPI 1.4.4 tarball again and start fresh.
> Untar the tarball in a brand new directory, don't overwrite old stuff.
> Also, every time your OpenMPI build fails, or if you want to change compilers
> [say from gfortran to ifort], do a 'make distclean' to clean up any leftovers
> of previous builds, and change the destination directory in --prefix= , to
> install in a different location.
>
> I hope this helps,
> Gus Correa
>
> On Dec 14, 2011, at 12:21 PM, Micah Sklut wrote:
>
> > Hi Gustav,
> >
> > I did read Price's email:
> >
> > When I do "which mpif90", i get:
> > /opt/openmpi/intel/bin/mpif90
> > which is the desired directory/binary
> >
> > As I mentioned, the config log file indicated it was using ifort, and had no mention of gfortran.
> > Below is the output from ompi_info. It references the correct ifort compiler, yet mpif90 still invokes gfortran.
> > -->
> > barells@ip-10-17-153-123:~> ompi_info
> >                  Package: Open MPI barells@ip-10-17-148-204 Distribution
> >                 Open MPI: 1.4.4
> >    Open MPI SVN revision: r25188
> >    Open MPI release date: Sep 27, 2011
> >                 Open RTE: 1.4.4
> >    Open RTE SVN revision: r25188
> >    Open RTE release date: Sep 27, 2011
> >                     OPAL: 1.4.4
> >        OPAL SVN revision: r25188
> >        OPAL release date: Sep 27, 2011
> >             Ident string: 1.4.4
> >                   Prefix: /usr/lib64/mpi/gcc/openmpi
> >  Configured architecture: x86_64-unknown-linux-gnu
> >           Configure host: ip-10-17-148-204
> >            Configured by: barells
> >            Configured on: Wed Dec 14 14:22:43 UTC 2011
> >           Configure host: ip-10-17-148-204
> >                 Built by: barells
> >                 Built on: Wed Dec 14 14:27:56 UTC 2011
> >               Built host: ip-10-17-148-204
> >               C bindings: yes
> >             C++ bindings: yes
> >       Fortran77 bindings: yes (all)
> >       Fortran90 bindings: yes
> >  Fortran90 bindings size: small
> >               C compiler: gcc
> >      C compiler absolute: /usr/bin/gcc
> >             C++ compiler: g++
> >    C++ compiler absolute: /usr/bin/g++
> >       Fortran77 compiler: ifort
> >   Fortran77 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
> >       Fortran90 compiler: ifort
> >   Fortran90 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
> >              C profiling: yes
> >            C++ profiling: yes
> >      Fortran77 profiling: yes
> >      Fortran90 profiling: yes
> >           C++ exceptions: no
> >           Thread support: posix (mpi: no, progress: no)
> >            Sparse Groups: no
> >   Internal debug support: no
> >      MPI parameter check: runtime
> > Memory profiling support: no
> > Memory debugging support: no
> >          libltdl support: yes
> >    Heterogeneous support: no
> >  mpirun default --prefix: no
> >          MPI I/O support: yes
> >        MPI_WTIME support: gettimeofday
> > Symbol visibility support: yes
> >    FT Checkpoint support: no  (checkpoint thread: no)
> >            MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4.2)
> >               MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4.2)
> >            MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA carto: file (MCA v2.0, API v2.0, Component v1.4.2)
> >            MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.2)
> >          MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.2)
> >          MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.2)
> >               MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.2)
> >            MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.2)
> >            MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4.2)
> >                 MCA coll: basic (MCA v2.0, API v2.0, Component v1.4.2)
> >                 MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4.2)
> >                 MCA coll: inter (MCA v2.0, API v2.0, Component v1.4.2)
> >                 MCA coll: self (MCA v2.0, API v2.0, Component v1.4.2)
> >                 MCA coll: sm (MCA v2.0, API v2.0, Component v1.4.2)
> >                 MCA coll: sync (MCA v2.0, API v2.0, Component v1.4.2)
> >                 MCA coll: tuned (MCA v2.0, API v2.0, Component v1.4.2)
> >                   MCA io: romio (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA mpool: fake (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA mpool: sm (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA pml: cm (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA pml: csum (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA pml: v (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA bml: r2 (MCA v2.0, API v2.0, Component v1.4.2)
> >               MCA rcache: vma (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA btl: ofud (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA btl: self (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA btl: sm (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA btl: udapl (MCA v2.0, API v2.0, Component v1.4.2)
> >                 MCA topo: unity (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA osc: rdma (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA iof: hnp (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA iof: orted (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA iof: tool (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA oob: tcp (MCA v2.0, API v2.0, Component v1.4.2)
> >                 MCA odls: default (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA ras: slurm (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA rml: oob (MCA v2.0, API v2.0, Component v1.4.2)
> >               MCA routed: binomial (MCA v2.0, API v2.0, Component v1.4.2)
> >               MCA routed: direct (MCA v2.0, API v2.0, Component v1.4.2)
> >               MCA routed: linear (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA plm: rsh (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA plm: slurm (MCA v2.0, API v2.0, Component v1.4.2)
> >                MCA filem: rsh (MCA v2.0, API v2.0, Component v1.4.2)
> >               MCA errmgr: default (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA ess: env (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA ess: hnp (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA ess: singleton (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA ess: slurm (MCA v2.0, API v2.0, Component v1.4.2)
> >                  MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.2)
> >              MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4.2)
> >              MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.2)
> >
> >
> >
> >
> > On Wed, Dec 14, 2011 at 12:11 PM, Gustavo Correa <g...@ldeo.columbia.edu> wrote:
> > Hi Micah
> >
> > Did you read Tim Prince's email to you?  Check it out.
> >
> > Best thing is to set your environment variables [PATH, LD_LIBRARY_PATH, intel setup]
> > in your initialization file, .profile/.bashrc or .[t]cshrc.
> >
> > What is the output of 'ompi_info'? [From your ifort-built OpenMPI.]
> > Does it show ifort or gfortran?
> >
> > I hope this helps,
> > Gus Correa
> >
> > On Dec 14, 2011, at 11:21 AM, Micah Sklut wrote:
> >
> > > Thanks for your thoughts,
> > >
> > > It would certainly appear that it is a PATH issue, but I still haven't figured it out.
> > >
> > > When I type the ifort command, ifort does run.
> > > The intel path is in my PATH and is the first directory listed.
> > >
> > > Looking at the configure log, there is nothing indicating the use of, or even mentioning, "gfortran".
> > >
> > > gfortran is in the /usr/bin directory, which is in the PATH as well.
> > >
> > > Any other suggestions of things to look for?
> > >
> > > Thank you,
> > >
> > > On Wed, Dec 14, 2011 at 11:05 AM, Gustavo Correa <g...@ldeo.columbia.edu> wrote:
> > > Hi Micah
> > >
> > > Is  ifort in your PATH?
> > > If not, the OpenMPI configure script will use any fortran compiler it finds first, which may be gfortran.
> > > You need to run the Intel compiler startup script before you run the OpenMPI configure.
> > > The easy thing to do is to source the Intel script inside your .profile/.bashrc or .[t]cshrc file.
> > > I hope this helps,
> > >
> > > Gus Correa
> > >
> > > On Dec 14, 2011, at 9:49 AM, Micah Sklut wrote:
> > >
> > > > Hi All,
> > > >
> > > > I have installed openmpi for gfortran, but am now attempting to install openmpi with ifort.
> > > >
> > > > I have run the following configuration:
> > > > ./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort
> > > >
> > > > The install works successfully, but when I run /opt/openmpi/intel/bin/mpif90, it runs as gfortran.
> > > > Oddly, when I am the root user, the same mpif90 runs as ifort.
> > > >
> > > > Can someone please alleviate my confusion as to why mpif90 is not running as ifort?
> > > >
> > > > Thank you for your suggestions,
> > > >
> > > > --
> > > > Micah
> > > >
> > > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Micah Sklut
> > >
> > >
> >
> >
> >
> >
> >
> > --
> > Micah Sklut
> >
> >
>
>
>



-- 
Micah Sklut
