On 05/16/2014 07:09 PM, Ben Lash wrote:
The $PATH and $LD_LIBRARY_PATH seem to be correct, as does module list.
I will try to hear back from our cluster support people; otherwise I
will try using the latest version. This is old government software;
significant parts are written in Fortran 77, for example, and upgrading
to a new version typically breaks it. It was looking for mpich, hence
the link, but a long time ago I gave it Open MPI instead, as
recommended, and that worked, so I suppose it's less persnickety about
the MPI version than some other things. The most current version
installed is openmpi/1.6.5-intel (default). Thanks again.


We have code here that has been recompiled (some with modifications, some not) with Open MPI since 1.2.8 with no problems. MPI is a standard; both Open MPI and MPICH follow it (except perhaps in very dusty corners or the latest and greatest MPI-3 features).
If your code compiled with 1.4.4, it should compile (if anything, better) with 1.6.5.
Fortran 77 shouldn't be an issue.
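
A cheap sanity check before rebuilding the whole model is to compile
and run a throwaway MPI "hello" with the new wrappers. The file name
here is made up; this is just a sketch:

cat > hello.f <<'EOF'
      program hello
      include 'mpif.h'
      integer ierr, rank
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      print *, 'hello from rank', rank
      call MPI_FINALIZE(ierr)
      end
EOF
mpif90 hello.f -o hello
mpirun -np 2 ./hello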

I agree, the PATH and LD_LIBRARY_PATH point to the "retired" directory.
Many things may have happened, though: the "retired" directory may not
be complete, may not have been installed on all cluster nodes, or (if
not really re-installed) may point to the original (pre-retirement)
directories that no longer exist.
Rather than sorting this out, I think you have a better shot using
Open MPI 1.6.5.
Just load the module and try to recompile the code.
(Probably just
module swap openmpi/1.4.4-intel openmpi/1.6.5-intel)
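
A quick way to confirm the swap took effect, assuming the module loads
cleanly:

which mpif90      # should point into the 1.6.5-intel tree
mpif90 --showme   # prints the full compile/link line the wrapper uses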

You may need to tweak the Makefile if it hardwires
the MPI wrapper/binary location, or the library and include paths.
Some do, some don't.
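
For example, if the Makefile hardwires something like
FC = /opt/apps/openmpi/1.4.4-intel/bin/mpif90
(the variable name is just a guess; CMAQ's build scripts may differ),
you can usually override it on the command line instead of editing:

make FC=mpif90 F77=mpif77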

Gus Correa


[bl10@login2 ~]$ echo $PATH
/home/bl10/rlib/deps/bin:/opt/apps/netcdf/4.1.3/bin:/opt/apps/netcdf/4.1.3/deps/hdf5/1.8.7/bin:/opt/apps/openmpi/retired/1.4.4-intel/bin:/opt/apps/pgi/11.7/linux86-64/11.7/bin:/opt/apps/python3/3.2.1/bin:/opt/apps/intel/2013.1.039/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/ibutils/bin:/opt/apps/moab/current/bin:/projects/dsc1/apps/cmaq/deps/ioapi-kiran/3.1/bin:/home/bl10/bin

[bl10@login2 ~]$ echo $LD_LIBRARY_PATH
/home/bl10/rlib/deps/lib:/projects/dsc1/apps/cmaq/deps/netcdf/4.1.3-intel/lib:/opt/apps/netcdf/4.1.3/lib:/opt/apps/netcdf/4.1.3/deps/hdf5/1.8.7/lib:/opt/apps/netcdf/4.1.3/deps/szip/2.1/lib:/opt/apps/openmpi/retired/1.4.4-intel/lib:/opt/apps/intel/2011.0.013/mkl/lib/intel64:/opt/apps/intel/2013.1.039/mkl/lib/intel64:/opt/apps/intel/2013.1.039/lib/intel64

[bl10@login2 ~]$ module list
Currently Loaded Modulefiles:
   1) intel/2013.1.039      2) python3/3.2.1         3) pgi/11.7
   4) openmpi/1.4.4-intel   5) netcdf/4.1.3
[bl10@login2 ~]$




On Fri, May 16, 2014 at 5:46 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:

    On 05/16/2014 06:26 PM, Ben Lash wrote:

        I'm not sure I have the ability to implement a different module
        management system; I am using a university cluster. We have a
        module system, and I am beginning to suspect that maybe it
        wasn't updated during the upgrade. I have
        module list
        ..other modules....openmpi/1.4.4
        Perhaps this is still trying to go to the old source location?
        How would
        I check? Is there an easy way around it if it is wrong? Thanks
        again!


    Most likely the module openmpi/1.4.4 is out of date.
    You can check it with
    echo $PATH
    If it doesn't point to the "retired" directory, then it is probably
    out of date.
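
    For instance, something like this shows where the wrappers would
    come from and what the modulefile actually sets (the grep pattern
    is just an example):

    echo $PATH | tr ':' '\n' | grep -i openmpi
    module show openmpi/1.4.4-intel

    module show prints the paths the modulefile prepends, so you can
    check whether those directories still exist on disk.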

    Why don't you try to recompile the code
    with the current Open MPI installed in the cluster?

    module avail
    will show everything, and you can pick the latest, load it,
    and try to recompile the program with that.
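
    Something like this, with the module names from your cluster:

    module avail openmpi
    module load openmpi/1.6.5-intel   # or module swap, if 1.4.4 is loaded
    which mpif90                      # should now point into the 1.6.5 tree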

    Gus Correa


        On Fri, May 16, 2014 at 5:07 PM, Maxime Boissonneault
        <maxime.boissonnea...@calculquebec.ca> wrote:

             Instead of using the outdated and unmaintained Module
             environment, why not use Lmod:
             https://www.tacc.utexas.edu/tacc-projects/lmod

             It is a drop-in replacement for the Module environment that
             supports all of its features and much, much more, such as:
             - module hierarchies
             - module properties and color highlighting (we use it to
               highlight bioinformatic modules or tools, for example)
             - module caching (very useful for a parallel filesystem
               with tons of modules)
             - path priorities (useful to make sure personal modules
               take precedence over system modules)
             - export of the module tree to JSON

             It works like a charm, understands both TCL and Lua
             modules, and is actively developed and debugged. There are
             literally new features every month or so. If it does not
             do what you want, odds are that the developer will add it
             shortly (I've had it happen).
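
             A minimal taste of the workflow (module spider and the ml
             shorthand are standard Lmod commands; the module names are
             just examples):

             module spider openmpi       # search the whole module tree
             module load intel openmpi   # the hierarchy picks the intel build
             ml                          # Lmod shorthand for module list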

             Maxime

             On 2014-05-16 at 17:58, Douglas L Reeder wrote:

                 Ben,

                  You might want to use module (SourceForge) to manage
                  paths to different MPI implementations. It is fairly
                  easy to set up and very robust for this type of
                  problem. You would remove contentious application
                  paths from your standard PATH and then use module to
                  switch them in and out as needed.

                 Doug Reeder
                  On May 16, 2014, at 3:39 PM, Ben Lash <b...@rice.edu> wrote:

                     My cluster has just upgraded to a new version of
                     MPI, and I'm using an old one. It seems that I'm
                     having trouble compiling because the compiler
                     wrapper file has moved (full error here:
                     http://pastebin.com/EmwRvCd9):
                     "Cannot open configuration file
                     /opt/apps/openmpi/1.4.4-intel/share/openmpi/mpif90-wrapper-data.txt"

                     I've found the file on the cluster at
                     /opt/apps/openmpi/retired/1.4.4-intel/share/openmpi/mpif90-wrapper-data.txt
                     How do I tell the old MPI wrapper where this file
                     is? I've already corrected one link, mpich ->
                     /opt/apps/openmpi/retired/1.4.4-intel/, which is in
                     the lib folder of the software I'm trying to
                     recompile (/home/bl10/CMAQv5.0.1/lib/x86_64/ifort).
                     Thanks for any ideas. I also tried changing
                     $pkgdatadir based on what I read here:
                     http://www.open-mpi.org/faq/?category=mpi-apps#default-wrapper-compiler-flags
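
                     (The wrappers are also supposed to honor OPAL_*
                     environment variables for relocated installs, so
                     presumably something like this would point the old
                     wrapper at the retired tree; untested:

                     export OPAL_PREFIX=/opt/apps/openmpi/retired/1.4.4-intel
                     export OPAL_PKGDATADIR=$OPAL_PREFIX/share/openmpi
                     mpif90 --showme)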


                     Thanks.

                     --Ben L







             --
             ---------------------------------
             Maxime Boissonneault
             Analyste de calcul - Calcul Québec, Université Laval
             Ph. D. en physique






        --
        --Ben L








--
--Ben L


_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

