On 08/20/2012 11:39 AM, Noam Bernstein wrote:
On Aug 20, 2012, at 11:12 AM, David Warren wrote:

The biggest issue you may have is that GNU Fortran does not support all the
Fortran constructs that all the others do. Most Fortrans have supported the
standard plus the DEC extensions. GNU Fortran does not quite cover all of
them. Intel Fortran does support them all, and I believe that Portland
Group and Absoft may also.
In my experience, most recent versions of gfortran (at least 4.5, maybe earlier)
support about as large a set of standards as anything else (with the exception of a
few F2003 things, but then again, (almost) no one supports those comprehensively).
Definitely all of F95 + approved extensions.  Non-standard extensions (DEC,
Cray pointers pre-F2003) are another matter - I don't know about those.

                                                                                
                        Noam


Hi Bill

I think gfortran supports 'Cray pointers'.
From the quite old gfortran 4.1.2 man page:

       -fcray-pointer
Enables the Cray pointer extension, which provides a C-like pointer.

My recollection is that it also supports some DEC extensions,
particularly those related to 'Cray pointers' [LOC, etc.], but I may be wrong.
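Just to illustrate what that extension looks like, here is a tiny made-up
F77-style example that gfortran should accept with -fcray-pointer
[the names and values are arbitrary]:

      program craydemo
c     Cray pointer example; build with: gfortran -fcray-pointer craydemo.f
      integer n
      parameter (n = 4)
      real x(n)
      real arr(n)
      pointer (p, arr)
c     LOC returns the address of x; arr then aliases x through p
      p = loc(x)
      x(1) = 42.0
      print *, arr(1)
      end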

If the code is F77 with some tidbits of C++, you probably don't need to worry about gfortran having all the
F90/95/2003/2008 features.
You could try to simply adjust your Makefile to point to the OpenMPI compiler wrappers,
i.e., F77=mpif77 [or FC, depending on the Makefile]
and CXX=mpicxx [or whatever macro/variable your Makefile uses for the C++ compiler]. With the compiler wrappers you don't need to specify library or include directories, and life becomes much easier.
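For instance, a hypothetical Makefile fragment could end up looking like this
[the variable names below are just the common ones; check what your Makefile
actually uses]:

    F77 = mpif77
    FC  = mpif77
    CXX = mpicxx

No -I for mpif.h and no -lmpi are needed; the wrappers add those themselves.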
If the Makefile somehow forces you to specify these things,
find out what libraries and includes you really need by looking at the output of these commands
[--showme is the Open MPI spelling of this wrapper option]:
mpif77 --showme
mpicxx --showme
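Open MPI's wrappers can also split that into compile-time and link-time pieces,
which helps if your Makefile keeps includes and libraries in separate variables:

    mpif77 --showme:compile     [include/compile flags only]
    mpif77 --showme:link        [library/link flags only]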
You could try this just for kicks. It may work out of the box, as Jeff suggested, if the program is really portable.

You may need to use full paths to the OpenMPI compiler wrappers [or tweak your PATH so it points to the right place],
in case there are several different MPI flavors installed on your cluster.
Likewise when you launch the program with mpiexec, make sure it points to the OpenMPI flavor you want.
Mixing different MPIs is a common source of frustration.
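A quick sanity check along these lines usually catches that [the /opt/openmpi
path below is just a placeholder for wherever your Open MPI actually lives]:

    which mpif77 mpicxx mpiexec      [should all resolve to the same MPI install]
    mpiexec --version                [should report Open MPI]
    /opt/openmpi/bin/mpif77 --showme [or call the wrappers by full path]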

Make sure your OpenMPI was built with the underlying GNU compilers, and that the F77 and C++ interfaces were built
[you must have at least the mpif77 and mpicxx wrappers].
Otherwise, it is easy to build OpenMPI from source, with support for your cluster's bells and whistles
[e.g. InfiniBand/OFED, Torque or SGE resource managers].
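If you do go that route, a configure line along these lines is a reasonable
starting point [the prefix and the --with-* options are only examples; use
--with-sge instead of --with-tm if your cluster runs Grid Engine, and adjust
the paths to what is actually installed]:

    ./configure --prefix=/opt/openmpi \
        CC=gcc CXX=g++ F77=gfortran FC=gfortran \
        --with-openib --with-tm=/opt/torque
    make -j4
    make install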

I hope this helps,
Gus Correa

On 08/20/2012 10:02 AM, Jeff Squyres wrote:
On Aug 19, 2012, at 12:11 PM, Bill Mulberry wrote:

I have a large program written in FORTRAN 77 with a couple of routines
written in C++.  It has MPI commands built into it to run on a large-scale
multiprocessor IBM system.  I now have the task of transferring this
program over to a cluster system.  Both the multiprocessor and the cluster
system have Linux on them.  The cluster system has GNU FORTRAN and GNU
C compilers on it.  I am told the cluster has Open MPI.  I am wondering if
anybody out there has had to do the same task and, if so, what I can expect
from this.  Will I be expected to make some big changes, etc.?  Any advice
will be appreciated.
MPI and Fortran are generally portable, meaning that if you wrote a correct MPI 
Fortran application, it should be immediately portable to a new system.

That being said, many applications are accidentally/inadvertently not correct.  
For example, when you try to compile your application on a Linux cluster with 
Open MPI, you'll find that you accidentally used a Fortran construct that was 
specific to IBM's Fortran compiler and is not portable.  Similarly, when you 
run the application, you may find that inadvertently you used an implicit 
assumption for IBM's MPI implementation that isn't true for Open MPI.

...or you may find that everything just works, and you can raise a toast to the 
portability gods.

I expect that your build / compile / link procedure may change a bit from the old system to the new 
system.  In Open MPI, you should be able to use "mpif77" and/or "mpif90" to 
compile and link everything.  No further MPI-related flags are necessary (no need for -I to specify
where mpif.h is located, no need for -lmpi, etc.).
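For a mixed F77/C++ program the whole build can be as simple as something like
this (the file names here are made up; the -lstdc++ is typically needed when
the Fortran wrapper does the final link of C++-compiled objects):

    mpif77 -c main.f
    mpicxx -c helpers.cc
    mpif77 -o myprog main.o helpers.o -lstdc++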

