er) is used. This leads to a crash if an uninitialized value is used.
The problem exists in version 1.4.1.
Michael
This discussion started getting into an interesting question: ABI
standardization for portability by language. It makes sense to have ABI
standardization for portability of objects across environments. At the
same time, it does mean that everyone must follow the exact same recipe
for low-level implement
ered a similar problem and is able to help me. I
would be really grateful.
Thanks,
Michael
___
Dipl.-Ing. Michael Mauersberger <michael.mauersber...@tu-dresden.de>
Tel. +49 351 463 38099 | Fax +49 351 463 37263
Marschnerstraße 30,
Maybe you have an idea why it didn't work with those private variables?
But if not, there would not be a problem any more (although I don't know
why). ;)
Best regards
Michael
______
Dipl.-Ing. Michael Mauersberger
michael.
er before.
Any suggestions what is going wrong here?
Best regards and thanks for any help!
Michael
openib,sm,self
did help!!! Right now I am not able to check the performance results,
as the cluster is busy with jobs, so I cannot compare with the old
benchmark results.
Thanks for help!
Michael
Ralph Castain wrote:
Your command line may have just come across with a typo, but something
isn
.html
Submission Link: http://edas.info/newPaper.php?c=7364
IMPORTANT DATES
March 12 - Abstract submission due
June 8 - Full paper submission
July 14 - Acceptance notification
August 3 - Camera-ready version due
August 25-28 - Conference
CHAIR
Michael Alexander (
ing to generate 64-bit
output, which would make sense if the flag to generate 32-bit output
never made it to the linker.
Michael
--
Michael Jennings
Linux Systems and Cluster Admin
UNIX and Cluster Computing Group
Hi everybody,
I am trying to compile openmpi with the intel compiler on ubuntu 9.04.
I have compiled openmpi on Redhat and OS X many times and could always
find the problem. But the error that I'm getting now gives me no clues
where to even search for the problem.
My config line is as follows:
./configure CC=
Joe
________
From: users-boun...@open-mpi.org on behalf of Michael Kuklik
Sent: Tue 5/26/2009 7:05 PM
To: us...@open-mpi.org
Subject: [OMPI users] problem with installing openmpi with intel compiler
on ubuntu
Hi everybody,
I try to compile openmpi with intel compile
MK,
Hmm.. What if you put CC=/usr/local/intel/Compiler/11.0/083/bin/intel64/icc
on the build line.
Joe
____
From: users-boun...@open-mpi.org on behalf of Michael Kuklik
Sent: Wed 5/27/2009 5:05 PM
To: us...@open-mpi.org
Subje
would like to contribute are welcome.
Regards, Michael
No problem. I'll try to keep up to date with releases. If there's any
straightforward-to-compile example/benchmark, I'd be happy to add that too.
Michael
Jeff Squyres wrote:
Excellent -- many thanks for your efforts!
Be aware that Open MPI v1.0.2 is brewing. Check ou
Using OpenMPI 1.0.1 compiled with g95 on OS X (same problem on Debian
Linux with g95, I have not tested other compilers yet)
mpif90 spawn.f90 -o spawn
In file spawn.f90:35
MPI_COMM_WORLD, slavecomm, MPI_ERRCODES_IGNORE, ierr )
1
Err
MPI_INFO_NULL, 0, MPI_COMM_WORLD,
slavecomm, &
MPI_ERRCODES_IGNORE, ierr )
and everything should work just fine.
Just as a test I did this, no effect. The error remains.
Michael
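For anyone hitting this later, here is a minimal compilable sketch of
the spawn call with that argument list (the ./child executable name is
a placeholder):
program spawn_parent
  use mpi
  implicit none
  integer :: ierr, slavecomm
  call MPI_INIT(ierr)
  ! Spawn 3 copies of a (hypothetical) child binary from rank 0;
  ! MPI_ERRCODES_IGNORE skips collecting per-process error codes.
  call MPI_COMM_SPAWN('./child', MPI_ARGV_NULL, 3, MPI_INFO_NULL, 0, &
                      MPI_COMM_WORLD, slavecomm, MPI_ERRCODES_IGNORE, ierr)
  call MPI_COMM_FREE(slavecomm, ierr)
  call MPI_FINALIZE(ierr)
end program spawn_parent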
george.
PS: Use vim and the force will be with you. You ha
can continue to "USE MPI" in order to check my interface errors
quickly as I move forward on my project.
Michael
backported to the 1.0.x
series, I assume the hesitation comes from the possibility of
creating additional errors in the process.
Michael
On Mar 4, 2006, at 9:29 AM, Jeff Squyres wrote:
Michael --
Sorry for the delay in replying.
Many thanks for your report! You are exactly right -- our types are
Sarge on Opteron built using "./
configure --with-gnu-ld F77=pgf77 FFLAGS=-fastsse FC=pgf90 FCFLAGS=-
fastsse" with PG 6.1.
Are these diagnostic messages of errors in OpenMPI 1.1r9212 or
related to errors in my test code?
Is this information helpful for development purposes?
Michael.
On Mar 7, 2006, at 3:23 PM, Michael Kluskens wrote:
Per the mpi_comm_spawn issues with the 1.0.x releases I started using
1.1r9212, with my sample code I'm getting a messages of
[-:13327] mca: base: component_find: unable to open: dlopen(/usr/local/lib/openmpi/mca_pml_teg.so, 9): Symbo
physical interfaces and two ip addresses, all the rest have two
physical interfaces and one ip address.
I have not tested throughput to see if I chose the best type of
bonding, but the choices were clear enough.
michael.
I see responses to noncritical parts of my discussion but not to the
following. Is it a known issue, a fixed issue, or a
we-don't-want-to-discuss-it issue?
Michael
On Mar 7, 2006, at 4:39 PM, Michael Kluskens wrote:
The following errors/warnings also exist when running my spawn test
on a
l" to remove the previous installation
before installing a new version.
Michael
ps. I've had to use 1.1 because of bugs in the 1.0.x series that will
not be fixed.
On Mar 4, 2006, at 9:29 AM, Jeff Squyres wrote:
I'm hesitant to put these fixes in the 1.0.x series simply bec
I have identified what I think is the issue described below.
Even though the default prefix is /usr/local, r9336 only works for me
if I use
./configure --prefix=/usr/local
Michael
On Mar 20, 2006, at 11:49 AM, Michael Kluskens wrote:
Building Open MPI 1.1a1r9xxx on a PowerMac G4 running
mgr_base_stage_gate.c at line 276
Michael
spawn.f90 ---
program main
USE MPI
implicit none
! include 'mpif.h'
integer :: ierr,size,rank,child
integer (kind=MPI_ADDRESS_KIND) :: universe_size
integer :: status(MPI_STATUS_SIZE)
logical :: flag
integer :: ans(0:2
On Mar 20, 2006, at 7:22 PM, Brian Barrett wrote:
On Mar 20, 2006, at 6:10 PM, Michael Kluskens wrote:
I have identified what I think is the issue described below.
Even though the default prefix is /usr/local, r9336 only works for me
if I use
./configure --prefix=/usr/local
Thank you for
uption" errors (uses
MPI_SPAWN and posted previously); the basic MPI difference between my
test program and the real program is massive amount of data being
distributed via BCAST and SEND/RECV.
Michael
I have Absoft version 8.2a installed on my OS X 10.4.5 system and in
order to do some testing I was trying to build OpenMPI 1.1a1r9364
with it and got the following funny result:
*** Fortran 77 compiler
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether f95 acce
On Mar 23, 2006, at 9:28 PM, Brian Barrett wrote:
On Mar 23, 2006, at 5:32 PM, Michael Kluskens wrote:
I have Absoft version 8.2a installed on my OS X 10.4.5 system and
in order to do some testing I was trying to build OpenMPI
1.1a1r9364 with it and got the following funny result
OpenMPI has to
work on other systems not using OpenMPI.
Michael
't find
anything close to what you said below in the four books I have. Page
236 of Using MPI-2 shows the correct use of MPI_ROOT but no
explanation of why.
Michael
On Mar 27, 2006, at 10:21 AM, Edgar Gabriel wrote:
MPI_ROOT is required for the rooted operations of the inter-
communicator
c
to build OpenMPI, link to
your OpenMPI library, and run? I got some hints from the install
process that this may be possible using either libtool and/or
environmental variables LIBDIR, LD_LIBRARY_PATH, LD_RUN_PATH, and/or
compile/link options of -Wl,--rpath -Wl,LIBDIR.
Michael
On Mar 28, 2006, at 1:22 PM, Brian Barrett wrote:
On Mar 27, 2006, at 8:26 AM, Michael Kluskens wrote:
On Mar 23, 2006, at 9:28 PM, Brian Barrett wrote:
On Mar 23, 2006, at 5:32 PM, Michael Kluskens wrote:
I have Absoft version 8.2a installed on my OS X 10.4.5 system and
in order to do
XMPI is a GUI debugger that works with LAM/MPI.
Is there anything similar that works with OpenMPI?
Michael
case a client is powered down
or different architectures. Simplified my life greatly.
Michael
You need to confirm that /etc/bashrc is actually being read in that
environment; bash differs in which files get read depending on whether
you log in interactively or not.
Also, I don't think ~/.bashrc is read on a noninteractive login.
Michael
On Apr 10, 2006, at 1:
MPI_SPAWN and MPI_BCAST (most vendors MPI's
can't run this test case).
Michael
parent.f90 (attachment)
child.f90 (attachment)
[host:00258] [0,0,0] ORTE_ERROR_LOG: Not found in file base/oob_base_xcast.c at line 108
[host:00258] [0,0,0] ORTE_ERROR_LOG: Not found in file base/rmgr_base_stage_gate.c at line 276
child 0 of 1: Receiving 17 from parent
Maximum user memory allocated: 0
Michael
Michael Kluskens wrote
and not OpenMPI. OpenMPI and MPICH are
two totally separate open source MPI implementations.
Michael
Quoting Brian Barrett :
On Apr 17, 2006, at 4:42 AM, Shekhar Tyagi wrote:
I tried your command but I guess it's not working; there is a
warning and then nothing much happens, the command
Getting warnings like:
WARNING: *** Fortran 77 alignment for INTEGER (1) does not match
WARNING: *** Fortran 90 alignment for INTEGER (4)
WARNING: *** OMPI-F90-CHECK macro needs to be updated!
same for LOGICAL, REAL, COMPLEX, INTEGER*2, INTEGER*4, INTEGER*8, etc.
I believe these are new within
using:
call MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &
universe_size, flag, ierr)
integer :: ierr
integer (kind=MPI_ADDRESS_KIND) :: universe_size
logical :: flag
This compiled and worked as of version 9427.
Michael
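Put together as a self-contained sketch (note the attribute value must
be an address-sized integer, and flag comes back .false. if the
attribute is unset):
program universe
  use mpi
  implicit none
  integer :: ierr
  integer (kind=MPI_ADDRESS_KIND) :: universe_size
  logical :: flag
  call MPI_INIT(ierr)
  ! Query how many processes the runtime could host in total.
  call MPI_COMM_GET_ATTR(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &
                         universe_size, flag, ierr)
  if (flag) print *, 'universe size =', universe_size
  call MPI_FINALIZE(ierr)
end program universe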
l *flag,
MPI_Fint *ierr));
On Apr 20, 2006, at 2:24 PM, Michael Kluskens wrote:
Error in:
openmpi-1.1a3r9663/ompi/mpi/f90/mpi-f90-interfaces.h
subroutine MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag,
ierr)
include 'mpif.h'
integer, intent(in) :: comm
integer, intent
This problem still exists in OpenMPI 1.1a3r9704 (Apr 24, 2006), I
reported it for 9663 (Apr 20, 2006).
Michael
On Apr 21, 2006, at 12:32 AM, Jeff Squyres (jsquyres) wrote:
You're correct on all counts.
I've corrected the .h.sh script in the trunk and will get the correct
XSL (!
76
as well as the other bug I reported a couple minutes ago.
Michael
described below and then re-making.
Michael
On Apr 25, 2006, at 9:58 AM, Jeff Squyres (jsquyres) wrote:
I apologize for the delay (and I actually do greatly appreciate your
reminders!). I made a change on the trunk back when I replied; I'm
waiting for my resident F90 expert to give
Could I/we have a translation of what "trivial, small, medium, large"
means to the end user?
I for one don't read the docs every week with new 1.1 alpha tests.
Michael
On Apr 25, 2006, at 10:12 AM, Jeff Squyres (jsquyres) wrote:
-----Original Message-----
From: users-boun.
ample with this in. In either case, I get the same results.
Background from previous discussion on this follows. It will cost me
less to test new versions of Open MPI handling this than work around
this issue in my project.
Michael
On Mar 2, 2006, at 1:55 PM, Ralph Castain
1.2 configure --help; however, the 1.2 README disagrees with the 1.2
configure --help (I'd bet the latter is more correct).
Michael
On Apr 25, 2006, at 1:16 PM, Jeff Squyres (jsquyres) wrote:
How's this (from the new README):
- The Fortran 90 MPI bindings can now be built in o
Open MPI 1.2a1r9704
Summary: configure with --with-mpi-f90-size=large and then make.
/bin/sh: line 1: ./scripts/mpi_allgather_f90.f90.sh: No such file or directory
I doubt this one is system-specific.
---
my details:
Building OpenMPI 1.2a1r9704 with g95 (Apr 23 2006) on OS X 10.4.6 using
./c
child calling COMM_FREE
child calling FINALIZE
child exiting
Maximum user memory allocated: 0
child starting
parent: Calling MPI_BCAST with btest = 17 . child = 3
child 0 of 1: Parent 3
child 0 of 1: Receiving 17 from parent
child calling COMM_FREE
child calling FINALIZE
Michael
On Apr 25, 200
I ran another test, and the problem does not occur with
--with-mpi-f90-size=medium.
Michael
On Apr 26, 2006, at 11:50 AM, Michael Kluskens wrote:
Open MPI 1.2a1r9704
Summary: configure with --with-mpi-f90-size=large and then make.
/bin/sh: line 1: ./scripts/mpi_allgather_f90.f90.sh: No
I've done yet another test and found that the identical problem exists
with openmpi-1.1a3r9704.
Michael
On Apr 26, 2006, at 8:38 PM, Jeff Squyres (jsquyres) wrote:
Ok, I am investigating -- I think I know what the problem is, but the
guy who did the bulk of the F90 work in OMPI is out trav
ntercomm
integer, intent(in) :: high
integer, intent(out) :: newintercomm
integer, intent(out) :: ierr
end subroutine ${procedure}
EOF
}
start MPI_Intercomm_merge small
output_162 MPI_Intercomm_merge
end MPI_Intercomm_merge
-
Michael
ps. MPI_Comm_get_attr is fixed in both these versions.
I've noticed that I can't just fix this myself; very bad things
happened to the merged communicator, so this is not a trivial fix, I
gather.
Michael
On Apr 30, 2006, at 12:16 PM, Michael Kluskens wrote:
MPI_Intercomm_Merge( intercomm, high, newintracomm, ier )
None of the bo
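For reference, a compilable sketch of the child's side of such a merge
(assuming the child was started via MPI_COMM_SPAWN; high = .true.
orders the children after the parents):
program child_merge
  use mpi
  implicit none
  integer :: ierr, parentcomm, intracomm, rank
  call MPI_INIT(ierr)
  call MPI_COMM_GET_PARENT(parentcomm, ierr)
  if (parentcomm /= MPI_COMM_NULL) then
    ! Merge the parent/child intercommunicator into one intracommunicator.
    call MPI_INTERCOMM_MERGE(parentcomm, .true., intracomm, ierr)
    call MPI_COMM_RANK(intracomm, rank, ierr)
    print *, 'merged rank:', rank
    call MPI_COMM_FREE(intracomm, ierr)
  end if
  call MPI_FINALIZE(ierr)
end program child_merge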
checking if FORTRAN compiler supports integer(selected_int_kind
(2))... yes
checking size of FORTRAN integer(selected_int_kind(2))... unknown
configure: WARNING: *** Problem running configure test!
configure: WARNING: *** See config.log for details.
configure: error: *** Cannot continue.
Source
"./configure
F77=f95 FC=f95" (not sure exactly which versions of Open MPI as I
have been using several versions). Absoft version 8 compilers are
incompatible with GCC 4.0, you have to switch back to GCC 3.3 using
gcc_select.
Michael
in tomorrow and see if I can create an f77 version, or use the mpi.h
file and see if I can get a clear difference, and I'll compare against
MPICH2, but someone else should look into this issue.
Michael
On May 1, 2006, at 11:57 PM, Jeff Squyres (jsquyres) wrote:
I just fixed the INTERCOMM
s
but it can be switched to the include file and fixed format source
code (.f) and should compile with both f90 and f77 compilers. I have
not written a C test code.
Michael
mpif90 parent4.f90 -o parent4
mpif90 child4.f90 -o child4
parent startup: 0 of 1
a child starting
parent spawned
The latest release of openMPI is installed into /usr/local on the 05 May release of PK. Any nifty
examples showing usage would be welcome for a future release.
PK home: http://pareto.uab.es/mcreel/ParallelKnoppix/
Regards, Michael
sor xserve running OS X Server
10.4.6. I have Xcode 2.2 installed, which gives me gcc 3.3 and 4.0.1.
For OS X and g95 installed from the web site use:
./configure F77=g95 FC=g95 LDFLAGS=-lSystemStubs
You can use "./configure F77=g95 FC=g95 " only if you install g95 via
fink.
Michael
I think I moved to OpenMPI 1.1 and 1.2 alphas because of problems
with spawn and OpenMPI 1.0.1 & 1.0.2.
You may wish to test building 1.1 and seeing if that solves your
problem.
Michael
On May 24, 2006, at 1:48 PM, Jens Klostermann wrote:
I did the following run
rom 1.0.1.
You absolutely have to:
cd openmpi1.0.1
sudo make uninstall
cd ../openmpi1.0.2
sudo make install
I have had no trouble in the past with PGF90 version 6.1-3 and
OpenMPI 1.1a on a dual Opteron 1.4 GHz machine running Debian Linux.
Michael
On May 24, 2006, at 7:43 PM, Tom Rosmond wrot
e not studied this
issue closely enough.
Below is my solution for the generating scripts for MPI_Gather for
F90 (also switched to --with-f90-max-array-dim=2). It might be
acceptable to reduce the combinations to just equal or one dimension
less (00,01,11,12,22) but I pushed the limits of my s
Found serious issue for the f90 interfaces for --with-mpi-f90-
size=large
Consider
call MPI_REDUCE(MPI_IN_PLACE,sumpfi,sumpfmi,MPI_INTEGER,MPI_SUM,
0,allmpi,ier)
Error: Generic subroutine 'mpi_reduce' at (1) is not consistent with
a specific subroutine interface
sumpfi is an integer
My mistake:
MPI_IN_PLACE is a "double complex" so the scripts below need to be
fixed to reflect that.
I don't know if the latest tarball for tonight contains these or
other fixes that I have been looking at today.
Michael
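For context, a sketch of the usage pattern that the "large" interface
was rejecting; this is valid MPI (only the root passes MPI_IN_PLACE):
program reduce_in_place
  use mpi
  implicit none
  integer :: ierr, rank, val, dummy
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  val = rank + 1
  if (rank == 0) then
    ! On the root, MPI_IN_PLACE means val is both input and result.
    call MPI_REDUCE(MPI_IN_PLACE, val, 1, MPI_INTEGER, MPI_SUM, 0, &
                    MPI_COMM_WORLD, ierr)
    print *, 'sum =', val
  else
    ! recvbuf is ignored on non-root ranks; dummy avoids aliasing val.
    call MPI_REDUCE(val, dummy, 1, MPI_INTEGER, MPI_SUM, 0, &
                    MPI_COMM_WORLD, ierr)
  end if
  call MPI_FINALIZE(ierr)
end program reduce_in_place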
On May 30, 2006, at 6:18 PM, Michael Kluskens
On Jun 1, 2006, at 12:42 PM, Jeff Squyres (jsquyres) wrote:
Blast. As usual, Michael is right -- we didn't account for
MPI_IN_PLACE
in the "large" F90 interface. We've opened ticket #39 on this:
https://svn.open-mpi.org/trac/ompi/ticket/39
I'm inclined to
t in those locations).
Michael
On Jun 1, 2006, at 4:41 PM, Brock Palen wrote:
What are these "small" and "large" modules? What would they provide?
Brock
On Jun 1, 2006, at 4:30 PM, Jeff Squyres (jsquyres) wrote:
Michael --
You're right again. Thanks for keeping
an integer array normally,
but this constant is permitted by the standard.
This is with OpenMPI 1.2a1r10186, I can check the details of the
scripts and generated files next week for whatever is the latest
version. But odds are this has not been spotted.
Michael
006
Camera-ready due: September 20, 2006
Conference: December 1-4, 2006
CHAIR
Michael Alexander (chair), WU Vienna, Austria
Geyong Min (co-chair), University of Bradford, UK
Gudula Ruenger (co-chair), Chemnitz University of Technology, Germany
PROGRAM COMMITTEE
Franck Cappello, CNRS-Universi
On Jun 9, 2006, at 12:33 PM, Brian W. Barrett wrote:
On Thu, 8 Jun 2006, Michael Kluskens wrote:
call MPI_WAITALL(3,sp_request,MPI_STATUSES_IGNORE,ier)
1
Error: Generic subroutine 'mpi_waitall' at
X with SGI MPI library.
Michael
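The failing pattern, as a self-contained sketch (each rank pairs a
nonblocking send and receive with itself, then waits on both with
MPI_STATUSES_IGNORE):
program waitall_test
  use mpi
  implicit none
  integer :: ierr, rank, sendbuf, recvbuf
  integer :: req(2)
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  sendbuf = rank
  call MPI_IRECV(recvbuf, 1, MPI_INTEGER, rank, 0, MPI_COMM_WORLD, &
                 req(1), ierr)
  call MPI_ISEND(sendbuf, 1, MPI_INTEGER, rank, 0, MPI_COMM_WORLD, &
                 req(2), ierr)
  ! MPI_STATUSES_IGNORE is the constant the f90 interface rejected.
  call MPI_WAITALL(2, req, MPI_STATUSES_IGNORE, ierr)
  call MPI_FINALIZE(ierr)
end program waitall_test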
ocesses I asked for.
The master node has an internal ip of 10.0.0.1 and the second node
has an ip of 10.0.0.2 and a name of "node02" and "node2"
I've been unable to find a file that contains only the name of my
second node and not the others.
I'm currently running OpenMPI 1.2a1r10297.
Michael
uninstalled it)
and you may find components dated from when you installed OpenMPI 1.0.2.
Michael
On Jun 26, 2006, at 4:34 PM, Benjamin Landsteiner wrote:
Hello all,
I recently upgraded to v1.1 of Open MPI and ran into a problem on my
head node that I can't seem to solve. Upon
uses, then things could be interesting (Intel compiler using FPATH for
include files).
Michael
On Jun 26, 2006, at 6:28 PM, Patrick Jessee wrote:
Michael,
Hello. Thanks for the response. We do clean configures and makes
under /tmp, and install in a completely separate area so I don
ES
July 17, 2006 - Abstract submissions due
Paper submission due: August 4, 2006
Acceptance notification: September 1, 2006
Camera-ready due: September 20, 2006
Conference: December 1-4, 2006
CHAIR
Michael Alexander (chair), WU Vienna, Austria
Geyong Min (co-chair), University of Bradford,
The second question is: should I see both gm & mx, or only one or the
other?
Michael
default
hostfile a long time ago. This needs to be documented in a couple
more places; obviously I read about it once or twice and then never
saw that buried documentation again.
Michael
s point, I restarted my machine. Not sure if it's
necessary or not.
8. Go back to the v1.1 directory. Type 'make clean', then
reconfigure, then recompile and reinstall
9. Things should work now.
Thank you Michael,
~Ben
++
Benjamin Landsteiner
lands...@stolaf.edu
lf
sm - ... via shared memory to other processes
tcp - ... via tcp
openib - ... via Infiniband OpenIB stack
gm & mx - ... via Myrinet GM/MX
mvapi - ... via Infiniband Mellanox Verbs
If you launch your process so that four processes are on a node then
those would use shared memory to communicate.
Michael
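For example, something along these lines restricts the run to loopback,
shared memory, and TCP (component names as in the 1.x series):
mpirun --mca btl self,sm,tcp -np 4 ./a.out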
using the following:
./configure --with-gnu-ld F77=pgf77 FFLAGS=-fastsse FC=pgf90 FCFLAGS=-
fastsse
"--with-gnu-ld " might be important.
Michael
to the client nodes. I'd rather fix both and all other
environment variables with one fix, so my test case is simply to use
openmpi to run hostname.
Before I started on this again I'd like to know if anyone has made
more progress than I have.
Michael
ary).
If this turns out to be all that is needed then is it possible for
OpenMPI to autodetect when it is running under LSF and then use
lsgrun instead of rsh/ssh?
Michael
On Aug 29, 2006, at 7:01 PM, Jeff Squyres wrote:
That's somewhat odd. I have very little experience with LSF, bu
out static
libraries that I missed in the documentation.
In order to use static libraries, is it required to also include the
components in the libraries?
Michael
GLVL = 1
MPIDIR = /usr/local
MPILIB =
INTFACE = -DAdd_
F77 = mpif77
CC = mpicc
Michael
rors until it crashes on the Complex AMX test (which is
after the Integer Sum test).
System configuration: Debian 3.1r3 on dual opteron, gcc 3.3.5, Intel
ifort 9.1.032.
On Oct 3, 2006, at 2:44 AM, Åke Sandgren wrote:
On Mon, 2006-10-02 at 18:39 -0400, Michael Kluskens wrote:
OpenMPI, BLACS,
ollowing line.
TRANSCOMM = -DCSameF77
#
-------
# TRANSCOMM =
Michael
ps. I have successfully tested MPICH2 1.0.4p1 with BLACS 1.1p3 on the
same machine with same compilers.
On Oct 3, 2006, at 12:14 PM, Jeff Squyres wrote:
Thanks Michael -- I've updated ticket 356
ror message ***
3 additional processes aborted (not shown)
Michael
3a1r11962 and second best against
1.1.1 -- my lack of experience with patch likely confused the issue.
Michael
On Oct 6, 2006, at 12:04 AM, Jeff Squyres wrote:
On 10/5/06 2:42 PM, "Michael Kluskens" wrote:
System: BLACS 1.1p3 on Debian Linux 3.1r3 on dual-opteron, gcc 3.3.5,
Intel ifort 9.0.32 all tests with 4 processors (comments below)
Good. Can you expand on what you mean by "
.1r3, compilers are gcc 3.3.5 and Intel
ifort 9.0.32 (that is four processors).
OpenMPI 1.1.2rc3
C test: 44 seconds
Fortran test: 44 seconds
OpenMPI 1.3a1r12069
C test: 44 seconds
Fortran test: 44 seconds
MPICH2 1.0.4p1
C test: 53 seconds
F test: 59 seconds
Michael
ile source code.
Michael
combination of any MPI and any scheduler
is the following:
mpirun -np 2 hostname
Michael
ddr:0xe3
*** End of error message ***
2 additional processes aborted (not shown)
What is the text before this?
Michael
Fortran MPI programmer that could
use with-mpi-f90-size=large and have arrays in MPI_Gather that are of
different dimensions.
Michael
Details below (edited)
Look at limitations of the following:
--with-mpi-f90-size=large
(medium + all MPI functions with 2 choice buffers, but onl
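A sketch of the mismatched-dimension gather in question (1-D send
buffer into a 2-D receive buffer, which the MPI standard allows):
program gather_dims
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  integer :: sendbuf(2)
  integer, allocatable :: recvbuf(:,:)
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  sendbuf = rank
  allocate(recvbuf(2, nprocs))
  ! Legal MPI: the two choice buffers have different Fortran ranks, so a
  ! type-strict f90 interface must generate every rank combination.
  call MPI_GATHER(sendbuf, 2, MPI_INTEGER, recvbuf, 2, MPI_INTEGER, &
                  0, MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, recvbuf
  call MPI_FINALIZE(ierr)
end program gather_dims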
and I have only one installation of each type of
compiler (g95, Intel, PGI, Absoft).
Michael
't
work with 1.3. I've tried various fixes for my patches and I don't
have a solution like I have for MPI_Gather.
Michael
Consider
call MPI_REDUCE
(MPI_IN_PLACE,sumpfi,sumpfmi,MPI_INTEGER,MPI_SUM, 0,allmpi,ier)
Error: Generic subroutine 'mpi_reduce' at
I have tested for the MPI_ABORT problem I was seeing and it appears
to be fixed in the trunk.
Michael
On Oct 28, 2006, at 8:45 AM, Jeff Squyres wrote:
Sorry for the delay on this -- is this still the case with the OMPI
trunk?
We think we finally have all the issues solved with MPI_ABORT on
Attached is a patch to deal with these two issues as applied against
OpenMPI-1.3a1r12364.
Michael
diff -ru openmpi-1.3a1r12364-orig/ompi/mpi/f90/scripts/mpi-f90-interfaces.h.sh
openmpi-1.3a1r12364/ompi/mpi/f90/scripts/mpi-f90-interfaces.h.sh
--- openmpi-1.3a1r12364-orig/ompi/mpi/f90
releases are for the 1.3 branch.
Michael