I'm attempting to move to OpenMPI from another MPICH-derived
implementation. I compiled OpenMPI 1.2.6 with the following configure line:
./configure --build=x86_64-redhat-linux-gnu
--host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu
--program-prefix= --prefix=/usr/mpi/pathscale/openmpi
Brock Palen wrote:
On all the MPIs I have used there was only:
use mpi
Please excuse my admittedly gross ignorance of all things Fortran, but
why does "include 'mpif.h'" work while "use mpi" does not? When I try
the "use mpi" method I get errors like:
$ mpif90 -c cart.f
call mp
The real problem is that it looks like we have a bug in our F90
bindings. :-( We have the "periods" argument typed as an integer
array, when it really should be a logical array. Doh!
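For context, here is a minimal standard-conforming sketch (the file and
variable names are mine). The MPI standard types "periods" as LOGICAL,
so code like this sails through "include 'mpif.h'" (which defines
constants but no interfaces) yet trips a compile-time type mismatch
against an F90 module that wrongly declares "periods" as INTEGER:

program cart_demo
  use mpi                          ! explicit interfaces: types are checked
  implicit none
  integer :: ierr, comm_cart, dims(2), coords(2)
  logical :: periods(2)
  call MPI_Init(ierr)
  dims    = (/ 2, 2 /)
  periods = (/ .true., .false. /)  ! LOGICAL, per the MPI standard
  call MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, .true., &
                       comm_cart, ierr)
  call MPI_Cart_get(comm_cart, 2, dims, periods, coords, ierr)
  call MPI_Finalize(ierr)
end program cart_demo

$ mpif90 cart_demo.f90 && mpirun -np 4 ./a.out

With the buggy 1.2.6 module, the MPI_Cart_get call above would be the one
that fails to type-check; mpif.h accepts it because it performs no
interface checking at all.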
Ahhh ha! I checked the manpage against the user's code, but I didn't check
the OpenMPI code itself. I can confirm that the F90 bindings do declare
"periods" as an integer array.
Ashley Pittman wrote:
Nothing to do with Fortran, but I think I'm right in saying a lot of
these command-line options aren't needed: you simply set --prefix and
the rest of the options default to values relative to that.
Ya, I stole it from the OFED rpmbuild log. I wanted to reproduce exactly
what the OFED build does.
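If Ashley is right, the minimal equivalent of the invocation above keeps
only the prefix; something like:

$ ./configure --prefix=/usr/mpi/pathscale/openmpi

On a native build, --build/--host/--target are detected automatically, so
spelling them out as above is harmless but redundant.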
I saw your comment regarding Pathscale-compiled OMPI and thought I'd
bring the discussion over here. I'm attempting to reproduce the bug
described in ticket 1326[1]. Using 1.2.6 (plus the MPI_CART_GET patch)
with the 3.2 compiler. I'm using a hello++.cc actually written by Jeff
and co.
It seems that we might be running different OS's. I'm running RHEL 4U4.
CentOS 5.2 here
There is a post on the PG forums[1] that claims it is a bug in the OMPI
1.3.3 configure script. I couldn't find any reference on the
openmpi-users or openmpi-devel lists. Is there a fix for the configure
script floating around?
It seems more like a PGI problem to me. pgcc (v10.0) can't compile the
offsetof test shown below.
> pgcc v9 has a problem compiling the above test program:
Not for me (pgcc v9.0-3):
$ cat c.c
#include <stddef.h>
int
main ()
{
struct foo {int a, b;}; size_t offset = offsetof(struct foo, b);
return 0;
}
$ pgcc -V
pgcc 9.0-3 64-bit target on x86-64 Linux -tp gh-64
Copyright 1989-2000, The Portland Group
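For anyone repeating the check by hand, the whole question is whether the
compiler accepts offsetof on a locally defined struct. Assuming the file
is saved as c.c, as in the transcript:

$ pgcc -c c.c && echo "offsetof OK"

With an affected compiler, that compile step is what fails.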
There appears to be a workaround posted on the forum[1].
I applied that "fix" but noticed no differences. Perhaps Jeff Squyres
can add some insight?
Thanks
Scott
[1] http://www.pgroup.com/userforum/viewtopic.php?p=6114
I've been using OMPI 1.2.6 tightly integrated with Grid Engine for a bit
now and it works great. However, I'm running into a problem running jobs
from an interactive session (qlogin). I tried just doing "mpirun -np N
/path/to/binary" where N > # of cpus per node but OMPI will just
oversubscribe
Reuti wrote:
qlogin will create a completely fresh bash, which is not aware of
running under SGE. Although you could set the SGE_* variables by hand,
it's easier to use an interactive session with:
$ qrsh -pe or
In the past we'd source some sge script and SLOTS, TMPDIR, etc. were
populated.
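For illustration, a complete session might look like this (the PE name
"orte" and the slot count are my assumptions; use whatever parallel
environment your queue defines):

$ qrsh -pe orte 8 bash
$ mpirun -np $NSLOTS /path/to/binary

A shell started this way has the SGE_* variables set, so Open MPI's
gridengine support sees the allocated slots instead of blindly
oversubscribing the local node.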
Reuti wrote:
What do you mean by "in the past" - you upgraded SGE from version x to
version y? You can still source the job's environment file from the SGE
spool directory (<sge_root>/<cell>/spool/<host>/active_jobs/<job_id>.1/environment).
Sorry, you are right, this hasn't changed. By "in the past" I meant before
we started using OMPI (and SGE with tight integration).
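For completeness, a sketch of the set-the-variables-by-hand route Reuti
mentioned (the spool path and job id below are made up; the real location
depends on the installation):

$ set -a    # auto-export everything the environment file defines
$ source /opt/sge/default/spool/$(hostname)/active_jobs/4242.1/environment
$ set +a

After that the shell looks SGE-aware to mpirun, though qrsh is still the
simpler route.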
There is nothing stopping