On Jul 4, 2007, at 8:21 PM, Graham Jenkins wrote:
I'm using the openmpi-1.1.1-5.el5.x86_64 RPM on a Scientific Linux 5
cluster with no installed HCAs. A simple MPI job submitted to that
cluster runs OK, except that it issues messages like the one shown
below for each node. Is there some way I can suppress these, perhaps
by an appropriate ...
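A minimal sketch of the usual fix, assuming the per-node message is
the openib BTL warning that it could not find an HCA (the quoted
message itself was truncated from the digest); the job name and node
count below are illustrative:

    # One-off: exclude the openib BTL on the mpirun command line
    # ("^" negates the component list):
    mpirun --mca btl ^openib -np 8 ./my_mpi_job

    # Cluster-wide: put the same exclusion in the system-wide MCA
    # parameter file, $prefix/etc/openmpi-mca-params.conf:
    btl = ^openib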
From: Jeff Squyres
Can you be a bit more specific than "it dies"? Are you talking about
mpif90/mpif77, or your app?
Sorry, stupid me. When executing mpif90 or mpif77 I get a segfault and
it doesn't compile. I've tried both with and without input (i.e.,
giving it something to compile or just ...
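One quick way to tell a wrapper crash from an underlying-compiler
crash (a sketch; --showme is the wrapper compilers' standard
dry-run option):

    # Print the command the wrapper would run, without running it:
    mpif90 --showme

    # If --showme itself segfaults, the Open MPI wrapper is at
    # fault; if it prints an ifort command line, run that line by
    # hand to see whether the Intel compiler is the one crashing.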
On Jul 3, 2007, at 9:41 PM, Ricardo Reis wrote:
I've compiled openmpi 1.2.1 through 1.2.3 with the Intel compiler,
versions 9.1 and 10, and every time I try to compile something with
mpif90 or mpif77 it just dies. Any suggestions for me to look at?
I've tried to strace it but can't figure any ...
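Since the wrapper fork/execs the real compiler, strace needs -f to
follow the child process; a sketch with an assumed trivial test file:

    # Create a minimal MPI Fortran source to feed the wrapper:
    cat > hello.f90 <<'EOF'
    program hello
      implicit none
      include 'mpif.h'
      integer :: ierr
      call MPI_Init(ierr)
      call MPI_Finalize(ierr)
    end program hello
    EOF

    # Trace the wrapper and its children into a file, then inspect
    # the last syscalls before the SIGSEGV:
    strace -f -o mpif90.trace mpif90 hello.f90
    tail -50 mpif90.trace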
Bummer. I know there's some kind of issue with the "noac" NFS mount
option (no attribute caching); when I enabled it on my cluster, the
entire cluster got veeeryyy slowww with regard to NFS.
We unfortunately don't have much ROMIO expertise here on the OMPI
list; we pretty much import it ...
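For context, ROMIO's NFS driver expects attribute caching to be off
so that MPI-IO operations from different nodes see consistent file
state; "noac" does exactly that, at the cost of slowing all other
traffic on the mount. A hypothetical /etc/fstab line (server and
paths invented for illustration):

    # "noac" disables NFS attribute caching on this mount; needed
    # for correct shared-file MPI-IO via ROMIO, but it slows every
    # NFS access on the mount, not just MPI-IO.
    nfsserver:/export/data  /data  nfs  rw,hard,intr,noac  0 0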