Konstantin --
This problem has been fixed on the trunk; it will probably take us a
few days to get it committed on the release branch (v1.0), but it
will definitely be included in the upcoming v1.0.2.
Would you mind trying a nightly trunk snapshot to ensure that we have
fixed the problem?
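(Snapshot tarballs are posted under http://www.open-mpi.org/nightly/ and build the same way as a release. A rough sketch, with the tarball name left as a placeholder:

  myhost$ tar xzf openmpi-<snapshot>.tar.gz
  myhost$ cd openmpi-<snapshot>
  myhost$ ./configure --prefix=$HOME/ompi-trunk
  myhost$ make all install

Then rebuild and rerun the application with the snapshot's mpirun first on your PATH.)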
Hi,
It looks like it could be a problem either with gcc 4.x or with the
Fortran compiler you are using. I have compiled Open MPI on Mac OS X
10.4.4 using the following compilers:
myhost$ g77 -v
Reading specs from /usr/local/lib/gcc/powerpc-apple-darwin7.9.0/3.4.4/specs
Configured with: ../gcc/configure --ena
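(If gcc 4.x does turn out to be the culprit, one possible workaround, sketched here on the assumption that g77 3.4.x is installed as shown above, is to tell configure explicitly which Fortran compiler to use:

  myhost$ ./configure F77=g77 --prefix=/usr/local

F77 is the standard autoconf variable for the Fortran 77 compiler, so configure will pick up g77 instead of whatever compiler it finds first.)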
Hello Everyone:
Thanks to Brian Barrett's help, I was able to get Open MPI working
with Xgrid using two dual 2.5 GHz PowerMacs. I can submit HP Linpack
jobs fine and get all four CPUs cranking, but I'm having problems
with the applications that I really want to run, MrBayes and GROMACS
(t
I tried to configure Open MPI 1.0.1 under Mac OS X 10.4.4 and was not
successful. Attached are the output from './configure --prefix=/usr/local'
and the configure log file in a tarball. Any help is appreciated!
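(For anyone wanting to capture the same information, the output and log can be collected along these lines; the file names here are just an example:

  myhost$ ./configure --prefix=/usr/local 2>&1 | tee configure_out.txt
  myhost$ tar jcf configure_out.tar.bz2 configure_out.txt config.log

config.log is written automatically by configure in the build directory.)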
David
(Attachment: configure_out.tar.bz2)
Hi,
I have installed Migrate 2.1.2, but it fails to run under Open MPI (it
does run under LAM/MPI; see the end of this mail).
My system is SUSE 10 on an Athlon X2.
My hostfile contains: localhost slots=2 max_slots=2
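(For reference, a run with that hostfile would normally be launched along these lines; 'migrate-n' as the binary name is an assumption:

  myhost$ mpirun --hostfile my_hostfile -np 2 migrate-n

where my_hostfile contains the single line quoted above.)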
I tried several different commands:
1. This one does not start; the error message is:
***
Konstantin --
I am able to replicate your error. Let me look into it and get back
to you.
On Feb 1, 2006, at 12:16 PM, Konstantin Kudin wrote:
Hi,
Here is an update. The code crashes only when it is launched by
mpirun, and the actual piece of code where it happens is this:
IF ( io
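(The quoted fragment is truncated above. Purely as a generic illustration of the kind of construct involved, and not Konstantin's actual code: MPI programs often guard file I/O so that only one rank performs it. A minimal C sketch:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      int rank;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      /* Only the designated "I/O rank" touches the file, so the
         program behaves the same whether or not mpirun launched it. */
      if (rank == 0) {
          FILE *f = fopen("out.dat", "w");
          if (f != NULL) {
              fprintf(f, "results from rank 0\n");
              fclose(f);
          }
      }
      MPI_Finalize();
      return 0;
  }
)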
Greetings Konstantin.
Many thanks for this report. Another user submitted almost the same
issue earlier today (poor performance of Open MPI 1.0.x collectives;
see http://www.open-mpi.org/community/lists/users/2006/02/0558.php).
Let me provide an additional clarification on Galen's reply:
Hello Konstantin,
By using coll_basic_crossover 8 you are forcing all of your
benchmarks to use the basic collectives, which offer poor
performance. When I ran the skampi Alltoall benchmark with the tuned
collectives, I got the following results, which seem to scale quite
well, when I have a bit
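(To make the comparison reproducible, a minimal Alltoall timing loop in the spirit of what skampi measures is sketched below; the message size and iteration count are arbitrary choices. It can be run with and without the crossover setting:

  myhost$ mpirun -np 4 ./alltoall_bench
  myhost$ mpirun --mca coll_basic_crossover 8 -np 4 ./alltoall_bench

  #include <mpi.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char **argv) {
      int rank, size, i;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      const int count = 1024;              /* ints sent to each peer */
      const int iters = 100;
      int *sendbuf = malloc((size_t)size * count * sizeof(int));
      int *recvbuf = malloc((size_t)size * count * sizeof(int));
      for (i = 0; i < size * count; i++)
          sendbuf[i] = rank;

      MPI_Barrier(MPI_COMM_WORLD);         /* start everyone together */
      double t0 = MPI_Wtime();
      for (i = 0; i < iters; i++)
          MPI_Alltoall(sendbuf, count, MPI_INT,
                       recvbuf, count, MPI_INT, MPI_COMM_WORLD);
      double t1 = MPI_Wtime();

      if (rank == 0)
          printf("average MPI_Alltoall time: %g seconds\n",
                 (t1 - t0) / iters);

      free(sendbuf);
      free(recvbuf);
      MPI_Finalize();
      return 0;
  }
)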