Re: [O-MPI users] forrtl: severe (39): error during read, unit 5, file /dev/ptmx - OpenMPI 1.0.2

2006-02-03 Thread Jeff Squyres
Konstantin -- This problem has been fixed on the trunk; it will probably take us a few days to get it committed on the release branch (v1.0), but it will definitely be included in the upcoming v1.0.2. Would you mind trying a nightly trunk snapshot to ensure that we have fixed the problem?

Re: [O-MPI users] Configuring OPEN MPI 1.0.1 MAC OS X 10.4.4

2006-02-03 Thread Brian Granger
Hi, It looks like it could be a problem with either using gcc 4.x or with the Fortran compiler you are using. I have compiled OpenMPI on 10.4.4 using the compilers: myhost$ g77 -v Reading specs from /usr/local/lib/gcc/powerpc-apple-darwin7.9.0/3.4.4/specs Configured with: ../gcc/configure --ena

[O-MPI users] Xgrid and Open-MPI

2006-02-03 Thread Warner Yuen
Hello Everyone: Thanks to Brian Barrett's help, I was able to get Open MPI working with Xgrid using two dual 2.5 GHz PowerMacs. I can submit HP Linpack jobs fine and get all four CPUs cranking, but I'm having problems with the applications that I really want to run, MrBayes and GROMACS (t

[O-MPI users] Configuring OPEN MPI 1.0.1 MAC OS X 10.4.4

2006-02-03 Thread Xiaoning (David) Yang
I tried to configure OPEN MPI 1.0.1 under MAC OS X 10.4.4 and was not successful. Attached are the output from './configure --prefix=/usr/local' and the configure log file in a tarball. Any help is appreciated! David (Attachment: configure_out.tar.bz2)

[O-MPI users] problem running Migrate with open-MPI

2006-02-03 Thread Andy Vierstraete
Hi, I have installed Migrate 2.1.2, but it fails to run on Open MPI (it does run on LAM-MPI: see end of mail). My system is SUSE 10 on an Athlon X2. Hostfile: localhost slots=2 max_slots=2. I tried different commands: 1. does not start: error message: ***

Re: [O-MPI users] forrtl: severe (39): error during read, unit 5, file /dev/ptmx - OpenMPI 1.0.2

2006-02-03 Thread Jeff Squyres
Konstantin -- I am able to replicate your error. Let me look into it and get back to you. On Feb 1, 2006, at 12:16 PM, Konstantin Kudin wrote: Hi, Here is an update. The code crashes only when it is launched by mpirun, and the actual piece of code where it happens is this: IF ( io
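For reference, below is a minimal C sketch (not Konstantin's code, which is Fortran) of the usual way to keep reads of unit 5 / stdin off the non-root ranks when a job is started by mpirun: only rank 0 reads standard input and the value is broadcast to the other processes. It is shown only to illustrate the stdin-under-mpirun issue this thread is about.

/* Minimal sketch: only rank 0 reads stdin, everyone else gets the
 * value via MPI_Bcast, since stdin may not be usable on the other
 * ranks when the job is launched by mpirun. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nval = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Only the root process touches standard input. */
        if (scanf("%d", &nval) != 1)
            nval = -1;
    }

    /* Distribute the value read by rank 0 to all processes. */
    MPI_Bcast(&nval, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d has value %d\n", rank, nval);

    MPI_Finalize();
    return 0;
}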

Re: [O-MPI users] Open-MPI all-to-all performance

2006-02-03 Thread Jeff Squyres
Greetings Konstantin. Many thanks for this report. Another user submitted almost the same issue earlier today (poor performance of Open MPI 1.0.x collectives; see http://www.open-mpi.org/community/lists/users/2006/02/0558.php). Let me provide an additional clarification on Galen's reply:

Re: [O-MPI users] Open-MPI all-to-all performance

2006-02-03 Thread Galen M. Shipman
Hello Konstantin, By using coll_basic_crossover 8 you are forcing all of your benchmarks to use the basic collectives, which offer poor performance. I ran the skampi Alltoall benchmark with the tuned collectives and got the following results, which seem to scale quite well, when I have a bit
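For anyone who wants to reproduce this kind of measurement without skampi, here is a rough Alltoall timing sketch in C (the message size and iteration count are arbitrary, and this is not the benchmark Galen ran). Launching it with the coll_basic_crossover setting mentioned above, e.g. "mpirun -np 4 -mca coll_basic_crossover 8 ./alltoall", reproduces the basic-algorithm case; Galen's numbers were taken with the tuned collectives enabled instead, and how those are selected depends on the Open MPI version and MCA settings (check ompi_info for the coll parameters on your build).

/* Rough MPI_Alltoall micro-benchmark sketch: each process sends
 * COUNT ints to every other process, repeated ITERS times, and
 * rank 0 reports the average time per Alltoall. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT 1024   /* ints sent to each peer */
#define ITERS 100

int main(int argc, char **argv)
{
    int rank, size, i;
    int *sendbuf, *recvbuf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sendbuf = malloc((size_t)size * COUNT * sizeof(int));
    recvbuf = malloc((size_t)size * COUNT * sizeof(int));
    for (i = 0; i < size * COUNT; i++)
        sendbuf[i] = rank;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++)
        MPI_Alltoall(sendbuf, COUNT, MPI_INT,
                     recvbuf, COUNT, MPI_INT, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("average Alltoall time: %g s\n", (t1 - t0) / ITERS);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}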