Re: [O-MPI users] tcsh 'Unmatched ".' error on localhost

2006-02-02 Thread Jeff Squyres
Yowza -- what silly typos. Fixed on the trunk; will be committed on the release branch tomorrow. Thanks again! On Feb 1, 2006, at 4:52 PM, Glenn Morris wrote: Using v1.0.1, with tcsh as user login shell, trying to mpirun a job on the localhost that involves tcsh produces an error from tc

Re: [O-MPI users] mpirun tcsh LD_LIBRARY_PATH problem

2006-02-02 Thread Jeff Squyres
Excellent point. Hardly elegant, but definitely no portability issues there -- so I like it better. Many thanks! On Jan 31, 2006, at 7:09 PM, Glenn Morris wrote: Jeff Squyres wrote: After sending this reply, I thought about this issue a bit more -- do you have any idea how portable the e

Re: [O-MPI users] mpirun tcsh LD_LIBRARY_PATH problem

2006-02-02 Thread Glenn Morris
Jeff Squyres wrote: > Excellent point. Hardly elegant, but definitely no portability > issues there -- so I like it better. Last word on this trivial issue, I promise: if you don't want two copies added to L_L_P, you could use a temporary variable, e.g.: tcsh -c 'if ( "$?LD_LIBRARY_PATH" == 1
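For reference, a minimal sketch of that temporary-variable idea in tcsh (the /opt/ompi/lib path and the OMPI_LIB name are illustrative placeholders, not from the original message):

    # prepend the Open MPI lib directory exactly once, via a shell-local variable
    set OMPI_LIB = /opt/ompi/lib
    if ( $?LD_LIBRARY_PATH ) then
        setenv LD_LIBRARY_PATH ${OMPI_LIB}:${LD_LIBRARY_PATH}
    else
        setenv LD_LIBRARY_PATH ${OMPI_LIB}
    endif
    # OMPI_LIB is a plain (non-environment) variable, so nothing extra is exported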

[O-MPI users] A few benchmarks

2006-02-02 Thread Glen Kaukola
Hi everyone, I recently took Open MPI (1.0.2a4) for a spin and thought you all might like to see how it's currently stacking up against MPICH (1.2.7p1). The benchmark I used was the EPA's CMAQ (Community Multiscale Air Quality) model. Now bear in mind my results aren't completely scientific

Re: [O-MPI users] A few benchmarks

2006-02-02 Thread David Gunter
I would like to see more such results. In particular, it would be nice to see a comparison of OpenMPI to the newer MPICH2. Thanks, Glen. -david -- David Gunter CCN-8: HPC Environments - Parallel Tools On Feb 2, 2006, at 6:55 AM, Glen Kaukola wrote: Hi everyone, I recently took Open MPI (

Re: [O-MPI users] A few benchmarks

2006-02-02 Thread Carsten Kutzner
Hi Glen, what setup did you use for the benchmarks? I mean, what type of Ethernet switch, which network cards, which Linux kernel? I am asking because it looks weird to me that the 4-CPU OpenMPI job takes longer than the 2-CPU job, and that the 8-CPU job is faster again. Maybe the netw

Re: [O-MPI users] Bug in C++ bindings

2006-02-02 Thread Brian Granger
OK, Thanks for looking into this. Brian On Feb 1, 2006, at 8:05 AM, Brian Barrett wrote: On Jan 31, 2006, at 5:47 PM, Brian Granger wrote: I am compiling a C++ program that uses the Open-MPI c++ bindings. I think there is a bug in the constants.h and/or mpicxx.cc files. The file constants.

Re: [O-MPI users] Configuring process startup on OS X

2006-02-02 Thread Brian Granger
Brian, Excellent. This definitely gives me enough information to get going. I will give feedback as I try it out. Brian On Jan 30, 2006, at 5:44 AM, Brian Barrett wrote: On Jan 29, 2006, at 6:09 PM, Brian Granger wrote: I have compiled and installed OpenMPI on Mac OS X. As I understand

Re: [O-MPI users] A few benchmarks

2006-02-02 Thread George Bosilca
Glen, Thanks for spending the time benchmarking OpenMPI and for sending us the feedback. We know we have some issues in the 1.0.2 version, more precisely with the collective communications. We just looked inside the CMAQ code, and there are a lot of Reduce and Allreduce calls. As it looks like the collecti
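One way to test whether the tuned collectives are at fault (a sketch; the process count and executable name are placeholders) is to fall back to Open MPI's basic collective component, as Galen suggests later in this digest:

    # re-run with the basic (linear) collectives instead of the tuned ones
    mpirun -np 8 -mca coll basic,self ./cmaq_run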

Re: [O-MPI users] does btl_openib work ?

2006-02-02 Thread Galen M. Shipman
Hi Jean, I just noticed that you are running quad-proc nodes and are using: bench1 slots=4 max-slots=4 in your machine file, and you are running the benchmark using only 2 processes via: mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np 2 PMB-MPI1 By using slots=4 y
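In other words, the machine file as written keeps both ranks on one box. A sketch of a layout that would exercise the interconnect instead (bench2 is a hypothetical second node):

    # original: all four slots on one host, so -np 2 stays on bench1 (shared memory)
    bench1 slots=4 max-slots=4

    # alternative: one slot per host, so the two ranks must talk over the network
    bench1 slots=1
    bench2 slots=1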

[O-MPI users] Open-MPI all-to-all performance

2006-02-02 Thread Konstantin Kudin
Hi all, Please see the attached file for a detailed report of Open-MPI performance. Any fixes in the pipeline for that? Konstantin

Re: [O-MPI users] Open-MPI all-to-all performance

2006-02-02 Thread Konstantin Kudin
Hi all, There seem to have been problems with the attachment. Here is the report: I did some tests of Open-MPI version 1.0.2a4r8848. My motivation was an extreme degradation of all-to-all MPI performance on 8 cpus (ran like 1 cpu). At the same time, MPICH 1.2.7 on 8 cpus runs more like on 4 (
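For anyone reproducing this, the collective components built into a given installation and their tunables can be listed with ompi_info (a sketch; exact parameter names vary between Open MPI versions):

    # show the collective components in this build and their MCA parameters
    ompi_info --param coll all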

Re: [O-MPI users] does btl_openib work ?

2006-02-02 Thread Jean-Christophe Hugly
On Thu, 2006-02-02 at 15:19 -0700, Galen M. Shipman wrote: > By using slots=4 you are telling Open MPI to put the first 4 > processes on the "bench1" host. > Open MPI will therefore use shared memory to communicate between the > processes not Infiniband. Well, actually not, unless I'm mistaken
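One way to settle which transport is actually carrying the traffic (a sketch using the usual Open MPI MCA syntax; the machine file path is taken from the earlier messages) is to restrict the run to an explicit BTL list:

    # allow only InfiniBand (plus self for loopback)
    mpirun -np 2 -mca btl openib,self -machinefile /root/machines PMB-MPI1

    # or only shared memory, for comparison
    mpirun -np 2 -mca btl sm,self -machinefile /root/machines PMB-MPI1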

Re: [O-MPI users] does btl_openib work ?

2006-02-02 Thread Jean-Christophe Hugly
On Thu, 2006-02-02 at 15:19 -0700, Galen M. Shipman wrote: > Is it possible for you to get a stack trace where this is hanging? > > You might try: > > mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np 2 -d xterm -e gdb PMB-MPI1 I did that, and when it was hanging

Re: [O-MPI users] does btl_openib work ?

2006-02-02 Thread Galen M. Shipman
Hi Jean, I suspect the problem may be in the bcast, ompi_coll_tuned_bcast_intra_basic_linear. Can you try the same run using: mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np 2 -mca coll self,basic -d xterm -e gdb PMB-MPI1. This will use the basic collectives and may i
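Once the xterms come up, the typical gdb flow for catching a hang looks like this (a sketch, not from the original message):

    # in each gdb session that mpirun spawns:
    #   (gdb) run
    #   ... wait for the benchmark to hang, then press Ctrl-C ...
    #   (gdb) bt
    # comparing the backtraces from both ranks shows where each process is blocked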