Re: [O-MPI users] mpirun --prefix

2006-01-04 Thread Jeff Squyres
On Jan 4, 2006, at 7:24 PM, Anthony Chan wrote: How about this -- an ISV asked me for a similar feature a little while ago: if mpirun is invoked with an absolute pathname, then use that base directory (minus the difference from $bindir) as an option to an implicit --prefix. (your suggestion may
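A rough sketch of the idea described above, purely illustrative and not Open MPI source code: if mpirun is started via an absolute path such as /opt/openmpi/bin/mpirun (a made-up path), the installation prefix can be recovered by stripping the executable name and the bindir suffix, then treated as if --prefix /opt/openmpi had been given on the command line.

    /* Hypothetical sketch of deriving an implicit --prefix from argv[0].
     * Assumes the conventional layout <prefix>/bin/mpirun; a real
     * implementation would subtract whatever $bindir was configured,
     * not simply "/bin". */
    #include <stdio.h>
    #include <string.h>

    static void derive_prefix(const char *argv0, char *prefix, size_t len)
    {
        prefix[0] = '\0';
        if (argv0[0] != '/')                  /* only absolute invocations qualify */
            return;
        snprintf(prefix, len, "%s", argv0);
        char *slash = strrchr(prefix, '/');   /* drop "/mpirun" */
        if (slash) *slash = '\0';
        slash = strrchr(prefix, '/');         /* drop the bindir component, e.g. "/bin" */
        if (slash && strcmp(slash, "/bin") == 0) *slash = '\0';
    }

    int main(void)
    {
        char prefix[4096];
        derive_prefix("/opt/openmpi/bin/mpirun", prefix, sizeof(prefix));
        printf("implicit --prefix would be: %s\n", prefix);   /* /opt/openmpi */
        return 0;
    }

Compiled on its own this just prints the derived prefix; the actual logic in mpirun would also have to handle relocated or non-default bindir layouts.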

Re: [O-MPI users] mpirun --prefix

2006-01-04 Thread Anthony Chan
Hi Jeff, On Wed, 4 Jan 2006, Jeff Squyres wrote: > Anthony -- > > I'm really sorry; we just noticed this message today -- it got lost > in the post-SC recovery/holiday craziness. :-( I understand. :) > > Your request is fairly reasonable, but I wouldn't want to make it the > default behavior.

Re: [O-MPI users] LAM vs OPENMPI performance

2006-01-04 Thread Jeff Squyres
On Jan 4, 2006, at 5:05 PM, Tom Rosmond wrote: Thanks for the quick reply. I ran my tests with a hostfile with cedar.reachone.com slots=4. I clearly misunderstood the role of the 'slots' parameter, because when I removed it, OPENMPI slightly outperformed LAM, which I assume it should. Thanks for the help.

Re: [O-MPI users] mpirun --prefix

2006-01-04 Thread Jeff Squyres
Anthony -- I'm really sorry; we just noticed this message today -- it got lost in the post-SC recovery/holiday craziness. :-( Your request is fairly reasonable, but I wouldn't want to make it the default behavior. Specifically, I can envision some scenarios where it might be problematic

Re: [O-MPI users] LAM vs OPENMPI performance

2006-01-04 Thread Tom Rosmond
Thanks for the quick reply. I ran my tests with a hostfile with cedar.reachone.com slots=4. I clearly misunderstood the role of the 'slots' parameter, because when I removed it, OPENMPI slightly outperformed LAM, which I assume it should. Thanks for the help. Tom Brian Barrett wrote: On Jan

Re: [O-MPI users] LAM vs OPENMPI performance

2006-01-04 Thread Patrick Geoffray
Hi Tom, users-requ...@open-mpi.org wrote: I am pretty sure that LAM exploits the fact that the virtual processors are all sharing the same memory, so communication is via memory and/or the PCI bus of the system, while my OPENMPI configuration doesn't exploit this. Is this a reasonable diagnosis?

Re: [O-MPI users] LAM vs OPENMPI performance

2006-01-04 Thread Brian Barrett
On Jan 4, 2006, at 4:24 PM, Tom Rosmond wrote: I have been using LAM-MPI for many years on PC/Linux systems and have been quite pleased with its performance. However, at the urging of the LAM-MPI website, I have decided to switch to OPENMPI. For much of my preliminary testing I work on a single processor workstation

[O-MPI users] LAM vs OPENMPI performance

2006-01-04 Thread Tom Rosmond
Hello: I have been using LAM-MPI for many years on PC/Linux systems and have been quite pleased with its performance. However, at the urging of the LAM-MPI website, I have decided to switch to OPENMPI. For much of my preliminary testing I work on a single processor workstation (see the attache

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Jeff Squyres
On Jan 4, 2006, at 2:08 PM, Anthony Chan wrote: Either my program quits without writing the logfile (and without complaining) or it crashes in MPI_Finalize. I get the message "33 additional processes aborted (not shown)". This is not an MPE error message. If the logging crashes in MPI_Finalize
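For reference, a minimal manual-logging sketch using the classic MPE API (this assumes MPE/MPE2 is installed and the program is linked against it; linking flags and paths vary by installation, and this is not the code from the thread). Because MPE_Finish_log merges and writes the logfile explicitly, before MPI_Finalize is reached, it can help separate an MPE logging problem from a crash elsewhere in MPI_Finalize.

    /* Minimal manual MPE logging sketch.  Writes the log when
     * MPE_Finish_log is called, i.e. before MPI_Finalize runs. */
    #include <mpi.h>
    #include <mpe.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        MPE_Init_log();

        int ev_start = MPE_Log_get_event_number();
        int ev_end   = MPE_Log_get_event_number();
        MPE_Describe_state(ev_start, ev_end, "compute", "red");

        MPE_Log_event(ev_start, 0, "begin");
        /* ... the communication being profiled would go here ... */
        MPE_Log_event(ev_end, 0, "end");

        MPE_Finish_log("testlog");   /* merge and write the logfile now */
        MPI_Finalize();
        return 0;
    }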

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Anthony Chan
On Wed, 4 Jan 2006, Carsten Kutzner wrote: > On Tue, 3 Jan 2006, Anthony Chan wrote: > > > MPE/MPE2 logging (or clog/clog2) does not impose any limitation on the > > number of processes. Could you explain what difficulty or error > > message you encountered when using >32 processes ? > > Either

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Graham E Fagg
Thanks Carsten, I have started updating my jumpshot so will let you know as soon as I have some ideas on what's going on. G. ps. I am going offline now for 2 days while travelling On Wed, 4 Jan 2006, Carsten Kutzner wrote: Hi Graham, here are the all-to-all test results with the modification

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Carsten Kutzner
Hi Graham, here are the all-to-all test results with the modification to the decision routine you suggested yesterday. Now the routine behaves nicely for 128 and 256 float messages on 128 CPUs! For the other sizes one probably wants to keep the original algorithm, since it is faster there. However
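For context, a bare-bones version of this kind of measurement might look like the sketch below: an MPI_Alltoall of a fixed number of floats per destination, timed over a number of repetitions. The message size and repetition count are arbitrary here, and this is an illustration of the measurement, not the benchmark actually used in the thread.

    /* Rough all-to-all timing sketch; 128 floats per destination, one of
     * the message sizes mentioned above. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int count = 128;        /* floats sent to each rank */
        const int reps  = 100;
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        float *sendbuf = malloc((size_t)count * size * sizeof(float));
        float *recvbuf = malloc((size_t)count * size * sizeof(float));
        for (int i = 0; i < count * size; i++)
            sendbuf[i] = (float)rank;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int r = 0; r < reps; r++)
            MPI_Alltoall(sendbuf, count, MPI_FLOAT,
                         recvbuf, count, MPI_FLOAT, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("avg all-to-all time: %g s\n", (t1 - t0) / reps);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }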

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Carsten Kutzner
On Tue, 3 Jan 2006, Anthony Chan wrote: > MPE/MPE2 logging (or clog/clog2) does not impose any limitation on the > number of processes. Could you explain what difficulty or error > message you encountered when using >32 processes ? Either my program quits without writing the logfile (and without complaining) or it crashes in MPI_Finalize.

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Carsten Kutzner
Hi Peter, We have HP ProCurve 2848 GigE switches here (48 port). The problem is more severe the more nodes (=ports) are involved. It starts to show up at 16 ports for a limited range of message sizes and gets really bad for 32 nodes. The switch has a 96 Gbit/s backplane and should therefore be abl

Re: [O-MPI users] Performance of all-to-all on Gbit Ethernet

2006-01-04 Thread Peter Kjellström
Hello Carsten, Have you considered the possibility that this is the effect of a non-optimal ethernet switch? I don't know how many nodes you need to reproduce it on or if you even have physical access (and opportunity) but popping in another decent 16-port switch for a testrun might be interesting.