Yowza -- what silly typos.
Fixed on the trunk; it will be committed to the release branch
tomorrow. Thanks again!
On Feb 1, 2006, at 4:52 PM, Glenn Morris wrote:
Using v1.0.1, with tcsh as user login shell, trying to mpirun a job on
the localhost that involves tcsh produces an error from tc
Excellent point. Hardly elegant, but definitely no portability
issues there -- so I like it better.
Many thanks!
On Jan 31, 2006, at 7:09 PM, Glenn Morris wrote:
Jeff Squyres wrote:
After sending this reply, I thought about this issue a bit more --
do you have any idea how portable the e
Jeff Squyres wrote:
> Excellent point. Hardly elegant, but definitely no portability
> issues there -- so I like it better.
Last word on this trivial issue, I promise -- if you don't want two
copies added to L_L_P, you could use a temporary variable, e.g.:
tcsh -c 'if ( "$?LD_LIBRARY_PATH" == 1
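A rough sketch of that idea in csh syntax (my guess at the intent; the
variable name ompi_llp and the /opt/ompi/lib prefix are stand-ins, not
taken from the original message):

  # Sketch only (assumed logic; ompi_llp and /opt/ompi/lib are placeholders):
  # build the new value in a temporary variable so the Open MPI lib directory
  # is prepended exactly once, and LD_LIBRARY_PATH is only expanded when set.
  if ( $?LD_LIBRARY_PATH ) then
      set ompi_llp = "/opt/ompi/lib:${LD_LIBRARY_PATH}"
  else
      set ompi_llp = "/opt/ompi/lib"
  endif
  setenv LD_LIBRARY_PATH "$ompi_llp"
  unset ompi_llp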
Hi everyone,
I recently took Open MPI (1.0.2a4) for a spin and thought you all might
like to see how it's currently stacking up against MPICH (1.2.7p1). The
benchmark I used was the EPA's CMAQ (Community Multiscale Air Quality)
model.
Now bear in mind my results aren't completely scientific
I would like to see more of such results. In particular, it would be
nice to see a comparison of OpenMPI to the newer MPICH2.
Thanks, Glen.
-david
--
David Gunter
CCN-8: HPC Environments - Parallel Tools
On Feb 2, 2006, at 6:55 AM, Glen Kaukola wrote:
Hi everyone,
I recently took Open MPI (
Hi Glen,
what setup did you use for the benchmarks? I mean, what type of
Ethernet switch, which network cards, which Linux kernel? I am
asking because it looks weird to me that
the 4 CPU OpenMPI job is taking longer than the 2 CPU job,
and that the 8 CPU job is faster again. Maybe the netw
OK, thanks for looking into this.
Brian
On Feb 1, 2006, at 8:05 AM, Brian Barrett wrote:
On Jan 31, 2006, at 5:47 PM, Brian Granger wrote:
I am compiling a C++ program that uses the Open-MPI C++ bindings. I
think there is a bug in the constants.h and/or mpicxx.cc files.
The file constants.
Brian,
Excellent. This definitely gives me enough information to get
going. I will give feedback as I try it out.
Brian
On Jan 30, 2006, at 5:44 AM, Brian Barrett wrote:
On Jan 29, 2006, at 6:09 PM, Brian Granger wrote:
I have compiled and installed OpenMPI on Mac OS X. As I
understand
Glen,
Thanks for spending the time to benchmark OpenMPI and for sending us the
feedback. We know we have some issues in the 1.0.2 version, more precisely
with the collective communications. We just looked inside the CMAQ code, and
there are a lot of Reduce and Allreduce calls. As it looks like the collecti
Hi Jean,
I just noticed that you are running Quad proc nodes and are using:
bench1 slots=4 max-slots=4
in your machine file and you are running the benchmark using only 2
processes via:
mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np 2 PMB-MPI1
By using slots=4 y
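A hedged illustration of that point (the second host name below, bench2,
is hypothetical): giving each of two hosts a single slot forces a
2-process run onto the interconnect instead of shared memory.

  # machine file (bench2 is a made-up second host):
  #   bench1 slots=1 max-slots=1
  #   bench2 slots=1 max-slots=1
  mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np 2 PMB-MPI1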
Hi all,
Please see the attached file for a detailed report of Open-MPI
performance.
Any fixes in the pipeline for that?
Konstantin
Hi all,
There seem to have been problems with the attachment. Here is the
report:
I did some tests of Open-MPI version 1.0.2a4r8848. My motivation was
an extreme degradation of all-to-all MPI performance on 8 CPUs (it ran
like 1 CPU). At the same time, MPICH 1.2.7 on 8 CPUs runs more like on
4 (
On Thu, 2006-02-02 at 15:19 -0700, Galen M. Shipman wrote:
> By using slots=4 you are telling Open MPI to put the first 4
> processes on the "bench1" host.
> Open MPI will therefore use shared memory to communicate between the
> processes not Infiniband.
Well, actually not, unless I'm mistaken
On Thu, 2006-02-02 at 15:19 -0700, Galen M. Shipman wrote:
> Is it possible for you to get a stack trace where this is hanging?
>
> You might try:
>
>
> mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np 2 -d xterm -e gdb PMB-MPI1
>
>
I did that, and when it was hanging
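For reference, once gdb is attached in that xterm and the benchmark hangs,
a stack trace can be captured with standard gdb commands (a generic sketch,
not quoted from this thread):

  (gdb) info threads           # list the threads of the hung process
  (gdb) thread apply all bt    # backtrace of every thread
  (gdb) bt full                # current thread's backtrace with local variables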
Hi Jean,
I suspect the problem may be in the bcast,
ompi_coll_tuned_bcast_intra_basic_linear. Can you try the same run using
mpirun -prefix /opt/ompi -wdir `pwd` -machinefile /root/machines -np 2 -mca coll self,basic -d xterm -e gdb PMB-MPI1
This will use the basic collectives and may i