George Bosilca wrote:
Glen,
Thanks for spending the time benchmarking OpenMPI and for sending us the
feedback. We know we have some issues in the 1.0.2 version, more precisely
with the collective communications. We just looked inside the CMAQ code, and
there are a lot of reduce and Allreduce calls. As it looks like the collective…
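For readers who have not seen CMAQ's communication pattern, here is a
minimal sketch of the kind of step loop being described, where each
iteration ends in an MPI_Allreduce, so the collective rather than the
compute dominates the run time. The buffer size, loop count, and
reduction operation are illustrative assumptions, not taken from CMAQ:

    /* Illustrative only: a step loop whose cost is set by the
     * per-step global reduction, not by the local work. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        double local[64], global[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int step = 0; step < 1000; step++) {
            for (int i = 0; i < 64; i++)
                local[i] = rank + step + i;  /* stand-in for real work */

            /* One global sum per step; a slow Allreduce
             * implementation shows up directly here. */
            MPI_Allreduce(local, global, 64, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);
        }

        if (rank == 0)
            printf("done: global[0] = %f\n", global[0]);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and run under mpirun at 2, 4, and 8 processes, a loop
like this makes any collective-performance regression visible in the
wall-clock time, independent of the application code around it.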
Hi Glen,
what setup did you use for the benchmarks? I mean, what type of
Ethernet switch, which network cards, and which Linux kernel? I am
asking because it looks weird to me that the 4 CPU OpenMPI job takes
longer than the 2 CPU job, while the 8 CPU job is faster again. Maybe
the network…
I would like to see more results like these. In particular, it would be
nice to see a comparison of OpenMPI to the newer MPICH2.
Thanks, Glen.
-david
--
David Gunter
CCN-8: HPC Environments - Parallel Tools
On Feb 2, 2006, at 6:55 AM, Glen Kaukola wrote:
Hi everyone,
I recently took Open MPI (…