Dear openmpi-ers,

I recently installed Open MPI to run OpenFOAM 1.5 on our Myrinet cluster. I
saw great performance improvements compared to Open MPI 1.2.6; however, it
is still a little behind the commercial HP-MPI.
Are there further tips for fine-tuning the mpirun parameters for this
particular application? From my experience the MX MTL should be the faster
one, so I currently use:

mpirun --mca mtl mx --mca pml cm ...


as given in the FAQ.
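
For comparison, I might also benchmark the BTL path over MX that the FAQ
describes (assuming my build includes the mx, sm, and self BTLs); a sketch
would be:

mpirun --mca pml ob1 --mca btl mx,sm,self ...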

I also think that processor affinity might be worth trying; I plan to test
that next, along the lines of the command sketched below. Any other tips?
Are there special reasons why HP-MPI still outperforms Open MPI for this
kind of task?
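
If I read the FAQ correctly, a first attempt at enabling affinity (assuming
a 1.2/1.3-series build where the mpi_paffinity_alone MCA parameter is
available) would be something like:

mpirun --mca mpi_paffinity_alone 1 --mca pml cm --mca mtl mx ...

Thanks and regards.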

BastiL
