Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-17 Thread Dugenoux Albert
Hello. As promised, I am sending you the results of the different simulations and parameter combinations for the MPI options:
TEST | DESCRIPTION | SHARING MPI | WITH PBS | ELAPSED TIME 1ST ITERATION
1    | Node 2 …
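For tests submitted through PBS, a minimal submission script might look like the sketch below. This is only an illustration: the job name, queue defaults, node/ppn counts, and the cs_solver binary name are assumptions, not taken from the original message.

    #!/bin/bash
    #PBS -N saturne_scaling
    #PBS -l nodes=2:ppn=12        # e.g. 2 boards, 12 cores each; adjust per test case
    #PBS -l walltime=02:00:00
    cd $PBS_O_WORKDIR

    # If Open MPI was built with PBS (tm) support it picks up the allocation
    # automatically; otherwise pass the node list explicitly:
    #   mpirun -np 24 -hostfile $PBS_NODEFILE ./cs_solver
    mpirun -np 24 ./cs_solver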

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-11 Thread Dugenoux Albert
>> …some natural boundary in your model - perhaps with 8 processes/node you wind up with more processes that cross the node boundary, further increasing the communication requirement.
>>
>> Do things continue to get worse if you use all 4 nodes with 6 processes/node?
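A hedged sketch of the run being suggested here, spreading 24 ranks as 6 processes per node over all 4 nodes (flag names as in Open MPI 1.x; the hostfile path and the cs_solver binary name are assumptions):

    # 4 nodes x 6 ranks = 24 ranks; -npernode caps the ranks placed on each host
    mpirun -np 24 -npernode 6 -hostfile ./hosts ./cs_solver

Comparing this against 8 processes/node on 3 nodes (same 24 ranks, different node-boundary crossings) would isolate how much of the slowdown is inter-node communication.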

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-10 Thread Dugenoux Albert
From: Ralph Castain
To: Dugenoux Albert; Open MPI Users
Sent: Tuesday, 10 July 2012, 16:47
Subject: Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

I suspect it mostly reflects communication patterns. I don't know anything about Saturne, but shared memory is a …
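To check whether intra-node traffic is actually going over the shared-memory path rather than TCP, one hedged approach is to list the available BTL components and then force them explicitly (the sm, self, and tcp component names are standard in Open MPI 1.x; rank count and binary name are assumptions):

    # Verify the shared-memory BTL is built into this Open MPI install
    ompi_info | grep btl

    # Use shared memory within a node and TCP between nodes
    mpirun -np 24 --mca btl sm,self,tcp ./cs_solver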

[OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-10 Thread Dugenoux Albert
Hi. I have recently built a cluster based on a Dell PowerEdge server running Debian 6.0. The server is composed of 4 system boards, each with 2 hexacore processors, giving 12 cores per board. The boards are linked by a local Gbit switch. In order to parallelize the software Code Saturne …
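For a setup like this (4 boards of 12 cores each behind a Gbit switch), a minimal Open MPI hostfile sketch could look as follows; the hostnames are hypothetical:

    # hosts: one line per board, 12 slots (cores) each
    node01 slots=12
    node02 slots=12
    node03 slots=12
    node04 slots=12

It would then be passed to mpirun with -hostfile ./hosts so that ranks are placed across the boards rather than oversubscribing one of them.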