Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-17 Thread Dugenoux Albert
Hello. As I promised, I am sending you the results of the different simulations and parameter settings for the MPI options: TEST | DESCRIPTION | SHARING MPI | WITH PBS | ELAPSED TIME (1ST ITERATION) 1      Node 2
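For anyone trying to reproduce this kind of comparison between runs submitted with and without PBS, here is a minimal Torque/PBS job sketch. The resource line, walltime, and the cs_solver binary name are assumptions for illustration, not values taken from the table above.

  #!/bin/bash
  #PBS -N saturne_scaling
  #PBS -l nodes=4:ppn=6        # assumption: 4 boards, 6 MPI ranks per board
  #PBS -l walltime=02:00:00
  cd "$PBS_O_WORKDIR"
  # $PBS_NODEFILE lists one entry per allocated slot; Open MPI can use it directly.
  mpirun --hostfile "$PBS_NODEFILE" -np 24 ./cs_solver   # assumption: solver binary name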

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-11 Thread Yvan Fournier
On Jul 10, 2012, at 7:31 AM, Dugenoux Albert wrote: > > Hi. > > I have recently built a cluster on a Dell PowerEdge server running a Debian > 6.0 OS. This server is composed of > 4 system boards with 2 hexa-core processors each, so it gives 12 cores per system > board. > The boards are linked with

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-11 Thread Gus Correa
s Correa *To:* Open MPI Users *Sent:* Wednesday, July 11, 2012, 12:51 AM *Subject:* Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi On 07/10/2012 05:31 PM, Jeff Squyres wrote: > +1. Also, not all Ethernet switches are created equal -- > particularly commodity 1 Gb Ether

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-11 Thread Dugenoux Albert
Sent: Wednesday, July 11, 2012, 12:51 AM Subject: Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi On 07/10/2012 05:31 PM, Jeff Squyres wrote: > +1.  Also, not all Ethernet switches are created equal -- > particularly commodity 1 Gb Ethernet switches. > I've

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-10 Thread Gus Correa
On 07/10/2012 05:31 PM, Jeff Squyres wrote: +1. Also, not all Ethernet switches are created equal -- particularly commodity 1 Gb Ethernet switches. I've seen plenty of crappy Ethernet switches rated for 1 Gb that could not reach that speed when under load. Are you perhaps belittling my dear $43

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-10 Thread Gus Correa
hance something by tuning MCA parameters? *From:* Ralph Castain *To:* Dugenoux Albert; Open MPI Users *Sent:* Tuesday, July 10, 2012, 4:47 PM *Subject:* Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi I suspect it mostly reflects communication patterns. I don't kno
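Since the question about tuning MCA parameters comes up here, a minimal sketch of the run-time settings one might experiment with on a Gigabit Ethernet cluster of that era follows. The interface name eth0, the rank count, and the cs_solver binary name are assumptions; btl and btl_tcp_if_include are standard Open MPI 1.x MCA parameters.

  # Restrict Open MPI to the shared-memory, self and TCP transports,
  # and tell the TCP BTL which interface carries the MPI traffic.
  mpirun -np 24 \
         --mca btl sm,self,tcp \
         --mca btl_tcp_if_include eth0 \
         ./cs_solver              # assumption: solver binary name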

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-10 Thread Jeff Squyres
+1. Also, not all Ethernet switches are created equal -- particularly commodity 1 Gb Ethernet switches. I've seen plenty of crappy Ethernet switches rated for 1 Gb that could not reach that speed when under load. On Jul 10, 2012, at 10:47 AM, Ralph Castain wrote: > I suspect it mostly reflect
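One way to check whether a switch really delivers line rate under MPI load is a point-to-point bandwidth test between two boards. A minimal sketch, assuming the OSU micro-benchmarks are built and the hosts are named node1 and node2 (both names are assumptions):

  # Two ranks, one per board, forced onto TCP so the switch is on the data path.
  mpirun -np 2 --host node1,node2 \
         --mca btl self,tcp \
         ./osu_bw
  # A healthy Gigabit Ethernet link typically reports roughly 110-118 MB/s for
  # large messages; markedly less under load points at the switch or the NICs.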

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-10 Thread David Warren
g MCA parameters? *From:* Ralph Castain *To:* Dugenoux Albert; Open MPI Users *Sent:* Tuesday, July 10, 2012, 4:47 PM *Subject:* Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi I suspect it mostly reflects communication patterns. I don't know anything about Saturne,

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-10 Thread Dugenoux Albert
__ From: Ralph Castain To: Dugenoux Albert; Open MPI Users Sent: Tuesday, July 10, 2012, 4:47 PM Subject: Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi I suspect it mostly reflects communication patterns. I don't know anything about Saturne, but shared memory is a

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-10 Thread Ralph Castain
I suspect it mostly reflects communication patterns. I don't know anything about Saturne, but shared memory is a great deal faster than TCP, so the more processes sharing a node the better. You may also be hitting some natural boundary in your model - perhaps with 8 processes/node you wind up wi
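To act on this observation, one can control how many ranks land on each board and verify the placement. A minimal sketch, assuming Open MPI 1.6-era options and the hosts.txt hostfile sketched after the original posting at the bottom of this page; the file name, host names, and solver binary name are assumptions.

  # Fill each board with 12 ranks so most point-to-point traffic stays on the
  # fast shared-memory path, and print the resulting core bindings.
  mpirun --hostfile hosts.txt -npernode 12 -np 48 \
         --bind-to-core --report-bindings \
         ./cs_solver              # assumption: solver binary name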

[OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-10 Thread Dugenoux Albert
Hi. I have recently built a cluster on a Dell PowerEdge server running a Debian 6.0 OS. This server is composed of 4 system boards with 2 hexa-core processors each, so it gives 12 cores per system board. The boards are linked with a local Gbit Ethernet switch. In order to parallelize the software Code Sa
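The hardware described above maps directly onto an Open MPI hostfile. A minimal baseline-launch sketch follows; the host names board1..board4, the file name hosts.txt, and the cs_solver binary name are assumptions, not taken from the thread.

  # Describe the four boards, 12 slots each, then run across all of them
  # over the Gigabit switch as a baseline for the scaling tests.
  cat > hosts.txt <<'EOF'
  board1 slots=12
  board2 slots=12
  board3 slots=12
  board4 slots=12
  EOF
  mpirun --hostfile hosts.txt -np 48 ./cs_solver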