Hello.
As I promised, I am sending you the results of the different simulations and
parameters according to the MPI options:

TEST | DESCRIPTION (MPI SHARING) | WITH PBS | ELAPSED TIME, 1ST ITERATION
1    | Node 2 [...]
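(A minimal sketch of how such a per-iteration elapsed time can be measured in
an MPI code with MPI_Wtime; the iteration body below is only a placeholder,
not Code Saturne's actual solver or timer.)

/* Minimal sketch (not from Code Saturne): time one iteration with MPI_Wtime
 * and report the slowest rank, since a parallel step is only as fast as its
 * slowest process. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double t0, t1, local, slowest;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);      /* start everyone together */
    t0 = MPI_Wtime();

    /* ... one solver iteration would go here ... */

    t1 = MPI_Wtime();
    local = t1 - t0;

    MPI_Reduce(&local, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("elapsed time, 1st iteration: %.3f s\n", slowest);

    MPI_Finalize();
    return 0;
}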
On Jul 10, 2012, at 7:31 AM, Dugenoux Albert wrote:
>
> Hi.
>
> I have recently built a cluster upon a Dell PowerEdge server with a Debian
> 6.0 OS. This server is composed of 4 system boards with 2 hexa-core
> processors each, so it gives 12 cores per system board.
> The boards are linked with a local Gbit switch.
From: Gus Correa
To: Open MPI Users
Sent: Wednesday, 11 July 2012, 00:51
Subject: Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

On 07/10/2012 05:31 PM, Jeff Squyres wrote:
> +1. Also, not all Ethernet switches are created equal --
> particularly commodity 1GB Ethernet switches.
> I've seen plenty of crappy Ethernet switches rated for 1GB
> that could not reach that speed when under load.
Are you perhaps belittling my dear $43 [...]
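(One rough way to check whether a commodity gigabit switch really delivers
wire speed under MPI load is a simple two-rank ping-pong between nodes; the
sketch below is only illustrative, the message size and repetition count are
arbitrary, and a proper benchmark such as the OSU or NetPIPE suites is more
rigorous.)

/* Rough two-rank bandwidth check: run with exactly two ranks, one on each
 * node, so the traffic crosses the switch. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (4 * 1024 * 1024)   /* 4 MB per message */
#define REPS      50

int main(int argc, char **argv)
{
    int rank, i;
    char *buf = malloc(MSG_BYTES);
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        double mb = 2.0 * REPS * MSG_BYTES / (1024.0 * 1024.0);
        printf("approx. bandwidth: %.1f MB/s\n", mb / (t1 - t0));
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

On gigabit Ethernet a healthy path should report on the order of 110 MB/s; a
markedly lower figure under load points at the switch or NIC rather than at
Open MPI.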
From: Ralph Castain
To: Dugenoux Albert; Open MPI Users
Sent: Tuesday, 10 July 2012, 16:47
Subject: Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

I suspect it mostly reflects communication patterns. I don't know anything
about Saturne, but shared memory is a great deal faster than TCP, so the more
processes sharing a node the better. You may also be hitting some natural
boundary in your model - perhaps with 8 processes/node you wind up [...]
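(How many ranks actually end up sharing a board for a given run can be checked
directly from a small MPI program; the sketch below is not part of Code
Saturne, it simply reports each rank's host name.)

/* Small sketch: print which host each rank landed on, to verify how many
 * processes actually share a node for a given mpirun invocation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);

    printf("rank %d runs on %s\n", rank, host);

    MPI_Finalize();
    return 0;
}

With the Open MPI of that era the placement itself is controlled on the mpirun
command line (for instance with something like -npernode), so the same binary
can be used to compare 4, 8 or 12 processes per board.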
Hi.
I have recently built a cluster upon a Dell PowerEdge server with a Debian 6.0
OS. This server is composed of 4 system boards with 2 hexa-core processors
each, so it gives 12 cores per system board.
The boards are linked with a local Gbit switch.
In order to parallelize the software Code Saturne [...] enhance something by
tuning mca parameters?