Hello.
As promised, I am sending you the results of the different simulations and parameters,
according to the MPI options:

TEST | DESCRIPTION SHARING MPI | WITH PBS | ELAPSED TIME 1ST ITERATION
1 | Node 2
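
For the tests run under PBS, a minimal job script along these lines is what
controls how many MPI processes land on each board. The resource line, the core
counts, and the cs_solver launch command are my assumptions, not the exact
script used here:

    #!/bin/sh
    # Assumed PBS request: 2 boards, 12 processes per board (adjust per test).
    #PBS -l nodes=2:ppn=12
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    # -npernode fixes the number of MPI ranks placed on each node;
    # ./cs_solver only stands in for however Code Saturne is actually launched.
    mpirun -np 24 -npernode 12 -machinefile $PBS_NODEFILE ./cs_solver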
>> natural boundary in your model - perhaps with 8 processes/node you wind up
>> with more processes that cross the node boundary, further increasing the
>> communication requirement.
>>
>> Do things continue to get worse if you use all 4 nodes with 6 processes/node?
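
Something along these lines might be used to try the layout suggested above
(4 nodes with 6 processes each) and to verify where the ranks actually land.
The PBS request is assumed, the mpirun options are standard Open MPI 1.4/1.6
flags, and ./cs_solver is again only a placeholder for the real Code Saturne
launch command:

    # Assumed PBS resource request for 4 boards with 6 ranks each:
    #PBS -l nodes=4:ppn=6
    # --report-bindings prints where each rank is bound, so the split across
    # board/node boundaries can be checked in the job output.
    mpirun -np 24 -npernode 6 --bind-to-core --report-bindings ./cs_solver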
__
From: Ralph Castain
To: Dugenoux Albert; Open MPI Users
Sent: Tuesday, 10 July 2012, 16:47
Subject: Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi
I suspect it mostly reflects communication patterns. I don't know anything
about Saturne, but shared memory is a much faster transport than TCP over your
Gigabit link, so the more of the communication that stays inside a node, the better.
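
A rough way to see how much of the difference comes from the shared-memory
path versus the Gigabit link is to compare a run forced onto TCP with a normal
run. The MCA syntax below is standard Open MPI; ./cs_solver is again just a
stand-in for the actual solver command:

    ompi_info | grep btl      # list the transports this Open MPI build offers
    # Force all messages over TCP (shared-memory transport disabled):
    mpirun -np 12 --mca btl self,tcp ./cs_solver
    # Normal run, with intra-node messages going over shared memory:
    mpirun -np 12 --mca btl self,sm,tcp ./cs_solver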
Hi.
I have recently built a cluster on a Dell PowerEdge server running Debian 6.0.
The server is composed of 4 system boards, each with 2 hexa-core processors,
which gives 12 cores per system board.
The boards are linked through a local Gbit switch.
In order to parallelize the software Code Saturne