That's a first step. My question was more related to the process overlay on the
cores. If the MPI implementation places one process per node, then rank k and
rank k+1 will always be on separate nodes, and the communications will have to
go over IB. Conversely, if the MPI implementation places consecutive ranks on
the same node, most of the rank-k to rank-k+1 traffic stays on the node and
goes through shared memory.
Yes, there is definitely only 1 process per core with both MPI
implementations.
Thanks, G.
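A minimal way to double-check that mapping is to have every rank report the
host it runs on. This is only a sketch (the file name is arbitrary; the
--byslot / --bynode flags are the Open MPI 1.4-era mpirun options for fill-up
vs. round-robin placement):

  /* placemap.c -- every rank reports the host it runs on.
     Build: mpicc placemap.c -o placemap
     Run:   mpirun --byslot ./placemap   (fill each node's cores first)
            mpirun --bynode ./placemap   (round-robin over the nodes)     */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, len;
      char host[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      MPI_Get_processor_name(host, &len);

      /* If rank k and rank k+1 print different hosts, their messages cross
         the interconnect (IB here); the same host means shared memory. */
      printf("rank %d of %d on %s\n", rank, size, host);

      MPI_Finalize();
      return 0;
  }

Comparing the output of such a run under the two MPI implementations shows
directly whether they lay the ranks out the same way.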
On 20/12/2010 20:39, George Bosilca wrote:
Are your processes placed the same way with the two MPI implementations?
Per-node vs. per-core?
george.
On Dec 20, 2010, at 11:14, Gilbert Grosdidier wrote:
Rob,
Thanks for the analysis. I have used your suggestions, but am still
frustrated by what I am seeing. I too have run my tests on single-node
systems, and here is what I have done:
1. I modified the 'writeb' script to essentially mimic the example in
section 7.9.3 of Vol 2 of MPI: The Complete Reference.
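The 7.9.3 example itself is not quoted in this thread; purely as a generic
illustration of the kind of parallel write a 'writeb'-style test exercises
(each rank writing its own block of a shared file at an explicit offset),
and not the actual script, a sketch:

  /* Generic MPI-IO sketch: every rank writes one block of "testfile".
     Error checking is omitted for brevity.                              */
  #include <mpi.h>

  #define BLOCK 1024                    /* ints per rank, arbitrary here */

  int main(int argc, char **argv)
  {
      int rank, i, buf[BLOCK];
      MPI_File fh;
      MPI_Offset off;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      for (i = 0; i < BLOCK; i++)
          buf[i] = rank;                /* recognizable payload per rank */

      off = (MPI_Offset)rank * BLOCK * sizeof(int);   /* this rank's slot */
      MPI_File_open(MPI_COMM_WORLD, "testfile",
                    MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
      /* collective write at an explicit offset */
      MPI_File_write_at_all(fh, off, buf, BLOCK, MPI_INT, MPI_STATUS_IGNORE);
      MPI_File_close(&fh);
      MPI_Finalize();
      return 0;
  }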
Are your processes placed the same way with the two MPI implementations?
Per-node vs. per-core?
george.
On Dec 20, 2010, at 11:14, Gilbert Grosdidier wrote:
> Bonjour,
>
> I am now at a loss with my running of OpenMPI (namely 1.4.3)
> on an SGI Altix cluster with 2048 or 4096 cores, running over Infiniband.
Bonjour,
I am now at a loss with my running of OpenMPI (namely 1.4.3)
on an SGI Altix cluster with 2048 or 4096 cores, running over Infiniband.
After fixing several rather obvious failures with Ralph's, Jeff's and John's
help, I am now facing the bottom of this story, since:
- there are no more obvious failures
Hi,
I am now OK with the environment variable. It is pretty simple to set, and
to read inside the code in order to pack the messages.
About the tests: the outcome depends so much on the cluster, on OpenMPI itself
and on the model that this is not an industrial way of tuning the computation.
But the environment variable is a good workaround.
Thanks again
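The actual variable name and the packing code are not shown in the thread; as
a rough sketch of the idea (an environment variable deciding at run time
whether small messages get packed into a single send), with MSG_PACK as a
purely hypothetical name:

  /* Sketch only: MSG_PACK is a made-up variable name, and the two small
     arrays stand in for whatever the application really exchanges.
     Run with at least two ranks.                                        */
  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      int rank, pos = 0;
      double a[8] = {0}, b[8] = {0};
      char buf[2 * 8 * sizeof(double)];   /* enough on a homogeneous cluster */
      const char *s = getenv("MSG_PACK"); /* toggle packing without recompiling */
      int do_pack = s ? atoi(s) : 1;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          if (do_pack) {
              /* one packed message instead of two small ones */
              MPI_Pack(a, 8, MPI_DOUBLE, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
              MPI_Pack(b, 8, MPI_DOUBLE, buf, sizeof(buf), &pos, MPI_COMM_WORLD);
              MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
          } else {
              MPI_Send(a, 8, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
              MPI_Send(b, 8, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);
          }
      } else if (rank == 1) {
          if (do_pack) {
              MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              MPI_Unpack(buf, sizeof(buf), &pos, a, 8, MPI_DOUBLE, MPI_COMM_WORLD);
              MPI_Unpack(buf, sizeof(buf), &pos, b, 8, MPI_DOUBLE, MPI_COMM_WORLD);
          } else {
              MPI_Recv(a, 8, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              MPI_Recv(b, 8, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          }
      }
      MPI_Finalize();
      return 0;
  }

Running the same job with MSG_PACK=0 and MSG_PACK=1 then gives a quick A/B
comparison without touching the sources.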