Hi,

If I understand you correctly, the most suitable way to do this is with the
processor affinity (paffinity) support and the rank file mapping that we have
in Open MPI 1.3. However, the OS usually distributes the processes evenly
between the sockets by itself.
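
For example, a rank file for two blades with two sockets each could look
roughly like the sketch below (the host names blade01/blade02, the slot
numbering and the rank file name are only an example from my side, please
double check them against the attached document and your own machines):

   rank 0=blade01 slot=0
   rank 1=blade01 slot=1
   rank 2=blade02 slot=0
   rank 3=blade02 slot=1

and then start the job with something like

   mpirun -np 4 -rf my_rankfile ./your_app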

There is still no formal FAQ, for a number of reasons, but you can read how to
use it in the attached draft ( there were a few renamings of the params, so
check with ompi_info ).
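
For example, something along these lines should show the current parameter
names on your build (the exact framework/component names may differ slightly
between versions, which is why I suggest checking):

   ompi_info --param rmaps rank_file
   ompi_info --param mpi all | grep affinity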

Shared memory (sm) is used between processes that share the same machine, and
openib is used between different machines ( hostnames ); no special MCA params
are needed for that.
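
So for your case a command line like the following sketch (again with a
made-up rank file and application name) should already do what you want, sm
inside a blade and openib between blades; raising btl_base_verbose is a rough
way to crosscheck which BTLs actually get selected:

   mpirun -np 4 -rf my_rankfile \
          --mca btl sm,openib,self \
          --mca btl_base_verbose 30 \
          ./your_app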

Best Regards,
Lenny




On Sun, Oct 19, 2008 at 10:32 AM, Gilbert Grosdidier <gro...@mail.cern.ch> wrote:

>  Working with a CellBlade cluster (QS22), the requirement is to have one
> instance of the executable running on each socket of the blade (there are 2
> sockets). The application is of the 'domain decomposition' type, and each
> instance often needs to send/receive data both with the remote blades and
> with the neighboring socket.
>
>  Question is : which specification must be used for the mca btl component
> to force 1) shmem type messages when communicating with this neighbor
> socket,
> while 2) using openib to communicate with the remote blades ?
> Is '-mca btl sm,openib,self' suitable for this ?
>
>  Also, which debug flags could be used to crosscheck that the messages are
> _actually_ going through the right channel in each case, please ?
>
>  We are currently using OpenMPI 1.2.5 shipped with RHEL5.2 (ppc64).
> Which version do you think is currently the most optimised for these
> processors and problem type ? Should we go towards OpenMPI 1.2.8 instead ?
> Or even try some OpenMPI 1.3 nightly build ?
>
>  Thanks in advance for your help,                  Gilbert.
>

Attachment: RANKS_FAQ.doc
Description: MS-Word document
