On a blade, you can easily bind an application to run
on CPU 0 (resp. 1) while using the memory banks local to CPU 0 (resp. 1) with:

numactl --cpubind=0 --membind=0 app ...
(resp. numactl --cpubind=1 --membind=1 app ...)
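
If it helps, here is a minimal sketch of a launch wrapper for one rank per
socket (the OMPI_COMM_WORLD_LOCAL_RANK variable is an assumption; older
Open MPI versions export the local rank under different names):

#!/bin/sh
# bind_socket.sh -- bind this MPI rank to one socket and its local memory
# (the local-rank env variable name is an assumption, check your launcher)
SOCKET=$(( ${OMPI_COMM_WORLD_LOCAL_RANK:-0} % 2 ))
exec numactl --cpubind=$SOCKET --membind=$SOCKET "$@"

and then, for example:  mpirun -np 2 ./bind_socket.sh app ...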

 Hope this helps,          Gilbert.

On Mon, 27 Oct 2008, Lenny Verkhovsky wrote:

> Can you update me with the mapping, or with a way to get it from the OS on
> the Cell?
> 
> thanks
> 
> On Thu, Oct 23, 2008 at 8:08 PM, Mi Yan <mi...@us.ibm.com> wrote:
> 
> > Lenny,
> >
> > Thanks.
> > I asked the Cell/BE Linux kernel developer to get the CPU mapping :) The
> > mapping is fixed in the current kernel.
> >
> > Mi
> > On 10/23/2008 01:52 PM, "Lenny Verkhovsky" <lenny.verkhov...@gmail.com>
> > wrote (Re: [OMPI users] Working with a CellBlade cluster):
> >
> > According to
> > https://svn.open-mpi.org/trac/ompi/milestone/Open%20MPI%201.3, very soon;
> > but you can download the trunk version from http://www.open-mpi.org/svn/
> > and check whether it works for you.
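> >
> > (A sketch only, in case it is useful: the repository path and build steps
> > below are assumptions, so check the SVN page above for the actual URL and
> > instructions.)
> >
> > svn checkout http://svn.open-mpi.org/svn/ompi/trunk ompi-trunk
> > cd ompi-trunk
> > ./autogen.sh && ./configure --prefix=$HOME/ompi-trunk && make all install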
> >
> > How can you check the CPU mapping from the OS? My cat /proc/cpuinfo shows
> > very little info:
> > # cat /proc/cpuinfo
> > processor : 0
> > cpu : Cell Broadband Engine, altivec supported
> > clock : 3200.000000MHz
> > revision : 48.0 (pvr 0070 3000)
> > processor : 1
> > cpu : Cell Broadband Engine, altivec supported
> > clock : 3200.000000MHz
> > revision : 48.0 (pvr 0070 3000)
> > processor : 2
> > cpu : Cell Broadband Engine, altivec supported
> > clock : 3200.000000MHz
> > revision : 48.0 (pvr 0070 3000)
> > processor : 3
> > cpu : Cell Broadband Engine, altivec supported
> > clock : 3200.000000MHz
> > revision : 48.0 (pvr 0070 3000)
> > timebase : 26666666
> > platform : Cell
> > machine : CHRP IBM,0793-1RZ
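> >
> > (Just a thought, not verified on a QS22: numactl and sysfs usually show
> > which CPUs belong to which memory node.)
> >
> > numactl --hardware                  # lists each NUMA node with its CPUs and memory
> > ls /sys/devices/system/node/node0/  # the cpuN entries are the CPUs attached to node 0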
> >
> >
> >
> > On Thu, Oct 23, 2008 at 3:00 PM, Mi Yan <mi...@us.ibm.com> wrote:
> >
> >    Hi, Lenny,
> >
> >    So rank file mapping will be supported in OpenMPI 1.3? I'm using
> >    OpenMPI 1.2.6 and did not find the parameter "rmaps_rank_file_".
> >    Do you have an idea when OpenMPI 1.3 will be available? OpenMPI 1.3 has
> >    quite a few features I'm looking for.
> >
> >    Thanks,
> >
> >    Mi
> >    On 10/23/2008 05:48 AM, "Lenny Verkhovsky" <lenny.verkhov...@gmail.com>
> >    wrote (Re: [OMPI users] Working with a CellBlade cluster):
> >
> >    Hi,
> >
> >
> >    If I understand you correctly, the most suitable way to do it is with the
> >    processor affinity (paffinity) support that we have in Open MPI 1.3 and
> >    the trunk. However, the OS usually distributes processes evenly between
> >    sockets by itself.
> >
> >    There is still no formal FAQ, for multiple reasons, but you can read how
> >    to use it in the attached draft (there have been a few renamings of the
> >    params, so check with ompi_info).
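> >
> >    For illustration only, a rough sketch (hostnames, slot numbers, and the
> >    exact slot notation are assumptions; check the attached doc and
> >    ompi_info for the current syntax), binding one rank per socket on two
> >    blades could look like:
> >
> >    # rankfile
> >    rank 0=blade01 slot=0
> >    rank 1=blade01 slot=1
> >    rank 2=blade02 slot=0
> >    rank 3=blade02 slot=1
> >
> >    mpirun -np 4 -rf ./rankfile app ...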
> >
> >    Shared memory is used between processes that share the same machine, and
> >    openib is used between different machines (hostnames); no special MCA
> >    params are needed.
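> >
> >    As a sketch (the verbosity level is only illustrative), restricting the
> >    BTLs and turning on BTL verbosity to crosscheck which channel each
> >    message takes might look like:
> >
> >    mpirun -np 4 -hostfile hosts -mca btl sm,openib,self \
> >           -mca btl_base_verbose 30 app ...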
> >
> >    Best Regards
> >    Lenny,
> >
> >
> >     On Sun, Oct 19, 2008 at 10:32 AM, Gilbert Grosdidier
> >     <gro...@mail.cern.ch> wrote:
> >       Working with a CellBlade cluster (QS22), the requirement is to have
> >          one instance of the executable running on each socket of the blade
> >          (there are 2 sockets). The application is of the 'domain
> >          decomposition' type, and each instance is required to often
> >          send/receive data with both the remote blades and the neighbor
> >          socket.
> >
> >          The question is: which specification must be used for the mca btl
> >          component to force 1) shmem-type messages when communicating with
> >          the neighbor socket, while 2) using openib to communicate with the
> >          remote blades? Is '-mca btl sm,openib,self' suitable for this?
> >
> >          Also, which debug flags could be used to crosscheck that the
> >          messages are _actually_ going through the right channel in each
> >          case, please?
> >
> >          We are currently using OpenMPI 1.2.5 shipped with RHEL5.2 (ppc64).
> >          Which version do you think is currently the most optimised for
> >          these processors and problem type? Should we go towards OpenMPI
> >          1.2.8 instead? Or even try some OpenMPI 1.3 nightly build?
> >
> >          Thanks in advance for your help, Gilbert.
> >
> >    (See attached file: RANKS_FAQ.doc)
> >
> > _______________________________________________
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
> 

-- 
*---------------------------------------------------------------------*
  Gilbert Grosdidier                 gilbert.grosdid...@in2p3.fr
  LAL / IN2P3 / CNRS                 Phone : +33 1 6446 8909
  Faculté des Sciences, Bat. 200     Fax   : +33 1 6446 8546
  B.P. 34, F-91898 Orsay Cedex (FRANCE)
 ---------------------------------------------------------------------
