Sure:

$ ompi_info --param hwloc all -l 9
…..
               MCA hwloc: parameter "hwloc_base_cpu_set" (current value: "",
                          data source: default, level: 9 dev/all, type:
                          string)
                          Comma-separated list of ranges specifying logical
                          cpus allocated to this job [default: none]
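
For example, something like this should keep the procs off logical cpu 0 on an
8-core node (an untested sketch - the cpu range, -np value, and ./my_app are
just placeholders; the value follows the comma-separated range format described
above):

$ mpirun --mca hwloc_base_cpu_set 1-7 -np 4 ./my_app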



> On Dec 22, 2014, at 1:29 PM, Saliya Ekanayake <esal...@gmail.com> wrote:
> 
> Thank you, and one last question: is it possible to avoid a particular core and 
> instruct OMPI to use only the other cores?
> 
> On Mon, Dec 22, 2014 at 2:08 PM, Ralph Castain <r...@open-mpi.org> wrote:
> 
>> On Dec 22, 2014, at 10:45 AM, Saliya Ekanayake <esal...@gmail.com> wrote:
>> 
>> Hi Ralph,
>> 
>> Yes, --report-bindings shows the correct binding for the processes, as 
>> expected. The doubt I have is this: say I spawn a thread within my process. 
>> If I don't specify affinity for it, is it possible for it to get scheduled 
>> to run on a core outside the process's binding?
> 
> It shouldn’t, unless you deliberately unbind it.
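> 
> If you want to double-check on Linux, every thread of a process shows up under
> /proc, so something like this (with <pid> replaced by the actual pid of the
> rank) should print the allowed cpus for every thread - they should all match
> the binding reported for that proc:
> 
> $ grep Cpus_allowed_list /proc/<pid>/task/*/status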
> 
>> 
>> My second question is: does MPI provide an API so that I can retrieve the 
>> binding info from within the program to make decisions about setting thread 
>> affinity?
> 
> Nothing specifically in the standard, no. There has been some discussion on 
> this list about ways of getting the info, though they all involve a 
> collective operation. I'm working on an MPI extension for OMPI to expose it, 
> since each proc already has the binding/location info for every proc in the 
> job - there is just no MPI-standard way of providing it to you.
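> 
> In the meantime, each proc can at least see its own binding at the OS level
> without any MPI call - e.g. via sched_getaffinity() or hwloc_get_cpubind()
> from inside the program, or via /proc on Linux. A rough illustration (the
> mapping flags, -np value, and PE count are just placeholders;
> OMPI_COMM_WORLD_RANK is the rank as exported by mpirun):
> 
> $ mpirun -np 4 --map-by socket:PE=2 --bind-to core \
>     sh -c 'echo "rank $OMPI_COMM_WORLD_RANK: $(grep Cpus_allowed_list /proc/self/status)"'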
> 
> 
>> 
>> Thank you,
>> Saliya
>> 
>> On Mon, Dec 22, 2014 at 1:18 PM, Ralph Castain <r...@open-mpi.org> wrote:
>> FWIW: it looks like we are indeed binding to core if PE is set, so if you 
>> are seeing something different, then we may have a bug somewhere.
>> 
>> If you add --report-bindings to your cmd line, you should see where we bound 
>> the procs - does that look correct?
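>> 
>> For example (the PE count, -np value, and ./my_app are placeholders):
>> 
>> $ mpirun --map-by socket:PE=4 --bind-to core --report-bindings -np 4 ./my_app
>> 
>> Each rank should then be reported as bound to 4 cores within a single socket.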
>> 
>> 
>>> On Dec 22, 2014, at 9:49 AM, Ralph Castain <r...@open-mpi.org> wrote:
>>> 
>>> They will be bound to whatever level you specified - I believe by default 
>>> we bind to socket when mapping by socket. If you want them bound to core, 
>>> you might need to add --bind-to core.
>>> 
>>> I can take a look at it - I *thought* we had reset that to bind-to core 
>>> when PE=N was specified, but maybe that got lost.
>>> 
>>> 
>>>> On Dec 22, 2014, at 8:32 AM, Saliya Ekanayake <esal...@gmail.com> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I've been using --map-by socket:PE=N, where N controls the number of 
>>>> cores a proc gets mapped to. Does this also guarantee that a proc is 
>>>> bound to N cores in the socket? I am asking because I see some threads 
>>>> spawned by the process running outside the given N cores in the socket.
>>>> 
>>>> Is this expected, or am I missing some binding parameter here? Also, is 
>>>> there documentation on these different choices? Are the options in [1] 
>>>> available in the current release?
>>>> 
>>>> [1] http://www.slideshare.net/jsquyres/open-mpi-explorations-in-process-affinity-eurompi13-presentation
>>>> 
>>>> Thank you,
>>>> Saliya
>>>> 
>> 
>> 
> 
> 
> -- 
> Saliya Ekanayake
> Ph.D. Candidate | Research Assistant
> School of Informatics and Computing | Digital Science Center
> Indiana University, Bloomington
> Cell 812-391-4914
> http://saliya.org 
