I'm afraid I can only reiterate: we don't support binding of individual threads
to cores at this time. You can use bind-to-socket to constrain all threads
from a process to a socket, so they can at least use those cores - but the
threads will still move around among the cores in that socket, and running
more threads than cores will cause contention issues.
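
For example (reusing the hostnames and thread count from your earlier run, so
treat it as a sketch rather than a recipe), something along these lines gives
each rank a full socket's worth of cores to spread its threads across:

  mpirun -host c005,c006 -np 2 -bysocket -bind-to-socket \
      -x OMP_NUM_THREADS=4 hybrid4.gcc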

I have no idea why you are seeing binding when you don't request it. I've
never heard of that happening, and know of no mechanism inside OMPI that
would allow it.

Sorry....please let us know if you find out anything more regarding the
behavior.

On Thu, Jul 29, 2010 at 11:24 AM, David Akin <nospa...@gmail.com> wrote:

> I use the taskset command, or just use 'top' and watch the performance.
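>
> (For example, "taskset -cp <pid>" - with <pid> being the rank's process id
> from top - prints the set of cores the process is allowed to run on, while
> top's last-used-CPU field shows where a task actually ran last.)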
>
>
> On Thu, Jul 29, 2010 at 12:02 PM, Ralph Castain <r...@open-mpi.org> wrote:
>
>> I don't see anything in your code that would bind, but I also don't see
>> anything that actually tells you whether or not you are bound. It appears
>> that MPI_Get_processor_name is simply returning the name of the node as
>> opposed to the name/id of any specific core. How do you know what core the
>> thread is actually executing on?
>>
>>
>>
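>> For example, something like this (a minimal sketch assuming glibc's
>> Linux-specific sched_getcpu(); build with _GNU_SOURCE defined) printed from
>> inside the parallel region:
>>
>>   #define _GNU_SOURCE
>>   #include <sched.h>   /* for sched_getcpu() */
>>   ...
>>   printf("rank %d thread %d is on core %d\n",
>>          rank, omp_get_thread_num(), sched_getcpu());
>>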
>> This would tell you if your code was bouncing between cores.
>>
>> On Jul 29, 2010, at 10:43 AM, David Akin wrote:
>>
>> Sorry for the confusion. What I need is for all OpenMP threads to *not*
>> stay on one core. I *would* rather each OpenMP thread ran on a separate
>> core. Is it my example code? My gut reaction is no, because I can manipulate
>> (somewhat) which cores the threads are assigned to by adding -bysocket
>> -bind-to-socket to mpirun.
>>
>> On Thu, Jul 29, 2010 at 10:08 AM, Terry Dontje
>> <terry.don...@oracle.com> wrote:
>>
>>>  Ralph Castain wrote:
>>>
>>>
>>>  On Jul 29, 2010, at 5:09 AM, Terry Dontje wrote:
>>>
>>>  Ralph Castain wrote:
>>>
>>> How are you running it when the threads are all on one core?
>>>
>>> If you are specifying --bind-to-core, then of course all the threads will 
>>> be on one core since we bind the process (not the thread). If you are 
>>> specifying -mca mpi_paffinity_alone 1, then the same behavior results.
>>>
>>> Generally, if you want to bind threads, the only way to do it is with a 
>>> rank file. We -might- figure out a way to provide an interface for 
>>> thread-level binding, but I'm not sure about that right now. As things 
>>> stand, OMPI has no visibility into the fact that your app spawned threads.
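>>>
>>> For reference, a rankfile along these lines (the core numbers are just an
>>> example) gives each rank a set of cores instead of pinning it to one:
>>>
>>>   rank 0=c005 slot=0-3
>>>   rank 1=c006 slot=0-3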
>>>
>>>
>>>
>>>
>>>  Huh???  That's not completely correct.  If you have a multiple-socket
>>> machine you could do -bind-to-socket -bysocket and spread the processes that
>>> way.  Also, couldn't you use -cpus-per-proc with -bind-to-core to get a
>>> process to bind to a set of cpus other than a whole socket?
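>>>
>>> For instance (the core count here is just an example), something like
>>>
>>>   mpirun -np 2 -cpus-per-proc 4 -bind-to-core ...
>>>
>>> should give each process its own group of 4 cores rather than a single
>>> core or a whole socket.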
>>>
>>>
>>>  Yes, you could do bind-to-socket, though that still constrains the
>>> threads to only that one socket. What was asked about here was the ability
>>> to bind-to-core at the thread level, and that is something OMPI doesn't
>>> support.
>>>
>>>  Sorry, I did not get that constraint.  So, to be clear, what is being
>>> asked is the ability to bind a process's threads to specific cores.
>>> If so, then to the letter of what that means, I agree you cannot do that.
>>>
>>> However, what may be the next best thing is to specify binding of a
>>> process to a group of resources.  That's essentially what my suggestion
>>> above is doing.
>>>
>>> I do agree with Ralph that once you start overloading the socket with
>>> more threads than it can handle, problems will ensue.
>>>
>>> --td
>>>
>>>
>>>
>>> This is all documented in the mpirun manpage.
>>>
>>> That being said, I am also confused, like Ralph, as to why your code is
>>> binding when no options ask for it.  Maybe add --report-bindings to your
>>> mpirun line to see what OMPI thinks it is doing in this regard?
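>>>
>>> E.g., sketching from your earlier command line, just prepend the flag:
>>>
>>>   mpirun --report-bindings -host c005,c006 -np 2 -x OMP_NUM_THREADS=4 hybrid4.gcc
>>>
>>> That will show what binding, if any, OMPI applies to each rank.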
>>>
>>>
>>>  This is a good suggestion - I'm beginning to believe that the binding is
>>> happening in the user's app and not OMPI.
>>>
>>>
>>>
>>> --td
>>>
>>> On Jul 28, 2010, at 5:47 PM, David Akin wrote:
>>>
>>>
>>>
>>>  All,
>>> I'm trying to get the OpenMP portion of the code below to run
>>> multicore on a couple of 8-core nodes.
>>>
>>> Good news: multiple threads are being spawned on each node in the run.
>>> Bad news: the threads all run on a single core, leaving the other 7
>>> cores basically idle.
>>> Sorta good news: if I provide a rank file I get the threads running on
>>> different cores within each node (PITA).
>>>
>>> Here are the first lines of output.
>>>
>>> /usr/mpi/gcc/openmpi-1.4-qlc/bin/mpirun -host c005,c006 -np 2 -rf
>>> rank.file -x OMP_NUM_THREADS=4 hybrid4.gcc
>>>
>>> Hello from thread 2 out of 4 from process 1 out of 2 on c006.local
>>> another parallel region:       name:c006.local MPI_RANK_ID=1 OMP_THREAD_ID=2
>>> Hello from thread 3 out of 4 from process 1 out of 2 on c006.local
>>> another parallel region:       name:c006.local MPI_RANK_ID=1 OMP_THREAD_ID=3
>>> Hello from thread 1 out of 4 from process 1 out of 2 on c006.local
>>> another parallel region:       name:c006.local MPI_RANK_ID=1 OMP_THREAD_ID=1
>>> Hello from thread 1 out of 4 from process 0 out of 2 on c005.local
>>> another parallel region:       name:c005.local MPI_RANK_ID=0 OMP_THREAD_ID=1
>>> Hello from thread 3 out of 4 from process 0 out of 2 on c005.local
>>> Hello from thread 2 out of 4 from process 0 out of 2 on c005.local
>>> another parallel region:       name:c005.local MPI_RANK_ID=0 OMP_THREAD_ID=3
>>> another parallel region:       name:c005.local MPI_RANK_ID=0 OMP_THREAD_ID=2
>>> Hello from thread 0 out of 4 from process 0 out of 2 on c005.local
>>> another parallel region:       name:c005.local MPI_RANK_ID=0 OMP_THREAD_ID=0
>>> Hello from thread 0 out of 4 from process 1 out of 2 on c006.local
>>> another parallel region:       name:c006.local MPI_RANK_ID=1 OMP_THREAD_ID=0
>>> another parallel region:       name:c005.local MPI_RANK_ID=0 OMP_THREAD_ID=3
>>> another parallel region:       name:c005.local MPI_RANK_ID=0 OMP_THREAD_ID=2
>>> another parallel region:       name:c005.local MPI_RANK_ID=0 OMP_THREAD_ID=0
>>> another parallel region:       name:c006.local MPI_RANK_ID=1 OMP_THREAD_ID=3
>>> another parallel region:       name:c005.local MPI_RANK_ID=0 OMP_THREAD_ID=3
>>> another parallel region:       name:c005.local MPI_RANK_ID=0 OMP_THREAD_ID=2
>>> another parallel region:       name:c006.local MPI_RANK_ID=1 OMP_THREAD_ID=0
>>> another parallel region:       name:c006.local MPI_RANK_ID=1 OMP_THREAD_ID=1
>>> .
>>> .
>>> .
>>>
>>> Here's the simple code:
>>> #include <stdio.h>
>>> #include "mpi.h"
>>> #include <omp.h>
>>>
>>> int main(int argc, char *argv[]) {
>>>   int numprocs, rank, namelen;
>>>   char processor_name[MPI_MAX_PROCESSOR_NAME];
>>>   int iam = 0, np = 1;
>>>   char name[MPI_MAX_PROCESSOR_NAME];   /* MPI_MAX_PROCESSOR_NAME == 128 */
>>>   int O_ID;                            /* OpenMP thread ID */
>>>   int M_ID;                            /* MPI rank ID */
>>>   int rtn_val;
>>>
>>>   MPI_Init(&argc, &argv);
>>>   MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
>>>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>>   MPI_Get_processor_name(processor_name, &namelen);
>>>
>>>   /* per-thread copies of the scratch variables avoid races on the
>>>      shared buffers when every thread calls MPI inside the loop */
>>>   #pragma omp parallel default(shared) \
>>>           private(iam, np, O_ID, M_ID, rtn_val, name, namelen)
>>>   {
>>>     np  = omp_get_num_threads();
>>>     iam = omp_get_thread_num();
>>>     printf("Hello from thread %d out of %d from process %d out of %d on %s\n",
>>>            iam, np, rank, numprocs, processor_name);
>>>
>>>     int i = 0;
>>>     int j = 0;
>>>     double counter = 0;
>>>     for (i = 0; i < 99999999; i++) {
>>>       O_ID = omp_get_thread_num();          /* get OpenMP thread ID */
>>>       MPI_Get_processor_name(name, &namelen);
>>>       rtn_val = MPI_Comm_rank(MPI_COMM_WORLD, &M_ID);
>>>       printf("another parallel region:       name:%s MPI_RANK_ID=%d OMP_THREAD_ID=%d\n",
>>>              name, M_ID, O_ID);
>>>       for (j = 0; j < 999999999; j++) {     /* busy loop to keep the core loaded */
>>>         counter = counter + i;
>>>       }
>>>     }
>>>   }
>>>
>>>   MPI_Finalize();
>>>   return 0;
>>> }
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>>   Terry D. Dontje | Principal Software Engineer
>>> Developer Tools Engineering | +1.650.633.7054
>>>  Oracle * - Performance Technologies*
>>>  95 Network Drive, Burlington, MA 01803
>>> Email terry.don...@oracle.com
>>>
>>>
>>
>
>
>
> --
> David Akin, RHCE
> Sr. Systems Analyst and HPC Roadie
> OU Supercomputing Center for Education & Research (OSCER)
> University of Oklahoma Information Technology
> david.a...@ou.edu
> 405-598-7685
>
> Don't forget - Aug 8-14 @ OU: Intermediate Parallel Programming & Cluster
> Computing Free Workshop
>
>
