Hmmm...well, the info is there. There is an envar OMPI_COMM_WORLD_LOCAL_SIZE 
which tells you how many procs are on this node. If you tell your proc how many 
cores (or hwthreads) to use, it would be a simple division to get what you want.

You could also detect the number of cores or hwthreads via a call to hwloc, but 
I don't know if you want to link that deep, and MPI doesn't have a function for 
it. Could be that OpenMP provides a call for that purpose?


On Sep 12, 2014, at 7:22 AM, JR Cary <c...@txcorp.com> wrote:

> 
> 
> On 9/12/14, 7:27 AM, Tim Prince wrote:
>> 
>> On 9/12/2014 6:14 AM, JR Cary wrote:
>>> This must be a very old topic.
>>> 
>>> I would like to run mpi with one process per node, e.g., using
>>> -cpus-per-rank=1.  Then I want to use openmp inside of that.
>>> But other times I will run with a rank on each physical core.
>>> 
>>> Inside my code I would like to detect which situation I am in.
>>> Is there an openmpi api call to determine that?
>>> 
>> omp_get_num_threads() should work.  Unless you want to choose a different 
>> non-parallel algorithm for this case, a single thread omp parallel region 
>> works fine.
>> You should soon encounter cases where you want intermediate choices, such as 
>> 1 rank per CPU package and 1 thread per core, even if you stay away from 
>> platforms with more than 12 cores per CPU.
> 
> I may not understand, so I will try to ask in more detail.
> 
> Suppose I am running on a four-core processor (and my code likes one thread 
> per core).
> 
> In case 1 I do
> 
>   mpiexec -np 2 myexec
> 
> and I want to know that each mpi process should use 2 threads.
> 
> If instead I did
> 
>   mpiexec -np 4 myexec
> 
> I want to know that each mpi process should use one thread.
> 
> Will omp_get_num_threads() return a different value for those two cases?
> 
> Perhaps I am not invoking mpiexec correctly. 
> I use MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &threadSupport), and 
> regardless of how I invoke mpiexec (-n 1, -n 2, -n 4), I see 2 openmp 
> processes and 1 openmp thread (I have not called omp_set_num_threads).
> When I run serial, I see 8 openmp processes and 1 openmp thread.
> So I must be missing an arg to mpiexec?
> 
> This is a 4-core haswell with hyperthreading to get 8.
> 
> 
> Thx.....
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2014/09/25322.php
