Hi,
I see in *mca_coll_sm_comm_query()* of *ompi/mca/coll/sm/coll_sm_module.c*
that allreduce and bcast have shared memory implementations.
Is there a way to know if this implementation is being used when running my
program that calls these collectives?
Thank you,
Saliya
--
Saliya Ekanayake
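One way to check (a sketch, assuming the usual MCA verbosity and selection
parameters of the 1.10 series; ./my_app stands in for your program):

    # Ask the coll framework to report which components it selects
    mpirun --mca coll_base_verbose 10 -np 2 ./my_app

    # Or restrict selection so sm must be used (self and basic serve as
    # fallbacks for the collectives sm does not implement)
    mpirun --mca coll sm,self,basic -np 2 ./my_app

ompi_info --param coll sm --level 9 should list the component's parameters,
including the selection priority, which you can raise via coll_sm_priority.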
You may want to look at the --oversubscribe mpirun option.
If you want more control, you can consider making a rankfile where you
explicitly place processes.
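A minimal rankfile sketch (the hostnames and slot numbers here are
placeholders; see the mpirun man page for the exact slot syntax):

    $ cat my_rankfile
    rank 0=node0 slot=0
    rank 1=node0 slot=1
    rank 2=node1 slot=0
    $ mpirun -np 3 --rankfile my_rankfile ./my_app

Each line pins one rank to an explicit host and slot.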
Aurélien
> On 29 June 2016 at 11:50, Jason Maldonis wrote:
Hi everyone,
I am having trouble developing a complicated parallelization algorithm with
MPI and I'm hoping for some tips (I am using Open MPI 1.10.2). I posted the
latest problem I ran into on Stack Overflow and got a response from someone
saying they don't think it is possible to do the spawn all [...]
The OMP_PROC_BIND=CLOSE approach works, except that it binds each thread to
a single hardware thread (when hyper-threading is present). For example, when
doing the following to run 2 procs per node, each with 4 threads, the thread
affinity info (queried through sched_getaffinity()) comes out as below.
mpirun -np 2 --map-by [...]
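As a sketch of one way to give each rank whole cores instead of single
hardware threads (the PE modifier and --report-bindings come from the mpirun
man page; ./my_app and the counts are placeholders to adapt):

    # 2 ranks, 4 physical cores per rank; print the resulting bindings
    mpirun -np 2 --map-by slot:PE=4 --bind-to core --report-bindings ./my_app

--report-bindings makes each rank print its actual core mask at startup,
which can be cross-checked against the sched_getaffinity() output.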
Thank you, Ralph and Gilles.
I didn't know about the OMPI_COMM_WORLD_LOCAL_RANK variable. Essentially,
this means I should be able to wrap my application call in a shell script
and have mpirun invoke that. Then within the script I can query this
variable and set the correct OMP env variables, correct?
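A minimal wrapper sketch along those lines (the script name, the 4 threads
per rank, and the GNU-OpenMP-specific GOMP_CPU_AFFINITY are assumptions to
adapt to your setup):

    #!/bin/sh
    # run_app.sh -- launched once per rank, e.g.: mpirun -np 8 ./run_app.sh
    # Open MPI exports OMPI_COMM_WORLD_LOCAL_RANK into each rank's environment.
    lr=$OMPI_COMM_WORLD_LOCAL_RANK
    export OMP_NUM_THREADS=4
    # Give each node-local rank its own block of 4 cores (GNU OpenMP syntax).
    export GOMP_CPU_AFFINITY="$((lr * 4))-$((lr * 4 + 3))"
    exec ./my_app "$@"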