Gus Correa wrote:

Ralph Castain wrote:

On Mar 21, 2011, at 9:27 PM, Eugene Loh wrote:

Gustavo Correa wrote:

Dear OpenMPI Pros

Is there an MCA parameter that would do the same as the mpiexec switch '-bind-to-core'?
I.e., something that I could set up not in the mpiexec command line,
but for the whole cluster, or for a user, etc.

In the past I used '-mca mpi mpi_paffinity_alone=1'.


Must be a typo here - the correct command is '-mca mpi_paffinity_alone 1'
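
For example, on the mpiexec command line it would look like this (the process count and executable name here are just placeholders):

        mpiexec -mca mpi_paffinity_alone 1 -np 4 ./my_app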

But that was before '-bind-to-core' came along.
However, my recollection of some recent discussions here on the list
is that mpi_paffinity_alone would not do the same as '-bind-to-core',
and that the recommendation was to use '-bind-to-core' on the mpiexec command line.


Just to be clear: mpi_paffinity_alone=1 still works and will cause the same behavior as bind-to-core.


A little awkward, but how about

        --bycore                rmaps_base_schedule_policy  core
        --bysocket              rmaps_base_schedule_policy  socket
        --bind-to-core          orte_process_binding        core
        --bind-to-socket        orte_process_binding        socket
        --bind-to-none          orte_process_binding        none
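
If I read the file syntax right, the same settings could go in openmpi-mca-params.conf like this (the 'core' values are just illustrative):

        rmaps_base_schedule_policy = core
        orte_process_binding = core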



Thank you Ralph and Eugene

Ralph, please forgive the typo in my previous message.
The equal sign goes inside the openmpi-mca-params.conf file,
but there is no equal sign on the mpiexec command line, right?
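
That is, something like this (values only illustrative):

        # in openmpi-mca-params.conf:
        mpi_paffinity_alone = 1

        # on the mpiexec command line:
        mpiexec -mca mpi_paffinity_alone 1 -np 4 ./my_app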

I am using OpenMPI 1.4.3.
I inserted the line
"mpi_paffinity_alone = 1"
in my openmpi-mca-params.conf file, following Ralph's suggestion
that it is equivalent to '-bind-to-core'.

However, now when I do "ompi_info -a",
the output shows the non-default value 1 twice in a row,
then later it shows the default value 0 again!
Please see the output enclosed below.

I am confused.

1) Is this just a glitch in ompi_info,
or did mpi_paffinity_alone get reverted to zero?

2) How can I increase the verbosity level to make sure I have processor
affinity set (i.e. that the processes are bound to cores/processors)?

Just a quick answer on 2). The FAQ http://www.open-mpi.org/faq/?category=tuning#using-paffinity-v1.4 (or "man mpirun" or "mpirun --help") mentions --report-bindings.
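
For example (the process count and program name are placeholders):

        mpirun --report-bindings -np 4 ./my_app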

If this is on a Linux system with numactl, you can also try "mpirun ... numactl --show".
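
Since "numactl --show" prints the binding of the process it runs in, launching it under mpirun should make each rank report its own mask, e.g. (again, -np 4 is just an example):

        mpirun -np 4 numactl --show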

##########

ompi_info -a

...

MCA mpi: parameter "mpi_paffinity_alone" (current value: "1", data source: file [/home/soft/openmpi/1.4.3/gnu-intel/etc/openmpi-mca-params.conf], synonym of: opal_paffinity_alone) If nonzero, assume that this job is the only (set of) process(es) running on each node and bind processes to processors, starting with processor ID 0

MCA mpi: parameter "mpi_paffinity_alone" (current value: "1", data source: file [/home/soft/openmpi/1.4.3/gnu-intel/etc/openmpi-mca-params.conf], synonym of: opal_paffinity_alone) If nonzero, assume that this job is the only (set of) process(es) running on each node and bind processes to processors, starting with processor ID 0

...

[ ... and after 'mpi_leave_pinned_pipeline' ...]

MCA mpi: parameter "mpi_paffinity_alone" (current value: "0", data source: default value) If nonzero, assume that this job is the only (set of) process(es) running on each node and bind processes to processors, starting with processor ID 0

...