Thanks, Ralph. The .../etc/mca-params.conf doesn't want the shell version with the export and the OMPI_MCA_ prefix, does it?
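If I have that right, the same setting is spelled differently in the two places, roughly like this (using the binding parameter from the man page as the example):

    # in a shell startup or module file: OMPI_MCA_ prefix, exported
    export OMPI_MCA_hwloc_base_binding_policy=none

    # in the system or user mca-params.conf file: bare key = value, no prefix
    hwloc_base_binding_policy = none

For reference, the tail of ours currently looks like this: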
$ tail -3 $MPI_HOME/etc/openmpi-mca-params.conf
# See "ompi_info --param all all" for a full listing of Open MPI MCA
# parameters available and their default values.
orte_hetero_nodes=1

Yes, it appears the man page may have been outdated, as ompi_info -a shows:

    MCA hwloc: parameter "hwloc_base_binding_policy" (current value: "",
               data source: default, level: 9 dev/all, type: string)
               Policy for binding processes [none | hwthread | core (default) |
               l1cache | l2cache | l3cache | socket | numa | board]
               (supported qualifiers: overload-allowed,if-supported)

and the default is hwloc_base_binding_policy=core, I believe.
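Once I add the line, I assume I can sanity-check it with something along these lines (the grep pattern is just mine, and ./a.out stands in for any MPI test program):

    $ ompi_info -a | grep hwloc_base_binding_policy
    $ mpirun -np 2 --report-bindings ./a.out

since ompi_info should then show the value coming from the param file rather than the default, and --report-bindings should confirm the ranks are left unbound.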
Thanks, again, and sorry to be dense.

-- bennet


On Mon, Jan 11, 2016 at 9:39 AM, Ralph Castain <r...@open-mpi.org> wrote:
> For the 1.10 series, putting "export OMPI_MCA_hwloc_base_binding_policy=none"
> into your default MCA param file will solve the problem. I believe that is
> true for all of the 1.8 series as well, and suspect the man page for 1.8.2
> was simply out-of-date. You could verify that if you are using something
> that old.
>
>
>> On Jan 11, 2016, at 5:32 AM, Bennet Fauber <ben...@umich.edu> wrote:
>>
>> We have an issue with binding to cores with some applications, and the
>> default causes problems. We would, therefore, like to set the equivalent of
>>
>>     mpirun --bind-to none
>>
>> globally. I tried searching the web for combinations of 'openmpi global
>> settings', 'site settings', and the like, and ended up several times at
>>
>>     https://www.open-mpi.org/faq/?category=sysadmin#sysadmin-mca-params
>>
>> That makes it look very much like MCA parameters are for network settings;
>> see, specifically, section 4, "What are MCA Parameters? Why would I set
>> them?"
>>
>> At some point, though, I found the mpirun man page,
>> https://www.open-mpi.org/doc/v1.8/man1/mpirun.1.php, where, at the end of
>> the section titled "Mapping, Ranking, and Binding: Oh My!", it says:
>>
>> -----------------------------
>> Process binding can also be set with MCA parameters. Their usage is less
>> convenient than that of mpirun options. On the other hand, MCA parameters
>> can be set not only on the mpirun command line, but alternatively in a
>> system or user mca-params.conf file or as environment variables, as
>> described in the MCA section below. Some examples include:
>>
>>     mpirun option      MCA parameter key            value
>>
>>     --map-by core      rmaps_base_mapping_policy    core
>>     . . .
>>     --bind-to none     hwloc_base_binding_policy    none
>> -----------------------------
>>
>> Am I correct in interpreting this to mean that, if I
>>
>>     export OMPI_MCA_hwloc_base_binding_policy=none
>>
>> from the module file, the default binding will be 'none'?
>>
>> Equivalently, if I add a line to /ompi/install/path/etc/mca-params.conf
>>
>> -----
>> hwloc_base_binding_policy = none
>> -----
>>
>> that would do the same?
>>
>> The web version of the man page is for 1.8.8, and it agrees with the
>> installed man page for our 1.8.7. However, it appears that our system man
>> page for mpirun(1) for OpenMPI 1.8.2 has slightly different parameters.
>> Specifically:
>>
>> Process binding can also be set with MCA parameters. Their usage is less
>> convenient than that of mpirun options. On the other hand, MCA parameters
>> can be set not only on the mpirun command line, but alternatively in a
>> system or user mca-params.conf file or as environment variables, as
>> described in the MCA section below. The correspondences are:
>>
>>     mpirun option      MCA parameter key             value
>>
>>     -bycore            rmaps_base_schedule_policy    core
>>     -bysocket          rmaps_base_schedule_policy    socket
>>     -bind-to-core      orte_process_binding          core
>>     -bind-to-socket    orte_process_binding          socket
>>     -bind-to-none      orte_process_binding          none
>>
>> So for version 1.8.2, the equivalent incantations would be
>>
>>     export OMPI_MCA_orte_process_binding=none
>>
>> or, in /ompi/install/path/v1.8.2/etc/mca-params.conf,
>>
>> -----
>> orte_process_binding = none
>> -----
>>
>> Yes?
>>
>> Sorry to be dense about this.
>>
>> Thanks,    -- bennet
>>
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/01/28243.php
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/01/28244.php