On 07/11/2012 21:26, Jeff Squyres wrote:
> On Nov 7, 2012, at 1:33 PM, Blosch, Edwin L wrote:
>
>> I see hwloc is a subproject hosted under OpenMPI but, in reading the
>> documentation, I was unable to figure out if hwloc is a module within
>> OpenMPI, or if some of the code base is borrowed into OpenMPI, or something
>> else. Is hwloc used by OpenMPI internally? Is it a layer above libnuma?
>> Or is it just a project that is useful to OpenMPI in support of targeting
>> various new platforms?
> Open MPI uses hwloc internally for three main things:
>
> 1. all of the processor affinity options to mpirun (e.g., --bind-to-core)
> 2. all its internal memory affinity functionality
> 3. gathering topology information about the machine it's running on
>
> #3 isn't used too heavily yet -- that will be more developed over time
> (shared memory collectives have some obvious applications here). But we use
> it to know if processes are in the same NUMA domain, which OpenFabrics
> devices are "near" to a given process' NUMA domain, etc.
>
> But hwloc also stands alone quite well; it actually has nothing to do with
> MPI. So it made sense to keep it as a standalone library+tool suite, too.
Edwin's question about libnuma also deserves an answer, and I need to prepare
my marketing material for SC next week :)

hwloc may somehow be considered as a layer above libnuma, but:

* hwloc is more portable (it works on non-NUMA and non-Linux platforms)

* hwloc does everything libnuma does, and a lot more (everything that isn't
  related to NUMA)

* hwloc only uses libnuma for a few syscalls (the memory binding and migration
  syscalls are unfortunately not in the libc). We don't use anything else
  because we don't want to rely on the numa_*() interface (the ABI was broken
  in the past, things are not well documented, and the API itself is broken
  in some cases)
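To make the libnuma comparison concrete, here is a minimal, untested sketch,
assuming the hwloc 1.x API: it discovers the topology, binds the process to
its first core, and allocates a buffer bound to the first NUMA node. The only
place where libnuma is involved underneath is the Linux memory-binding
syscall wrappers mentioned above.

  /* Minimal sketch, assuming the hwloc 1.x API: discover the topology,
   * bind the current process to its first core, then allocate 4MB bound
   * to the first NUMA node.  Build with:  gcc sketch.c -lhwloc
   */
  #include <hwloc.h>
  #include <stdio.h>

  int main(void)
  {
      hwloc_topology_t topology;
      hwloc_obj_t core, node;

      /* Discover cores, caches, NUMA nodes, ... of the local machine. */
      hwloc_topology_init(&topology);
      hwloc_topology_load(topology);

      /* Bind the whole process to the first core, roughly what each rank
       * gets with mpirun --bind-to-core. */
      core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 0);
      if (core && hwloc_set_cpubind(topology, core->cpuset, HWLOC_CPUBIND_PROCESS) < 0)
          perror("hwloc_set_cpubind");

      /* Allocate memory bound to the first NUMA node.  This is the point
       * where, on Linux, hwloc falls through to the memory-binding syscalls
       * that libnuma wraps.  The NULL check covers machines where no NUMA
       * node is reported at all. */
      node = hwloc_get_obj_by_type(topology, HWLOC_OBJ_NODE, 0);
      if (node) {
          void *buf = hwloc_alloc_membind_nodeset(topology, 4096*1024,
                                                  node->nodeset,
                                                  HWLOC_MEMBIND_BIND, 0);
          if (buf)
              hwloc_free(topology, buf, 4096*1024);
          else
              perror("hwloc_alloc_membind_nodeset");
      }

      hwloc_topology_destroy(topology);
      return 0;
  }

The same code builds unchanged on non-NUMA and non-Linux platforms, which is
part of the portability point above.

Brice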