Solved: The process-to-core locking was due to affinity being set at the PSM layer, so I added -x IPATH_NO_CPUAFFINITY=1 to the mpirun command.
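For context, here is the sort of invocation this applies to (a minimal sketch; the executable name, rank layout, and thread count are hypothetical, not from this thread):

    # Hypothetical hybrid MPI+OpenMP launch across two 8-core nodes:
    # one MPI rank per node, 8 OpenMP threads per rank.
    # -x exports a variable into every rank's environment, so the
    # PSM (InfiniPath) layer sees IPATH_NO_CPUAFFINITY=1 and skips
    # pinning each process to a single core.
    mpirun -np 2 -npernode 1 \
           -x IPATH_NO_CPUAFFINITY=1 \
           -x OMP_NUM_THREADS=8 \
           ./hybrid_app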
Dave

On Wed, Aug 4, 2010 at 12:13 PM, Eugene Loh <eugene....@oracle.com> wrote:
>
> David Akin wrote:
>
>> All,
>> I'm trying to get the OpenMP portion of the code below to run
>> multicore on a couple of 8 core nodes.
>>
>
> I was gone last week and am trying to catch up on e-mail. This thread
> was a little intriguing.
>
> I agree with Ralph and Terry:
>
> *) OMPI should not be binding by default.
> *) There is nothing in your program that would induce binding, nor
> anything in your reported output that indicates binding is occurring.
>
> So, any possibility that your use of taskset or top is misleading? Did
> you ever try running with --report-bindings as Terry suggested?
>
> The thread also discussed OMPI's inability to control the binding
> behavior of individual threads. You can't manage individual threads
> with OMPI; you'd have to use a thread-specific mechanism, and many OMP
> implementations support such mechanisms. The best you could do with
> OMPI would be to unbind or bind broadly (e.g., to an entire socket),
> and that policy would be applied to all the threads within the process.
>
> But all that should be unnecessary... there shouldn't be any binding
> by default in the first place. I'd check into whether these threads
> really are being bound and, if so, why.
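The "thread-specific mechanism" mentioned above can be as simple as the OpenMP runtime's own affinity controls, exported the same way. A minimal sketch, assuming GNU's libgomp (GOMP_CPU_AFFINITY) or Intel's runtime (KMP_AFFINITY), plus the same hypothetical ./hybrid_app; these variables belong to the OpenMP runtimes, not to Open MPI:

    # GNU libgomp: pin each rank's 8 threads to cores 0-7 of its node
    mpirun -np 2 -npernode 1 -x IPATH_NO_CPUAFFINITY=1 \
           -x OMP_NUM_THREADS=8 -x GOMP_CPU_AFFINITY="0-7" ./hybrid_app

    # Intel OpenMP runtime: pack threads onto consecutive cores
    mpirun -np 2 -npernode 1 -x IPATH_NO_CPUAFFINITY=1 \
           -x OMP_NUM_THREADS=8 -x KMP_AFFINITY=compact ./hybrid_app

Running mpirun with --report-bindings, as Terry suggested, remains the quickest way to confirm what Open MPI itself is binding at launch.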