Solved: The process-to-core locking was due to affinity being set at
the PSM layer. So I added -x IPATH_NO_CPUAFFINITY=1 to the mpirun
command.
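A full command line with that flag would look roughly like the following (a sketch based on the hosts and binary from the earlier mails in this thread; -x exports the listed variables to the launched processes):

/usr/mpi/gcc/openmpi-1.4-qlc/bin/mpirun -host c005,c006 -np 2 \
    -x IPATH_NO_CPUAFFINITY=1 -x OMP_NUM_THREADS=4 hybrid4.gcc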
Dave
On Wed, Aug 4, 2010 at 12:13 PM, Eugene Loh wrote:
>
> David Akin wrote:
>
>> All,
>> I'm trying to get the OpenMP portion of the code below to run
>> multicore on a couple of 8 core nodes.
David Akin wrote:
All,
I'm trying to get the OpenMP portion of the code below to run
multicore on a couple of 8 core nodes.
I was gone last week and am trying to catch up on e-mail. This thread
was a little intriguing.
I agree with Ralph and Terry:
*) OMPI should not be binding by default
Afraid I can only reiterate: we don't support binding of individual threads
to cores at this time. You can use bind-to-socket to constrain all threads
from a process to a socket, so they can at least use those cores - but the
threads will move around between the cores in that socket, and more threads
I use the taskset command, or just use 'top' and watch the performance.
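For example, something along these lines (field names vary a bit between distros; <pid> and the grep pattern are placeholders):

taskset -cp <pid>                        # current CPU affinity mask of a running rank
ps -eLo pid,tid,psr,comm | grep hybrid   # 'psr' = core each thread last ran on
# in top, press 'H' to list individual threads instead of whole processes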
On Thu, Jul 29, 2010 at 12:02 PM, Ralph Castain wrote:
> I don't see anything in your code that would bind, but I also don't see
> anything that actually tells you whether or not you are bound. It appears
> that MPI_Get_processor_name
I don't see anything in your code that would bind, but I also don't see
anything that actually tells you whether or not you are bound. It appears that
MPI_Get_processor_name is simply returning the name of the node as opposed to
the name/id of any specific core. How do you know what core the threads are on?
No problem, anyways I think you are headed in the right direction now.
--td
David Akin wrote:
Sorry for the confusion. What I need is for all OpenMP threads to
*not* stay on one core. I *would* rather each OpenMP thread run on
a separate core. Is it my example code? My gut reaction is no because
Sorry for the confusion. What I need is for all OpenMP threads to *not* stay
on one core. I *would* rather each OpenMP thread run on a separate core.
Is it my example code? My gut reaction is no because I can manipulate
(somewhat) the cores the threads are assigned to by adding -bysocket
-bind-to-socket.
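The kind of invocation meant here would be roughly (a hypothetical sketch; -x exports OMP_NUM_THREADS to the launched processes):

/usr/mpi/gcc/openmpi-1.4-qlc/bin/mpirun -host c005,c006 -np 2 \
    -bysocket -bind-to-socket -x OMP_NUM_THREADS=4 hybrid4.gcc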
Ralph Castain wrote:
On Jul 29, 2010, at 5:09 AM, Terry Dontje wrote:
Ralph Castain wrote:
How are you running it when the threads are all on one core?
If you are specifying --bind-to-core, then of course all the threads will be on
one core since we bind the process (not the thread). If you
If you check, I expect you will find that your threads and processes are not
bound to a core, but are now constrained to stay within a socket.
This means that if you run more threads than cores in a socket, you will see
threads idled due to contention.
On Jul 29, 2010, at 8:29 AM, David Akin wrote:
Adding -bysocket -bind-to-socket worked! Now to figure out why that
is. I also assumed it was my code. You can try my simple example code
below.
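The code itself is cut off in this snippet; a minimal hybrid MPI+OpenMP test along the same lines (an illustrative sketch, not the original hybrid4.gcc source) could look like:

#define _GNU_SOURCE              /* for sched_getcpu() on Linux */
#include <mpi.h>
#include <omp.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);   /* returns the node name, not a core id */

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int i;
        for (i = 0; i < 5; i++) {         /* sample a few times; unbound threads may migrate */
            printf("%s rank %d thread %d on core %d\n",
                   host, rank, tid, sched_getcpu());
            sleep(1);                     /* pause so 'top' / 'ps -eLo psr' can be checked */
        }
    }

    MPI_Finalize();
    return 0;
}

Compile with something like 'mpicc -fopenmp hybrid_check.c -o hybrid_check' (hybrid_check is just a placeholder name).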
On Thu, Jul 29, 2010 at 8:49 AM, Ralph Castain wrote:
>
> On Jul 29, 2010, at 5:09 AM, Terry Dontje wrote:
>
> Ralph Castain wrote:
>
> How are you running it when the threads are all on one core?
On Jul 29, 2010, at 5:09 AM, Terry Dontje wrote:
> Ralph Castain wrote:
>>
>> How are you running it when the threads are all on one core?
>>
>> If you are specifying --bind-to-core, then of course all the threads will be
>> on one core since we bind the process (not the thread). If you are
>
Ralph Castain wrote:
How are you running it when the threads are all on one core?
If you are specifying --bind-to-core, then of course all the threads will be on
one core since we bind the process (not the thread). If you are specifying -mca
mpi_paffinity_alone 1, then the same behavior results.
Below are all places that could contain mca related settings.
grep -i mca /usr/mpi/gcc/openmpi-1.4-qlc/etc/openmpi-mca-params.conf
# This is the default system-wide MCA parameters defaults file.
# Specifically, the MCA parameter "mca_param_files" defaults to a
# "$HOME/.openmpi/mca-params.conf:$sy
Something doesn't add up - the default for ompi is to -not- bind. Check your
default mca param file and your environment. Do you have any mca params set in
them?
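For example, something like the following shows the system-wide file, the per-user file, and anything set via the environment (the system-wide path is the one quoted in the grep elsewhere in this thread):

grep -v '^#' /usr/mpi/gcc/openmpi-1.4-qlc/etc/openmpi-mca-params.conf
cat ~/.openmpi/mca-params.conf 2>/dev/null
env | grep OMPI_MCA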
On Jul 28, 2010, at 9:40 PM, David Akin wrote:
> Here's the exact command I'm running when all threads *are* pinned to
> a single core:
Here's the exact command I'm running when all threads *are* pinned to
a single core:
/usr/mpi/gcc/openmpi-1.4-qlc/bin/mpirun -host c005,c006 -np 2
OMP_NUM_THREADS=4 hybrid4.gcc
Can anyone verify they have the same issue?
On Wed, Jul 28, 2010 at 7:52 PM, Ralph Castain wrote:
> How are you running it when the threads are all on one core?
How are you running it when the threads are all on one core?
If you are specifying --bind-to-core, then of course all the threads will be on
one core since we bind the process (not the thread). If you are specifying -mca
mpi_paffinity_alone 1, then the same behavior results.
Generally, if you w
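To make that concrete, with hypothetical command lines (hosts and binary borrowed from this thread), either of the following would leave every thread of a rank confined to that rank's single core:

mpirun -host c005,c006 -np 2 --bind-to-core -x OMP_NUM_THREADS=4 hybrid4.gcc
mpirun -host c005,c006 -np 2 -mca mpi_paffinity_alone 1 -x OMP_NUM_THREADS=4 hybrid4.gcc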
All,
I'm trying to get the OpenMP portion of the code below to run
multicore on a couple of 8 core nodes.
Good news: multiple threads are being spawned on each node in the run.
Bad news: all of the threads end up on the same single core, leaving the
other 7 cores basically idle.
Sorta good news: if I provide a