Nifty -- good to know. Thanks for looking into this!
Do any kernel-hacker types on this list know roughly in which version
thread affinity was brought into the Linux kernel?
FWIW: all the same concepts here (using pid==0) should also work for
PLPA, so you can set via socket/core, etc.
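As a concrete illustration of the pid==0 idiom (a minimal sketch, not
code from this thread): on Linux, sched_setaffinity() with pid 0
applies the mask to the calling thread, so each thread can bind itself.

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  /* Bind the calling thread to a single core.  With pid == 0, the
   * kernel applies the mask to the calling thread only. */
  static int bind_self_to_core(int core)
  {
      cpu_set_t mask;
      CPU_ZERO(&mask);
      CPU_SET(core, &mask);
      if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
          perror("sched_setaffinity");
          return -1;
      }
      return 0;
  }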
OK, so I dug a little deeper and have some good news. Let me start
with a set of routines that we haven't discussed yet, but which work
for setting thread affinity, and then come back to libnuma and
sched_setaffinity().
---
On Linux systems, the pthread library has a set of routines…
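The routines in question are presumably the GNU extensions
pthread_setaffinity_np() and pthread_getaffinity_np(); a minimal
sketch (the helper name is illustrative):

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>

  /* Pin one specific thread to one core; unlike a process-wide mask,
   * this affects only the thread passed in. */
  static int pin_thread_to_core(pthread_t t, int core)
  {
      cpu_set_t mask;
      CPU_ZERO(&mask);
      CPU_SET(core, &mask);
      int rc = pthread_setaffinity_np(t, sizeof(mask), &mask);
      if (rc != 0)
          fprintf(stderr, "pthread_setaffinity_np: error %d\n", rc);
      return rc;
  }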
Fair enough; let me know what you find. It would be good to
understand exactly why you're seeing what you're seeing...
On Dec 2, 2008, at 5:47 PM, Edgar Gabriel wrote:
It's on OpenSuSE 11 with kernel 2.6.25.11. I don't know the libnuma
library version, but I suspect that it's fairly new.
I will try to investigate that a little more in the next few days. I do
think that they use sched_setaffinity() under the hood (because in
one of my failed attempts when I pa…
On Dec 2, 2008, at 11:27 AM, Edgar Gabriel wrote:
Jeff,
so I ran a couple of tests today and I cannot confirm your statement. I
wrote a simple test code where a process first sets an affinity
mask and then spawns a number of threads. The threads modify the
affinity mask, and every thread (including the master thread) prints out
their…
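A minimal sketch of that kind of test (illustrative, not the actual
code from the thread): the master binds itself to core 0, each spawned
thread rebinds itself to its own core, and everyone reports the mask it
ends up with. It assumes the machine has at least NTHREADS+1 cores.

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>

  #define NTHREADS 4

  /* Print which cores the calling thread is currently bound to;
   * pid 0 means "the calling thread" on Linux. */
  static void report_mask(const char *who)
  {
      cpu_set_t mask;
      sched_getaffinity(0, sizeof(mask), &mask);
      printf("%s bound to cores:", who);
      for (int c = 0; c < CPU_SETSIZE; c++)
          if (CPU_ISSET(c, &mask))
              printf(" %d", c);
      printf("\n");
  }

  static void *worker(void *arg)
  {
      int core = (int)(long)arg;
      cpu_set_t mask;
      CPU_ZERO(&mask);
      CPU_SET(core, &mask);
      sched_setaffinity(0, sizeof(mask), &mask);  /* rebind this thread only */
      char name[32];
      snprintf(name, sizeof(name), "thread on core %d", core);
      report_mask(name);
      return NULL;
  }

  int main(void)
  {
      cpu_set_t mask;
      CPU_ZERO(&mask);
      CPU_SET(0, &mask);
      sched_setaffinity(0, sizeof(mask), &mask);  /* master: core 0 */

      pthread_t t[NTHREADS];
      for (long i = 0; i < NTHREADS; i++)
          pthread_create(&t[i], NULL, worker, (void *)(i + 1));
      for (int i = 0; i < NTHREADS; i++)
          pthread_join(t[i], NULL);
      report_mask("master");
      return 0;
  }

If affinity is per-thread (as it is on modern Linux), the master still
reports core 0 at the end, untouched by what the workers did.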
On Nov 20, 2008, at 9:43 AM, Ralph Castain wrote:
Interesting - learn something new every day! :-)
Sorry; I was out for the holiday last week, but a clarification:
libnuma's man page says that numa_run_on_node*() binds a "thread", but
it really should say "process". I looked at the code…
Hi,
Sorry for not answering sooner,
In Open MPI 1.3 we added a paffinity mapping module.
The syntax is quite simple and flexible:
rank N=hostA slot=socket:core_range
rank M=hostB slot=cpu
see the following example:
#mpirun -rf rankfile_name ./app
#cat rankfile_name
rank 0=host1 slot=0
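A fuller rankfile in this syntax might continue along these lines
(hosts and core numbers are made up):

rank 1=host1 slot=1:0-2
rank 2=host2 slot=0:0,1

i.e. rank 1 is bound to cores 0-2 of socket 1 on host1, and rank 2 to
cores 0 and 1 of socket 0 on host2.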
At the very least, you would have to call these functions -after-
MPI_Init so they could override what OMPI did.
On Nov 20, 2008, at 8:03 AM, Gabriele Fatigati wrote:
And in a hybrid MPI+OpenMP program?
Do these considerations still apply?
2008/11/20 Edgar Gabriel :
I don't think that they conflict with our paffinity module and settings.
My understanding is that if you set a new affinity mask, it simply
overwrites the previous setting. So in the worst case it voids the
setting made by Open MPI, but I don't think that it should cause
'problems'. Admittedly,…
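Concretely, "overwrites" here means the mask Open MPI installed at
launch is simply replaced by the next call; the kernel keeps no
history. A minimal sketch (illustrative):

  #define _GNU_SOURCE
  #include <sched.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      cpu_set_t mask;
      /* Mask installed by Open MPI's paffinity, if binding was requested. */
      sched_getaffinity(0, sizeof(mask), &mask);

      /* Rebinding wholesale replaces that mask; Open MPI's setting is
       * voided, but nothing breaks. */
      CPU_ZERO(&mask);
      CPU_SET(0, &mask);
      sched_setaffinity(0, sizeof(mask), &mask);

      MPI_Finalize();
      return 0;
  }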
I would guess that you can, if the library is installed, and as far as I
know it is part of most recent Linux distributions...
Thanks
Edgar
Interesting - learn something new every day! :-)
How does this interact with OMPI's paffinity/maffinity assignments?
With the rank/slot mapping and binding system?
Should users -not- set paffinity if they include these numa calls in
their code?
Can we detect any potential conflict in OMPI…
Thanks Edgar,
but can I use these libraries also on non-NUMA machines?
2008/11/20 Edgar Gabriel :
if you look at recent versions of libnuma, there are two functions
called numa_run_on_node() and numa_run_on_node_mask(), which allow
thread-based assignments to CPUs
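A minimal sketch of those calls (assuming the libnuma headers are
installed; link with -lnuma):

  #include <numa.h>
  #include <stdio.h>

  int main(void)
  {
      if (numa_available() < 0) {
          fprintf(stderr, "NUMA not supported on this system\n");
          return 1;
      }
      /* Restrict the calling task to the CPUs of NUMA node 0;
       * numa_run_on_node_mask() does the same for a set of nodes. */
      if (numa_run_on_node(0) != 0)
          perror("numa_run_on_node");
      return 0;
  }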
Thanks
Edgar
Not in Linux, I'm afraid - you can assign one process to a set of
cores, but Linux doesn't track individual threads.
If you look at OMPI 1.3's man page for mpirun, you'll see some info on
the rank-file mapping. Most of what was done is aimed at the use of
hostfiles where you specify the socket…
Is there a way to assign one thread to one core? Also from code, not
necessarily with an Open MPI option.
Thanks.
2008/11/19 Stephen Wornom :
Gabriele Fatigati wrote:
OK,
but in OMPI 1.3 how can I enable it?
This may not be relevant, but I could not get a hybrid MPI+OpenMP code
to work correctly.
Would my problem be related to Gabriele's, and perhaps fixed in Open MPI 1.3?
Stephen
OK,
but in OMPI 1.3 how can I enable it?
2008/11/18 Ralph Castain :
I am afraid it is only available in 1.3 - we didn't backport it to the
1.2 series
On Nov 18, 2008, at 10:06 AM, Gabriele Fatigati wrote:
Hi,
how can I set "slot mapping" as you told me? With TASK GEOMETRY? Or is
it a new 1.3 Open MPI feature?
Thanks.
2008/11/18 Ralph Castain :
Unfortunately, paffinity doesn't know anything about assigning threads
to cores. This is actually a behavior of Linux, which only allows
paffinity to be set at the process level. So, when you set paffinity
on a process, you bind all threads of that process to the specified
core(s). You cannot…