Manfred Mücke wrote:
> Hi,
>
> I want to force migration of a thread and its allocated memory to another 
> lgroup but failed for some threads.
>   

Ok.  You can give MADV_ACCESS_LWP as the advice to madvise(3C) for the 
memory that you want to migrate.  This advice tells the Solaris kernel 
that the next thread to touch the memory will use it heavily, so the 
kernel will migrate that memory near the thread that touches it next 
(i.e. into or near that thread's home lgroup).
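
A minimal sketch of that call in C (the function name and the addr/len 
arguments are just placeholders for whatever range you want migrated):

    /*
     * Sketch only: tell the kernel that the next LWP to touch this
     * range will use it heavily, so the range gets migrated to (or
     * near) that LWP's home lgroup.  addr should be page-aligned.
     */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <stdio.h>

    void
    advise_next_toucher(void *addr, size_t len)
    {
            if (madvise((caddr_t)addr, len, MADV_ACCESS_LWP) != 0)
                    perror("madvise(MADV_ACCESS_LWP)");
    }

After the advice is applied, the thread that should end up owning the 
memory just touches it and the kernel migrates the pages.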

If you don't want to change your application to do this, you can use 
pmadvise(1) instead and specify the same advice for the virtual address 
range in question.
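For example, something like 'pmadvise -o heap=access_lwp 27215' should 
apply that advice to the process's heap; check pmadvise(1) on your 
release for the exact option and advice keywords.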


> I understand from the "Memory and Thread Placement Optimization Developer's 
> Guide" that there is no hard memory affinity in Solaris, only the 
> possibility of defining some level of affinity to the home lgroup. Therefore, 
> prior to any kind of memory migration it seems necessary to move the home 
> lgroup to the lgroup the memory should migrate to. The lgrp_home(3LGRP) 
> man page defines the "home lgroup" of a thread as the "lgroup with the 
> strongest affinity that the thread can run on". 
>
> A sequence for memory migration was already outlined in 
> http://www.opensolaris.org/jive/thread.jspa?messageID=14792:
>
> - Change the thread in question's home lgroup to the CPU where you want the 
> memory allocated,
> - use 'pmadvise -o heap=lwp_access' on the process
> - the memory should now get allocated in the newly homed lgrp (check with 
> 'pmap -L' and lgrpinfo (if the allocation is large enough to notice)).
> - rehome the lgrp to another CPU or just bind it.
>
> I tried, but for some threads, I got surprising results:
>
>   
>> ./plgrp -a all 27215
>>     
>      PID/LWPID    HOME  AFFINITY
>    27215/1        2     5/strong,0-4,6-24/none
>
> If the home lgroup is defined as the lgroup with the strongest affinity, 
> isn't the output above somewhat contradictory?
>   

Yes, this means that the home lgroup of the specified thread is 2 even 
though it has a strong affinity for lgroup 5.


>> ./plgrp -F -H 5 27215
>>     
>      PID/LWPID    HOME
>    27215/1        2 => 2
>
> This thread seems to be resistant to plgrp's attempts to assign a new home 
> lgroup (option -H) to it.
>
> This is a test program. For some other threads, I was able to freely rehome 
> them to any lgroup in the range 1..8 (the leaf lgroups). Does anyone have 
> further suggestions on what could prevent reassigning the home lgroup?
>   

If the thread is bound to a CPU that isn't in lgroup 5 or bound to a 
processor set that does not overlap lgroup 5, then the thread cannot be 
rehomed to lgroup 5 even though it may have a strong affinity for lgroup 5.

You should be able to use pbind(1M) and psrset(1M) to see whether your 
thread is bound.
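For example, 'pbind -q 27215' shows any CPU bindings for that process's 
LWPs and 'psrset -q 27215' shows any processor set bindings.  If there 
is a CPU binding, 'pbind -u 27215' removes it, after which plgrp -H 
should be able to rehome the thread.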


> My system is a Sun Fire X4600 with eight dual-core Opterons. Lgroups 1..8 are 
> identical (i.e. same amount of memory and CPU resources per lgroup), except 
> for the CPU IDs:
>   
>> lgrpinfo.pl -a
>>     
> [..]
> lgroup 1 (leaf):
>         Children: none, Parent: 9
>         CPUs: 0 1
>         Memory: installed 3584 Mb, allocated 985 Mb, free 2599 Mb
>         Lgroup resources: 1 (CPU); 1 (memory)
>         Load: 0.00723
>         Latency: 51
> [..]
>   

If you're still having problems after trying what I suggested above, you 
should include *all* of the output from "lgrpinfo -Ta" so we can see 
where lgroup 5 is in the topology and what it contains.



Jonathan

