On 3/16/2025 7:09 AM, Christian Franke wrote:
Mark Geisert wrote:
[...]

Could only test the single cpu group (aka single physical cpu) case, which I guess is the most common. Works as expected:

$ uname -r
3.6.0-dev-440-g5ec497dc80bc-dirty.x86_64

$ grep '^model name' /proc/cpuinfo | uniq -c
      28 model name      : Intel(R) Core(TM) i7-14700K

$ stress-ng --pthread 1 -v &
[1] 1323
...
stress-ng: debug: [1324] pthread: [1324] started (instance 0 on CPU 10)

$ taskset -c -p 1324
pid 1324's current affinity list: 0-27

$ taskset -p fff0000 1324 # All E-cores
pid 1324's current affinity mask: fffffff
pid 1324's new affinity mask: fff0000

$ taskset -p fff5555 1324 # All cores but no HT
pid 1324's current affinity mask: fff0000
pid 1324's new affinity mask: fff5555

$ taskset -c -p 8,9 1324 # P-core 4 with HT
pid 1324's current affinity list: 0,2,4,6,8,10,12,14,16-27
pid 1324's new affinity list: 8,9

$ taskset -p 1324
pid 1324's current affinity mask: 300

The settings have the desired effect on reported core usage.

Thanks very much, Christian, for testing. I want to make a minor change to the patch:
    if (procmask == 0)
will be changed to
    if (groupcount > 1)
to make it clearer what's going on. I will also add a few words to both code comments and the patch description saying what will happen on systems with more than one cpu group.

It sure would be nice to test on a system with more than 64 h/w threads, but I don't have that kind of budget ;-).

So, v2 patch incoming shortly.  Comments from other folks welcome.

..mark
