Hi Greg and Shuah,
On 19/02/26 16:57, Greg KH wrote:
>> All are clean cherry-picks. After patching the selftests, the test
>> is correctly skipped. These additional backports clean up the code,
>> avoid the need for conflict resolution, and may help future backports.
> Shouldn't you always be running the latest selftests on older kernels?
> We don't always keep selftests up to date at all, as you can see here,
> but newer selftests should ALWAYS work with older kernels.
Thanks for sharing your insights on this.
There are a couple of problems around this; I would really appreciate
your guidance.
1. Not all newly written selftests skip correctly when the feature
under test is not supported by older kernels.
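As background, the kselftest convention is that a test which cannot run
on the current kernel should exit with code 4 (KSFT_SKIP) rather than a
failure code, so the harness reports SKIP instead of FAIL. Below is a
minimal sketch of a feature probe following that convention; the probed
sysfs path is a made-up placeholder, not a real mm selftest:

```shell
#!/bin/sh
# Sketch of the kselftest skip convention: return 4 (KSFT_SKIP) when
# the feature under test is absent, so the harness reports SKIP
# rather than FAIL.  The sysfs path below is a made-up placeholder.
ksft_skip=4

probe_and_run() {
    if [ ! -e /sys/kernel/mm/hypothetical_feature ]; then
        echo "feature not present on this kernel, skipping"
        return "$ksft_skip"
    fi
    echo "feature present, running the real test"
    return 0
}

probe_and_run
status=$?
echo "exit status the test would report: $status"
```

A new test that omits this probe and just fails on older kernels is
exactly what shows up as a FAIL instead of a SKIP in the runs below.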
Simple experiment (a very small subset of tests):
- I installed the v6.12.74 kernel on my machine (kernel under test:
v6.12.74).
- I compiled the mm selftests in tools/testing/selftests/mm/
(./run_vmtests.sh) from both latest mainline (v6.19+) and v6.12.74.
- When I used the selftests from v6.12.74 [**]:
==================================================================
# SUMMARY: PASS=53 SKIP=3 FAIL=1
1..57
==================================================================
- When I used the selftests from latest upstream
(v6.19-10669-g970296997869):
==================================================================
# SUMMARY: PASS=57 SKIP=6 FAIL=7
1..70
==================================================================
That is 7 failures compared to 1 (and the test that failed with the
v6.12.74 selftests does not fail on mainline, as it has been updated).
I took a look at the 7 FAILed tests; some of the failures are:
==================================================================
# # Totals: pass:0 fail:73 xfail:0 xpass:0 skip:17 error:0
# [FAIL]
not ok 38 guard-regions # exit=1
^^ guard-regions test
==================================================================
==================================================================
# # Totals: pass:5 fail:1 xfail:0 xpass:0 skip:0 error:0
# [FAIL]
not ok 40 process_madv # exit=1
==================================================================
==================================================================
# # Totals: pass:13 fail:10 xfail:0 xpass:0 skip:0 error:0
# [FAIL]
not ok 41 merge # exit=1
==================================================================
I didn't check whether these are problems with the selftests themselves
or things that need fixing in 6.12.74. However, given that latest
upstream keeps adding new tests, each time we update the selftests and
see new failures it becomes harder to tell whether they are kernel
regressions (because there are newer tests every time, and older tests
that were passing might also change behaviour due to fixes in the tests)
or simply newer tests not skipping correctly.
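One way to at least separate brand-new failing tests from old tests
whose status changed is to diff the test-name lists from the two runs.
A rough sketch; the TAP snippets are made-up examples, and real
run_vmtests.sh output would need more careful parsing:

```shell
#!/bin/sh
# Sketch: find tests that exist only in the newer suite's TAP output,
# so brand-new failures can be told apart from status changes in
# pre-existing tests.  The TAP contents here are made-up examples.
cat > /tmp/old.tap <<'EOF'
ok 1 hugetlb
not ok 2 map_fixed_noreplace
EOF
cat > /tmp/new.tap <<'EOF'
ok 1 hugetlb
not ok 2 map_fixed_noreplace
not ok 3 guard-regions
EOF
# The last field of each TAP line is the test name; comm -13 keeps
# names present only in the newer list.
awk '{print $NF}' /tmp/old.tap | sort > /tmp/old.names
awk '{print $NF}' /tmp/new.tap | sort > /tmp/new.names
new_only=$(comm -13 /tmp/old.names /tmp/new.names)
echo "tests only in the newer suite: $new_only"
```

This doesn't say whether an old test's failure is a kernel regression
or a test change, but it narrows the set that needs manual inspection.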
2. (Minor concern) We have been seeing some compilation issues with the
latest selftests (perhaps due to missing newer packages, or build
errors from newer changes with the latest compilers), so always keeping
the selftests up to date with latest upstream makes it a bit
challenging to track new issues on stable kernels (as a new test might
not be skipping correctly). (Thanks to Subramanya for sharing the
compilation issues seen with the latest selftests due to different
compilers etc.)
[**] -- I had to skip one of the 57 tests, as it never completes.
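For the test that never completes, one workaround is to bound it with
timeout(1) so a single stuck test cannot stall the whole run (kselftest
also supports per-collection timeouts via a `settings` file, if I
recall correctly). A small sketch, with `sleep` standing in for the
stuck test binary:

```shell
#!/bin/sh
# Sketch: bound a potentially hanging test with a wall-clock limit.
# "sleep 10" stands in for a test binary that never completes.
timeout 1 sleep 10
status=$?
# timeout(1) exits with 124 when it had to kill the command.
if [ "$status" -eq 124 ]; then
    echo "test killed after exceeding the time limit"
fi
echo "status: $status"
```

A killed test still counts as a failure, but at least the rest of the
suite gets to run.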
> I think trying to keep these all up to date is going to be "a lot",
> are you sure it is going to be worth it?
I do fully agree. At the same time, when new selftests are not skipped
properly on older kernels, it can be hard to track regressions with
them.
Please let me know your thoughts on this.
thanks,
Harshit
> thanks,
> greg k-h