On 13.05.25 19:48, Liam R. Howlett wrote:
> * David Hildenbrand <da...@redhat.com> [250512 08:34]:
>> The "mremap() shrinking" scenario no longer applies, so let's remove
>> that now-unnecessary handling.
>>
>> Reviewed-by: Lorenzo Stoakes <lorenzo.stoa...@oracle.com>
>> Acked-by: Ingo Molnar <mi...@kernel.org> # x86 bits
>> Signed-off-by: David Hildenbrand <da...@redhat.com>

> Small comment below, but this looks good.
>
> Reviewed-by: Liam R. Howlett <liam.howl...@oracle.com>

Thanks!


>> ---
>>  arch/x86/mm/pat/memtype_interval.c | 44 ++++--------------------------
>>  1 file changed, 6 insertions(+), 38 deletions(-)
>>
>> diff --git a/arch/x86/mm/pat/memtype_interval.c b/arch/x86/mm/pat/memtype_interval.c
>> index 645613d59942a..9d03f0dbc4715 100644
>> --- a/arch/x86/mm/pat/memtype_interval.c
>> +++ b/arch/x86/mm/pat/memtype_interval.c
>> @@ -49,26 +49,15 @@ INTERVAL_TREE_DEFINE(struct memtype, rb, u64, subtree_max_end,
>>
>>  static struct rb_root_cached memtype_rbroot = RB_ROOT_CACHED;
>>
>> -enum {
>> -       MEMTYPE_EXACT_MATCH     = 0,
>> -       MEMTYPE_END_MATCH       = 1
>> -};
>> -
>> -static struct memtype *memtype_match(u64 start, u64 end, int match_type)
>> +static struct memtype *memtype_match(u64 start, u64 end)
>>  {
>>         struct memtype *entry_match;
>>
>>         entry_match = interval_iter_first(&memtype_rbroot, start, end-1);
>>
>>         while (entry_match != NULL && entry_match->start < end) {

> I think this could use interval_tree_for_each_span() instead.

Fancy, let me look at this. I'll probably send another patch on top of this series to do that conversion (as you found, patch #9 moves that code).

--
Cheers,

David / dhildenb
