The commit is pushed to "branch-rh9-5.14.0-362.18.1.vz9.40.x-ovz" and will 
appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh9-5.14.0-362.18.1.vz9.40.8
------>
commit f208f1f85ef418076f6d8f3ea521f1285ae35dd9
Author: Konstantin Khorenko <khore...@virtuozzo.com>
Date:   Fri Jun 14 16:50:17 2024 +0300

    ve/net/neighbour: restore hashtable size limit - beautify the code
    
    This is just a patch to make the code the same in both vz7 and vz9.
    
    Fixes: 9ed7d66ec22a ("ve/net/neighbour: restore hashtable size limit")
    
    https://virtuozzo.atlassian.net/browse/PSBM-153199
    https://pmc.acronis.work/browse/VSTOR-81287
    Signed-off-by: Konstantin Khorenko <khore...@virtuozzo.com>
    Feature: net: make the neighbor entries limit per-CT
---
 net/core/neighbour.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 43e3e99e3415..ecda22f2b06e 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -730,10 +730,12 @@ ___neigh_create(struct neigh_table *tbl, const void *pkey,
        /*
         * Since entries can grow unlimited we limit the size of the hash table
         * here. __get_free_pages allocates continious regions of phys mem
-        * and orders above 10 are very hard to satisfy. We limit the size to 5
-        * as it is the middle ground
+        * and orders above 10 are very hard to satisfy.
+        * We limit the size to 5 as it is the middle ground.
         */
-       if (nht->hash_shift < 5 && atomic_read(&tbl->entries) > (1 << nht->hash_shift))
+       #define NEIGH_HASH_SHIFT_MAX 5
+       if (nht->hash_shift < NEIGH_HASH_SHIFT_MAX &&
+           atomic_read(&tbl->entries) > (1 << nht->hash_shift))
                nht = neigh_hash_grow(tbl, nht->hash_shift + 1);
 
        hash_val = tbl->hash(n->primary_key, dev, nht->hash_rnd) >> (32 - nht->hash_shift);
_______________________________________________
Devel mailing list
Devel@openvz.org
https://lists.openvz.org/mailman/listinfo/devel