From: Kirill Tkhai <ktk...@virtuozzo.com>

Patchset description:
Make kstat_glob::swap_in percpu and cleanup
This patchset continues the move away from kstat_glb_lock and makes
swap_in percpu. The newly unused primitives are dropped, and memory
usage is reduced by using one global percpu seqcount instead of a
separate percpu seqcount for every kstat percpu variable.

Kirill Tkhai (4):
  kstat: Make kstat_glob::swap_in percpu
  kstat: Drop global kstat_lat_struct
  kstat: Drop cpu argument in KSTAT_LAT_PCPU_ADD()
  kstat: Make global percpu kstat_pcpu_seq instead of percpu seq for every variable

==========================================
This patch description:

Using a global lock is not good for scalability. It is better to make
swap_in percpu, so that it is updated locklessly like other statistics
(e.g., page_in).

Signed-off-by: Kirill Tkhai <ktk...@virtuozzo.com>

Ported to vz8:
 - Dropped all patches of the set except this one, since it is already
   partially included
 - Introduced start in do_swap_page() to use it for kstat_glob.swap_in

(cherry picked from ed033a381e01996f7f8061d9838d1c9ec6b38d96)
Signed-off-by: Andrey Zhadchenko <andrey.zhadche...@virtuozzo.com>
---
 mm/memory.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index b64f317..3a48379 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3037,7 +3037,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	int locked;
 	int exclusive = 0;
 	vm_fault_t ret = 0;
+	cycles_t start;
 
+	start = get_cycles();
 	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte))
 		goto out;
 
@@ -3226,6 +3228,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
+	local_irq_disable();
+	KSTAT_LAT_PCPU_ADD(&kstat_glob.swap_in, get_cycles() - start);
+	local_irq_enable();
+
 	return ret;
 out_nomap:
 	mem_cgroup_cancel_charge(page, memcg, false);
-- 
1.8.3.1
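
For readers without the vzkernel headers at hand, below is a minimal sketch
of the lockless per-CPU accumulation pattern that kstat_glob.swap_in follows
after this patch. It is an illustration only, not the actual
KSTAT_LAT_PCPU_ADD() implementation; the names lat_pcpu and lat_pcpu_add are
hypothetical.

/*
 * Illustrative sketch only -- not the vzkernel code.
 * Lockless per-CPU latency accumulation; names are hypothetical.
 */
#include <linux/percpu.h>
#include <linux/types.h>

struct lat_pcpu {
	u64 __percpu *total_cyc;	/* cycles accumulated on each CPU */
	u64 __percpu *events;		/* samples recorded on each CPU */
};

/*
 * The caller disables interrupts (as do_swap_page() does around
 * KSTAT_LAT_PCPU_ADD() in the hunk above), so plain this-CPU updates
 * are consistent and no global lock is taken on the fault path.
 */
static inline void lat_pcpu_add(struct lat_pcpu *p, u64 delta)
{
	this_cpu_add(*p->total_cyc, delta);
	this_cpu_inc(*p->events);
}

A reader of the statistics then sums the per-CPU counters; doing that under
one shared global percpu seqcount, rather than one seqcount per variable, is
what the last patch of the series uses to reduce memory usage.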