deviation: 4915 us
Signed-off-by: Ira Weiny
Signed-off-by: Prathu Baronia
[Updated commit text with test data]
---
include/linux/highmem.h | 28 ++--
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index d2c7
Changelog:
- Updated the commit text.
- Added my test data to Ira's v3 patch.
Ira Weiny (1):
mm/highmem: Remove deprecated kmap_atomic
include/linux/highmem.h | 28 ++--
1 file changed, 14 insertions(+), 14 deletions(-)
--
2.17.1
Reported-by: Chintan Pandya
Signed-off-by: Prathu Baronia
---
include/linux/highmem.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index d2c70d3772a3..444df139b489 100644
--- a/include/linux/highmem.h
+++ b/include/li
As discussed on the v1 thread, I have used the recently introduced kmap_local_*
APIs to avoid unnecessary preemption and pagefault disabling.
I did not get a further response on the previous thread, so I am sending this
again.
Prathu Baronia (1):
mm: Optimizing hugepage zeroing in arm64
include/linux
Signed-off-by: Prathu Baronia
---
arch/arm64/include/asm/page.h | 3 +++
arch/arm64/mm/copypage.c      | 8 ++++++++
2 files changed, 11 insertions(+)
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 012cffc574e8..8f9d005a11bb 100644
--- a/arch/arm64/include/asm/page
The earlier version of this patch met the opposition that the change was not
architecturally neutral.
Upon revisiting this now, I see a significant improvement from removing around 2k
barrier calls from the zeroing path, so I hereby propose an arm64-specific
definition of clear_user_highpage().
Prathu Baronia (1):
mm: Optimizing hugepage zeroing in arm64