n and temporary
release for write attempts on mmap_lock in smaps_rollup is still necessary.
Chinwen Chang (2):
mmap locking API: add mmap_lock_is_contended()
mm: proc: smaps_rollup: do not stall write attempts on mmap_lock
fs/proc/task_mmu.c | 21 +
include/
Signed-off-by: Chinwen Chang
---
fs/proc/task_mmu.c | 21 +
1 file changed, 21 insertions(+)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index dbda449..4b51f25 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -856,6 +856,27 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
Add new API to query if someone wants to acquire mmap_lock
for write attempts.
Using this instead of rwsem_is_contended makes it more tolerant
of future changes to the lock type.
Signed-off-by: Chinwen Chang
---
include/linux/mmap_lock.h | 5 +
1 file changed, 5 insertions(+)
diff --git a
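The diff itself is cut off in this match. Going by the description, the helper can hardly be more than a one-line wrapper; a minimal sketch, assuming mmap_lock is still the rw_semaphore embedded in struct mm_struct:

	/* include/linux/mmap_lock.h */
	static inline int mmap_lock_is_contended(struct mm_struct *mm)
	{
		/* Nonzero when another task is queued waiting on mmap_lock. */
		return rwsem_is_contended(&mm->mmap_lock);
	}

A long-running reader can poll this and voluntarily drop the lock; if the lock type ever changes, only this wrapper has to follow, which is the stated reason for not calling rwsem_is_contended() directly.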
On Wed, 2020-08-12 at 09:39 +0100, Steven Price wrote:
> On 11/08/2020 05:42, Chinwen Chang wrote:
> > smaps_rollup will try to grab mmap_lock and go through the whole vma
> > list until it finishes iterating. When encountering large processes,
> > the mmap_lock will be held for a long time, which may block other
> > write attempts on it.
Changes since v1:
- If the current VMA is freed after dropping the lock, it will return
  an incomplete result. To fix this issue, refine the code flow as
  suggested by Steve. [1]
[1] https://lore.kernel.org/lkml/bf40676e-b14b-44cd-75ce-419c70194...@arm.com/
Signed-off-by: Chinwen Chang
---
fs/proc/task_mmu.c
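The refined flow is not visible in this truncated match. A sketch of the idea, with variable names following show_smaps_rollup() and the v2-era two-argument smap_gather_stats(): remember where the scan stopped (last_vma_end), drop and retake the lock, then revalidate the position with find_vma() rather than trusting a vma pointer that may have been freed:

	vma = priv->mm->mmap;
	while (vma) {
		smap_gather_stats(vma, &mss);
		last_vma_end = vma->vm_end;

		if (mmap_lock_is_contended(mm)) {
			/* Give the queued writer a chance to run. */
			mmap_read_unlock(mm);
			ret = mmap_read_lock_killable(mm);
			if (ret)
				goto out_put_mm;

			/* The old vma may be gone; look the position up again. */
			vma = find_vma(mm, last_vma_end - 1);
			if (!vma)
				break;		/* nothing left to scan */
			if (vma->vm_start >= last_vma_end)
				continue;	/* resume at this next VMA */
		}
		vma = vma->vm_next;
	}

One case is still incomplete here: a VMA that straddles last_vma_end would have its tail skipped. Closing that gap is what the later smap_gather_stats() extension (sketched further below) is for.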
/lkml/bf40676e-b14b-44cd-75ce-419c70194...@arm.com/
Chinwen Chang (2):
mmap locking API: add mmap_lock_is_contended()
mm: proc: smaps_rollup: do not stall write attempts on mmap_lock
fs/proc/task_mmu.c | 57 ++-
include/linux/mmap_lock.h | 5
2
Add new API to query if someone wants to acquire mmap_lock
for write attempts.
Using this instead of rwsem_is_contended makes it more tolerant
of future changes to the lock type.
Signed-off-by: Chinwen Chang
---
include/linux/mmap_lock.h | 5 +
1 file changed, 5 insertions(+)
diff --git a
44cd-75ce-419c70194...@arm.com/
[2]
https://lore.kernel.org/lkml/cann689ftcsc71cjajs0gpspohgo_hrj+diwsou1wr98ypkt...@mail.gmail.com/
[3] https://lore.kernel.org/lkml/db0d40e2-72f3-09d5-c162-9c49218f1...@arm.com/
Change-Id: Idcdb6478ccd06a9e5edd4eda9285378e961a6b94
Signed-off-by: Chinwen Chang
Re
Add new API to query if someone wants to acquire mmap_lock
for write attempts.
Using this instead of rwsem_is_contended makes it more tolerant
of future changes to the lock type.
Change-Id: Idb21478bb0580ba72b9926aba3bbc4b1f75deec2
Signed-off-by: Chinwen Chang
Reviewed-by: Steven Price
Acked
.@arm.com/
Chinwen Chang (3):
mmap locking API: add mmap_lock_is_contended()
mm: smaps*: extend smap_gather_stats to support specified beginning
mm: proc: smaps_rollup: do not stall write attempts on mmap_lock
fs/proc/task_mmu.c | 96 +++
incl
Changes since v2:
- This is a new change to make the retry behavior of smaps_rollup
  more complete, as suggested by Michel [1]
[1]
https://lore.kernel.org/lkml/cann689ftcsc71cjajs0gpspohgo_hrj+diwsou1wr98ypkt...@mail.gmail.com/
Change-Id: I8652e0ee6c5e93fb56376a68d71ed6cdd8ac10e8
Signed-off-by: Chinwen Chang
CC
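The second patch in the v3 shortlog, extending smap_gather_stats() to take a beginning address, is not shown in any of these matches. Its likely shape, sketched from the patch title (walk_page_vma() and walk_page_range() are the existing pagewalk entry points; smaps_walk_ops stands in for the smaps walker ops):

	/*
	 * start == 0 keeps the old behaviour and walks the whole VMA;
	 * otherwise only [start, vma->vm_end) is accounted.
	 */
	static void smap_gather_stats(struct vm_area_struct *vma,
				      struct mem_size_stats *mss,
				      unsigned long start)
	{
		/* Invalid start: nothing of this VMA is left to account. */
		if (start >= vma->vm_end)
			return;

		if (!start)
			walk_page_vma(vma, &smaps_walk_ops, mss);
		else
			walk_page_range(vma->vm_mm, start, vma->vm_end,
					&smaps_walk_ops, mss);
	}

Passing 0 from all existing callers keeps their behaviour unchanged; only the rollup retry path would pass a nonzero start.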
On Mon, 2020-08-17 at 09:38 +0100, Steven Price wrote:
> On 15/08/2020 07:20, Chinwen Chang wrote:
> > smaps_rollup will try to grab mmap_lock and go through the whole vma
> > list until it finishes iterating. When encountering large processes,
> > the mmap_lock will be held for a long time, which may block other
> > write attempts on it.
Add new API to query if someone wants to acquire mmap_lock
for write attempts.
Using this instead of rwsem_is_contended makes it more tolerant
of future changes to the lock type.
Signed-off-by: Chinwen Chang
Reviewed-by: Steven Price
Acked-by: Michel Lespinasse
---
include/linux/mmap_lock.h
.@arm.com/
Chinwen Chang (3):
mmap locking API: add mmap_lock_is_contended()
mm: smaps*: extend smap_gather_stats to support specified beginning
mm: proc: smaps_rollup: do not stall write attempts on mmap_lock
fs/proc/task_mmu.c | 96 +++
incl
44cd-75ce-419c70194...@arm.com/
[2]
https://lore.kernel.org/lkml/cann689ftcsc71cjajs0gpspohgo_hrj+diwsou1wr98ypkt...@mail.gmail.com/
[3] https://lore.kernel.org/lkml/db0d40e2-72f3-09d5-c162-9c49218f1...@arm.com/
Signed-off-by: Chinwen Chang
CC: Steven Price
CC: Michel Lespinasse
---
fs/proc/t
Changes since v2:
- This is a new change to make the retry behavior of smaps_rollup
  more complete, as suggested by Michel [1]
[1]
https://lore.kernel.org/lkml/cann689ftcsc71cjajs0gpspohgo_hrj+diwsou1wr98ypkt...@mail.gmail.com/
Signed-off-by: Chinwen Chang
CC: Michel Lespinasse
Reviewed-by: Steven Price
---
fs/proc
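Putting the two patches together, the retry loop in show_smaps_rollup() can then resume mid-VMA instead of skipping or double-counting; roughly (a sketch of the flow under the same naming assumptions as above, not the literal diff):

	vma = priv->mm->mmap;
	while (vma) {
		smap_gather_stats(vma, &mss, 0);
		last_vma_end = vma->vm_end;

		if (mmap_lock_is_contended(mm)) {
			mmap_read_unlock(mm);
			ret = mmap_read_lock_killable(mm);
			if (ret)
				goto out_put_mm;

			vma = find_vma(mm, last_vma_end - 1);
			if (!vma)
				break;		/* no VMAs left to visit */
			if (vma->vm_start >= last_vma_end)
				continue;	/* old VMA gone; start on this one */
			if (vma->vm_end > last_vma_end)
				/*
				 * This VMA straddles last_vma_end: account
				 * only the tail the previous pass missed.
				 */
				smap_gather_stats(vma, &mss, last_vma_end);
		}
		vma = vma->vm_next;
	}

The four find_vma() outcomes (next VMA, same VMA, no VMA, straddling VMA) are the cases the changelog says Michel asked to be covered, so after a retry the rollup neither skips a range nor counts one twice.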
On Thu, 2020-08-13 at 02:53 -0700, Michel Lespinasse wrote:
> On Wed, Aug 12, 2020 at 7:14 PM Chinwen Chang
> wrote:
> > Recently, we have observed some janky issues caused by unpleasantly long
> > contention on mmap_lock which is held by smaps_rollup when probing large
> > processes.
On Fri, 2020-08-14 at 01:35 -0700, Michel Lespinasse wrote:
> On Wed, Aug 12, 2020 at 7:13 PM Chinwen Chang
> wrote:
> > smaps_rollup will try to grab mmap_lock and go through the whole vma
> > list until it finishes iterating. When encountering large processes,
> > the mmap_lock will be held for a long time, which may block other
> > write attempts on it.
                 version    launch time (s)  launch time (s)  improvement (%)
                            with patches     without patches
                            1.227            1.293             5.10
Meituan          9.12.401   1.107            1.543            28.26
WeChat           7.0.3      2.353            2.68             12.20
Honor of Kings   1.43.1.6   6.63             6.713             1.24
By the way, we have verified these patches on our platforms and
achieved the goal of mass production.
Thanks.
Chinwen Chang
On Mon, 2020-07-06 at 14:27 +0200, Laurent Dufour wrote:
> On 06/07/2020 11:25, Chinwen Chang wrote:
> > On Thu, 2019-06-20 at 16:19 +0800, Haiyan Song wrote:
> >> Hi Laurent,
> >>
> >> I downloaded your script and ran it on an Intel 2S Skylake platform w