2018-04-17 22:33 GMT+08:00 Laurent Dufour :
> Add speculative_pgfault vmstat counter to count successful speculative page
> fault handling.
>
> Also fixing a minor typo in include/linux/vm_event_item.h.
>
> Signed-off-by: Laurent Dufour
> ---
> include/linux/vm_event_item.h | 3 +++
> mm/memory.c
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT for arm64. This
enables the Speculative Page Fault handler.
Signed-off-by: Ganesh Mahendran
---
v2: remove "if SMP"
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf4
: Ganesh Mahendran
---
v2:
move find_vma() to do_page_fault()
remove IS_ENABLED()
remove fault != VM_FAULT_SIGSEGV check
initialize vma = NULL
---
arch/arm64/mm/fault.c | 29 +
1 file changed, 25 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/mm/fault.c b/arch
2018-05-02 22:46 GMT+08:00 Punit Agrawal :
> Hi Ganesh,
>
> I was looking at evaluating speculative page fault handling on arm64 and
> noticed your patch.
>
> Some comments below -
Thanks for your review.
>
> Ganesh Mahendran writes:
>
>> This patch enables
2018-05-02 17:07 GMT+08:00 Laurent Dufour :
> On 02/05/2018 09:54, Ganesh Mahendran wrote:
>> This patch enables the speculative page fault on the arm64
>> architecture.
>>
>> I completed spf porting in 4.9. From the test result,
>> we can see app launching time
2018-05-02 20:23 GMT+08:00 Will Deacon :
> On Wed, May 02, 2018 at 03:53:21PM +0800, Ganesh Mahendran wrote:
>> Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT for arm64. This
>> enables Speculative Page Fault handler.
>
> Are there are tests for this? I'm really nervous about e
2018-03-29 15:50 GMT+08:00 Laurent Dufour :
> On 29/03/2018 05:06, Ganesh Mahendran wrote:
>> 2018-03-29 10:26 GMT+08:00 Ganesh Mahendran :
>>> Hi, Laurent
>>>
>>> 2018-02-16 23:25 GMT+08:00 Laurent Dufour :
>>>> When the speculative page fault handl
: Ganesh Mahendran
---
This patch is on top of Laurent's v10 spf
---
arch/arm64/mm/fault.c | 38 +++---
1 file changed, 35 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 4165485..e7992a3 100644
--- a/arch/arm64/mm/fa
Set ARCH_SUPPORTS_SPECULATIVE_PAGE_FAULT for arm64. This
enables the Speculative Page Fault handler.
Signed-off-by: Ganesh Mahendran
---
This patch is on top of Laurent's v10 spf
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
Hi, Pavel
Thanks for your review.
2018-04-29 22:30 GMT+08:00 Pavel Machek :
> On Wed 2018-04-25 18:59:31, Ganesh Mahendran wrote:
>> single_open() interface requires that the whole output must
>> fit into a single buffer. This will lead to timeout when
>> system memory is no
2018-04-02 14:46 GMT+08:00 Geert Uytterhoeven :
> Hi Ganesh,
>
> On Mon, Apr 2, 2018 at 3:33 AM, Ganesh Mahendran
> wrote:
>> 2018-03-30 19:00 GMT+08:00 Geert Uytterhoeven :
>>> On Fri, Mar 30, 2018 at 12:25 PM, Rafael J. Wysocki
>>> wrote:
>>>>
The single_open() interface requires that the whole output fit into a
single buffer. This leads to timeouts when the system is under
memory pressure.
This patch uses seq_open() to show the wakeup stats instead. That method
needs only one page at a time, so the timeout is no longer observed.
Signed-off-by: Ganesh
2018-04-02 18:32 GMT+08:00 Minchan Kim :
> Hi Ganesh,
>
> On Mon, Apr 02, 2018 at 06:01:59PM +0800, Ganesh Mahendran wrote:
>> 2018-04-02 15:11 GMT+08:00 Minchan Kim :
>> > On Mon, Apr 02, 2018 at 02:46:14PM +0800, Ganesh Mahendran wrote:
>> >> 2018-04-02 14:3
>> On Mon, Apr 02, 2018 at 06:01:59PM +0800, Ganesh Mahendran wrote:
>>> 2018-04-02 15:11 GMT+08:00 Minchan Kim :
>>> > On Mon, Apr 02, 2018 at 02:46:14PM +0800, Ganesh Mahendran wrote:
>>> >> 2018-04-02 14:34 GMT+08:00 Minchan Kim :
>>> >> &g
2018-04-02 15:11 GMT+08:00 Minchan Kim :
> On Mon, Apr 02, 2018 at 02:46:14PM +0800, Ganesh Mahendran wrote:
>> 2018-04-02 14:34 GMT+08:00 Minchan Kim :
>> > On Fri, Mar 30, 2018 at 12:04:07PM +0200, Greg Kroah-Hartman wrote:
>> >> On Fri, Mar 30, 2018 at 10:29
2018-04-02 14:34 GMT+08:00 Minchan Kim :
> On Fri, Mar 30, 2018 at 12:04:07PM +0200, Greg Kroah-Hartman wrote:
>> On Fri, Mar 30, 2018 at 10:29:21AM +0900, Minchan Kim wrote:
>> > Hi Ganesh,
>> >
>> > On Fri, Mar 30, 2018 at 09:21:55AM +0800, Ganesh Mahendran w
2018-03-30 19:00 GMT+08:00 Geert Uytterhoeven :
> On Fri, Mar 30, 2018 at 12:25 PM, Rafael J. Wysocki
> wrote:
>> On Monday, March 5, 2018 9:47:46 AM CEST Ganesh Mahendran wrote:
>>> single_open() interface requires that the whole output must
>>> fit into a si
2018-03-30 18:25 GMT+08:00 Rafael J. Wysocki :
> On Monday, March 5, 2018 9:47:46 AM CEST Ganesh Mahendran wrote:
>> single_open() interface requires that the whole output must
>> fit into a single buffer. This will lead to timeout when
>> system memory is not in a good situat
2018-03-30 9:29 GMT+08:00 Minchan Kim :
> Hi Ganesh,
>
> On Fri, Mar 30, 2018 at 09:21:55AM +0800, Ganesh Mahendran wrote:
>> 2018-03-29 14:54 GMT+08:00 Minchan Kim :
>> > binder_update_page_range needs down_write of mmap_sem because
>> > vm_insert_page need to ch
2018-03-29 14:54 GMT+08:00 Minchan Kim :
> binder_update_page_range needs down_write of mmap_sem because
> vm_insert_page need to change vma->vm_flags to VM_MIXEDMAP unless
> it is set. However, when I profile binder working, it seems
> every binder buffers should be mapped in advance by binder_mma
ping.
2018-03-05 16:47 GMT+08:00 Ganesh Mahendran :
> single_open() interface requires that the whole output must
> fit into a single buffer. This will lead to timeout when
> system memory is not in a good situation.
>
> This patch use seq_open() to show wakeup stats. This method
2018-03-29 10:26 GMT+08:00 Ganesh Mahendran :
> Hi, Laurent
>
> 2018-02-16 23:25 GMT+08:00 Laurent Dufour :
>> When the speculative page fault handler is returning VM_RETRY, there is a
>> chance that VMA fetched without grabbing the mmap_sem can be reused by the
>> leg
Hi, Laurent
2018-02-16 23:25 GMT+08:00 Laurent Dufour :
> When the speculative page fault handler is returning VM_RETRY, there is a
> chance that VMA fetched without grabbing the mmap_sem can be reused by the
> legacy page fault handler. By reusing it, we avoid calling find_vma()
> again. To achi
Hi, Laurent
2018-03-14 1:59 GMT+08:00 Laurent Dufour :
> This is a port on kernel 4.16 of the work done by Peter Zijlstra to
> handle page fault without holding the mm semaphore [1].
>
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better co
Hi, Andy
2018-03-14 0:39 GMT+08:00 Andy Shevchenko :
> On Mon, Mar 5, 2018 at 10:47 AM, Ganesh Mahendran
> wrote:
>> single_open() interface requires that the whole output must
>> fit into a single buffer. This will lead to timeout when
>> system memory is not in a go
Hello, Rafael:
2018-03-05 16:47 GMT+08:00 Ganesh Mahendran :
> single_open() interface requires that the whole output must
> fit into a single buffer. This will lead to timeout when
> system memory is not in a good situation.
>
> This patch use seq_open() to show wakeup stats. Thi
The single_open() interface requires that the whole output fit into a
single buffer. This leads to timeouts when the system is under
memory pressure.
This patch uses seq_open() to show the wakeup stats instead. That method
needs only one page at a time, so the timeout is no longer observed.
Signed-off-by: Ganesh
Hi, Rafael:
2018-03-02 16:58 GMT+08:00 Rafael J. Wysocki :
> On Fri, Mar 2, 2018 at 6:01 AM, Ganesh Mahendran
> wrote:
>> single_open() interface requires that the whole output must
>> fit into a single buffer. This will lead to timeout when
>> system memory is not in a
Hi, Bart:
2018-03-02 7:11 GMT+08:00 Bart Van Assche :
> On Mon, 2017-06-05 at 17:37 +0800, Ganesh Mahendran wrote:
>> In an Android system, when there are lots of threads running, Thread A
>> holding the *host_busy* count is easily preempted, and if at the
>> same time, threa
Hi, Martijn
2018-01-24 22:33 GMT+08:00 Martijn Coenen :
> On Mon, Jan 22, 2018 at 4:54 PM, Greg KH wrote:
>> Martijn and Todd, any objections to this patch?
>
> Looks good to me.
Thanks for your review.
Should I cherry-pick this change to aosp kernel 3.10/3.18/4.4/4.9 now?
Thanks.
>
>>
>> tha
Hi, Todd:
2018-01-23 1:02 GMT+08:00 Todd Kjos :
> On Mon, Jan 22, 2018 at 7:54 AM, Greg KH wrote:
>> On Wed, Jan 10, 2018 at 10:49:05AM +0800, Ganesh Mahendran wrote:
>>> VM_IOREMAP is used to access hardware through a mechanism called
>>> I/O mapped memory. Androi
Hi, Arve:
2018-01-23 2:55 GMT+08:00 Arve Hjønnevåg :
> On Mon, Jan 22, 2018 at 9:02 AM, Todd Kjos wrote:
>> On Mon, Jan 22, 2018 at 7:54 AM, Greg KH wrote:
>>> On Wed, Jan 10, 2018 at 10:49:05AM +0800, Ganesh Mahendran wrote:
>>>> VM_IOREMAP is used to access hardw
<3>[ 4482.440053] binder_alloc: binder_alloc_mmap_handler: 15728
8ce67000-8cf65000 get_vm_area failed -12
<3>[ 4483.218817] binder_alloc: binder_alloc_mmap_handler: 15745
8ce67000-8cf65000 get_vm_area failed -12
Signed-off-by: Ganesh Mahendran
V3: update comments
V2: update comment
The value 32 is already defined as the macro SHIFT, so it is better
to use the macro SHIFT than the bare constant.
Signed-off-by: Ganesh Mahendran
---
Documentation/scheduler/sched-pelt.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/Documentation/scheduler/sched-pelt.c
b/Documentation/scheduler/sched-p
Hello, Peter:
2017-07-05 19:59 GMT+08:00 Peter Zijlstra :
> On Wed, Jul 05, 2017 at 04:46:30PM +0800, Ganesh Mahendran wrote:
>> Function __compute_runnable_contrib() is to calculate:
>>\Sum 1024*y^n {for (1..n_period)}
>> But LOAD_AVG_MAX returns sum of 1024*y^n (0..n_p
Function __compute_runnable_contrib() is meant to calculate:
\Sum 1024*y^n {for (1..n_period)}
But LOAD_AVG_MAX returns sum of 1024*y^n (0..n_period).
So we need to subtract 1024*y^0.
Cc: sta...@vger.kernel.org
Signed-off-by: Ganesh Mahendran
---
kernel/sched/fair.c | 2 +-
1 file changed, 1
Ping~ Willing to hear some feedback :-)
Thanks
2017-06-05 17:37 GMT+08:00 Ganesh Mahendran :
> In an Android system, when there are lots of threads running, Thread A
> holding the *host_busy* count is easily preempted, and if at the
> same time, thread B sets *host_blocked*, then
tch increases {host|target|device}_busy count after dispatch cmd.
Signed-off-by: Ganesh Mahendran
---
drivers/scsi/scsi_lib.c | 66 -
1 file changed, 32 insertions(+), 34 deletions(-)
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
2017-04-06 23:58 GMT+08:00 Catalin Marinas :
> On Thu, Apr 06, 2017 at 12:52:13PM +0530, Imran Khan wrote:
>> On 4/5/2017 10:13 AM, Imran Khan wrote:
>> >> We may have to revisit this logic and consider L1_CACHE_BYTES the
>> >> _minimum_ of cache line sizes in arm64 systems supported by the kernel.
Hi, Greg:
2017-02-09 18:17 GMT+08:00 Greg KH :
> On Thu, Feb 09, 2017 at 05:54:03PM +0800, Ganesh Mahendran wrote:
>> A gentle ping.
>
> I don't see a patch here that can be accepted, what are you asking for
> a response from?
I sent a patch before:
https://patchwork.k
A gentle ping.
Thanks.
2016-11-15 21:18 GMT+08:00 Ganesh Mahendran :
> Hi, Greg
>
> 2016-11-15 18:18 GMT+08:00 Greg KH :
>> On Tue, Nov 15, 2016 at 05:55:39PM +0800, Ganesh Mahendran wrote:
>>> VM_IOREMAP is used to access hardware through a mechanism called
>>>
Hi, Greg:
Sorry for the late response.
On Tue, Nov 22, 2016 at 02:53:02PM +0100, Greg KH wrote:
> On Tue, Nov 22, 2016 at 07:17:30PM +0800, Ganesh Mahendran wrote:
> > This patch use kmem_cache to allocate/free binder objects.
>
> Why do this?
I am not very familiar with kmem_ca
This patch uses kmem_cache to allocate/free binder objects.
This gives better memory efficiency, and we can also get
object usage details in /sys/kernel/slab/* for further analysis.
Signed-off-by: Ganesh Mahendran
---
drivers/android/binder.c | 127
Hi, Greg
2016-11-15 18:18 GMT+08:00 Greg KH :
> On Tue, Nov 15, 2016 at 05:55:39PM +0800, Ganesh Mahendran wrote:
>> VM_IOREMAP is used to access hardware through a mechanism called
>> I/O mapped memory. Android binder is an IPC mechanism which will
>> not access I/O memory.
                             before      after
average iterations per sec:  11199.9     11886.9
No performance regression was found through the binder test.
Signed-off-by: Ganesh Mahendran
---
drivers/android/binder.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/android/binder.c b/drivers/android/bind
2016-09-02 3:59 GMT+08:00 Arve Hjønnevåg :
> On Thu, Sep 1, 2016 at 12:02 PM, Greg KH wrote:
>> On Thu, Sep 01, 2016 at 02:41:04PM +0800, Ganesh Mahendran wrote:
>>> VM_IOREMAP is used to access hardware through a mechanism called
>>> I/O mapped memory. Android bind
Hi, Greg:
2016-09-02 3:02 GMT+08:00 Greg KH :
> On Thu, Sep 01, 2016 at 02:41:04PM +0800, Ganesh Mahendran wrote:
>> VM_IOREMAP is used to access hardware through a mechanism called
>> I/O mapped memory. Android binder is an IPC mechanism which will
>> not access I/O memory.
EMAP)
align = 1ul << clamp_t(int, fls_long(size),
PAGE_SHIFT, IOREMAP_MAX_ORDER);
...
}
This patch uses VM_ALLOC to get the vm area.
Signed-off-by: Ganesh Mahendran
---
drivers/android/binder.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/android/
Hi, Michal
2016-07-19 15:34 GMT+08:00 Michal Hocko :
> On Tue 19-07-16 10:07:29, Ganesh Mahendran wrote:
>> In patch [1], the inactive_ratio is now automatically calculated
>
> It is better to give the direct reference to the patch 59dc76b0d4df
> ("mm: vmscan: reduce siz
In patch [1], the inactive_ratio is now automatically calculated
in inactive_list_is_low(). So there is no need to keep inactive_ratio
in pglist_data, and shown in zoneinfo.
[1] mm: vmscan: reduce size of inactive file list
Signed-off-by: Ganesh Mahendran
---
include/linux/mmzone.h | 6
(), as there
is no other place to call this function.
Signed-off-by: Ganesh Mahendran
Reviewed-by: Sergey Senozhatsky
Acked-by: Minchan Kim
v4: none
v3: none
v2:
remove get_maxobj_per_zspage() - Minchan
---
mm/zsmalloc.c | 26 ++
1 file changed, 10 insertions
2016-07-07 15:44 GMT+08:00 Minchan Kim :
> Hello Ganesh,
>
> On Wed, Jul 06, 2016 at 02:23:53PM +0800, Ganesh Mahendran wrote:
>> add per-class compact trace event to get scanned objects and freed pages
>> number.
>> trace log is like below:
>> --
Use the ClearPagePrivate/ClearPagePrivate2 helpers to clear
PG_private/PG_private_2 in page->flags.
Signed-off-by: Ganesh Mahendran
Acked-by: Minchan Kim
Reviewed-by: Sergey Senozhatsky
v4: none
v3: none
v2: none
---
mm/zsmalloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deleti
naming consistent
with others in zsmalloc.
Signed-off-by: Ganesh Mahendran
v4:
show the number of migrated objects rather than the number of scanned objects
v3:
add per-class compact trace event - Minchan
I moved this patch from 1/8 to 8/8, since it depends on the patch below:
mm
The obj index value should be updated after returning from
find_alloced_obj() to avoid burning CPU on unnecessary
object scanning.
Signed-off-by: Ganesh Mahendran
Reviewed-by: Sergey Senozhatsky
Acked-by: Minchan Kim
v4: none
v3: none
v2:
- update commit description
---
mm
Some minor changes to comments:
1). update the zs_malloc(), zs_create_pool() function headers
2). update "Usage of struct page fields"
Signed-off-by: Ganesh Mahendran
Reviewed-by: Sergey Senozhatsky
Acked-by: Minchan Kim
v4: none
v3: none
v2:
change *object index* to *obj
This is a cleanup patch. Change "index" to "obj_index" to keep
the naming consistent with the rest of zsmalloc.
Signed-off-by: Ganesh Mahendran
Reviewed-by: Sergey Senozhatsky
Acked-by: Minchan Kim
v4: none
v3: none
v2: none
---
mm/zsmalloc.c | 14 +++---
1 file changed
The max number of objects per zspage is now stored in each size_class,
so there is no need to recalculate it.
Signed-off-by: Ganesh Mahendran
Acked-by: Minchan Kim
Reviewed-by: Sergey Senozhatsky
---
mm/zsmalloc.c | 18 +++---
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git
Add __init/__exit attributes for functions that are only called at
module init/exit, to save memory.
Signed-off-by: Ganesh Mahendran
v4:
remove __init/__exit from zsmalloc_mount/zsmalloc_umount
v3:
revert change in v2 - Sergey
v2:
add __init/__exit for zs_register_cpu_notifier
On Wed, Jul 06, 2016 at 02:23:51PM +0800, Ganesh Mahendran wrote:
> Add __init,__exit attribute for function that only called in
> module init/exit to save memory.
>
> Signed-off-by: Ganesh Mahendran
>
> v3:
> revert change in v2 - Sergey
> v2:
>
Add __init/__exit attributes for functions that are only called at
module init/exit, to save memory.
Signed-off-by: Ganesh Mahendran
v3:
revert change in v2 - Sergey
v2:
add __init/__exit for zs_register_cpu_notifier/zs_unregister_cpu_notifier
---
mm/zsmalloc.c | 6 +++---
1 file changed
kswapd0-629 [001] 293.161455: zs_compact_end: pool zram0: 301
pages compacted
Also this patch changes trace_zsmalloc_compact_start[end] to
trace_zs_compact_start[end] to keep function naming consistent
with others in zsmalloc.
Signed-off-by: Ganesh Mahendran
v3
Use the ClearPagePrivate/ClearPagePrivate2 helpers to clear
PG_private/PG_private_2 in page->flags.
Signed-off-by: Ganesh Mahendran
Acked-by: Minchan Kim
Reviewed-by: Sergey Senozhatsky
v3: none
v2: none
---
mm/zsmalloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --gi
(), as there
is no other place to call this function.
Signed-off-by: Ganesh Mahendran
Reviewed-by: Sergey Senozhatsky
Acked-by: Minchan Kim
v3:
none
v2:
remove get_maxobj_per_zspage() - Minchan
---
mm/zsmalloc.c | 26 ++
1 file changed, 10 insertions(+), 16
Some minor changes to comments:
1). update the zs_malloc(), zs_create_pool() function headers
2). update "Usage of struct page fields"
Signed-off-by: Ganesh Mahendran
Reviewed-by: Sergey Senozhatsky
Acked-by: Minchan Kim
v3:
none
v2:
change *object index* to *object offset*
The max number of objects per zspage is now stored in each size_class,
so there is no need to recalculate it.
Signed-off-by: Ganesh Mahendran
Acked-by: Minchan Kim
Reviewed-by: Sergey Senozhatsky
---
mm/zsmalloc.c | 18 +++---
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git
The obj index value should be updated after returning from
find_alloced_obj() to avoid burning CPU on unnecessary
object scanning.
Signed-off-by: Ganesh Mahendran
Reviewed-by: Sergey Senozhatsky
Acked-by: Minchan Kim
v3:
none
v2:
- update commit description
---
mm/zsmalloc.c | 8
This is a cleanup patch. Change "index" to "obj_index" to keep
the naming consistent with the rest of zsmalloc.
Signed-off-by: Ganesh Mahendran
Reviewed-by: Sergey Senozhatsky
Acked-by: Minchan Kim
v3: none
v2: none
---
mm/zsmalloc.c | 14 +++---
1 file changed, 7 insert
2016-07-06 10:48 GMT+08:00 Minchan Kim :
> On Tue, Jul 05, 2016 at 10:00:28AM +0900, Sergey Senozhatsky wrote:
>> Hello Ganesh,
>>
>> On (07/04/16 17:21), Ganesh Mahendran wrote:
>> > > On (07/04/16 14:49), Ganesh Mahendran wrote:
>> > > [..]
>>
2016-07-06 10:32 GMT+08:00 Minchan Kim :
> Hi Ganesh,
>
> On Mon, Jul 04, 2016 at 02:49:52PM +0800, Ganesh Mahendran wrote:
>> This patch changes trace_zsmalloc_compact_start[end] to
>> trace_zs_compact_start[end] to keep function naming consistent
>> with others in zsm
Hi, Sergey
2016-07-04 16:43 GMT+08:00 Sergey Senozhatsky
:
> On (07/04/16 14:49), Ganesh Mahendran wrote:
> [..]
>> -static void zs_unregister_cpu_notifier(void)
>> +static void __exit zs_unregister_cpu_notifier(void)
>> {
>
> this __exit symbol is called from
Use the ClearPagePrivate/ClearPagePrivate2 helpers to clear
PG_private/PG_private_2 in page->flags.
Signed-off-by: Ganesh Mahendran
Acked-by: Minchan Kim
v2: none
---
mm/zsmalloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
in
(), as there
is no other place to call this function.
Signed-off-by: Ganesh Mahendran
V2:
remove get_maxobj_per_zspage() - Minchan
---
mm/zsmalloc.c | 26 ++
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index ee8a29a
The max number of objects per zspage is now stored in each size_class,
so there is no need to recalculate it.
Signed-off-by: Ganesh Mahendran
Acked-by: Minchan Kim
---
mm/zsmalloc.c | 18 +++---
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
Add __init/__exit attributes for functions that are only called at
module init/exit, to save memory.
Signed-off-by: Ganesh Mahendran
v2:
add __init/__exit for zs_register_cpu_notifier/zs_unregister_cpu_notifier
---
mm/zsmalloc.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions
Some minor changes to comments:
1). update the zs_malloc(), zs_create_pool() function headers
2). update "Usage of struct page fields"
Signed-off-by: Ganesh Mahendran
v2:
change *object index* to *object offset* - Minchan
---
mm/zsmalloc.c | 7 +++
1 file changed, 3 insert
The obj index value should be updated after returning from
find_alloced_obj() to avoid burning CPU on unnecessary
object scanning.
Signed-off-by: Ganesh Mahendran
v2:
- update commit description
Hi, Minchan:
find_alloced_obj() already has an argument which uses the obj_idx
name. So I
This is a cleanup patch. Change "index" to "obj_index" to keep
the naming consistent with the rest of zsmalloc.
Signed-off-by: Ganesh Mahendran
v2: none
---
mm/zsmalloc.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsma
This patch changes trace_zsmalloc_compact_start[end] to
trace_zs_compact_start[end] to keep function naming consistent
with others in zsmalloc.
Also remove the pages_total_compacted information, which
may not really be needed.
Signed-off-by: Ganesh Mahendran
---
v2: change commit message
On Mon, Jul 04, 2016 at 09:05:16AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:04PM +0800, Ganesh Mahendran wrote:
> > some minor change of comments:
> > 1). update zs_malloc(),zs_create_pool() function header
> > 2). update "Usage of struct page fiel
On Mon, Jul 04, 2016 at 09:03:18AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:03PM +0800, Ganesh Mahendran wrote:
> > Currently, if a class can not be merged, the max objects of zspage
> > in that class may be calculated twice.
> >
> > This patch calcula
On Mon, Jul 04, 2016 at 08:57:04AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:01PM +0800, Ganesh Mahendran wrote:
> > the obj index value should be updated after return from
> > find_alloced_obj()
>
> to avoid CPU burning caused by unnece
Hi, Minchan:
On Mon, Jul 04, 2016 at 08:49:21AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:00PM +0800, Ganesh Mahendran wrote:
> > add per class compact trace event. It will show how many zs pages
> > isolated, how many zs pages reclaimed.
>
> I don't kn
The max number of objects per zspage is now stored in each size_class,
so there is no need to recalculate it.
Signed-off-by: Ganesh Mahendran
---
mm/zsmalloc.c | 18 +++---
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5c96ed1..50283b1
Currently, if a class cannot be merged, the max objects of a zspage
in that class may be calculated twice.
This patch calculates the max objects of a zspage at the beginning, and passes
the value to can_merge() to decide whether the class can be merged.
Signed-off-by: Ganesh Mahendran
---
mm/zsmalloc.c | 21
Use the ClearPagePrivate/ClearPagePrivate2 helpers to clear
PG_private/PG_private_2 in page->flags.
Signed-off-by: Ganesh Mahendran
---
mm/zsmalloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1c7460b..356db9a 100644
--- a
Some minor changes to comments:
1). update the zs_malloc(), zs_create_pool() function headers
2). update "Usage of struct page fields"
Signed-off-by: Ganesh Mahendran
---
mm/zsmalloc.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmall
Add __init/__exit attributes for functions that are only called at
module init/exit.
Signed-off-by: Ganesh Mahendran
---
mm/zsmalloc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 6fc631a..1c7460b 100644
--- a/mm/zsmalloc.c
+++ b/mm
The obj index value should be updated after returning from
find_alloced_obj().
Signed-off-by: Ganesh Mahendran
---
mm/zsmalloc.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 405baa5..5c96ed1 100644
--- a/mm/zsmalloc.c
+++ b/mm
21: zs_compact_class: class 43: 0 zspage
isolated, 0 reclaimed
...
Signed-off-by: Ganesh Mahendran
---
include/trace/events/zsmalloc.h | 24
mm/zsmalloc.c | 16 +++-
2 files changed, 39 insertions(+), 1 deletion(-)
diff --git a/include/tra
1. Change trace_zsmalloc_compact_* to trace_zs_compact_* to keep
it consistent with other definitions in the zsmalloc module.
2. Remove the pages_total_compacted information from trace_zs_compact_end(),
since it is not very useful per zs_compact call.
Signed-off-by: Ganesh Mahendran
---
include/trace
2016-06-23 16:42 GMT+08:00 Sergey Senozhatsky
:
> On (06/22/16 11:27), Ganesh Mahendran wrote:
> [..]
>> > > Signed-off-by: Ganesh Mahendran
>> > > ---
>> > > drivers/staging/android/lowmemorykiller.c | 12
>> > > 1 file chang
Hi, David:
On Tue, Jun 21, 2016 at 01:14:36PM -0700, David Rientjes wrote:
> On Tue, 21 Jun 2016, Ganesh Mahendran wrote:
>
> > Current task selecting logic in LMK is not fully aware of the memory
> > pressure. It may select the task with the maximum score adj, but with
Hi, David:
On Tue, Jun 21, 2016 at 01:22:00PM -0700, David Rientjes wrote:
> On Tue, 21 Jun 2016, Ganesh Mahendran wrote:
>
> > lowmem_count() should only count anon pages when we have swap device.
> >
>
> Why?
I make a mistake. I thought lowmem_count will return
Hi, David:
On Tue, Jun 21, 2016 at 01:27:40PM -0700, David Rientjes wrote:
> On Tue, 21 Jun 2016, Ganesh Mahendran wrote:
>
> > oom_adj is deprecated, and in the lowmemorykiller module, we use score adj
> > for the comparison.
> > ---
> > oom_score_
lowmem_count() should only count anon pages when we have a swap device.
Signed-off-by: Ganesh Mahendran
---
drivers/staging/android/lowmemorykiller.c | 12
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/android/lowmemorykiller.c
b/drivers/staging
, tasksize 1M
Current LMK logic will select *task b*, but by then the system is already
under much memory pressure.
We should instead select the task with the maximum task size from all the
tasks whose score adj >= min_score_adj.
Signed-off-by: Ganesh Mahendran
---
drivers/staging/android/lowmemorykiller.c |
continue;
}
---
This patch makes the variable name consistent with the usage.
Signed-off-by: Ganesh Mahendran
---
drivers/staging/android/lowmemorykiller.c | 29 +++--
1 file changed, 15 insertions(+), 14 deletions(-)
diff --git a/drivers/staging/android/lowmemorykiller.c