[PATCH/RESEND 0/5] z3fold optimizations and fixes

2016-12-25 Thread Vitaly Wool
This is a consolidation of z3fold optimizations and fixes done so far, revised after comments from Dan [1]. The coming patches are to be applied on top of the following commit: commit 07cfe852286d5e314f8cd19781444e12a2b6cdf3 Author: zhong jiang Date: Tue Dec 20 11:53:40 2016 +1100 mm/z3fo

[PATCH/RESEND 1/5] mm/z3fold.c: make pages_nr atomic

2016-12-25 Thread Vitaly Wool
Convert pages_nr per-pool counter to atomic64_t so that we won't have to care about locking for reading/updating it. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 20 +--- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 20

[PATCH/RESEND 2/5] mm/z3fold.c: extend compaction function

2016-12-25 Thread Vitaly Wool
y due to less actual page allocations on hot path due to denser in-page allocation). This patch adds the relevant code, using BIG_CHUNK_GAP define as a threshold for middle chunk to be worth moving. Signed-off-by: Vitaly Wool --- mm/z3fold.c

[PATCH/RESEND 3/5] z3fold: use per-page spinlock

2016-12-25 Thread Vitaly Wool
implements a raw-spinlock-based per-page locking mechanism which is lightweight enough to normally fit into the z3fold header. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 148 +++- 1 file changed, 106 insertions(+), 42 deletions(-) diff --git

[PATCH/RESEND 4/5] z3fold: fix header size related issues

2016-12-25 Thread Vitaly Wool
num_free_chunks() and the address to move the middle chunk to in case of in-page compaction in z3fold_compact_page(). Signed-off-by: Vitaly Wool --- mm/z3fold.c | 161 1 file changed, 87 insertions(+), 74 deletions(-) diff --git a/mm/z3fold.c

[PATCH/RESEND 5/5] z3fold: add kref refcounting

2016-12-25 Thread Vitaly Wool
With both coming and already present locking optimizations, introducing kref to reference-count z3fold objects is the right thing to do. Moreover, it makes buddied list no longer necessary, and allows for a simpler handling of headless pages. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 137

Re: [PATCH 0/2] z3fold fixes

2016-12-18 Thread Vitaly Wool
On Tue, Nov 29, 2016 at 11:39 PM, Andrew Morton wrote: > On Tue, 29 Nov 2016 17:33:19 -0500 Dan Streetman wrote: > >> On Sat, Nov 26, 2016 at 2:15 PM, Vitaly Wool wrote: >> > Here come 2 patches with z3fold fixes for chunks counting and locking. As >> > commi

[PATCH] z3fold: fix spinlock unlocking in page reclaim

2017-03-11 Thread Vitaly Wool
The patch "z3fold: add kref refcounting" introduced a bug in z3fold_reclaim_page() with function exit that may leave pool->lock spinlock held. Here comes the trivial fix. Reported-by: Alexey Khoroshilov Signed-off-by: Vitaly Wool --- mm/z3fold.c | 1 + 1 file changed, 1 inser

Re: [PATCH] zram: update zram to use zpool

2016-06-17 Thread Vitaly Wool
Hi Minchan, On Thu, Jun 16, 2016 at 1:17 AM, Minchan Kim wrote: > On Wed, Jun 15, 2016 at 10:42:07PM +0800, Geliang Tang wrote: >> Change zram to use the zpool api instead of directly using zsmalloc. >> The zpool api doesn't have zs_compact() and zs_pool_stats() functions. >> I did the following

[PATCH] z3fold: make pages_nr atomic

2016-11-03 Thread Vitaly Wool
This patch converts pages_nr per-pool counter to atomic64_t. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 26 +++--- 1 file changed, 15 insertions(+), 11 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 8f9e89c..4d02280 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c

[PATH] z3fold: extend compaction function

2016-11-03 Thread Vitaly Wool
code, using BIG_CHUNK_GAP define as a threshold for middle chunk to be worth moving. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 60 +++- 1 file changed, 47 insertions(+), 13 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 4d

Re: [PATCH] z3fold: make pages_nr atomic

2016-11-03 Thread Vitaly Wool
On Thu, Nov 3, 2016 at 10:14 PM, Andrew Morton wrote: > On Thu, 3 Nov 2016 22:00:58 +0100 Vitaly Wool wrote: > >> This patch converts pages_nr per-pool counter to atomic64_t. > > Which is slower. > > Presumably there is a reason for making this change. This reason >

Re: [PATH] z3fold: extend compaction function

2016-11-03 Thread Vitaly Wool
On Thu, Nov 3, 2016 at 10:16 PM, Andrew Morton wrote: > On Thu, 3 Nov 2016 22:04:28 +0100 Vitaly Wool wrote: > >> z3fold_compact_page() currently only handles the situation when >> there's a single middle chunk within the z3fold page. However it >> may be worth it t

Re: [PATCH] z3fold: make pages_nr atomic

2016-11-04 Thread Vitaly Wool
On Thu, Nov 3, 2016 at 11:17 PM, Andrew Morton wrote: > On Thu, 3 Nov 2016 22:24:07 +0100 Vitaly Wool wrote: > >> On Thu, Nov 3, 2016 at 10:14 PM, Andrew Morton >> wrote: >> > On Thu, 3 Nov 2016 22:00:58 +0100 Vitaly Wool wrote: >> > >> >>

[PATCH/RFC] z3fold: use per-page read/write lock

2016-11-05 Thread Vitaly Wool
one directly to the z3fold header makes the latter quite big on some systems so that it won't fit in a single chunk. This patch implements a custom per-page read/write locking mechanism which is lightweight enough to fit into the z3fold header. Signed-off-by: Vitaly Wool --- mm/z3fold.c

Re: [PATCH/RFC] z3fold: use per-page read/write lock

2016-11-06 Thread Vitaly Wool
On Sun, Nov 6, 2016 at 12:38 AM, Andi Kleen wrote: > Vitaly Wool writes: > >> Most of z3fold operations are in-page, such as modifying z3fold >> page header or moving z3fold objects within a page. Taking >> per-pool spinlock to protect per-page objects is therefore >&g

Re: [PATCH/RESEND 2/5] mm/z3fold.c: extend compaction function

2017-01-10 Thread Vitaly Wool
On Wed, Jan 4, 2017 at 4:43 PM, Dan Streetman wrote: >> static int z3fold_compact_page(struct z3fold_header *zhdr) >> { >> struct page *page = virt_to_page(zhdr); >> - void *beg = zhdr; >> + int ret = 0; > > I still don't understand why you're adding ret and using goto. Ju

Re: [PATCH/RESEND 5/5] z3fold: add kref refcounting

2017-01-11 Thread Vitaly Wool
On Wed, Jan 4, 2017 at 7:42 PM, Dan Streetman wrote: > On Sun, Dec 25, 2016 at 7:40 PM, Vitaly Wool wrote: >> With both coming and already present locking optimizations, >> introducing kref to reference-count z3fold objects is the right >> thing to do. Moreover, it makes b

[PATCH/RESEND v2 0/5] z3fold optimizations and fixes

2017-01-11 Thread Vitaly Wool
This is a consolidation of z3fold optimizations and fixes done so far, revised after comments from Dan ([1], [2], [3], [4]). The coming patches are to be applied on top of the following commit: Author: zhong jiang Date: Tue Dec 20 11:53:40 2016 +1100 mm/z3fold.c: limit first_num to the ac

[PATCH/RESEND v2 2/5] z3fold: fix header size related issues

2017-01-11 Thread Vitaly Wool
num_free_chunks() and the address to move the middle chunk to in case of in-page compaction in z3fold_compact_page(). Signed-off-by: Vitaly Wool --- mm/z3fold.c | 114 ++-- 1 file changed, 64 insertions(+), 50 deletions(-) diff --git a/mm/z3fold.c

[PATCH/RESEND v2 5/5] z3fold: add kref refcounting

2017-01-11 Thread Vitaly Wool
With both coming and already present locking optimizations, introducing kref to reference-count z3fold objects is the right thing to do. Moreover, it makes buddied list no longer necessary, and allows for a simpler handling of headless pages. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 145

[PATCH/RESEND v2 3/5] z3fold: extend compaction function

2017-01-11 Thread Vitaly Wool
code, using BIG_CHUNK_GAP define as a threshold for middle chunk to be worth moving. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 26 +- 1 file changed, 25 insertions(+), 1 deletion(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 98ab01f..fca3310 100644 --- a/mm/z3fold.c

[PATCH/RESEND v2 1/5] z3fold: make pages_nr atomic

2017-01-11 Thread Vitaly Wool
This patch converts pages_nr per-pool counter to atomic64_t. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 20 +--- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 207e5dd..2273789 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -80,7

[PATCH/RESEND v2 4/5] z3fold: use per-page spinlock

2017-01-11 Thread Vitaly Wool
implements a spinlock-based per-page locking mechanism which is lightweight enough to normally fit into the z3fold header. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 148 +++- 1 file changed, 106 insertions(+), 42 deletions(-) diff --git a/mm

Re: [PATCH/RESEND v2 3/5] z3fold: extend compaction function

2017-01-11 Thread Vitaly Wool
On Wed, Jan 11, 2017 at 5:28 PM, Dan Streetman wrote: > On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool wrote: >> z3fold_compact_page() currently only handles the situation when >> there's a single middle chunk within the z3fold page. However it >> may be worth it to m

Re: [PATCH/RESEND 5/5] z3fold: add kref refcounting

2017-01-11 Thread Vitaly Wool
On Wed, Jan 11, 2017 at 5:58 PM, Dan Streetman wrote: > On Wed, Jan 11, 2017 at 5:52 AM, Vitaly Wool wrote: >> On Wed, Jan 4, 2017 at 7:42 PM, Dan Streetman wrote: >>> On Sun, Dec 25, 2016 at 7:40 PM, Vitaly Wool wrote: >>>> With both coming and already

Re: [PATCH/RESEND v2 5/5] z3fold: add kref refcounting

2017-01-11 Thread Vitaly Wool
On Wed, Jan 11, 2017 at 6:08 PM, Dan Streetman wrote: > On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool wrote: >> With both coming and already present locking optimizations, >> introducing kref to reference-count z3fold objects is the right >> thing to do. Moreover, it makes b

Re: [PATCH/RESEND v2 5/5] z3fold: add kref refcounting

2017-01-11 Thread Vitaly Wool
On Wed, Jan 11, 2017 at 6:39 PM, Dan Streetman wrote: > On Wed, Jan 11, 2017 at 12:27 PM, Vitaly Wool wrote: >> On Wed, Jan 11, 2017 at 6:08 PM, Dan Streetman wrote: >>> On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool wrote: >>>> With both coming and already

Re: [PATCH/RESEND v2 3/5] z3fold: extend compaction function

2017-01-11 Thread Vitaly Wool
On Wed, 11 Jan 2017 17:43:13 +0100 Vitaly Wool wrote: > On Wed, Jan 11, 2017 at 5:28 PM, Dan Streetman wrote: > > On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool wrote: > >> z3fold_compact_page() currently only handles the situation when > >> there's a single mid

Re: [PATCH/RESEND v2 5/5] z3fold: add kref refcounting

2017-01-11 Thread Vitaly Wool
not sure it is worth it but I can do that :) > > the header's already rounded up to chunk size, so if there's room then > it won't take any extra memory. but it works either way. So let's have it like this then: With both coming and a

[PATCHv2 0/3] align zpool/zbud/zsmalloc on the api

2015-09-26 Thread Vitaly Wool
Here comes the second iteration over the zpool/zbud/zsmalloc API alignment. This time I divide it into three patches: one for zpool, one for zbud and one for zsmalloc :) The patches are non-intrusive and do not change any existing functionality; they only add what is needed for alignment purposes.

[PATCHv2 1/3] zpool: add compaction api

2015-09-26 Thread Vitaly Wool
This patch adds two functions to the zpool API: zpool_compact() and zpool_get_num_compacted(). The former triggers compaction for the underlying allocator and the latter retrieves the number of pages migrated due to compaction for the whole time of this pool's existence. Signed-off-by: V

[PATCHv2 2/3] zbud: add compaction callbacks

2015-09-26 Thread Vitaly Wool
Add no-op compaction callbacks to zbud. Signed-off-by: Vitaly Wool --- mm/zbud.c | 12 1 file changed, 12 insertions(+) diff --git a/mm/zbud.c b/mm/zbud.c index fa48bcdf..d67c0aa 100644 --- a/mm/zbud.c +++ b/mm/zbud.c @@ -195,6 +195,16 @@ static void zbud_zpool_unmap(void *pool

[PATCHv2 3/3] zsmalloc: add compaction callbacks

2015-09-26 Thread Vitaly Wool
Add compaction callbacks for zpool compaction API extension. Signed-off-by: Vitaly Wool --- mm/zsmalloc.c | 15 +++ 1 file changed, 15 insertions(+) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index f135b1b..8f2ddd1 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -365,6 +365,19

Re: [PATCH 0/2] prepare zbud to be used by zram as underlying allocator

2015-09-17 Thread Vitaly Wool
On Thu, Sep 17, 2015 at 1:30 AM, Sergey Senozhatsky wrote: > > just a side note, > I'm afraid this is not how it works. numbers go first, to justify > the patch set. > These patches are extension/alignment patches, why would anyone need to justify that? But just to help you understand where I a

Re: [PATCH 1/2] zbud: allow PAGE_SIZE allocations

2015-09-18 Thread Vitaly Wool
> I don't know how zsmalloc handles uncompressible PAGE_SIZE allocations, but > I wouldn't expect it to be any more clever than this? So why duplicate the > functionality in zswap and zbud? This could be handled e.g. at the zpool > level? Or maybe just in zram, as IIRC in zswap (frontswap) it's val

Re: [PATCH v2] zbud: allow up to PAGE_SIZE allocations

2015-09-23 Thread Vitaly Wool
On Wed, Sep 23, 2015 at 5:18 AM, Seth Jennings wrote: > On Tue, Sep 22, 2015 at 02:17:33PM +0200, Vitaly Wool wrote: >> Currently zbud is only capable of allocating not more than >> PAGE_SIZE - ZHDR_SIZE_ALIGNED - CHUNK_SIZE. This is okay as >> long as only zswap is using i

Re: [PATCH v2] zbud: allow up to PAGE_SIZE allocations

2015-09-23 Thread Vitaly Wool
On Tue, Sep 22, 2015 at 11:49 PM, Dan Streetman wrote: > On Tue, Sep 22, 2015 at 8:17 AM, Vitaly Wool wrote: >> Currently zbud is only capable of allocating not more than >> PAGE_SIZE - ZHDR_SIZE_ALIGNED - CHUNK_SIZE. This is okay as >> long as only zswap is using it, but ot

Re: [PATCH v2] zbud: allow up to PAGE_SIZE allocations

2015-09-23 Thread Vitaly Wool
for zbud page lists * page->private to hold the 'under_reclaim' flag. page->private will also be used to indicate if this page contains a zbud header in the beginning or not ('headless' flag). Signed-off-by: Vitaly Wool --- mm/zbud.c | 167

Re: [PATCH v2] zbud: allow up to PAGE_SIZE allocations

2015-09-24 Thread Vitaly Wool
Hello Seth, On Thu, Sep 24, 2015 at 12:41 AM, Seth Jennings wrote: > On Wed, Sep 23, 2015 at 10:59:00PM +0200, Vitaly Wool wrote: >> Okay, how about this? It's gotten smaller BTW :) >> >> zbud: allow up to PAGE_SIZE allocations >> >> Currently zbud is on

[PATCH v3] zbud: allow up to PAGE_SIZE allocations

2015-09-24 Thread Vitaly Wool
flag). This patch incorporates minor fixups after Seth's comments. Signed-off-by: Vitaly Wool --- mm/zbud.c | 168 ++ 1 file changed, 114 insertions(+), 54 deletions(-) diff --git a/mm/zbud.c b/mm/zbud.c index fa48bcdf..619beba 1

Re: [PATCH v2] zbud: allow up to PAGE_SIZE allocations

2015-09-25 Thread Vitaly Wool
> I already said questions, opinion and concerns but anything is not clear > until now. Only clear thing I could hear is just "compaction stats are > better" which is not enough for me. Sorry. > > 1) https://lkml.org/lkml/2015/9/15/33 > 2) https://lkml.org/lkml/2015/9/21/2 Could you please stop p

Re: [PATCH v2] zbud: allow up to PAGE_SIZE allocations

2015-09-25 Thread Vitaly Wool
> Have you seen those symptoms before? How did you come up to a conclusion > that zram->zbud will do the trick? I have data from various tests (partially described here: https://lkml.org/lkml/2015/9/17/244) and once again, I'll post a reply to https://lkml.org/lkml/2015/9/15/33 with more detailed

Re: [PATCH 0/3] allow zram to use zbud as underlying allocator

2015-09-25 Thread Vitaly Wool
Hello Minchan, the main use case where I see unacceptably long stalls in UI with zsmalloc is switching between users in Android. There is a way to automate user creation and switching between them so the test I run both to get vmstat statistics and to profile stalls is to create a user, switch to

Re: [PATCH v2] zbud: allow up to PAGE_SIZE allocations

2015-09-25 Thread Vitaly Wool
On Fri, Sep 25, 2015 at 10:47 AM, Minchan Kim wrote: > On Fri, Sep 25, 2015 at 10:17:54AM +0200, Vitaly Wool wrote: >> >> > I already said questions, opinion and concerns but anything is not clear >> > until now. Only clear thing I could hear is just "compaction

Re: [PATCH 0/2] prepare zbud to be used by zram as underlying allocator

2015-09-21 Thread Vitaly Wool
Hello Minchan, > Sorry, because you wrote up "zram" in the title. > As I said earlier, we need several numbers to investigate. > > First of all, what is culprit of your latency? > It seems you are thinking about compaction. so compaction what? > Frequent scanning? lock collision? or frequent sleep

Re: [PATCH 1/2] zbud: allow PAGE_SIZE allocations

2015-09-22 Thread Vitaly Wool
Hi Dan, On Mon, Sep 21, 2015 at 6:17 PM, Dan Streetman wrote: > Please make sure to cc Seth also, he's the owner of zbud. Sure :) >> @@ -514,8 +552,17 @@ int zbud_reclaim_page(struct zbud_pool *pool, unsigned >> int retries) >> return -EINVAL; >> } >> for (i =

[PATCH v2] zbud: allow up to PAGE_SIZE allocations

2015-09-22 Thread Vitaly Wool
d 'under_reclaim' flag page->private will also be used to indicate if this page contains a zbud header in the beginning or not ('headless' flag). Signed-off-by: Vitaly Wool --- mm/zbud.c | 194 +- 1 file changed, 1

Re: [PATCH][next] mm/zswap: fix a couple of memory leaks and rework kzalloc failure check

2020-06-23 Thread Vitaly Wool
M > >> To: Colin King > >> Cc: Seth Jennings ; Dan Streetman > >> ; Vitaly Wool ; Andrew > >> Morton ; Song Bao Hua (Barry Song) > >> ; Stephen Rothwell ; > >> linux...@kvack.org; kernel-janit...@vger.kernel.org; > >> linux-kernel@vg

Re: [PATCH v6] mm/zswap: move to use crypto_acomp API for hardware acceleration

2020-09-28 Thread Vitaly Wool
drzej Siewior > Cc: Andrew Morton > Cc: Herbert Xu > Cc: David S. Miller > Cc: Mahipal Challa > Cc: Seth Jennings > Cc: Dan Streetman > Cc: Vitaly Wool > Cc: Zhou Wang > Cc: Hao Fang > Cc: Colin Ian King > Signed-off-by: Barry Song Acked-by: Vitaly Wool

Re: [PATCH V3 1/2] zpool: Add malloc_support_movable to zpool_driver

2019-06-05 Thread Vitaly Wool
Hi Shakeel, On Wed, Jun 5, 2019 at 6:31 PM Shakeel Butt wrote: > > On Wed, Jun 5, 2019 at 3:06 AM Hui Zhu wrote: > > > > As a zpool_driver, zsmalloc can allocate movable memory because it > > support migate pages. > > But zbud and z3fold cannot allocate movable memory. > > > > Cc: Vitaly thanks

Re: [PATCH v2] mm/zswap: move to use crypto_acomp API for hardware acceleration

2020-06-21 Thread Vitaly Wool
alves > Cc: Sebastian Andrzej Siewior > Cc: Andrew Morton > Cc: Herbert Xu > Cc: David S. Miller > Cc: Mahipal Challa > Cc: Seth Jennings > Cc: Dan Streetman > Cc: Vitaly Wool > Cc: Zhou Wang > Signed-off-by: Barry Song > --- > -v2: > rebase to 5.8-r

[PATCH] z3fold: fix page locking in z3fold_alloc()

2017-04-10 Thread Vitaly Wool
Stress testing of the current z3fold implementation on an 8-core system revealed it was possible that a z3fold page deleted from its unbuddied list in z3fold_alloc() would be put on another unbuddied list by z3fold_free() while z3fold_alloc() is still processing it. This has been introduced with com

[PATCH] z3fold: limit use of stale list for allocation

2018-02-10 Thread Vitaly Wool
Currently if z3fold couldn't find an unbuddied page it would first try to pull a page off the stale list. The problem with this approach is that we can't 100% guarantee that the page is not processed by the workqueue thread at the same time unless we run cancel_work_sync() on it, which we can't do

[PATCH] z3fold: use kref to prevent page free / compact race

2017-11-17 Thread Vitaly Wool
ased if its compaction is scheduled. It then becomes compaction function's responsibility to decrease the counter and quit immediately if the page was actually freed. Signed-off-by: Vitaly Wool Cc: stable --- mm/z3fold.c | 10 -- 1 file changed, 8 insertions(+), 2 deletions(-)

[PATCH/RFC] llist: add llist_[add|del_first]_exclusive

2017-12-18 Thread Vitaly Wool
llist_del_first_exclusive will delete the first node off the list and mark it as not being on any list. Signed-off-by: Vitaly Wool --- include/linux/llist.h | 25 + lib/llist.c | 29 + 2 files changed, 54 insertions(+) diff --git a/inc

[PATCH v2] llist: add llist_[add|del_first]_exclusive

2017-12-26 Thread Vitaly Wool
It sometimes is necessary to be able to use llist in the following manner: > if (node_unlisted(node)) > llist_add(node, list); i.e. only add a node to the list if it's not already on a list. This is not possible without taking locks because otherwise there's an obvio

z3fold: fix potential race in z3fold_reclaim_page

2017-09-13 Thread Vitaly Wool
bug. To avoid that, spin_lock() has to be taken earlier, before the kref_put() call mentioned earlier. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 486550df32be..b04fa3ba1bf2 100644 --- a/mm

[PATCH] z3fold: fix stale list handling

2017-09-14 Thread Vitaly Wool
Fix the situation when clear_bit() is called for page->private before the page pointer is actually assigned. While at it, remove the work_busy() check because it is costly and does not give a 100% guarantee anyway. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 6 ++ 1 file changed, 2 inserti

Re: [PATCH] z3fold: fix stale list handling

2017-09-15 Thread Vitaly Wool
Hi Andrew, 2017-09-14 23:15 GMT+02:00 Andrew Morton : > On Thu, 14 Sep 2017 15:59:36 +0200 Vitaly Wool wrote: > >> Fix the situation when clear_bit() is called for page->private before >> the page pointer is actually assigned. While at it, remove work_busy() >> che

[PATCH/RFC] ion: add movability support for page pools

2017-09-05 Thread Vitaly Wool
Signed-off-by: Vitaly Wool --- drivers/staging/android/ion/ion.h | 2 + drivers/staging/android/ion/ion_page_pool.c | 165 +++- 2 files changed, 163 insertions(+), 4 deletions(-) diff --git a/drivers/staging/android/ion/ion.h b/drivers/staging/android/ion/ion.h

Re: [PATCH/RFC] ion: add movability support for page pools

2017-09-06 Thread Vitaly Wool
2017-09-06 2:19 GMT+02:00 Laura Abbott : > On 09/05/2017 05:55 AM, Vitaly Wool wrote: >> ion page pool may become quite large and scattered all around >> the kernel memory area. These pages are actually not used so >> moving them around to reduce fragmentation is quite cheap

[PATCH v2] z3fold: use per-cpu unbuddied lists

2017-08-06 Thread Vitaly Wool
nb=50582KB/s, ... So we're in for almost 6x performance increase. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 479 +++- 1 file changed, 344 insertions(+), 135 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 54f63c4a809a..b4

[PATCH] z3fold: use per-cpu unbuddied lists

2017-08-02 Thread Vitaly Wool
performance will go up. This patch also introduces two worker threads: one for async in-page object layout optimization and one for releasing freed pages. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 479 +++- 1 file changed, 344 inser

Re: [PATCH] z3fold: remove the unnecessary limit in z3fold_compact_page

2016-10-14 Thread Vitaly Wool
On Fri, Oct 14, 2016 at 3:35 PM, zhongjiang wrote: > From: zhong jiang > > z3fold compact page has nothing with the last_chunks. even if > last_chunks is not free, compact page will proceed. > > The patch just remove the limit without functional change. > > Signed-off-by: zhong jiang > --- > mm

[PATCH v5] z3fold: add shrinker

2016-10-15 Thread Vitaly Wool
, maxb=2049KB/s, mint=200339msec, maxt=201154msec WRITE: io=1599.5MB, aggrb=8142KB/s, minb=2023KB/s, maxb=2062KB/s, mint=200343msec, maxt=201158msec Disk stats (read/write): zram0: ios=1637032/1639304, merge=0/0, ticks=175840/458740, in_queue=637140, util=82.48% Signed-off-by: Vitaly Wool

[PATCH v5 1/3] z3fold: make counters atomic

2016-10-15 Thread Vitaly Wool
This patch converts pages_nr per-pool counter to atomic64_t. It also introduces a new counter, unbuddied_nr, which is also atomic64_t, to track the number of unbuddied (shrinkable) pages, as a step to prepare for z3fold shrinker implementation. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 33

[PATCH v5 3/3] z3fold: add shrinker

2016-10-15 Thread Vitaly Wool
from the freeing path since we can rely on shrinker to do the job. Also, a new flag UNDER_COMPACTION is introduced to protect against two threads trying to compact the same page. This patch has been checked with the latest Linus's tree. Signed-off-by: Vitaly Wool --- mm/z3fold.c

[PATCH v5 2/3] z3fold: remove redundant locking

2016-10-15 Thread Vitaly Wool
The per-pool z3fold spinlock should generally be taken only when a non-atomic pool variable is modified. There's no need to take it to map/unmap an object. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 17 + 1 file changed, 5 insertions(+), 12 deletions(-) diff --git

Re: [PATCH v2] z3fold: fix the potential encode bug in encod_handle

2016-10-17 Thread Vitaly Wool
Hi Zhong Jiang, On Mon, Oct 17, 2016 at 3:58 AM, zhong jiang wrote: > Hi, Vitaly > > About the following patch, is it right? > > Thanks > zhongjiang > On 2016/10/13 12:02, zhongjiang wrote: >> From: zhong jiang >> >> At present, zhdr->first_num plus bud can exceed the BUDDY_MASK >> in encode_h

Re: [PATCH v5 3/3] z3fold: add shrinker

2016-10-17 Thread Vitaly Wool
Hi Dan, On Tue, Oct 18, 2016 at 4:06 AM, Dan Streetman wrote: > On Sat, Oct 15, 2016 at 8:05 AM, Vitaly Wool wrote: >> This patch implements shrinker for z3fold. This shrinker >> implementation does not free up any pages directly but it allows >> for a denser placement

Re: [PATCH v5 2/3] z3fold: remove redundant locking

2016-10-17 Thread Vitaly Wool
On Mon, Oct 17, 2016 at 10:48 PM, Dan Streetman wrote: > On Sat, Oct 15, 2016 at 7:59 AM, Vitaly Wool wrote: >> The per-pool z3fold spinlock should generally be taken only when >> a non-atomic pool variable is modified. There's no need to take it >> to map/unmap an obje

Re: [PATCH v5 3/3] z3fold: add shrinker

2016-10-18 Thread Vitaly Wool
On Tue, Oct 18, 2016 at 4:27 PM, Dan Streetman wrote: > On Mon, Oct 17, 2016 at 10:45 PM, Vitaly Wool wrote: >> Hi Dan, >> >> On Tue, Oct 18, 2016 at 4:06 AM, Dan Streetman wrote: >>> On Sat, Oct 15, 2016 at 8:05 AM, Vitaly Wool wrote: >>>> This

Re: [PATCH] z3fold: limit first_num to the actual range of possible buddy indexes

2016-10-18 Thread Vitaly Wool
of buddies. >> >> The patch limit the first_num to actual range of possible buddy indexes. >> and that is more reasonable and obvious without functional change. >> >> Suggested-by: Dan Streetman >> Signed-off-by: zhong jiang > > Acked-by: Dan Streetman Ac

Re: [PATCH v5] z3fold: add shrinker

2016-10-18 Thread Vitaly Wool
On Tue, Oct 18, 2016 at 7:35 PM, Dan Streetman wrote: > On Tue, Oct 18, 2016 at 12:26 PM, Vitaly Wool wrote: >> 18 окт. 2016 г. 18:29 пользователь "Dan Streetman" >> написал: >> >> >>> >>> On Tue, Oct 18, 2016 at 10:51 AM, Vitaly Wool

[PATCH 0/3] z3fold: background page compaction

2016-10-19 Thread Vitaly Wool
The patchset thus implements an in-page compaction worker for z3fold, preceded by some code optimizations and preparations which, again, deserved to be separate patches. Signed-off-by: Vitaly Wool [1] https://lkml.org/lkml/2016/10/15/31

[PATCH 2/3] z3fold: remove redundant locking

2016-10-19 Thread Vitaly Wool
The per-pool z3fold spinlock should generally be taken only when a non-atomic pool variable is modified. There's no need to take it to map/unmap an object. This patch introduces per-page lock that will be used instead to protect per-page variables in map/unmap functions. Signed-off-by: V

[PATCH 1/3] z3fold: make counters atomic

2016-10-19 Thread Vitaly Wool
This patch converts pages_nr per-pool counter to atomic64_t. It also introduces a new counter, unbuddied_nr, which is atomic64_t, too, to track the number of unbuddied (compactable) z3fold pages. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 33 + 1 file changed

[PATCH 3/3] z3fold: add compaction worker

2016-10-19 Thread Vitaly Wool
Linus's tree. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 159 ++-- 1 file changed, 133 insertions(+), 26 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 329bc26..580a732 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -27,6
