[PATCH v3] z3fold: the 3-fold allocator for compressed pages

2016-04-25 Thread Vitaly Wool
lieves-the-pressure-vitaly-wool-softprise-consulting-ou [2] https://lkml.org/lkml/2016/4/21/799 Signed-off-by: Vitaly Wool --- Documentation/vm/z3fold.txt | 27 ++ mm/Kconfig | 10 + mm/Makefile | 1 + mm/z3fold.c

Re: [PATCH v2] z3fold: the 3-fold allocator for compressed pages

2016-04-25 Thread Vitaly Wool
On Mon, Apr 25, 2016 at 9:28 AM, Vlastimil Babka wrote: > On 04/22/2016 01:22 AM, Andrew Morton wrote: >> >> So... why don't we just replace zbud with z3fold? (Update the changelog >> to answer this rather obvious question, please!) > > > There was discussion between Seth and Vitaly on v1. With

[PATCH 0/3] allow zram to use zbud as underlying allocator

2015-09-14 Thread Vitaly Wool
ism, lower latencies and lower fragmentation, so in the coming patches I tried to generalize what I've done to enable zbud for zram so far. -- Vitaly Wool -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majord...@vger.kerne

[PATCH 1/3] zram: make max_zpage_size configurable

2015-09-14 Thread Vitaly Wool
max_zpage_size configurable as a module parameter. Signed-off-by: Vitaly Wool --- drivers/block/zram/zram_drv.c | 13 + drivers/block/zram/zram_drv.h | 16 2 files changed, 13 insertions(+), 16 deletions(-) diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c

[PATCH 2/3] zpool/zsmalloc/zbud: align on interfaces

2015-09-14 Thread Vitaly Wool
As a preparation step for zram to be able to use common zpool API, there has to be some alignment done on it. This patch adds functions that correspond to zsmalloc-specific API to the common zpool API and takes care of the callbacks that have to be introduced, too. Signed-off-by: Vitaly Wool

[PATCH 3/3] zram: use common zpool interface

2015-09-14 Thread Vitaly Wool
Update zram driver to use common zpool API instead of calling zsmalloc functions directly. This patch also adds a parameter that allows for changing underlying compressor storage to zbud. Signed-off-by: Vitaly Wool --- drivers/block/zram/Kconfig| 3 ++- drivers/block/zram/zram_drv.c | 44

Re: [PATCH 0/3] allow zram to use zbud as underlying allocator

2015-09-14 Thread Vitaly Wool
On Mon, Sep 14, 2015 at 4:01 PM, Vlastimil Babka wrote: > > On 09/14/2015 03:49 PM, Vitaly Wool wrote: >> >> While using ZRAM on a small RAM footprint devices, together with >> KSM, >> I ran into several occasions when moving pages from compressed swap back >>

[PATCH 0/2] prepare zbud to be used by zram as underlying allocator

2015-09-16 Thread Vitaly Wool
. -- Vitaly Wool -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/

[PATCH 1/2] zbud: allow PAGE_SIZE allocations

2015-09-16 Thread Vitaly Wool
be able to keep track of zbud pages in any case, struct page's lru pointer will be used for zbud page lists instead of the one that used to be part of the aforementioned internal structure. Signed-off-by: Vitaly Wool --- include/linux/page-flags.h | 3 ++ mm/zbud.c

[PATCH 2/2] zpool/zsmalloc/zbud: align on interfaces

2015-09-16 Thread Vitaly Wool
simplified 'compact' API/callbacks. Signed-off-by: Vitaly Wool --- drivers/block/zram/zram_drv.c | 4 ++-- include/linux/zpool.h | 14 ++ include/linux/zsmalloc.h | 8 ++-- mm/zbud.c | 12 mm/zpool.c

Re: [PATCH 0/3] allow zram to use zbud as underlying allocator

2015-10-10 Thread Vitaly Wool
On Thu, Oct 1, 2015 at 9:52 AM, Vlastimil Babka wrote: > On 09/30/2015 05:46 PM, Vitaly Wool wrote: >> >> On Wed, Sep 30, 2015 at 5:37 PM, Vlastimil Babka wrote: >>> >>> On 09/25/2015 11:54 AM, Vitaly Wool wrote: >>>> >>>> >>>>

Re: [PATCH 0/3] allow zram to use zbud as underlying allocator

2015-09-30 Thread Vitaly Wool
> Could you share your script? > I will ask our production team to reproduce it. Wait, let me get it right. Your production team? I take it as you would like me to help your company fix your bugs. You are pushing the limits here. ~vitaly -- To unsubscribe from this list: send the line "unsubscrib

Re: [PATCH 0/3] allow zram to use zbud as underlying allocator

2015-09-30 Thread Vitaly Wool
On Wed, Sep 30, 2015 at 10:13 AM, Minchan Kim wrote: > On Wed, Sep 30, 2015 at 10:01:59AM +0200, Vitaly Wool wrote: >> > Could you share your script? >> > I will ask our production team to reproduce it. >> >> Wait, let me get it right. Your production team? >

Re: [PATCH 12/12] mm, page_alloc: Only enforce watermarks for order-0 allocations

2015-09-30 Thread Vitaly Wool
On Wed, Sep 9, 2015 at 2:39 PM, Mel Gorman wrote: > On Tue, Sep 08, 2015 at 05:26:13PM +0900, Joonsoo Kim wrote: >> 2015-08-24 21:30 GMT+09:00 Mel Gorman : >> > The primary purpose of watermarks is to ensure that reclaim can always >> > make forward progress in PF_MEMALLOC context (kswapd and dire

Re: [PATCH 12/12] mm, page_alloc: Only enforce watermarks for order-0 allocations

2015-09-30 Thread Vitaly Wool
On Wed, Sep 30, 2015 at 3:52 PM, Vlastimil Babka wrote: > On 09/30/2015 10:51 AM, Vitaly Wool wrote: >> >> On Wed, Sep 9, 2015 at 2:39 PM, Mel Gorman >> wrote: >>> >>> On Tue, Sep 08, 2015 at 05:26:13PM +0900, Joonsoo Kim wrote: >>>> >>&

Re: [PATCH 0/3] allow zram to use zbud as underlying allocator

2015-09-30 Thread Vitaly Wool
On Wed, Sep 30, 2015 at 5:37 PM, Vlastimil Babka wrote: > On 09/25/2015 11:54 AM, Vitaly Wool wrote: >> >> Hello Minchan, >> >> the main use case where I see unacceptably long stalls in UI with >> zsmalloc is switching between users in Android. >> There

[PATCH] z3fold: fix retry mechanism in page reclaim

2019-09-08 Thread Vitaly Wool
already freed handles by using own local slots structure in z3fold_page_reclaim(). Reported-by: Markus Linnala Reported-by: Chris Murphy Reported-by: Agustin Dall'Alba Signed-off-by: Vitaly Wool --- mm/z3fold.c | 49 ++--- 1 file changed, 34 inserti

Re: [PATCH] z3fold: fix retry mechanism in page reclaim

2019-09-08 Thread Vitaly Wool
On Sun, Sep 8, 2019 at 4:56 PM Maciej S. Szmigiero wrote: > > On 08.09.2019 15:29, Vitaly Wool wrote: > > z3fold_page_reclaim()'s retry mechanism is broken: on a second > > iteration it will have zhdr from the first one so that zhdr > > is no longer in line with struc

[PATCH] Revert "mm/z3fold.c: fix race between migration and destruction"

2019-09-10 Thread Vitaly Wool
. Reported-by: Agustín Dall'Alba Signed-off-by: Vitaly Wool --- mm/z3fold.c | 90 - 1 file changed, 90 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 75b7962439ff..ed19d98c9dcd 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@

[PATCH/RFC] zswap: do not map same object twice

2019-09-15 Thread Vitaly Wool
ing a handle _that_ fast as zswap_writeback_entry() does when it reads swpentry, the suggestion is to keep the handle mapped till the end. Signed-off-by: Vitaly Wool --- mm/zswap.c | 7 +++ 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/mm/zswap.c b/mm/zswap.c index 0e22744

[PATCH] z3fold: fix memory leak in kmem cache

2019-09-17 Thread Vitaly Wool
Currently there is a leak in init_z3fold_page() -- it allocates handles from kmem cache even for headless pages, but then they are never used and never freed, so eventually kmem cache may get exhausted. This patch provides a fix for that. Reported-by: Markus Linnala Signed-off-by: Vitaly Wool

[PATCH v2] z3fold: claim page in the beginning of free

2019-09-28 Thread Vitaly Wool
page faults (since that page would have been reclaimed by then). Fix that by claiming page in the beginning of z3fold_free() and not forgetting to clear the claim in the end. Reported-by: Markus Linnala Signed-off-by: Vitaly Wool Cc: --- mm/z3fold.c | 10 -- 1 file changed, 8 insertions(

[PATCH] z3fold: add inter-page compaction

2019-10-05 Thread Vitaly Wool
From: Vitaly Wool For each page scheduled for compaction (e. g. by z3fold_free()), try to apply inter-page compaction before running the traditional/ existing intra-page compaction. That means, if the page has only one buddy, we treat that buddy as a new object that we aim to place into an

Re: [PATCH] z3fold: fix memory leak in kmem cache

2019-09-19 Thread Vitaly Wool
On Wed, Sep 18, 2019 at 9:35 AM Vlastimil Babka wrote: > > On 9/17/19 5:53 PM, Vitaly Wool wrote: > > Currently there is a leak in init_z3fold_page() -- it allocates > > handles from kmem cache even for headless pages, but then they are > > never used and never freed, so e

[PATCH] z3fold: claim page in the beginning of free

2019-09-26 Thread Vitaly Wool
page faults (since that page would have been reclaimed by then). Fix that by claiming page in the beginning of z3fold_free(). Reported-by: Markus Linnala Signed-off-by: Vitaly Wool --- mm/z3fold.c | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c

[PATCH 0/3] Allow ZRAM to use any zpool-compatible backend

2019-10-10 Thread Vitaly Wool
The coming patchset is a new take on the old issue: ZRAM can currently be used only with zsmalloc even though this may not be the optimal combination for some configurations. The previous (unsuccessful) attempt dates back to 2015 [1] and is notable for the heated discussions it has caused. The

[PATCH 1/3] zpool: extend API to match zsmalloc

2019-10-10 Thread Vitaly Wool
's existence and the third one returns the huge class size. This API extension is done to align zpool API with zsmalloc API. Signed-off-by: Vitaly Wool --- include/linux/zpool.h | 14 +- mm/zpool.c| 36 2 files changed, 49 insertions(

[PATCH 2/3] zsmalloc: add compaction and huge class callbacks

2019-10-10 Thread Vitaly Wool
Add compaction callbacks for zpool compaction API extension. Add huge_class_size callback too to be fully aligned. With these in place, we can proceed with ZRAM modification to use the universal (zpool) API. Signed-off-by: Vitaly Wool --- mm/zsmalloc.c | 21 + 1 file

[PATCH 3/3] zram: use common zpool interface

2019-10-10 Thread Vitaly Wool
Signed-off-by: Vitaly Wool --- drivers/block/zram/Kconfig| 3 ++- drivers/block/zram/zram_drv.c | 64 +++ drivers/block/zram/zram_drv.h | 4 +-- 3 files changed, 39 insertions(+), 32 deletions(-) diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kco

Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend

2019-10-14 Thread Vitaly Wool
Hi Sergey, On Mon, Oct 14, 2019 at 12:35 PM Sergey Senozhatsky wrote: > > Hi, > > On (10/10/19 23:04), Vitaly Wool wrote: > [..] > > The coming patchset is a new take on the old issue: ZRAM can > > currently be used only with zsmalloc even though this may not > &g

Re: [PATCH 3/3] zram: use common zpool interface

2019-10-14 Thread Vitaly Wool
On Mon, Oct 14, 2019 at 12:49 PM Sergey Senozhatsky wrote: > > On (10/10/19 23:20), Vitaly Wool wrote: > [..] > > static const char *default_compressor = "lzo-rle"; > > > > +#define BACKEND_PAR_BUF_SIZE 32 > > +static char backend_par_buf[BACKEND_P

Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend

2019-10-15 Thread Vitaly Wool
Hi Minchan, On Mon, Oct 14, 2019 at 6:41 PM Minchan Kim wrote: > > On Thu, Oct 10, 2019 at 11:04:14PM +0300, Vitaly Wool wrote: > > The coming patchset is a new take on the old issue: ZRAM can currently be > > used only with zsmalloc even though this may not be the optimal co

Re: [PATCH 0/3] Allow ZRAM to use any zpool-compatible backend

2019-10-21 Thread Vitaly Wool
On Tue, Oct 15, 2019 at 10:00 PM Minchan Kim wrote: > > On Tue, Oct 15, 2019 at 09:39:35AM +0200, Vitaly Wool wrote: > > Hi Minchan, > > > > On Mon, Oct 14, 2019 at 6:41 PM Minchan Kim wrote: > > > > > > On Thu, Oct 10, 2019 at 11:04:14PM +0300, Vitaly

[PATCH] z3fold: fix scheduling while atomic

2019-05-23 Thread Vitaly Wool
kmem_cache_alloc() may be called from z3fold_alloc() in atomic context, so we need to pass correct gfp flags to avoid "scheduling while atomic" bug. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 11 ++- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/mm/z3f

[PATCH] z3fold: add inter-page compaction

2019-05-24 Thread Vitaly Wool
significantly better average compression ratio. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 328 +--- 1 file changed, 285 insertions(+), 43 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 985732c8b025..d82bccc8bc90 100644 --- a/mm/z

Re: [PATCH] z3fold: add inter-page compaction

2019-05-27 Thread Vitaly Wool
On Sun, May 26, 2019 at 12:09 AM Andrew Morton wrote: > Forward-declaring inline functions is peculiar, but it does appear to work. > > z3fold is quite inline-happy. Fortunately the compiler will ignore the > inline hint if it seems a bad idea. Even then, the below shrinks > z3fold.o text from

[PATCH v2] z3fold: add inter-page compaction

2019-05-27 Thread Vitaly Wool
significantly better average compression ratio. Changes from v1: * balanced use of inlining * more comments in the key parts of code * code rearranged to avoid forward declarations * rwlock instead of seqlock Signed-off-by: Vitaly Wool --- mm/z3fold.c | 538 +

Re: [PATCH] mm/z3fold.c: Allow __GFP_HIGHMEM in z3fold_alloc

2019-07-13 Thread Vitaly Wool
y related flags from the call to kmem_cache_alloc() > for our slots since it is a kernel allocation. > > Signed-off-by: Henry Burns Acked-by: Vitaly Wool > --- > mm/z3fold.c | 5 +++-- > 1 file changed, 3 insertions(+), 2 deletions(-) > > diff --git a/mm/z3fo

Re: [PATCH] mm/z3fold.c: Fix race between migration and destruction

2019-08-10 Thread Vitaly Wool
Hi Henry, Den fre 9 aug. 2019 6:46 emHenry Burns skrev: > > In z3fold_destroy_pool() we call destroy_workqueue(&pool->compact_wq). > However, we have no guarantee that migration isn't happening in the > background at that time. > > Migration directly calls queue_work_on(pool->compact_wq), if dest

Re: [PATCH] mm/z3fold: Fix z3fold_buddy_slots use after free

2019-07-02 Thread Vitaly Wool
Hi Henry, On Mon, Jul 1, 2019 at 8:31 PM Henry Burns wrote: > > Running z3fold stress testing with address sanitization > showed zhdr->slots was being used after it was freed. > > z3fold_free(z3fold_pool, handle) > free_handle(handle) > kmem_cache_free(pool->c_handle, zhdr->slots) > relea

Re: [PATCH v2] mm/z3fold.c: Lock z3fold page before __SetPageMovable()

2019-07-02 Thread Vitaly Wool
e is > passed in locked, as documentation. > > Signed-off-by: Henry Burns > Suggested-by: Vitaly Wool Acked-by: Vitaly Wool Thanks! > --- > Changelog since v1: > - Added an if statement around WARN_ON(trylock_page(page)) to avoid >unlocking a page locked by a

Re: [PATCH v2] mm/z3fold.c: Lock z3fold page before __SetPageMovable()

2019-07-02 Thread Vitaly Wool
On Wed, Jul 3, 2019 at 12:24 AM Andrew Morton wrote: > > On Tue, 2 Jul 2019 15:17:47 -0700 Henry Burns wrote: > > > > > > > + if (can_sleep) { > > > > > > + lock_page(page); > > > > > > + __SetPageMovable(page, pool->inode->i_mapping); > > > > > > +

Re: [PATCH v2] mm/z3fold.c: Lock z3fold page before __SetPageMovable()

2019-07-02 Thread Vitaly Wool
On Wed, Jul 3, 2019 at 12:18 AM Henry Burns wrote: > > On Tue, Jul 2, 2019 at 2:19 PM Andrew Morton > wrote: > > > > On Mon, 1 Jul 2019 18:16:30 -0700 Henry Burns wrote: > > > > > Cc: Vitaly Wool , Vitaly Vul > > > > Are these the same person? &

Re: [PATCH] mm/z3fold: Fix z3fold_buddy_slots use after free

2019-07-02 Thread Vitaly Wool
On Tue, Jul 2, 2019 at 6:57 PM Henry Burns wrote: > > On Tue, Jul 2, 2019 at 12:45 AM Vitaly Wool wrote: > > > > Hi Henry, > > > > On Mon, Jul 1, 2019 at 8:31 PM Henry Burns wrote: > > > > > > Running z3fold stress testing with address sanitizati

[PATCH] mm/z3fold.c: don't try to use buddy slots after free

2019-07-08 Thread Vitaly Wool
From fd87fdc38ea195e5a694102a57bd4d59fc177433 Mon Sep 17 00:00:00 2001 From: Vitaly Wool Date: Mon, 8 Jul 2019 13:41:02 +0200 [PATCH] mm/z3fold: don't try to use buddy slots after free As reported by Henry Burns: Running z3fold stress testing with address sanitization showed zhdr->

Re: [PATCH v2] z3fold: fix reclaim lock-ups

2018-05-06 Thread Vitaly Wool
Hi Jongseok, Den tors 3 maj 2018 kl 08:36 skrev Jongseok Kim : > In the processing of headless pages, there was a problem that the > zhdr pointed to another page or a page was alread released in > z3fold_free(). So, the wrong page is encoded in headless, or test_bit > does not work properly in z3

Re: Crashes/hung tasks with z3pool under memory pressure

2018-04-16 Thread Vitaly Wool
Hey Guenter, On 04/13/2018 07:56 PM, Guenter Roeck wrote: On Fri, Apr 13, 2018 at 05:40:18PM +, Vitaly Wool wrote: On Fri, Apr 13, 2018, 7:35 PM Guenter Roeck wrote: On Fri, Apr 13, 2018 at 05:21:02AM +, Vitaly Wool wrote: Hi Guenter, Den fre 13 apr. 2018 kl 00:01 skrev Guenter

Re: Crashes/hung tasks with z3pool under memory pressure

2018-04-16 Thread Vitaly Wool
On 4/16/18 5:58 PM, Guenter Roeck wrote: On Mon, Apr 16, 2018 at 02:43:01PM +0200, Vitaly Wool wrote: Hey Guenter, On 04/13/2018 07:56 PM, Guenter Roeck wrote: On Fri, Apr 13, 2018 at 05:40:18PM +, Vitaly Wool wrote: On Fri, Apr 13, 2018, 7:35 PM Guenter Roeck wrote: On Fri, Apr 13

Re: Crashes/hung tasks with z3pool under memory pressure

2018-04-17 Thread Vitaly Wool
Hi Guenter, > [ ... ] > > Ugh. Could you please keep that patch and apply this on top: > > > > diff --git a/mm/z3fold.c b/mm/z3fold.c > > index c0bca6153b95..e8a80d044d9e 100644 > > --- a/mm/z3fold.c > > +++ b/mm/z3fold.c > > @@ -840,6 +840,7 @@ static int z3fold_reclaim_page(struct z3fold_pool

Re: Crashes/hung tasks with z3pool under memory pressure

2018-04-18 Thread Vitaly Wool
Den tis 17 apr. 2018 kl 18:35 skrev Guenter Roeck : > Getting better; the log is much less noisy. Unfortunately, there are still > locking problems, resulting in a hung task. I copied the log message to [1]. > This is with [2] applied on top of v4.17-rc1. Now this version (this is a full patch

[PATCH/RFC] z3fold: add kref refcounting

2016-12-08 Thread Vitaly Wool
: Vitaly Wool --- mm/z3fold.c | 108 1 file changed, 57 insertions(+), 51 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 729a2da..8dcf35e 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -52,6 +52,7 @@ enum buddy

[PATCH/RESEND v3 0/5] z3fold: optimizations and fixes

2017-01-31 Thread Vitaly Wool
This is a new take on z3fold optimizations/fixes consolidation, revised after comments from Dan ([1] - [6]). The coming patches are to be applied on top of the following commit: Author: zhong jiang Date: Tue Dec 20 11:53:40 2016 +1100 mm/z3fold.c: limit first_num to the actual range of p

[PATCH/RESEND v3 1/5] z3fold: make pages_nr atomic

2017-01-31 Thread Vitaly Wool
This patch converts pages_nr per-pool counter to atomic64_t. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 20 +--- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 207e5dd..2273789 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -80,7

[PATCH/RESEND v3 2/5] z3fold: fix header size related issues

2017-01-31 Thread Vitaly Wool
num_free_chunks() and the address to move the middle chunk to in case of in-page compaction in z3fold_compact_page(). Signed-off-by: Vitaly Wool --- mm/z3fold.c | 114 ++-- 1 file changed, 64 insertions(+), 50 deletions(-) diff --git a/mm/z3fold.c

[PATCH/RESEND v3 4/5] z3fold: use per-page spinlock

2017-01-31 Thread Vitaly Wool
implements spinlock-based per-page locking mechanism which is lightweight enough to normally fit ok into the z3fold header. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 148 +++- 1 file changed, 106 insertions(+), 42 deletions(-) diff --git a/mm

[PATCH/RESEND v3 3/5] z3fold: extend compaction function

2017-01-31 Thread Vitaly Wool
code, using BIG_CHUNK_GAP define as a threshold for middle chunk to be worth moving. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 26 +- 1 file changed, 25 insertions(+), 1 deletion(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 98ab01f..be8b56e 100644 --- a/mm/z3fold.c

[PATCH/RESEND v3 5/5] z3fold: add kref refcounting

2017-01-31 Thread Vitaly Wool
With both coming and already present locking optimizations, introducing kref to reference-count z3fold objects is the right thing to do. Moreover, it makes buddied list no longer necessary, and allows for a simpler handling of headless pages. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 151

[PATCHv3 0/3] z3fold: background page compaction

2016-10-27 Thread Vitaly Wool
patchset thus implements in-page compaction worker for z3fold, preceded by some code optimizations and preparations which, again, deserved to be separate patches. Changes compared to v2: - more accurate accounting of unbuddied_nr, per Dan's comments - various cleanups. Signed-off-by: V

[PATCHv3 2/3] z3fold: change per-pool spinlock to rwlock

2016-10-27 Thread Vitaly Wool
=280249KB/s, maxb=281130KB/s, mint=839218msec, maxt=841856msec Run status group 1 (all jobs): READ: io=2700.0GB, aggrb=5210.7MB/s, minb=444640KB/s, maxb=447791KB/s, mint=526874msec, maxt=530607msec Signed-off-by: Vitaly Wool --- mm/z3fold.c | 44

[PATCHv3 1/3] z3fold: make counters atomic

2016-10-27 Thread Vitaly Wool
This patch converts pages_nr per-pool counter to atomic64_t. It also introduces a new counter, unbuddied_nr, which is atomic64_t, too, to track the number of unbuddied (compactable) z3fold pages. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 33 + 1 file changed

[PATCHv3 3/3] z3fold: add compaction worker

2016-10-27 Thread Vitaly Wool
Linus's tree. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 166 ++-- 1 file changed, 140 insertions(+), 26 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 014d84f..cc26ff5 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -27,6

[PATCH/RFC v2] z3fold: use per-page read/write lock

2016-11-08 Thread Vitaly Wool
ed to spinlocks - no read/write locks, just per-page spinlock [1] https://lkml.org/lkml/2016/11/5/59 Signed-off-by: Vitaly Wool --- mm/z3fold.c | 123 +--- 1 file changed, 85 insertions(+), 38 deletions(-) diff --git a/mm/z3fold.c b/mm/z3f

[PATCH v3] z3fold: use per-page read/write lock

2016-11-09 Thread Vitaly Wool
ed to spinlocks - no read/write locks, just per-page spinlock Changes from v2 [2]: - if a page is taken off its list by z3fold_alloc(), bail out from z3fold_free() early [1] https://lkml.org/lkml/2016/11/5/59 [2] https://lkml.org/lkml/2016/11/8/400 Signed-off-by: Vitaly Wool --- mm/z3fold.c

[PATCH] z3fold: add shrinker

2016-10-11 Thread Vitaly Wool
This patch implements shrinker for z3fold. This shrinker implementation does not free up any pages directly but it allows for a denser placement of compressed objects which results in less actual pages consumed and higher compression ratio therefore. Signed-off-by: Vitaly Wool --- mm/z3fold.c

Re: [PATCH] z3fold: add shrinker

2016-10-11 Thread Vitaly Wool
On Tue, Oct 11, 2016 at 11:36 PM, Dave Chinner wrote: > On Tue, Oct 11, 2016 at 11:14:08PM +0200, Vitaly Wool wrote: >> This patch implements shrinker for z3fold. This shrinker >> implementation does not free up any pages directly but it allows >> for a denser placement

[PATCH v2] z3fold: add shrinker

2016-10-11 Thread Vitaly Wool
latest Linus's tree. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 151 ++-- 1 file changed, 127 insertions(+), 24 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 8f9e89c..4841972 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@

Re: [PATCH v2] z3fold: add shrinker

2016-10-12 Thread Vitaly Wool
On Wed, 12 Oct 2016 09:52:06 +1100 Dave Chinner wrote: > > > +static unsigned long z3fold_shrink_scan(struct shrinker *shrink, > > + struct shrink_control *sc) > > +{ > > + struct z3fold_pool *pool = container_of(shrink, struct z3fold_pool, > > +

[PATCH v3] z3fold: add shrinker

2016-10-12 Thread Vitaly Wool
Linus's tree. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 157 ++-- 1 file changed, 132 insertions(+), 25 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 8f9e89c..8d35b4a 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -30,6

Re: [PATCH v2] z3fold: add shrinker

2016-10-13 Thread Vitaly Wool
On Thu, 13 Oct 2016 11:20:06 +1100 Dave Chinner wrote: > > That's an incorrect assumption. Long spinlock holds prevent > scheduling on that CPU, and so we still get latency problems. Fair enough. The problem is, some of the z3fold code that need mutual exclusion runs with preemption disabled s

[PATCH 2/3] z3fold: remove redundant locking

2016-10-13 Thread Vitaly Wool
The per-pool z3fold spinlock should generally be taken only when a non-atomic pool variable is modified. There's no need to take it to map/unmap an object. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 17 + 1 file changed, 5 insertions(+), 12 deletions(-) diff --git

[PATCH 3/3] z3fold: add shrinker

2016-10-13 Thread Vitaly Wool
Linus's tree. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 136 +--- 1 file changed, 111 insertions(+), 25 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 10513b5..0b2a0d3 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -27,6

[PATCHv4 0/3] z3fold: add shrinker

2016-10-13 Thread Vitaly Wool
This patch set implements shrinker for z3fold. The actual shrinker implementation will follow some code optimizations and preparations that I thought would be reasonable to have as separate patches.

[PATCH 1/3] z3fold: make counters atomic

2016-10-13 Thread Vitaly Wool
This patch converts pages_nr per-pool counter to atomic64_t. It also introduces a new counter, unbuddied_nr, which is also atomic64_t, to track the number of unbuddied (shrinkable) pages, as a step to prepare for z3fold shrinker implementation. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 33

Re: [PATCH] z3fold: discourage use of pages that weren't compacted

2016-11-15 Thread Vitaly Wool
On Tue, Nov 15, 2016 at 1:33 AM, Andrew Morton wrote: > On Fri, 11 Nov 2016 14:02:07 +0100 Vitaly Wool wrote: > >> If a z3fold page couldn't be compacted, we don't want it to be >> used for next object allocation in the first place. It makes more >> sense to

[PATCH 0/3] z3fold: per-page spinlock and other smaller optimizations

2016-11-15 Thread Vitaly Wool
Coming is the patchset with the per-page spinlock as the main modification, and two smaller dependent patches, one of which removes build error when the z3fold header size exceeds the size of a chunk, and the other puts non-compacted pages to the end of the unbuddied list. Signed-off-by: Vitaly

[PATCH 1/3] z3fold: use per-page spinlock

2016-11-15 Thread Vitaly Wool
z3fold_alloc(), bail out from z3fold_free() early Changes from v3 [3]: - spinlock changed to raw spinlock to avoid BUILD_BUG_ON trigger [1] https://lkml.org/lkml/2016/11/5/59 [2] https://lkml.org/lkml/2016/11/8/400 [3] https://lkml.org/lkml/2016/11/9/146 Signed-off-by: Vitaly Wool --- mm

[PATCH 2/3] z3fold: don't fail kernel build if z3fold_header is too big

2016-11-15 Thread Vitaly Wool
instead. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 11 --- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 7ad70fa..ffd9353 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -870,10 +870,15 @@ MODULE_ALIAS("zpool-z3fold"); static int __i

[PATCH 3/3] z3fold: discourage use of pages that weren't compacted

2016-11-15 Thread Vitaly Wool
idea gives 5-7% improvement in randrw fio tests and about 10% improvement in fio sequential read/write. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 22 +- 1 file changed, 17 insertions(+), 5 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index ffd9353..e282ba0 10064

[PATCHv2 0/3] z3fold: background page compaction

2016-10-20 Thread Vitaly Wool
x86_64 with gcc 6.0) and non-obvious performance benefits - instead, per-pool spinlock is substituted with rwlock. Signed-off-by: Vitaly Wool [1] https://lkml.org/lkml/2016/10/15/31

[PATCHv2 1/3] z3fold: make counters atomic

2016-10-20 Thread Vitaly Wool
This patch converts pages_nr per-pool counter to atomic64_t. It also introduces a new counter, unbuddied_nr, which is atomic64_t, too, to track the number of unbuddied (compactable) z3fold pages. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 33 + 1 file changed

[PATCHv2 2/3] z3fold: change per-pool spinlock to rwlock

2016-10-20 Thread Vitaly Wool
Mapping/unmapping goes with no actual modifications so it makes sense to only take a read lock in map/unmap functions. This change gives up to 15% performance gain in fio tests. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 44 +++- 1 file changed, 23

[PATCHv2 3/3] z3fold: add compaction worker

2016-10-20 Thread Vitaly Wool
Linus's tree. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 166 ++-- 1 file changed, 140 insertions(+), 26 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 014d84f..cc26ff5 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -27,6

Re: [PATCH 1/3] z3fold: make counters atomic

2016-10-22 Thread Vitaly Wool
On Thu, Oct 20, 2016 at 10:17 PM, Dan Streetman wrote: > On Wed, Oct 19, 2016 at 12:35 PM, Vitaly Wool wrote: >> This patch converts pages_nr per-pool counter to atomic64_t. >> It also introduces a new counter, unbuddied_nr, which is >> atomic64_t, too, to track th

Re: [PATCH 2/3] z3fold: remove redundant locking

2016-10-22 Thread Vitaly Wool
On Thu, Oct 20, 2016 at 10:15 PM, Dan Streetman wrote: > On Wed, Oct 19, 2016 at 12:35 PM, Vitaly Wool wrote: >> The per-pool z3fold spinlock should generally be taken only when >> a non-atomic pool variable is modified. There's no need to take it >> to map/unmap an obj

[PATCH v4] z3fold: use per-page spinlock

2016-11-09 Thread Vitaly Wool
z3fold_alloc(), bail out from z3fold_free() early Changes from v3 [3]: - spinlock changed to raw spinlock to avoid BUILD_BUG_ON trigger [1] https://lkml.org/lkml/2016/11/5/59 [2] https://lkml.org/lkml/2016/11/8/400 [3] https://lkml.org/lkml/2016/11/9/146 Signed-off-by: Vitaly Wool --- mm/z3fold.c

[PATCH] z3fold: don't fail kernel build if z3fold_header is too big

2016-11-09 Thread Vitaly Wool
Signed-off-by: Vitaly Wool --- mm/z3fold.c | 11 --- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index cd3713d..5fe2652 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -866,10 +866,15 @@ MODULE_ALIAS("zpool-z3fold"); static int __init i

[PATCH] z3fold: discourage use of pages that weren't compacted

2016-11-11 Thread Vitaly Wool
idea gives 5-7% improvement in randrw fio tests and about 10% improvement in fio sequential read/write. Signed-off-by: Vitaly Wool --- mm/z3fold.c | 32 ++-- 1 file changed, 22 insertions(+), 10 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 5fe2652..eb8f

Re: [PATCH] z3fold: use %z modifier for format string

2016-11-24 Thread Vitaly Wool
‘long unsigned int’ [-Werror=format=] > > Fixes: 50a50d2676c4 ("z3fold: don't fail kernel build if z3fold_header is too > big") > Signed-off-by: Arnd Bergmann Acked-by: Vitaly Wool And thanks :) ~vitaly > --- > mm/z3fold.c | 2 +- > 1 file changed, 1 inser

Re: [PATCH] z3fold: use %z modifier for format string

2016-11-24 Thread Vitaly Wool
Hi Joe, On Thu, Nov 24, 2016 at 6:08 PM, Joe Perches wrote: > On Thu, 2016-11-24 at 17:31 +0100, Arnd Bergmann wrote: >> Printing a size_t requires the %zd format rather than %d: >> >> mm/z3fold.c: In function ‘init_z3fold’: >> include/linux/kern_levels.h:4:18: error: format ‘%d’ expects argument

Re: [PATCH] z3fold: use %z modifier for format string

2016-11-25 Thread Vitaly Wool
On Fri, Nov 25, 2016 at 9:41 AM, Arnd Bergmann wrote: > On Friday, November 25, 2016 8:38:25 AM CET Vitaly Wool wrote: >> >> diff --git a/mm/z3fold.c b/mm/z3fold.c >> >> index e282ba073e77..66ac7a7dc934 100644 >> >> --- a/mm/z3fold.c >> >> +++

Re: [PATCH 2/3] z3fold: don't fail kernel build if z3fold_header is too big

2016-11-25 Thread Vitaly Wool
On Fri, Nov 25, 2016 at 4:59 PM, Dan Streetman wrote: > On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool wrote: >> Currently the whole kernel build will be stopped if the size of >> struct z3fold_header is greater than the size of one chunk, which >> is 64 bytes by default.

Re: [PATH] z3fold: extend compaction function

2016-11-26 Thread Vitaly Wool
On Fri, Nov 25, 2016 at 10:17 PM, Dan Streetman wrote: > On Fri, Nov 25, 2016 at 9:43 AM, Dan Streetman wrote: >> On Thu, Nov 3, 2016 at 5:04 PM, Vitaly Wool wrote: >>> z3fold_compact_page() currently only handles the situation when >>> there's a single mid

Re: [PATCH 2/3] z3fold: don't fail kernel build if z3fold_header is too big

2016-11-26 Thread Vitaly Wool
On Fri, Nov 25, 2016 at 7:33 PM, Dan Streetman wrote: > On Fri, Nov 25, 2016 at 11:25 AM, Vitaly Wool wrote: >> On Fri, Nov 25, 2016 at 4:59 PM, Dan Streetman wrote: >>> On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool wrote: >>>> Currently the whole kernel build

[PATCH 0/2] z3fold fixes

2016-11-26 Thread Vitaly Wool
use of pages that weren't compacted") and applied the coming 2 instead. Signed-off-by: Vitaly Wool [1] https://lkml.org/lkml/2016/11/25/595

[PATCH 1/2] z3fold: fix header size related issues

2016-11-26 Thread Vitaly Wool
s too big"). Signed-off-by: Vitaly Wool --- mm/z3fold.c | 161 1 file changed, 87 insertions(+), 74 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index 7ad70fa..efbcfcc 100644 --- a/mm/z3fold.c +++ b/mm/

[PATCH 2/2] z3fold: fix locking issues

2016-11-26 Thread Vitaly Wool
lru entry). [1] https://lkml.org/lkml/2016/11/25/628 [2] http://www.spinics.net/lists/linux-mm/msg117227.html Signed-off-by: Vitaly Wool --- mm/z3fold.c | 18 -- 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/mm/z3fold.c b/mm/z3fold.c index efbcfcc..729a2da 10064

Re: [PATCH 3/3] z3fold: discourage use of pages that weren't compacted

2016-11-28 Thread Vitaly Wool
On Fri, Nov 25, 2016 at 7:25 PM, Dan Streetman wrote: > On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool wrote: >> If a z3fold page couldn't be compacted, we don't want it to be >> used for next object allocation in the first place. > > why? !compacted can only mean

Re: [PATCHv3 1/3] z3fold: make counters atomic

2016-11-01 Thread Vitaly Wool
On Tue, Nov 1, 2016 at 9:03 PM, Dan Streetman wrote: > On Thu, Oct 27, 2016 at 7:08 AM, Vitaly Wool wrote: >> This patch converts pages_nr per-pool counter to atomic64_t. >> It also introduces a new counter, unbuddied_nr, which is >> atomic64_t, too, to track the number of u

Re: [PATCHv3 2/3] z3fold: change per-pool spinlock to rwlock

2016-11-01 Thread Vitaly Wool
On Tue, Nov 1, 2016 at 9:16 PM, Dan Streetman wrote: > On Thu, Oct 27, 2016 at 7:12 AM, Vitaly Wool wrote: >> Mapping/unmapping goes with no actual modifications so it makes >> sense to only take a read lock in map/unmap functions. >> >> This change gives up to 10%

Re: [PATCH 0/2] z3fold fixes

2016-12-22 Thread Vitaly Wool
On Thu, Dec 22, 2016 at 10:55 PM, Dan Streetman wrote: > On Sun, Dec 18, 2016 at 3:15 AM, Vitaly Wool wrote: >> On Tue, Nov 29, 2016 at 11:39 PM, Andrew Morton >> wrote: >>> On Tue, 29 Nov 2016 17:33:19 -0500 Dan Streetman wrote: >>> >>>> On Sat
