lieves-the-pressure-vitaly-wool-softprise-consulting-ou
[2] https://lkml.org/lkml/2016/4/21/799
Signed-off-by: Vitaly Wool
---
Documentation/vm/z3fold.txt | 27 ++
mm/Kconfig | 10 +
mm/Makefile | 1 +
mm/z3fold.c
On Mon, Apr 25, 2016 at 9:28 AM, Vlastimil Babka wrote:
> On 04/22/2016 01:22 AM, Andrew Morton wrote:
>>
>> So... why don't we just replace zbud with z3fold? (Update the changelog
>> to answer this rather obvious question, please!)
>
>
> There was discussion between Seth and Vitaly on v1. With
ism, lower latencies and
lower fragmentation, so in the coming patches I tried to generalize what I've
done to enable zbud for zram so far.
--
Vitaly Wool
_size configurable as a module parameter.
Signed-off-by: Vitaly Wool
---
drivers/block/zram/zram_drv.c | 13 +
drivers/block/zram/zram_drv.h | 16
2 files changed, 13 insertions(+), 16 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
As a preparation step for zram to be able to use common zpool API,
there has to be some alignment done on it. This patch adds
functions that correspond to zsmalloc-specific API to the common
zpool API and takes care of the callbacks that have to be
introduced, too.
Signed-off-by: Vitaly Wool
Update zram driver to use common zpool API instead of calling
zsmalloc functions directly. This patch also adds a parameter
that allows for changing underlying compressor storage to zbud.
Signed-off-by: Vitaly Wool
---
drivers/block/zram/Kconfig | 3 ++-
drivers/block/zram/zram_drv.c | 44
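As an aside, a backend-selection module parameter of this kind could be wired up roughly as in the sketch below. This is only an illustration under the assumption that the parameter is a fixed-size string later handed to the zpool layer; the name "backend" and the buffer size are assumptions, not necessarily the posted code.

    /* Minimal sketch of a backend-selection module parameter; the name
     * "backend" and the buffer size are assumptions for illustration. */
    #include <linux/module.h>
    #include <linux/moduleparam.h>

    #define BACKEND_PAR_BUF_SIZE 32

    static char backend_par_buf[BACKEND_PAR_BUF_SIZE] = "zsmalloc";
    module_param_string(backend, backend_par_buf, BACKEND_PAR_BUF_SIZE, 0444);
    MODULE_PARM_DESC(backend, "compressed storage backend (zsmalloc or zbud)");

The string would then be passed on to zpool when the device's pool is created, so no zsmalloc-specific calls remain in zram itself.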
On Mon, Sep 14, 2015 at 4:01 PM, Vlastimil Babka wrote:
>
> On 09/14/2015 03:49 PM, Vitaly Wool wrote:
>>
>> While using ZRAM on small RAM footprint devices, together with
>> KSM,
>> I ran into several occasions when moving pages from compressed swap back
>>
.
--
Vitaly Wool
be able to keep track of zbud pages in any case, struct page's
lru pointer will be used for zbud page lists instead of the one
that used to be part of the aforementioned internal structure.
Signed-off-by: Vitaly Wool
---
include/linux/page-flags.h | 3 ++
mm/zbud.c
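A minimal sketch of the idea, with made-up names, assuming the pool keeps its pages on a list threaded through struct page's own lru field:

    #include <linux/list.h>
    #include <linux/mm_types.h>
    #include <linux/spinlock.h>

    struct demo_zbud_pool {
            spinlock_t lock;
            struct list_head lru;   /* zbud pages, most recently used first */
    };

    static void demo_zbud_pool_init(struct demo_zbud_pool *pool)
    {
            spin_lock_init(&pool->lock);
            INIT_LIST_HEAD(&pool->lru);
    }

    static void demo_zbud_add_page(struct demo_zbud_pool *pool, struct page *page)
    {
            spin_lock(&pool->lock);
            /* reuse page->lru instead of a list_head in the per-page header */
            list_add(&page->lru, &pool->lru);
            spin_unlock(&pool->lock);
    }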
simplified 'compact' API/callbacks.
Signed-off-by: Vitaly Wool
---
drivers/block/zram/zram_drv.c | 4 ++--
include/linux/zpool.h | 14 ++
include/linux/zsmalloc.h | 8 ++--
mm/zbud.c | 12
mm/zpool.c
On Thu, Oct 1, 2015 at 9:52 AM, Vlastimil Babka wrote:
> On 09/30/2015 05:46 PM, Vitaly Wool wrote:
>>
>> On Wed, Sep 30, 2015 at 5:37 PM, Vlastimil Babka wrote:
>>>
>>> On 09/25/2015 11:54 AM, Vitaly Wool wrote:
>>>>
>>>>
>>>>
> Could you share your script?
> I will ask our production team to reproduce it.
Wait, let me get it right. Your production team?
I take it as you would like me to help your company fix your bugs.
You are pushing the limits here.
~vitaly
On Wed, Sep 30, 2015 at 10:13 AM, Minchan Kim wrote:
> On Wed, Sep 30, 2015 at 10:01:59AM +0200, Vitaly Wool wrote:
>> > Could you share your script?
>> > I will ask our production team to reproduce it.
>>
>> Wait, let me get it right. Your production team?
>
On Wed, Sep 9, 2015 at 2:39 PM, Mel Gorman wrote:
> On Tue, Sep 08, 2015 at 05:26:13PM +0900, Joonsoo Kim wrote:
>> 2015-08-24 21:30 GMT+09:00 Mel Gorman :
>> > The primary purpose of watermarks is to ensure that reclaim can always
>> > make forward progress in PF_MEMALLOC context (kswapd and dire
On Wed, Sep 30, 2015 at 3:52 PM, Vlastimil Babka wrote:
> On 09/30/2015 10:51 AM, Vitaly Wool wrote:
>>
>> On Wed, Sep 9, 2015 at 2:39 PM, Mel Gorman
>> wrote:
>>>
>>> On Tue, Sep 08, 2015 at 05:26:13PM +0900, Joonsoo Kim wrote:
>>>>
>>&
On Wed, Sep 30, 2015 at 5:37 PM, Vlastimil Babka wrote:
> On 09/25/2015 11:54 AM, Vitaly Wool wrote:
>>
>> Hello Minchan,
>>
>> the main use case where I see unacceptably long stalls in UI with
>> zsmalloc is switching between users in Android.
>> There
ready freed handles by using its
own local slots structure in z3fold_page_reclaim().
Reported-by: Markus Linnala
Reported-by: Chris Murphy
Reported-by: Agustin Dall'Alba
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 49 ++---
1 file changed, 34 inserti
On Sun, Sep 8, 2019 at 4:56 PM Maciej S. Szmigiero
wrote:
>
> On 08.09.2019 15:29, Vitaly Wool wrote:
> > z3fold_page_reclaim()'s retry mechanism is broken: on a second
> > iteration it will have zhdr from the first one so that zhdr
> > is no longer in line with struc
.
Reported-by: Agustín Dall'Alba
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 90 -
1 file changed, 90 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 75b7962439ff..ed19d98c9dcd 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@
ing a handle _that_ fast as
zswap_writeback_entry() does when it reads swpentry, the
suggestion is to keep the handle mapped till the end.
Signed-off-by: Vitaly Wool
---
mm/zswap.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index 0e22744
Currently there is a leak in init_z3fold_page() -- it allocates
handles from kmem cache even for headless pages, but then they are
never used and never freed, so eventually kmem cache may get
exhausted. This patch provides a fix for that.
Reported-by: Markus Linnala
Signed-off-by: Vitaly Wool
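The idea can be sketched as follows (structure and function names are made up for illustration; the real init_z3fold_page() differs):

    #include <linux/mm.h>
    #include <linux/slab.h>

    /* Trimmed-down stand-ins for the real z3fold structures. */
    struct demo_slots { unsigned long slot[4]; };
    struct demo_header { struct demo_slots *slots; };

    static struct demo_header *demo_init_page(struct page *page, bool headless,
                                              struct kmem_cache *slots_cache,
                                              gfp_t gfp)
    {
            struct demo_header *hdr = page_address(page);

            INIT_LIST_HEAD(&page->lru);
            if (headless)
                    return hdr;     /* headless page: no slots, nothing to leak */

            hdr->slots = kmem_cache_alloc(slots_cache, gfp);
            if (!hdr->slots)
                    return NULL;
            return hdr;
    }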
page faults (since
that page would have been reclaimed by then). Fix that by
claiming page in the beginning of z3fold_free() and not
forgetting to clear the claim in the end.
Reported-by: Markus Linnala
Signed-off-by: Vitaly Wool
Cc:
---
mm/z3fold.c | 10 --
1 file changed, 8 insertions(
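A rough sketch of that claim/clear pairing, assuming a claim bit stored in page->private (the bit name is illustrative):

    #include <linux/bitops.h>
    #include <linux/mm_types.h>

    #define DEMO_PAGE_CLAIMED 0     /* illustrative bit number in page->private */

    static bool demo_free_begin(struct page *page)
    {
            /* false means someone else (e.g. reclaim) already claimed the page */
            return !test_and_set_bit(DEMO_PAGE_CLAIMED, &page->private);
    }

    static void demo_free_end(struct page *page)
    {
            clear_bit(DEMO_PAGE_CLAIMED, &page->private);
    }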
From: Vitaly Wool
For each page scheduled for compaction (e.g. by z3fold_free()),
try to apply inter-page compaction before running the traditional/
existing intra-page compaction. That means, if the page has only one
buddy, we treat that buddy as a new object that we aim to place into
an
On Wed, Sep 18, 2019 at 9:35 AM Vlastimil Babka wrote:
>
> On 9/17/19 5:53 PM, Vitaly Wool wrote:
> > Currently there is a leak in init_z3fold_page() -- it allocates
> > handles from kmem cache even for headless pages, but then they are
> > never used and never freed, so e
page faults (since
that page would have been reclaimed by then). Fix that by
claiming page in the beginning of z3fold_free().
Reported-by: Markus Linnala
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
The coming patchset is a new take on the old issue: ZRAM can currently be used
only with zsmalloc even though this may not be the optimal combination for some
configurations. The previous (unsuccessful) attempt dates back to 2015 [1] and
is notable for the heated discussions it has caused.
The
's existence and the third one returns
the huge class size.
This API extension is done to align zpool API with zsmalloc API.
Signed-off-by: Vitaly Wool
---
include/linux/zpool.h | 14 +-
mm/zpool.c | 36
2 files changed, 49 insertions(
Add compaction callbacks for zpool compaction API extension.
Add huge_class_size callback too to be fully aligned.
With these in place, we can proceed with ZRAM modification
to use the universal (zpool) API.
Signed-off-by: Vitaly Wool
---
mm/zsmalloc.c | 21 +
1 file
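To illustrate the direction (the callback names below are assumptions based on this posting, not the upstream zpool API), zsmalloc already exports primitives that such callbacks could forward to:

    #include <linux/zsmalloc.h>

    struct demo_zpool_extra_ops {
            unsigned long (*compact)(void *pool);
            size_t (*huge_class_size)(void *pool);
    };

    static unsigned long demo_compact(void *pool)
    {
            return zs_compact(pool);                /* pages freed by compaction */
    }

    static size_t demo_huge_class_size(void *pool)
    {
            return zs_huge_class_size(pool);        /* objects above this are "huge" */
    }

    static const struct demo_zpool_extra_ops demo_ops = {
            .compact = demo_compact,
            .huge_class_size = demo_huge_class_size,
    };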
Signed-off-by: Vitaly Wool
---
drivers/block/zram/Kconfig | 3 ++-
drivers/block/zram/zram_drv.c | 64 +++
drivers/block/zram/zram_drv.h | 4 +--
3 files changed, 39 insertions(+), 32 deletions(-)
diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kco
Hi Sergey,
On Mon, Oct 14, 2019 at 12:35 PM Sergey Senozhatsky
wrote:
>
> Hi,
>
> On (10/10/19 23:04), Vitaly Wool wrote:
> [..]
> > The coming patchset is a new take on the old issue: ZRAM can
> > currently be used only with zsmalloc even though this may not
> &g
On Mon, Oct 14, 2019 at 12:49 PM Sergey Senozhatsky
wrote:
>
> On (10/10/19 23:20), Vitaly Wool wrote:
> [..]
> > static const char *default_compressor = "lzo-rle";
> >
> > +#define BACKEND_PAR_BUF_SIZE 32
> > +static char backend_par_buf[BACKEND_P
Hi Minchan,
On Mon, Oct 14, 2019 at 6:41 PM Minchan Kim wrote:
>
> On Thu, Oct 10, 2019 at 11:04:14PM +0300, Vitaly Wool wrote:
> > The coming patchset is a new take on the old issue: ZRAM can currently be
> > used only with zsmalloc even though this may not be the optimal co
On Tue, Oct 15, 2019 at 10:00 PM Minchan Kim wrote:
>
> On Tue, Oct 15, 2019 at 09:39:35AM +0200, Vitaly Wool wrote:
> > Hi Minchan,
> >
> > On Mon, Oct 14, 2019 at 6:41 PM Minchan Kim wrote:
> > >
> > > On Thu, Oct 10, 2019 at 11:04:14PM +0300, Vitaly
kmem_cache_alloc() may be called from z3fold_alloc() in atomic
context, so we need to pass correct gfp flags to avoid "scheduling
while atomic" bug.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/mm/z3f
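The gist can be sketched like this (helper name is made up): pick slab GFP flags that match what the caller can tolerate, so the slots allocation never sleeps when z3fold_alloc() runs in atomic context.

    #include <linux/gfp.h>
    #include <linux/slab.h>

    static inline gfp_t demo_slots_gfp(gfp_t caller_gfp)
    {
            /* if the caller cannot block, neither may the slab allocation */
            if (!gfpflags_allow_blocking(caller_gfp))
                    return GFP_ATOMIC;
            return GFP_KERNEL;
    }

    /* usage: kmem_cache_alloc(slots_cache, demo_slots_gfp(gfp)); */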
significantly better average compression ratio.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 328 +---
1 file changed, 285 insertions(+), 43 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 985732c8b025..d82bccc8bc90 100644
--- a/mm/z
On Sun, May 26, 2019 at 12:09 AM Andrew Morton
wrote:
> Forward-declaring inline functions is peculiar, but it does appear to work.
>
> z3fold is quite inline-happy. Fortunately the compiler will ignore the
> inline hint if it seems a bad idea. Even then, the below shrinks
> z3fold.o text from
significantly better average compression ratio.
Changes from v1:
* balanced use of inlining
* more comments in the key parts of code
* code rearranged to avoid forward declarations
* rwlock instead of seqlock
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 538 +
y related flags from the call to kmem_cache_alloc()
> for our slots since it is a kernel allocation.
>
> Signed-off-by: Henry Burns
Acked-by: Vitaly Wool
> ---
> mm/z3fold.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/z3fo
Hi Henry,
On Fri, Aug 9, 2019 at 6:46 PM Henry Burns wrote:
>
> In z3fold_destroy_pool() we call destroy_workqueue(&pool->compact_wq).
> However, we have no guarantee that migration isn't happening in the
> background at that time.
>
> Migration directly calls queue_work_on(pool->compact_wq), if dest
Hi Henry,
On Mon, Jul 1, 2019 at 8:31 PM Henry Burns wrote:
>
> Running z3fold stress testing with address sanitization
> showed zhdr->slots was being used after it was freed.
>
> z3fold_free(z3fold_pool, handle)
> free_handle(handle)
> kmem_cache_free(pool->c_handle, zhdr->slots)
> relea
e is
> passed in locked, as documentation.
>
> Signed-off-by: Henry Burns
> Suggested-by: Vitaly Wool
Acked-by: Vitaly Wool
Thanks!
> ---
> Changelog since v1:
> - Added an if statement around WARN_ON(trylock_page(page)) to avoid
>unlocking a page locked by a
On Wed, Jul 3, 2019 at 12:24 AM Andrew Morton wrote:
>
> On Tue, 2 Jul 2019 15:17:47 -0700 Henry Burns wrote:
>
> > > > > > + if (can_sleep) {
> > > > > > + lock_page(page);
> > > > > > + __SetPageMovable(page, pool->inode->i_mapping);
> > > > > > +
On Wed, Jul 3, 2019 at 12:18 AM Henry Burns wrote:
>
> On Tue, Jul 2, 2019 at 2:19 PM Andrew Morton
> wrote:
> >
> > On Mon, 1 Jul 2019 18:16:30 -0700 Henry Burns wrote:
> >
> > > Cc: Vitaly Wool , Vitaly Vul
> >
> > Are these the same person?
&
On Tue, Jul 2, 2019 at 6:57 PM Henry Burns wrote:
>
> On Tue, Jul 2, 2019 at 12:45 AM Vitaly Wool wrote:
> >
> > Hi Henry,
> >
> > On Mon, Jul 1, 2019 at 8:31 PM Henry Burns wrote:
> > >
> > > Running z3fold stress testing with address sanitizati
From fd87fdc38ea195e5a694102a57bd4d59fc177433 Mon Sep 17 00:00:00 2001
From: Vitaly Wool
Date: Mon, 8 Jul 2019 13:41:02 +0200
[PATCH] mm/z3fold: don't try to use buddy slots after free
As reported by Henry Burns:
Running z3fold stress testing with address sanitization
showed zhdr->slots was being used after it was freed.
Hi Jongseok,
On Thu, May 3, 2018 at 08:36, Jongseok Kim wrote:
> In the processing of headless pages, there was a problem that the
> zhdr pointed to another page or a page was alread released in
> z3fold_free(). So, the wrong page is encoded in headless, or test_bit
> does not work properly in z3
Hey Guenter,
On 04/13/2018 07:56 PM, Guenter Roeck wrote:
On Fri, Apr 13, 2018 at 05:40:18PM +, Vitaly Wool wrote:
On Fri, Apr 13, 2018, 7:35 PM Guenter Roeck wrote:
On Fri, Apr 13, 2018 at 05:21:02AM +, Vitaly Wool wrote:
Hi Guenter,
On Fri, Apr 13, 2018 at 00:01, Guenter
On 4/16/18 5:58 PM, Guenter Roeck wrote:
On Mon, Apr 16, 2018 at 02:43:01PM +0200, Vitaly Wool wrote:
Hey Guenter,
On 04/13/2018 07:56 PM, Guenter Roeck wrote:
On Fri, Apr 13, 2018 at 05:40:18PM +, Vitaly Wool wrote:
On Fri, Apr 13, 2018, 7:35 PM Guenter Roeck wrote:
On Fri, Apr 13
Hi Guenter,
> [ ... ]
> > Ugh. Could you please keep that patch and apply this on top:
> >
> > diff --git a/mm/z3fold.c b/mm/z3fold.c
> > index c0bca6153b95..e8a80d044d9e 100644
> > --- a/mm/z3fold.c
> > +++ b/mm/z3fold.c
> > @@ -840,6 +840,7 @@ static int z3fold_reclaim_page(struct z3fold_pool
On Tue, Apr 17, 2018 at 18:35, Guenter Roeck wrote:
> Getting better; the log is much less noisy. Unfortunately, there are still
> locking problems, resulting in a hung task. I copied the log message to [1].
> This is with [2] applied on top of v4.17-rc1.
Now this version (this is a full patch
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 108
1 file changed, 57 insertions(+), 51 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 729a2da..8dcf35e 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -52,6 +52,7 @@ enum buddy
This is a new take on z3fold optimizations/fixes consolidation, revised after
comments from Dan ([1] - [6]).
The coming patches are to be applied on top of the following commit:
Author: zhong jiang
Date: Tue Dec 20 11:53:40 2016 +1100
mm/z3fold.c: limit first_num to the actual range of p
This patch converts pages_nr per-pool counter to atomic64_t.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 20 +---
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 207e5dd..2273789 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -80,7
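For reference, the conversion amounts to keeping the counter as an atomic64_t that can be updated without the pool lock; a minimal standalone sketch (names are illustrative):

    #include <linux/atomic.h>

    struct demo_pool_stats {
            atomic64_t pages_nr;    /* total pages owned by the pool */
    };

    static void demo_stats_init(struct demo_pool_stats *s)
    {
            atomic64_set(&s->pages_nr, 0);
    }

    static void demo_page_added(struct demo_pool_stats *s)
    {
            atomic64_inc(&s->pages_nr);     /* no pool->lock required */
    }

    static void demo_page_removed(struct demo_pool_stats *s)
    {
            atomic64_dec(&s->pages_nr);
    }

    static s64 demo_pages_nr(struct demo_pool_stats *s)
    {
            return atomic64_read(&s->pages_nr);
    }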
num_free_chunks() and the address to
move the middle chunk to in case of in-page compaction in
z3fold_compact_page().
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 114 ++--
1 file changed, 64 insertions(+), 50 deletions(-)
diff --git a/mm/z3fold.c
implements a spinlock-based per-page locking mechanism which
is lightweight enough to normally fit into the z3fold header.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 148 +++-
1 file changed, 106 insertions(+), 42 deletions(-)
diff --git a/mm
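In outline (names are illustrative, not the actual code), the lock lives in the per-page header and small helpers wrap it:

    #include <linux/spinlock.h>

    struct demo_page_header {
            spinlock_t page_lock;           /* protects this page's chunks/buddies */
            unsigned short first_chunks;
            unsigned short middle_chunks;
            unsigned short last_chunks;
    };

    static inline void demo_header_init(struct demo_page_header *hdr)
    {
            spin_lock_init(&hdr->page_lock);
            hdr->first_chunks = hdr->middle_chunks = hdr->last_chunks = 0;
    }

    static inline void demo_page_lock(struct demo_page_header *hdr)
    {
            spin_lock(&hdr->page_lock);
    }

    static inline void demo_page_unlock(struct demo_page_header *hdr)
    {
            spin_unlock(&hdr->page_lock);
    }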
code, using the BIG_CHUNK_GAP define as
a threshold for the middle chunk to be worth moving.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 26 +-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 98ab01f..be8b56e 100644
--- a/mm/z3fold.c
With both the coming and the already present locking optimizations,
introducing kref to reference-count z3fold objects is the right
thing to do. Moreover, it makes the buddied list unnecessary
and allows for simpler handling of headless pages.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 151
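The refcounting pattern, sketched with made-up names (in this standalone demo the header is assumed to be kmalloc()ed, unlike the real in-page header):

    #include <linux/kernel.h>
    #include <linux/kref.h>
    #include <linux/slab.h>

    struct demo_hdr {
            struct kref refcount;
    };

    static void demo_hdr_init(struct demo_hdr *hdr)
    {
            kref_init(&hdr->refcount);      /* starts at 1 for the creator */
    }

    static void demo_hdr_release(struct kref *ref)
    {
            struct demo_hdr *hdr = container_of(ref, struct demo_hdr, refcount);

            kfree(hdr);                     /* last user gone */
    }

    static void demo_hdr_get(struct demo_hdr *hdr)
    {
            kref_get(&hdr->refcount);
    }

    static int demo_hdr_put(struct demo_hdr *hdr)
    {
            return kref_put(&hdr->refcount, demo_hdr_release);
    }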
patchset thus implements in-page compaction worker for
z3fold, preceded by some code optimizations and preparations
which, again, deserved to be separate patches.
Changes compared to v2:
- more accurate accounting of unbuddied_nr, per Dan's
comments
- various cleanups.
Signed-off-by: Vitaly Wool
=280249KB/s, maxb=281130KB/s,
mint=839218msec, maxt=841856msec
Run status group 1 (all jobs):
READ: io=2700.0GB, aggrb=5210.7MB/s, minb=444640KB/s, maxb=447791KB/s,
mint=526874msec, maxt=530607msec
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 44
This patch converts pages_nr per-pool counter to atomic64_t.
It also introduces a new counter, unbuddied_nr, which is
atomic64_t, too, to track the number of unbuddied (compactable)
z3fold pages.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 33 +
1 file changed
Linus's tree.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 166 ++--
1 file changed, 140 insertions(+), 26 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 014d84f..cc26ff5 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -27,6
ed to spinlocks
- no read/write locks, just per-page spinlock
[1] https://lkml.org/lkml/2016/11/5/59
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 123 +---
1 file changed, 85 insertions(+), 38 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3f
ed to spinlocks
- no read/write locks, just per-page spinlock
Changes from v2 [2]:
- if a page is taken off its list by z3fold_alloc(), bail out from
z3fold_free() early
[1] https://lkml.org/lkml/2016/11/5/59
[2] https://lkml.org/lkml/2016/11/8/400
Signed-off-by: Vitaly Wool
---
mm/z3fold.c
This patch implements a shrinker for z3fold. This shrinker
implementation does not free up any pages directly, but it allows
for a denser placement of compressed objects, which results in
fewer actual pages consumed and therefore a higher compression ratio.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c
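Roughly, the count side of such a shrinker would report the number of compactable (unbuddied) pages, and the scan side would compact them rather than free them outright. A sketch against the classic (pre-6.7) struct shrinker interface, with assumed field names:

    #include <linux/atomic.h>
    #include <linux/kernel.h>
    #include <linux/shrinker.h>

    struct demo_pool {
            atomic64_t unbuddied_nr;        /* pages that could be compacted */
            struct shrinker shrinker;
    };

    static unsigned long demo_count(struct shrinker *sh, struct shrink_control *sc)
    {
            struct demo_pool *pool = container_of(sh, struct demo_pool, shrinker);

            return atomic64_read(&pool->unbuddied_nr);
    }

    static void demo_shrinker_setup(struct demo_pool *pool)
    {
            pool->shrinker.count_objects = demo_count;
            /* .scan_objects would walk the unbuddied lists and compact pages;
             * registration is omitted since register_shrinker()'s signature
             * differs between kernel versions. */
            pool->shrinker.seeks = DEFAULT_SEEKS;
    }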
On Tue, Oct 11, 2016 at 11:36 PM, Dave Chinner wrote:
> On Tue, Oct 11, 2016 at 11:14:08PM +0200, Vitaly Wool wrote:
>> This patch implements shrinker for z3fold. This shrinker
>> implementation does not free up any pages directly but it allows
>> for a denser placement
latest Linus's tree.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 151 ++--
1 file changed, 127 insertions(+), 24 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 8f9e89c..4841972 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@
On Wed, 12 Oct 2016 09:52:06 +1100
Dave Chinner wrote:
>
> > +static unsigned long z3fold_shrink_scan(struct shrinker *shrink,
> > + struct shrink_control *sc)
> > +{
> > + struct z3fold_pool *pool = container_of(shrink, struct z3fold_pool,
> > +
Linus's tree.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 157 ++--
1 file changed, 132 insertions(+), 25 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 8f9e89c..8d35b4a 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -30,6
On Thu, 13 Oct 2016 11:20:06 +1100
Dave Chinner wrote:
>
> That's an incorrect assumption. Long spinlock holds prevent
> scheduling on that CPU, and so we still get latency problems.
Fair enough. The problem is, some of the z3fold code that needs mutual
exclusion runs with preemption disabled s
The per-pool z3fold spinlock should generally be taken only when
a non-atomic pool variable is modified. There's no need to take it
to map/unmap an object.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 17 +
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git
Linus's tree.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 136 +---
1 file changed, 111 insertions(+), 25 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 10513b5..0b2a0d3 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -27,6
This patch set implements a shrinker for z3fold. The actual shrinker
implementation will follow some code optimizations and preparations
that I thought would be reasonable to have as separate patches.
This patch converts pages_nr per-pool counter to atomic64_t.
It also introduces a new counter, unbuddied_nr, which is also
atomic64_t, to track the number of unbuddied (shrinkable) pages,
as a step to prepare for z3fold shrinker implementation.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 33
On Tue, Nov 15, 2016 at 1:33 AM, Andrew Morton
wrote:
> On Fri, 11 Nov 2016 14:02:07 +0100 Vitaly Wool wrote:
>
>> If a z3fold page couldn't be compacted, we don't want it to be
>> used for next object allocation in the first place. It makes more
>> sense to
The coming patchset has the per-page spinlock as its main
modification, plus two smaller dependent patches: one removes the
build error triggered when the z3fold header size exceeds the size
of a chunk, and the other puts non-compacted pages at the end of
the unbuddied list.
Signed-off-by: Vitaly Wool
z3fold_alloc(), bail out from
z3fold_free() early
Changes from v3 [3]:
- spinlock changed to raw spinlock to avoid BUILD_BUG_ON trigger
[1] https://lkml.org/lkml/2016/11/5/59
[2] https://lkml.org/lkml/2016/11/8/400
[3] https://lkml.org/lkml/2016/11/9/146
Signed-off-by: Vitaly Wool
---
mm
stead.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 7ad70fa..ffd9353 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -870,10 +870,15 @@ MODULE_ALIAS("zpool-z3fold");
static int __i
idea gives 5-7% improvement in randrw fio tests and
about 10% improvement in fio sequential read/write.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 22 +-
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index ffd9353..e282ba0 100644
x86_64 with gcc
6.0) and non-obvious performance benefits
- instead, per-pool spinlock is substituted with rwlock.
Signed-off-by: Vitaly Wool
[1] https://lkml.org/lkml/2016/10/15/31
This patch converts pages_nr per-pool counter to atomic64_t.
It also introduces a new counter, unbuddied_nr, which is
atomic64_t, too, to track the number of unbuddied (compactable)
z3fold pages.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 33 +
1 file changed
Mapping/unmapping does not modify anything in the pool, so it makes
sense to take only a read lock in the map/unmap functions.
This change gives up to 15% performance gain in fio tests.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 44 +++-
1 file changed, 23
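The locking split can be sketched as follows (assumed names; the handle translation is a placeholder): readers map/unmap under read_lock(), while paths that modify pool state keep taking write_lock().

    #include <linux/spinlock.h>

    struct demo_rw_pool {
            rwlock_t lock;
    };

    static void demo_rw_pool_init(struct demo_rw_pool *pool)
    {
            rwlock_init(&pool->lock);
    }

    static void *demo_map(struct demo_rw_pool *pool, unsigned long handle)
    {
            void *addr;

            read_lock(&pool->lock);         /* mapping modifies no pool state */
            addr = (void *)handle;          /* placeholder for the real lookup */
            read_unlock(&pool->lock);
            return addr;
    }

    static void demo_modify(struct demo_rw_pool *pool)
    {
            write_lock(&pool->lock);        /* alloc/free still serialize writers */
            /* ... update pool lists and counters ... */
            write_unlock(&pool->lock);
    }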
Linus's tree.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 166 ++--
1 file changed, 140 insertions(+), 26 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 014d84f..cc26ff5 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -27,6
On Thu, Oct 20, 2016 at 10:17 PM, Dan Streetman wrote:
> On Wed, Oct 19, 2016 at 12:35 PM, Vitaly Wool wrote:
>> This patch converts pages_nr per-pool counter to atomic64_t.
>> It also introduces a new counter, unbuddied_nr, which is
>> atomic64_t, too, to track th
On Thu, Oct 20, 2016 at 10:15 PM, Dan Streetman wrote:
> On Wed, Oct 19, 2016 at 12:35 PM, Vitaly Wool wrote:
>> The per-pool z3fold spinlock should generally be taken only when
>> a non-atomic pool variable is modified. There's no need to take it
>> to map/unmap an obj
z3fold_alloc(), bail out from
z3fold_free() early
Changes from v3 [3]:
- spinlock changed to raw spinlock to avoid BUILD_BUG_ON trigger
[1] https://lkml.org/lkml/2016/11/5/59
[2] https://lkml.org/lkml/2016/11/8/400
[3] https://lkml.org/lkml/2016/11/9/146
Signed-off-by: Vitaly Wool
---
mm/z3fold.c
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index cd3713d..5fe2652 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -866,10 +866,15 @@ MODULE_ALIAS("zpool-z3fold");
static int __init i
idea gives 5-7% improvement in randrw fio tests and
about 10% improvement in fio sequential read/write.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 32 ++--
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 5fe2652..eb8f
‘long unsigned int’ [-Werror=format=]
>
> Fixes: 50a50d2676c4 ("z3fold: don't fail kernel build if z3fold_header is too
> big")
> Signed-off-by: Arnd Bergmann
Acked-by: Vitaly Wool
And thanks :)
~vitaly
> ---
> mm/z3fold.c | 2 +-
> 1 file changed, 1 inser
Hi Joe,
On Thu, Nov 24, 2016 at 6:08 PM, Joe Perches wrote:
> On Thu, 2016-11-24 at 17:31 +0100, Arnd Bergmann wrote:
>> Printing a size_t requires the %zd format rather than %d:
>>
>> mm/z3fold.c: In function ‘init_z3fold’:
>> include/linux/kern_levels.h:4:18: error: format ‘%d’ expects argument
On Fri, Nov 25, 2016 at 9:41 AM, Arnd Bergmann wrote:
> On Friday, November 25, 2016 8:38:25 AM CET Vitaly Wool wrote:
>> >> diff --git a/mm/z3fold.c b/mm/z3fold.c
>> >> index e282ba073e77..66ac7a7dc934 100644
>> >> --- a/mm/z3fold.c
>> >> +++
On Fri, Nov 25, 2016 at 4:59 PM, Dan Streetman wrote:
> On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool wrote:
>> Currently the whole kernel build will be stopped if the size of
>> struct z3fold_header is greater than the size of one chunk, which
>> is 64 bytes by default.
On Fri, Nov 25, 2016 at 10:17 PM, Dan Streetman wrote:
> On Fri, Nov 25, 2016 at 9:43 AM, Dan Streetman wrote:
>> On Thu, Nov 3, 2016 at 5:04 PM, Vitaly Wool wrote:
>>> z3fold_compact_page() currently only handles the situation when
>>> there's a single mid
On Fri, Nov 25, 2016 at 7:33 PM, Dan Streetman wrote:
> On Fri, Nov 25, 2016 at 11:25 AM, Vitaly Wool wrote:
>> On Fri, Nov 25, 2016 at 4:59 PM, Dan Streetman wrote:
>>> On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool wrote:
>>>> Currently the whole kernel build
use of pages that weren't compacted") and
applied the coming 2 instead.
Signed-off-by: Vitaly Wool
[1] https://lkml.org/lkml/2016/11/25/595
s too big").
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 161
1 file changed, 87 insertions(+), 74 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 7ad70fa..efbcfcc 100644
--- a/mm/z3fold.c
+++ b/mm/
lru entry).
[1] https://lkml.org/lkml/2016/11/25/628
[2] http://www.spinics.net/lists/linux-mm/msg117227.html
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index efbcfcc..729a2da 100644
On Fri, Nov 25, 2016 at 7:25 PM, Dan Streetman wrote:
> On Tue, Nov 15, 2016 at 11:00 AM, Vitaly Wool wrote:
>> If a z3fold page couldn't be compacted, we don't want it to be
>> used for next object allocation in the first place.
>
> why? !compacted can only mean
On Tue, Nov 1, 2016 at 9:03 PM, Dan Streetman wrote:
> On Thu, Oct 27, 2016 at 7:08 AM, Vitaly Wool wrote:
>> This patch converts pages_nr per-pool counter to atomic64_t.
>> It also introduces a new counter, unbuddied_nr, which is
>> atomic64_t, too, to track the number of u
On Tue, Nov 1, 2016 at 9:16 PM, Dan Streetman wrote:
> On Thu, Oct 27, 2016 at 7:12 AM, Vitaly Wool wrote:
>> Mapping/unmapping goes with no actual modifications so it makes
>> sense to only take a read lock in map/unmap functions.
>>
>> This change gives up to 10%
On Thu, Dec 22, 2016 at 10:55 PM, Dan Streetman wrote:
> On Sun, Dec 18, 2016 at 3:15 AM, Vitaly Wool wrote:
>> On Tue, Nov 29, 2016 at 11:39 PM, Andrew Morton
>> wrote:
>>> On Tue, 29 Nov 2016 17:33:19 -0500 Dan Streetman wrote:
>>>
>>>> On Sat