This is a consolidation of z3fold optimizations and fixes done so far, revised
after comments from Dan [1].
The coming patches are to be applied on top of the following commit:
commit 07cfe852286d5e314f8cd19781444e12a2b6cdf3
Author: zhong jiang
Date: Tue Dec 20 11:53:40 2016 +1100
mm/z3fold.c: limit first_num to the actual range of possible buddy indexes
Convert pages_nr per-pool counter to atomic64_t so that we won't have
to care about locking for reading/updating it.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 20 +---
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 20
y due to fewer actual page allocations on the hot path thanks to denser
in-page allocation).
This patch adds the relevant code, using the BIG_CHUNK_GAP define as a
threshold for the middle chunk to be worth moving.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c
implements a raw-spinlock-based per-page locking mechanism which
is lightweight enough to normally fit into the z3fold header.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 148 +++-
1 file changed, 106 insertions(+), 42 deletions(-)
diff --git
num_free_chunks() and the address to
move the middle chunk to in case of in-page compaction in
z3fold_compact_page().
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 161
1 file changed, 87 insertions(+), 74 deletions(-)
diff --git a/mm/z3fold.c
With both coming and already present locking optimizations,
introducing kref to reference-count z3fold objects is the right
thing to do. Moreover, it makes the buddied list no longer necessary,
and allows for a simpler handling of headless pages.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 137
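For illustration, here is a minimal sketch of what kref-based refcounting of
z3fold pages amounts to; the field and helper names are simplified and
illustrative, not the literal patch contents:

        #include <linux/kref.h>
        #include <linux/list.h>
        #include <linux/mm.h>

        struct z3fold_header {
                struct list_head buddy;         /* per-pool list linkage */
                struct kref refcount;           /* one ref per user of the page */
                /* ... chunk bookkeeping ... */
        };

        static void release_z3fold_page(struct kref *ref)
        {
                struct z3fold_header *zhdr = container_of(ref, struct z3fold_header,
                                                          refcount);

                /* last reference gone: take the page off its list and free it */
                list_del_init(&zhdr->buddy);
                __free_page(virt_to_page(zhdr));
        }

        static void get_z3fold_page(struct z3fold_header *zhdr)
        {
                kref_get(&zhdr->refcount);
        }

        static void put_z3fold_page(struct z3fold_header *zhdr)
        {
                kref_put(&zhdr->refcount, release_z3fold_page);
        }

With every code path holding its own reference, the separate buddied list and
the special-casing of headless pages become unnecessary, which is the
simplification the description above refers to.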
On Tue, Nov 29, 2016 at 11:39 PM, Andrew Morton
wrote:
> On Tue, 29 Nov 2016 17:33:19 -0500 Dan Streetman wrote:
>
>> On Sat, Nov 26, 2016 at 2:15 PM, Vitaly Wool wrote:
>> > Here come 2 patches with z3fold fixes for chunks counting and locking. As
>> > commi
The patch "z3fold: add kref refcounting" introduced a bug in
z3fold_reclaim_page() with function exit that may leave pool->lock
spinlock held. Here comes the trivial fix.
Reported-by: Alexey Khoroshilov
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 1 +
1 file changed, 1 insertion(+)
Hi Minchan,
On Thu, Jun 16, 2016 at 1:17 AM, Minchan Kim wrote:
> On Wed, Jun 15, 2016 at 10:42:07PM +0800, Geliang Tang wrote:
>> Change zram to use the zpool api instead of directly using zsmalloc.
>> The zpool api doesn't have zs_compact() and zs_pool_stats() functions.
>> I did the following
This patch converts pages_nr per-pool counter to atomic64_t.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 26 +++---
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 8f9e89c..4d02280 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
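The conversion is mechanical: the lock-protected counter becomes an atomic64_t
and pool->lock is no longer taken around it. A hedged sketch of the resulting
pattern (not the literal diff; the accessor names are illustrative):

        #include <linux/atomic.h>

        struct z3fold_pool {
                /* ... */
                atomic64_t pages_nr;    /* was a plain counter under pool->lock */
        };

        static void z3fold_account_page_added(struct z3fold_pool *pool)
        {
                atomic64_inc(&pool->pages_nr);  /* no pool->lock needed */
        }

        static void z3fold_account_page_freed(struct z3fold_pool *pool)
        {
                atomic64_dec(&pool->pages_nr);
        }

        static u64 z3fold_get_pool_size(struct z3fold_pool *pool)
        {
                return atomic64_read(&pool->pages_nr);
        }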
code, using the BIG_CHUNK_GAP define as
a threshold for the middle chunk to be worth moving.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 60 +++-
1 file changed, 47 insertions(+), 13 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 4d
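Schematically, the compaction check compares the gap between the middle chunk
and what precedes it against the BIG_CHUNK_GAP threshold, and only then pays
for the memmove. A simplified sketch, assuming the usual z3fold header fields
(first_chunks, middle_chunks, last_chunks, start_middle) and chunk macros
(CHUNK_SHIFT, ZHDR_CHUNKS for the number of chunks the header occupies); the
threshold value is illustrative:

        #define BIG_CHUNK_GAP   3       /* illustrative threshold, in chunks */

        /* returns 1 if the middle chunk was moved */
        static int z3fold_compact_page_sketch(struct z3fold_header *zhdr)
        {
                if (zhdr->middle_chunks == 0)
                        return 0;       /* no middle chunk, nothing to do */

                if (zhdr->first_chunks == 0 && zhdr->last_chunks == 0) {
                        /* lone middle chunk: always worth moving right after
                         * the header to merge the free space */
                        memmove((void *)zhdr + (ZHDR_CHUNKS << CHUNK_SHIFT),
                                (void *)zhdr + (zhdr->start_middle << CHUNK_SHIFT),
                                zhdr->middle_chunks << CHUNK_SHIFT);
                        zhdr->start_middle = ZHDR_CHUNKS;
                        return 1;
                }

                /* middle next to first: moving is only worth it if the gap
                 * between them is at least BIG_CHUNK_GAP chunks (the
                 * symmetric last+middle case is analogous) */
                if (zhdr->first_chunks != 0 && zhdr->last_chunks == 0 &&
                    zhdr->start_middle - (zhdr->first_chunks + ZHDR_CHUNKS) >=
                                BIG_CHUNK_GAP) {
                        /* memmove the middle chunk down to
                         * first_chunks + ZHDR_CHUNKS and update start_middle */
                        return 1;
                }

                return 0;
        }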
On Thu, Nov 3, 2016 at 10:14 PM, Andrew Morton
wrote:
> On Thu, 3 Nov 2016 22:00:58 +0100 Vitaly Wool wrote:
>
>> This patch converts pages_nr per-pool counter to atomic64_t.
>
> Which is slower.
>
> Presumably there is a reason for making this change. This reason
>
On Thu, Nov 3, 2016 at 10:16 PM, Andrew Morton
wrote:
> On Thu, 3 Nov 2016 22:04:28 +0100 Vitaly Wool wrote:
>
>> z3fold_compact_page() currently only handles the situation when
>> there's a single middle chunk within the z3fold page. However it
>> may be worth it t
On Thu, Nov 3, 2016 at 11:17 PM, Andrew Morton
wrote:
> On Thu, 3 Nov 2016 22:24:07 +0100 Vitaly Wool wrote:
>
>> On Thu, Nov 3, 2016 at 10:14 PM, Andrew Morton
>> wrote:
>> > On Thu, 3 Nov 2016 22:00:58 +0100 Vitaly Wool wrote:
>> >
>> >>
one directly to the
z3fold header makes the latter quite big on some systems so that
it won't fit in a single chunk.
This patch implements custom per-page read/write locking mechanism
which is lightweight enough to fit into the z3fold header.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c
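One way such a lightweight lock can be built, as a rough illustration only
(this is not the actual patch): keep a single atomic_t in the header, count
readers in it, and let a writer claim it with a negative value.

        #include <linux/atomic.h>
        #include <asm/processor.h>      /* cpu_relax() */

        /* illustrative: >= 0 means "that many readers", -1 means write-locked */
        struct z3fold_page_lock {
                atomic_t state;
        };

        static void z3fold_read_lock(struct z3fold_page_lock *l)
        {
                while (!atomic_inc_unless_negative(&l->state))
                        cpu_relax();            /* a writer holds the lock */
        }

        static void z3fold_read_unlock(struct z3fold_page_lock *l)
        {
                atomic_dec(&l->state);
        }

        static void z3fold_write_lock(struct z3fold_page_lock *l)
        {
                /* succeeds only when there are no readers and no writer */
                while (atomic_cmpxchg(&l->state, 0, -1) != 0)
                        cpu_relax();
        }

        static void z3fold_write_unlock(struct z3fold_page_lock *l)
        {
                atomic_set(&l->state, 0);
        }

An atomic_t is only 4 bytes, which is what keeps the header within a single
chunk on configurations where a full rwlock (especially with lock debugging
enabled) would not fit.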
On Sun, Nov 6, 2016 at 12:38 AM, Andi Kleen wrote:
> Vitaly Wool writes:
>
>> Most of z3fold operations are in-page, such as modifying z3fold
>> page header or moving z3fold objects within a page. Taking
>> per-pool spinlock to protect per-page objects is therefore
>&g
On Wed, Jan 4, 2017 at 4:43 PM, Dan Streetman wrote:
>> static int z3fold_compact_page(struct z3fold_header *zhdr)
>> {
>> struct page *page = virt_to_page(zhdr);
>> - void *beg = zhdr;
>> + int ret = 0;
>
> I still don't understand why you're adding ret and using goto. Ju
On Wed, Jan 4, 2017 at 7:42 PM, Dan Streetman wrote:
> On Sun, Dec 25, 2016 at 7:40 PM, Vitaly Wool wrote:
>> With both coming and already present locking optimizations,
>> introducing kref to reference-count z3fold objects is the right
>> thing to do. Moreover, it makes b
This is a consolidation of z3fold optimizations and fixes done so far, revised
after comments from Dan ([1], [2], [3], [4]).
The coming patches are to be applied on top of the following commit:
Author: zhong jiang
Date: Tue Dec 20 11:53:40 2016 +1100
mm/z3fold.c: limit first_num to the actual range of possible buddy indexes
num_free_chunks() and the address to
move the middle chunk to in case of in-page compaction in
z3fold_compact_page().
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 114 ++--
1 file changed, 64 insertions(+), 50 deletions(-)
diff --git a/mm/z3fold.c
With both coming and already present locking optimizations,
introducing kref to reference-count z3fold objects is the right
thing to do. Moreover, it makes the buddied list no longer necessary,
and allows for a simpler handling of headless pages.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 145
code, using the BIG_CHUNK_GAP define as
a threshold for the middle chunk to be worth moving.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 26 +-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 98ab01f..fca3310 100644
--- a/mm/z3fold.c
This patch converts pages_nr per-pool counter to atomic64_t.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 20 +---
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 207e5dd..2273789 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -80,7
implements a spinlock-based per-page locking mechanism which
is lightweight enough to normally fit into the z3fold header.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 148 +++-
1 file changed, 106 insertions(+), 42 deletions(-)
diff --git a/mm
On Wed, Jan 11, 2017 at 5:28 PM, Dan Streetman wrote:
> On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool wrote:
>> z3fold_compact_page() currently only handles the situation when
>> there's a single middle chunk within the z3fold page. However it
>> may be worth it to m
On Wed, Jan 11, 2017 at 5:58 PM, Dan Streetman wrote:
> On Wed, Jan 11, 2017 at 5:52 AM, Vitaly Wool wrote:
>> On Wed, Jan 4, 2017 at 7:42 PM, Dan Streetman wrote:
>>> On Sun, Dec 25, 2016 at 7:40 PM, Vitaly Wool wrote:
>>>> With both coming and already
On Wed, Jan 11, 2017 at 6:08 PM, Dan Streetman wrote:
> On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool wrote:
>> With both coming and already present locking optimizations,
>> introducing kref to reference-count z3fold objects is the right
>> thing to do. Moreover, it makes b
On Wed, Jan 11, 2017 at 6:39 PM, Dan Streetman wrote:
> On Wed, Jan 11, 2017 at 12:27 PM, Vitaly Wool wrote:
>> On Wed, Jan 11, 2017 at 6:08 PM, Dan Streetman wrote:
>>> On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool wrote:
>>>> With both coming and already
On Wed, 11 Jan 2017 17:43:13 +0100
Vitaly Wool wrote:
> On Wed, Jan 11, 2017 at 5:28 PM, Dan Streetman wrote:
> > On Wed, Jan 11, 2017 at 10:06 AM, Vitaly Wool wrote:
> >> z3fold_compact_page() currently only handles the situation when
> >> there's a single mid
t sure it is worth it but I can do that :)
>
> the header's already rounded up to chunk size, so if there's room then
> it won't take any extra memory. but it works either way.
So let's have it like this then:
With both coming and a
Here comes the second iteration of the zpool/zbud/zsmalloc API alignment.
This time I divide it into three patches: for zpool, for zbud and for zsmalloc :)
The patches are non-intrusive and do not change any existing functionality;
they only add what is needed for the API alignment.
This patch adds two functions to the zpool API: zpool_compact()
and zpool_get_num_compacted(). The former triggers compaction for
the underlying allocator and the latter retrieves the number of
pages migrated due to compaction over the lifetime of this pool.
Signed-off-by: V
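In zpool terms this is two thin wrappers dispatching to new, optional driver
callbacks; a hedged sketch of the shape (the callback names follow the
description above, the dispatch style mirrors mm/zpool.c, and the exact
prototypes here are illustrative):

        /* proposed additions to struct zpool_driver (sketch):
         *      unsigned long (*compact)(void *pool);
         *      unsigned long (*get_num_compacted)(void *pool);
         */

        unsigned long zpool_compact(struct zpool *zpool)
        {
                /* trigger compaction in the backing allocator; drivers that
                 * cannot compact simply report 0 pages migrated */
                return zpool->driver->compact ?
                       zpool->driver->compact(zpool->pool) : 0;
        }

        unsigned long zpool_get_num_compacted(struct zpool *zpool)
        {
                /* total pages migrated by compaction over the pool's lifetime */
                return zpool->driver->get_num_compacted ?
                       zpool->driver->get_num_compacted(zpool->pool) : 0;
        }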
Add no-op compaction callbacks to zbud.
Signed-off-by: Vitaly Wool
---
mm/zbud.c | 12
1 file changed, 12 insertions(+)
diff --git a/mm/zbud.c b/mm/zbud.c
index fa48bcdf..d67c0aa 100644
--- a/mm/zbud.c
+++ b/mm/zbud.c
@@ -195,6 +195,16 @@ static void zbud_zpool_unmap(void *pool
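Since zbud never moves objects, the no-op callbacks can simply report zero;
roughly like this (the function names are made up to match the zpool sketch
above, the real patch hunk is elided here):

        static unsigned long zbud_zpool_compact(void *pool)
        {
                return 0;       /* zbud cannot migrate objects: nothing compacted */
        }

        static unsigned long zbud_zpool_get_num_compacted(void *pool)
        {
                return 0;       /* so the lifetime counter stays at zero, too */
        }

        /* hooked into zbud's zpool_driver next to .malloc/.free/.map/.unmap:
         *      .compact                = zbud_zpool_compact,
         *      .get_num_compacted      = zbud_zpool_get_num_compacted,
         */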
Add compaction callbacks for zpool compaction API extension.
Signed-off-by: Vitaly Wool
---
mm/zsmalloc.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f135b1b..8f2ddd1 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -365,6 +365,19
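For zsmalloc the callbacks can delegate to the existing zs_compact() and
zs_pool_stats(); a sketch along those lines (the zpool-facing names are
assumptions consistent with the zbud sketch, pages_compacted being the
zs_pool_stats field of that era):

        static unsigned long zs_zpool_compact(void *pool)
        {
                /* zs_compact() performs the compaction run and reports its effect */
                return zs_compact(pool);
        }

        static unsigned long zs_zpool_get_num_compacted(void *pool)
        {
                struct zs_pool_stats stats;

                zs_pool_stats(pool, &stats);
                return stats.pages_compacted;
        }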
On Thu, Sep 17, 2015 at 1:30 AM, Sergey Senozhatsky
wrote:
>
> just a side note,
> I'm afraid this is not how it works. numbers go first, to justify
> the patch set.
>
These patches are extension/alignment patches; why would anyone need
to justify that?
But just to help you understand where I a
> I don't know how zsmalloc handles uncompressible PAGE_SIZE allocations, but
> I wouldn't expect it to be any more clever than this? So why duplicate the
> functionality in zswap and zbud? This could be handled e.g. at the zpool
> level? Or maybe just in zram, as IIRC in zswap (frontswap) it's val
On Wed, Sep 23, 2015 at 5:18 AM, Seth Jennings wrote:
> On Tue, Sep 22, 2015 at 02:17:33PM +0200, Vitaly Wool wrote:
>> Currently zbud is only capable of allocating not more than
>> PAGE_SIZE - ZHDR_SIZE_ALIGNED - CHUNK_SIZE. This is okay as
>> long as only zswap is using i
On Tue, Sep 22, 2015 at 11:49 PM, Dan Streetman wrote:
> On Tue, Sep 22, 2015 at 8:17 AM, Vitaly Wool wrote:
>> Currently zbud is only capable of allocating not more than
>> PAGE_SIZE - ZHDR_SIZE_ALIGNED - CHUNK_SIZE. This is okay as
>> long as only zswap is using it, but ot
for zbud page lists
* page->private to hold 'under_reclaim' flag
page->private will also be used to indicate if this page contains
a zbud header in the beginning or not ('headless' flag).
Signed-off-by: Vitaly Wool
---
mm/zbud.c | 167
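Both flags fit as bits in page->private (an unsigned long), manipulated with
the regular bitops; a minimal sketch of the pattern with illustrative bit and
helper names:

        #include <linux/bitops.h>
        #include <linux/mm_types.h>

        /* illustrative bit numbers within page->private */
        enum zbud_page_bits {
                UNDER_RECLAIM = 0,      /* reclaim is working on this page */
                PAGE_HEADLESS,          /* no zbud header at the start of the page */
        };

        static void zbud_init_page(struct page *page)
        {
                page->private = 0;      /* start with both flags cleared */
        }

        static void zbud_set_headless(struct page *page)
        {
                /* e.g. when the page holds a single PAGE_SIZE-sized allocation */
                set_bit(PAGE_HEADLESS, &page->private);
        }

        static bool zbud_page_headless(struct page *page)
        {
                return test_bit(PAGE_HEADLESS, &page->private);
        }

        static void zbud_start_reclaim(struct page *page)
        {
                set_bit(UNDER_RECLAIM, &page->private);
        }

        static void zbud_end_reclaim(struct page *page)
        {
                clear_bit(UNDER_RECLAIM, &page->private);
        }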
Hello Seth,
On Thu, Sep 24, 2015 at 12:41 AM, Seth Jennings
wrote:
> On Wed, Sep 23, 2015 at 10:59:00PM +0200, Vitaly Wool wrote:
>> Okay, how about this? It's gotten smaller BTW :)
>>
>> zbud: allow up to PAGE_SIZE allocations
>>
>> Currently zbud is on
flag).
This patch incorporates minor fixups after Seth's comments.
Signed-off-by: Vitaly Wool
---
mm/zbud.c | 168 ++
1 file changed, 114 insertions(+), 54 deletions(-)
diff --git a/mm/zbud.c b/mm/zbud.c
index fa48bcdf..619beba 1
> I already said questions, opinion and concerns but anything is not clear
> until now. Only clear thing I could hear is just "compaction stats are
> better" which is not enough for me. Sorry.
>
> 1) https://lkml.org/lkml/2015/9/15/33
> 2) https://lkml.org/lkml/2015/9/21/2
Could you please stop p
> Have you seen those symptoms before? How did you come up to a conclusion
> that zram->zbud will do the trick?
I have data from various tests (partially described here:
https://lkml.org/lkml/2015/9/17/244) and once again, I'll post a reply
to https://lkml.org/lkml/2015/9/15/33 with more detailed
Hello Minchan,
the main use case where I see unacceptably long stalls in UI with
zsmalloc is switching between users in Android.
There is a way to automate user creation and switching between them so
the test I run both to get vmstat statistics and to profile stalls is
to create a user, switch to
On Fri, Sep 25, 2015 at 10:47 AM, Minchan Kim wrote:
> On Fri, Sep 25, 2015 at 10:17:54AM +0200, Vitaly Wool wrote:
>>
>> > I already said questions, opinion and concerns but anything is not clear
>> > until now. Only clear thing I could hear is just "compaction
Hello Minchan,
> Sorry, because you wrote up "zram" in the title.
> As I said earlier, we need several numbers to investigate.
>
> First of all, what is culprit of your latency?
> It seems you are thinking about compaction. so compaction what?
> Frequent scanning? lock collision? or frequent sleep
Hi Dan,
On Mon, Sep 21, 2015 at 6:17 PM, Dan Streetman wrote:
> Please make sure to cc Seth also, he's the owner of zbud.
Sure :)
>> @@ -514,8 +552,17 @@ int zbud_reclaim_page(struct zbud_pool *pool, unsigned
>> int retries)
>> return -EINVAL;
>> }
>> for (i =
d 'under_reclaim' flag
page->private will also be used to indicate if this page contains
a zbud header in the beginning or not ('headless' flag).
Signed-off-by: Vitaly Wool
---
mm/zbud.c | 194 +-
1 file changed, 1
> Cc: Sebastian Andrzej Siewior
> Cc: Andrew Morton
> Cc: Herbert Xu
> Cc: David S. Miller
> Cc: Mahipal Challa
> Cc: Seth Jennings
> Cc: Dan Streetman
> Cc: Vitaly Wool
> Cc: Zhou Wang
> Cc: Hao Fang
> Cc: Colin Ian King
> Signed-off-by: Barry Song
Acked-by: Vitaly Wool
Hi Shakeel,
On Wed, Jun 5, 2019 at 6:31 PM Shakeel Butt wrote:
>
> On Wed, Jun 5, 2019 at 3:06 AM Hui Zhu wrote:
> >
> > As a zpool_driver, zsmalloc can allocate movable memory because it
> > supports migrating pages.
> > But zbud and z3fold cannot allocate movable memory.
> >
>
> Cc: Vitaly
thanks
alves
> Cc: Sebastian Andrzej Siewior
> Cc: Andrew Morton
> Cc: Herbert Xu
> Cc: David S. Miller
> Cc: Mahipal Challa
> Cc: Seth Jennings
> Cc: Dan Streetman
> Cc: Vitaly Wool
> Cc: Zhou Wang
> Signed-off-by: Barry Song
> ---
> -v2:
> rebase to 5.8-r
Stress testing of the current z3fold implementation on an 8-core system
revealed it was possible that a z3fold page deleted from its unbuddied
list in z3fold_alloc() would be put on another unbuddied list by
z3fold_free() while z3fold_alloc() is still processing it. This has
been introduced with com
Currently if z3fold couldn't find an unbuddied page it would first
try to pull a page off the stale list. The problem with this
approach is that we can't 100% guarantee that the page is not
processed by the workqueue thread at the same time unless we run
cancel_work_sync() on it, which we can't do
The page's reference count is increased if its compaction is scheduled.
It then becomes the compaction function's responsibility to decrease
the counter and quit immediately if the page was actually freed.
Signed-off-by: Vitaly Wool
Cc: stable
---
mm/z3fold.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
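In kref terms the pattern is: pin the page when queueing the compaction work,
and have the worker drop that pin first and bail out if it was the last
reference. A hedged sketch with illustrative helper and field names
(release_z3fold_page as in the earlier kref sketch):

        static void schedule_page_compaction(struct z3fold_pool *pool,
                                             struct z3fold_header *zhdr)
        {
                kref_get(&zhdr->refcount);      /* pin the page for the worker */
                queue_work(pool->compact_wq, &zhdr->work);
        }

        static void compact_page_work(struct work_struct *w)
        {
                struct z3fold_header *zhdr = container_of(w, struct z3fold_header,
                                                          work);

                /* drop the scheduling reference first: if it was the last one,
                 * the page was freed in the meantime and must not be touched */
                if (kref_put(&zhdr->refcount, release_z3fold_page))
                        return;

                /* ... the page is still alive, proceed with in-page compaction ... */
        }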
llist_del_first_exclusive() will delete the
first node off the list and mark it as not being on any list.
Signed-off-by: Vitaly Wool
---
include/linux/llist.h | 25 +
lib/llist.c | 29 +
2 files changed, 54 insertions(+)
diff --git a/inc
It is sometimes necessary to be able to use llist in
the following manner:
> if (node_unlisted(node))
>         llist_add(node, list);
i.e. only add a node to the list if it's not already on a list.
This is not possible without taking locks because otherwise there's
an obvious race.
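One plausible way to back these helpers (a guess at the mechanics, purely
illustrative, not the actual lib/llist.c change): have the exclusive delete
mark the node, e.g. by pointing node->next back at the node itself, so that
node_unlisted() has something to test once the node is off the list.

        #include <linux/llist.h>

        /* illustrative marker: node->next pointing at the node itself means
         * "not on any list" */
        static inline bool node_unlisted(struct llist_node *node)
        {
                return node->next == node;
        }

        /* pop the first node and mark it, so it can later be conditionally
         * re-added with the node_unlisted() check shown above; like
         * llist_del_first(), this assumes a single deleter at a time */
        static struct llist_node *llist_del_first_exclusive(struct llist_head *head)
        {
                struct llist_node *node = llist_del_first(head);

                if (node)
                        node->next = node;
                return node;
        }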
bug.
To avoid that, spin_lock() has to be taken earlier, before the
kref_put() call mentioned earlier.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 486550df32be..b04fa3ba1bf2 100644
--- a/mm
Fix the situation when clear_bit() is called for page->private before
the page pointer is actually assigned. While at it, remove work_busy()
check because it is costly and does not give 100% guarantee anyway.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 6 ++
1 file changed, 2 inserti
Hi Andrew,
2017-09-14 23:15 GMT+02:00 Andrew Morton :
> On Thu, 14 Sep 2017 15:59:36 +0200 Vitaly Wool wrote:
>
>> Fix the situation when clear_bit() is called for page->private before
>> the page pointer is actually assigned. While at it, remove work_busy()
>> che
Signed-off-by: Vitaly Wool
---
drivers/staging/android/ion/ion.h | 2 +
drivers/staging/android/ion/ion_page_pool.c | 165 +++-
2 files changed, 163 insertions(+), 4 deletions(-)
diff --git a/drivers/staging/android/ion/ion.h
b/drivers/staging/android/ion/ion.h
2017-09-06 2:19 GMT+02:00 Laura Abbott :
> On 09/05/2017 05:55 AM, Vitaly Wool wrote:
>> ion page pool may become quite large and scattered all around
>> the kernel memory area. These pages are actually not used so
>> moving them around to reduce fragmentation is quite cheap
nb=50582KB/s, ...
So we're in for an almost 6x performance increase.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 479 +++-
1 file changed, 344 insertions(+), 135 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 54f63c4a809a..b4
performance will go up.
This patch also introduces two worker threads: one for async
in-page object layout optimization and one for releasing freed
pages.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 479 +++-
1 file changed, 344 inser
On Fri, Oct 14, 2016 at 3:35 PM, zhongjiang wrote:
> From: zhong jiang
>
> z3fold compact page has nothing to do with the last_chunks. even if
> last_chunks is not free, compact page will proceed.
>
> The patch just remove the limit without functional change.
>
> Signed-off-by: zhong jiang
> ---
> mm
, maxb=2049KB/s,
mint=200339msec, maxt=201154msec
WRITE: io=1599.5MB, aggrb=8142KB/s, minb=2023KB/s, maxb=2062KB/s,
mint=200343msec, maxt=201158msec
Disk stats (read/write):
zram0: ios=1637032/1639304, merge=0/0, ticks=175840/458740, in_queue=637140,
util=82.48%
Signed-off-by: Vitaly Wool
This patch converts pages_nr per-pool counter to atomic64_t.
It also introduces a new counter, unbuddied_nr, which is also
atomic64_t, to track the number of unbuddied (shrinkable) pages,
as a step to prepare for z3fold shrinker implementation.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 33
from the freeing path
since we can rely on shrinker to do the job. Also, a new flag
UNDER_COMPACTION is introduced to protect against two threads
trying to compact the same page.
This patch has been checked with the latest Linus's tree.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c
The per-pool z3fold spinlock should generally be taken only when
a non-atomic pool variable is modified. There's no need to take it
to map/unmap an object.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 17 +
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git
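The direction is a lock inside the z3fold header that protects only per-page
state, leaving pool->lock for pool-wide lists and counters; a minimal sketch
of the pattern (field and helper names are illustrative):

        struct z3fold_header {
                spinlock_t page_lock;           /* protects the per-page fields below */
                unsigned short first_chunks;
                unsigned short middle_chunks;
                unsigned short last_chunks;
                unsigned short start_middle;
                /* ... */
        };

        static void z3fold_page_lock(struct z3fold_header *zhdr)
        {
                spin_lock(&zhdr->page_lock);
        }

        static void z3fold_page_unlock(struct z3fold_header *zhdr)
        {
                spin_unlock(&zhdr->page_lock);
        }

        /* map/unmap only read or update per-page state, so they take the page
         * lock; pool->lock stays reserved for the unbuddied/LRU lists */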
Hi Zhong Jiang,
On Mon, Oct 17, 2016 at 3:58 AM, zhong jiang wrote:
> Hi, Vitaly
>
> About the following patch, is it right?
>
> Thanks
> zhongjiang
> On 2016/10/13 12:02, zhongjiang wrote:
>> From: zhong jiang
>>
>> At present, zhdr->first_num plus bud can exceed the BUDDY_MASK
>> in encode_h
Hi Dan,
On Tue, Oct 18, 2016 at 4:06 AM, Dan Streetman wrote:
> On Sat, Oct 15, 2016 at 8:05 AM, Vitaly Wool wrote:
>> This patch implements shrinker for z3fold. This shrinker
>> implementation does not free up any pages directly but it allows
>> for a denser placement
On Mon, Oct 17, 2016 at 10:48 PM, Dan Streetman wrote:
> On Sat, Oct 15, 2016 at 7:59 AM, Vitaly Wool wrote:
>> The per-pool z3fold spinlock should generally be taken only when
>> a non-atomic pool variable is modified. There's no need to take it
>> to map/unmap an obje
On Tue, Oct 18, 2016 at 4:27 PM, Dan Streetman wrote:
> On Mon, Oct 17, 2016 at 10:45 PM, Vitaly Wool wrote:
>> Hi Dan,
>>
>> On Tue, Oct 18, 2016 at 4:06 AM, Dan Streetman wrote:
>>> On Sat, Oct 15, 2016 at 8:05 AM, Vitaly Wool wrote:
>>>> This
of buddies.
>>
>> The patch limits first_num to the actual range of possible buddy indexes,
>> which is more reasonable and obvious, without functional change.
>>
>> Suggested-by: Dan Streetman
>> Signed-off-by: zhong jiang
>
> Acked-by: Dan Streetman
Acked-by: Vitaly Wool
On Tue, Oct 18, 2016 at 7:35 PM, Dan Streetman wrote:
> On Tue, Oct 18, 2016 at 12:26 PM, Vitaly Wool wrote:
>> On 18 Oct 2016 at 18:29, "Dan Streetman"
>> wrote:
>>
>>
>>>
>>> On Tue, Oct 18, 2016 at 10:51 AM, Vitaly Wool
patchset thus implements in-page compaction worker for
z3fold, preceded by some code optimizations and preparations
which, again, deserved to be separate patches.
Signed-off-by: Vitaly Wool
[1] https://lkml.org/lkml/2016/10/15/31
The per-pool z3fold spinlock should generally be taken only when
a non-atomic pool variable is modified. There's no need to take it
to map/unmap an object. This patch introduces per-page lock that
will be used instead to protect per-page variables in map/unmap
functions.
Signed-off-by: V
This patch converts pages_nr per-pool counter to atomic64_t.
It also introduces a new counter, unbuddied_nr, which is
atomic64_t, too, to track the number of unbuddied (compactable)
z3fold pages.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 33 +
1 file changed
Linus's tree.
Signed-off-by: Vitaly Wool
---
mm/z3fold.c | 159 ++--
1 file changed, 133 insertions(+), 26 deletions(-)
diff --git a/mm/z3fold.c b/mm/z3fold.c
index 329bc26..580a732 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -27,6