From: Hui Zhu
When I used punch holes to set up a test page-fragmentation environment, I
didn't know when the hole punching was done. I could only get this
information through top or something else.
This commit adds code to output a message after the hole punching is done
to handle this issue.
Signed-off-by: Hui Zhu
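A minimal user-space sketch of the flow the message describes, assuming the holes are punched with fallocate(2); the file name and sizes here are illustrative, not usemem's actual code:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	off_t size = 1 << 30, hole = 4096, off;
	int fd = open("testfile", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* punch a hole in every other block to fragment the file */
	for (off = 0; off < size; off += 2 * hole)
		if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			      off, hole)) {
			perror("fallocate");
			return 1;
		}
	/* the point of the patch: say so when the punching is finished */
	printf("punch holes done\n");
	close(fd);
	return 0;
}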
Add code for touch-alloc.
And change reading the memory to writing it, to avoid using the zero page
for reads in do_anonymous_page.
Signed-off-by: Hui Zhu
---
usemem.c | 34 ++
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/usemem.c b/usemem.c
index
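A minimal sketch of why the write matters, assuming a plain anonymous mmap like usemem's: a first read of an untouched anonymous page is serviced by the shared zero page in do_anonymous_page, so only a write forces a real page to be allocated.

#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64 << 20, i;
	long page = sysconf(_SC_PAGESIZE);
	unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	volatile unsigned char sink = 0;

	if (p == MAP_FAILED)
		return 1;
	for (i = 0; i < len; i += page) {
		sink += p[i];	/* read fault: maps the shared zero page */
		p[i] = 1;	/* write fault: allocates a real page */
	}
	(void)sink;
	return 0;
}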
From: Hui Zhu
Got the following error when building usemem:
gcc -O -c -Wall -g usemem.c -o usemem.o
usemem.c:451:15: error: redefinition of ‘do_access’
unsigned long do_access(unsigned long *p, unsigned long idx, int read)
^
usemem.c:332:15: note: previous definition of
Some environments will not fault in memory even if MAP_POPULATE is set.
This commit adds an option touch-alloc to read the memory after allocating
it, to make sure the pages are faulted in.
Signed-off-by: Hui Zhu
---
usemem.c | 37 +
1 file changed, 25 insertions(+), 12
From: Hui Zhu
This commit adds a new option init-time that removes the initialization
time from the run time and shows the initialization time separately.
Signed-off-by: Hui Zhu
---
usemem.c | 29 +++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/usemem.c b/usemem.c
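A hedged sketch of the bookkeeping such an option implies; the timestamps and the helper are illustrative, not usemem's actual code:

#include <stdio.h>
#include <sys/time.h>

/* microseconds elapsed between two timestamps */
static long usecs_between(struct timeval *a, struct timeval *b)
{
	return (b->tv_sec - a->tv_sec) * 1000000L + (b->tv_usec - a->tv_usec);
}

int main(void)
{
	struct timeval start, init_done, end;

	gettimeofday(&start, NULL);
	/* ... allocate and initialize memory here ... */
	gettimeofday(&init_done, NULL);
	/* ... timed workload here ... */
	gettimeofday(&end, NULL);

	printf("%ld usecs for initialization\n", usecs_between(&start, &init_done));
	printf("%ld usecs for the run itself\n", usecs_between(&init_done, &end));
	return 0;
}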
From: Hui Zhu
Got an error when I built samples/bpf in a separate directory:
make O=../bk/ defconfig
make -j64 bzImage
make headers_install
make V=1 M=samples/bpf
...
...
make -C /home/teawater/kernel/linux/samples/bpf/../..//tools/build
CFLAGS= LDFLAGS= fixdep
make -f
/home/teawater/kernel
This commit adds a vq dcvq to deflate contiguous pages.
When VIRTIO_BALLOON_F_CONT_PAGES is set, try to get contiguous pages
from icvq and use madvise MADV_WILLNEED on the pages.
Signed-off-by: Hui Zhu
---
hw/virtio/virtio-balloon.c | 14 +-
include/hw/virtio/virtio
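A hedged host-side sketch of that deflate step, assuming the PFNs have already been translated to a host address; the helper and its parameters are ours, not QEMU's API:

#include <stdio.h>
#include <sys/mman.h>

static void deflate_cont_range(void *host_addr, unsigned int order,
			       size_t page_size)
{
	/* one madvise() covers the whole order-sized block the guest
	 * handed back, telling the kernel it will be needed again */
	if (madvise(host_addr, page_size << order, MADV_WILLNEED))
		perror("madvise(MADV_WILLNEED)");
}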
This commit adds a new flag VIRTIO_BALLOON_F_CONT_PAGES to virtio_balloon.
And it adds a vq inflate_cont_vq to inflate contiguous pages.
When VIRTIO_BALLOON_F_CONT_PAGES is set, try to allocate contiguous pages
and report them using inflate_cont_vq.
Signed-off-by: Hui Zhu
---
drivers/virtio
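A hedged guest-side sketch of the allocation this describes; the gfp flags follow the balloon driver's usual non-aggressive style and are an assumption, not the patch's exact hunk:

/* allocate one contiguous order-N block to report via inflate_cont_vq */
static struct page *balloon_alloc_cont(unsigned int order)
{
	return alloc_pages(__GFP_HIGHMEM | __GFP_NOWARN |
			   __GFP_NORETRY | __GFP_NOMEMALLOC, order);
}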
two benefits:
1. Increase the speed of balloon inflation and deflation.
2. Decrease the number of split THPs in the host.
[1] https://github.com/teawater/linux/tree/balloon_conts
[2] https://github.com/teawater/qemu/tree/balloon_conts
[3] https://lkml.org/lkml/2020/5/13/1211
Hui Zhu (2
kml/2020/5/12/324
[4] https://github.com/teawater/linux/tree/balloon_conts
[5] https://github.com/teawater/qemu/tree/balloon_conts
[6] https://lkml.org/lkml/2020/5/13/1211
Hui Zhu (2):
virtio_balloon: Add VIRTIO_BALLOON_F_CONT_PAGES and inflate_cont_vq
virtio_balloon: Add deflate_cont_vq to
the pages.
Signed-off-by: Hui Zhu
---
hw/virtio/virtio-balloon.c | 80 -
include/hw/virtio/virtio-balloon.h | 2 +-
include/standard-headers/linux/virtio_balloon.h | 1 +
3 files changed, 55 insertions(+), 28 deletions(-)
diff --git a
: Hui Zhu
---
drivers/virtio/virtio_balloon.c| 73
include/linux/balloon_compaction.h | 3 ++
mm/balloon_compaction.c| 76 ++
3 files changed, 144 insertions(+), 8 deletions(-)
diff --git a/drivers/virtio
get the PFN of contiguous pages whose order is current_pages_order
from VQ ivq and use madvise MADV_DONTNEED to release the pages.
This will handle the THP split issue.
Signed-off-by: Hui Zhu
---
hw/virtio/virtio-balloon.c | 77 +
include/hw/virtio/virtio
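A hedged sketch of that release step: one madvise() spans the whole order-sized block, so the host can drop it without splitting the backing THP. The helper and its parameters are illustrative:

#include <stdio.h>
#include <sys/mman.h>

static int release_cont_pages(void *host_addr,
			      unsigned int current_pages_order,
			      size_t page_size)
{
	if (madvise(host_addr, page_size << current_pages_order,
		    MADV_DONTNEED)) {
		perror("madvise(MADV_DONTNEED)");
		return -1;
	}
	return 0;
}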
age.
[1] https://lkml.org/lkml/2020/3/12/144
[2]
https://lore.kernel.org/linux-mm/1584893097-12317-1-git-send-email-teawa...@gmail.com/
Signed-off-by: Hui Zhu
---
drivers/virtio/virtio_balloon.c | 98 +++--
include/linux/balloon_compaction.h | 9 +++-
incl
.
Signed-off-by: Hui Zhu
---
mm/frontswap.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/frontswap.c b/mm/frontswap.c
index 60bb20e..f07ea63 100644
--- a/mm/frontswap.c
+++ b/mm/frontswap.c
@@ -274,8 +274,12 @@ int __frontswap_store(struct page *page
This commit lets zswap treat a THP as contiguous normal pages
in zswap_frontswap_store.
It will store them as a series of "zswap_entry" structures. These
"zswap_entry" structures will be inserted into the "zswap_tree" together.
Signed-off-by:
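A hedged sketch of the loop this implies, not the patch itself; zswap_store_subpage() is a hypothetical per-page helper standing in for the existing store path, and thp_nr_pages() may be hpage_nr_pages() on older trees:

static int zswap_store_thp(struct page *thp)
{
	int i, nr = thp_nr_pages(thp);	/* 512 for a 2MB THP on x86 */

	for (i = 0; i < nr; i++) {
		int ret = zswap_store_subpage(thp + i, i);	/* hypothetical */

		if (ret)
			return ret;	/* caller unwinds entries 0..i-1 */
	}
	/* all "zswap_entry" items are then inserted into "zswap_tree" together */
	return 0;
}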
enabled by the swap parameter
io_switch_enabled_enabled, zswap will only do work when the swap disk has
outstanding I/O requests.
[1] https://lkml.org/lkml/2019/9/11/935
[2] https://lkml.org/lkml/2019/9/20/90
[3] https://lkml.org/lkml/2019/9/22/927
Signed-off-by: Hui Zhu
---
include/linux/swap.h |
/2019/9/20/90
[3] https://lkml.org/lkml/2019/9/20/1076
Signed-off-by: Hui Zhu
---
include/linux/swap.h | 3 +++
mm/Kconfig | 18
mm/page_io.c | 16 +++
mm/zswap.c | 58
4 files changed
will be stored to zswap only when
the number of in-flight IOs of the swap device is bigger than
zswap_read_in_flight_limit or zswap_write_in_flight_limit,
when zswap is enabled.
Then zswap only does work when the number of in-flight IOs of the swap
device is low.
Signed-off-by: Hui Zhu
---
include/linux/swap.h
With this option, usemem will read the memory again after accessing it.
It can help test the speed of loading pages from swap back into memory.
Signed-off-by: Hui Zhu
---
usemem.c | 46 --
1 file changed, 40 insertions(+), 6 deletions(-)
diff --git a/usemem.c
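A hedged sketch of what the option does, with illustrative timing code; usemem's real implementation differs in detail:

#include <stdio.h>
#include <sys/time.h>

static void read_again(unsigned long *p, size_t words)
{
	struct timeval t0, t1;
	unsigned long sum = 0;
	size_t i, bytes = words * sizeof(*p);
	long usecs;

	gettimeofday(&t0, NULL);
	for (i = 0; i < words; i++)
		sum += p[i];	/* touching a swapped-out page faults it back in */
	gettimeofday(&t1, NULL);

	usecs = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
	printf("%zu bytes / %ld usecs = %.0f KB/s (checksum %lu)\n",
	       bytes, usecs,
	       usecs ? (double)bytes / usecs * 1000000.0 / 1024.0 : 0.0, sum);
}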
55716864 bytes / 14457505 usecs = 571159 KB/s
365961 usecs to free memory
Signed-off-by: Hui Zhu
---
include/linux/swap.h | 3 +++
mm/Kconfig | 11 +++
mm/page_io.c | 16 +++
mm/zswap.c | 55
4 fi
You can see that the number of unmovable page blocks is decreased
when the kernel has this commit.
Signed-off-by: Hui Zhu
---
mm/zswap.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index a4e4d36ec085..c6bf92bf5890 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1006,6 +1006,7 @@ static int
zpool_malloc_support_movable checks malloc_support_movable
to determine whether a zpool supports allocating movable memory.
Signed-off-by: Hui Zhu
---
include/linux/zpool.h | 3 +++
mm/zpool.c| 16
mm/zsmalloc.c | 19 ++-
3 files changed, 29 insertions(+), 9 deletions
Shakeel Butt wrote on Wed, Jun 5, 2019 at 1:12 AM:
>
> On Sun, Jun 2, 2019 at 2:47 AM Hui Zhu wrote:
> >
> > This is the second version that was updated according to the comments
> > from Sergey Senozhatsky in https://lkml.org/lkml/2019/5/29/73
> >
> > zswap compresses swa
You can see that the number of unmovable page blocks is decreased
when malloc_movable_if_support is enabled.
Signed-off-by: Hui Zhu
---
mm/zswap.c | 16 +---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index
zpool_malloc_support_movable checks malloc_support_movable
to determine whether a zpool supports allocating movable memory.
Signed-off-by: Hui Zhu
---
include/linux/zpool.h | 3 +++
mm/zpool.c| 16
mm/zsmalloc.c | 19 ++-
3 files changed, 29 insertions(+), 9 deletions
You can see that the number of unmovable page blocks is decreased
when malloc_force_movable is enabled.
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 0787d33b80d8..7d44c7ccd882 100644
--- a/mm/zsmall
_config", "perf_freq", "perf_period", "perf_type" can
set the options of perf.
After record, access "str" will get the record data in string.
Access "cpu0/page" will get the record data in binary that is format is
in "bin_format".
Signed-off-by:
overhead and
full coverage.
BloodTest is for that.
It is an interface that can access the functions of other analysis tools
and record to an internal buffer that a user or an application can read
very quickly (mmap).
For now, BloodTest just supports recording cpu, perf and task information
for one second.
Hui Zhu (2
cord.
bt_insert and bt_pullout will call the analysis tools.
Signed-off-by: Hui Zhu
---
fs/proc/stat.c | 8 +--
include/linux/kernel_stat.h| 3 ++
init/Kconfig | 3 ++
kernel/Makefile| 2 +
kernel/bloodtest/Makefile | 1 +
kernel/bloodtest
formation.
After recording, accessing "str" will return the record data as a string.
Accessing "page" will return the record data in binary, in the format
described by "bin_format".
Signed-off-by: Hui Zhu
---
include/linux/bloodtest.h | 10 +
kernel/bloodtest/Makefile | 2 +-
kernel/blood
le, it also needs the addresses
of the modules from /proc/modules.
Adding /proc/modules_update_version will help the application find out
whether the kernel module addresses have changed.
Signed-off-by: Hui Zhu
---
kernel/module.c | 19 +++
1 file changed, 19 insertions(+)
diff --git a/kern
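A hedged sketch of a user-space consumer of that file; it re-parses /proc/modules only when the version number changes:

#include <stdio.h>

static long last_version = -1;

int modules_changed(void)
{
	long v;
	FILE *f = fopen("/proc/modules_update_version", "r");

	if (!f || fscanf(f, "%ld", &v) != 1) {
		if (f)
			fclose(f);
		return -1;
	}
	fclose(f);
	if (v != last_version) {
		last_version = v;
		return 1;	/* re-parse /proc/modules now */
	}
	return 0;
}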
2017-09-26 17:51 GMT+08:00 Mel Gorman :
> On Tue, Sep 26, 2017 at 04:46:42PM +0800, Hui Zhu wrote:
>> Current HighAtomic just to handle the high atomic page alloc.
>> But I found that use it handle the normal unmovable continuous page
>> alloc will help to against lo
This patch adds a new condition to let gfp_to_alloc_flags return
alloc_flags with ALLOC_HARDER if the order is not 0 and the migratetype is
MIGRATE_UNMOVABLE.
Then allocating an unmovable page whose order is not 0 will try to use
HighAtomic.
Signed-off-by: Hui Zhu
---
mm/page_alloc.c | 6 --
1 file changed
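A hedged sketch of that condition, written in the style of gfp_to_alloc_flags(); this is not the literal hunk from the patch:

	if (order > 0 && migratetype == MIGRATE_UNMOVABLE)
		alloc_flags |= ALLOC_HARDER;	/* may dip into the HighAtomic reserve */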
pageblocks.
Signed-off-by: Hui Zhu
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b54e94a..9322458 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2101,7 +2101,7 @@ static void reserve_highatomic_pageblock(struct p
local faults              0        0
NUMA hint local percent   100      100
NUMA pages migrated       0        0
AutoNUMA cost             0%       0%
Hui Zhu (2):
Try to use HighAtomic when allocating an unmovable page whose order is not 0
Change limit of
2017-08-16 12:51 GMT+08:00 Minchan Kim :
> On Wed, Aug 16, 2017 at 10:49:14AM +0800, Hui Zhu wrote:
>> Hi Minchan,
>>
>> 2017-08-16 10:13 GMT+08:00 Minchan Kim :
>> > Hi Hui,
>> >
>> > On Mon, Aug 14, 2017 at 05:56:30PM +0800, Hui Zhu w
-not-return-ebusy-if-zspage-is-not-inuse-fix.patch
[2] zsmalloc-zs_page_migrate-schedule-free_work-if-zspage-is-ZS_EMPTY.patch
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 5 -
1 file changed, 5 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index fb99953..8560c93 100644
--- a/mm
se
zs_page_migrate did the job without "schedule_work(&pool->free_work);".
This patch lets zs_page_migrate wake up free_work if needed.
[1]
zsmalloc-zs_page_migrate-skip-unnecessary-loops-but-not-return-ebusy-if-zspage-is-not-inuse-fix.patch
Signed-off-by: Hui Zhu
---
mm/zs
Hi Minchan,
2017-08-16 10:13 GMT+08:00 Minchan Kim :
> Hi Hui,
>
> On Mon, Aug 14, 2017 at 05:56:30PM +0800, Hui Zhu wrote:
>> After commit e2846124f9a2 ("zsmalloc: zs_page_migrate: skip unnecessary
>
> This patch is not merged yet so the hash is invalid.
> That means
zs_page_migrate did the job without "schedule_work(&pool->free_work);".
This patch lets zs_page_migrate wake up free_work if needed.
Fixes: e2846124f9a2 ("zsmalloc: zs_page_migrate: skip unnecessary loops but not
return -EBUSY if zspage is not inuse")
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 13 +++--
1 file changed
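A hedged sketch of the fix as described, at the end of zs_page_migrate(); get_zspage_inuse() is the existing zsmalloc helper:

	if (get_zspage_inuse(zspage) == 0) {
		/* migration emptied the zspage but did not free it here,
		 * so kick the async free path ourselves */
		schedule_work(&pool->free_work);
	}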
2017-08-14 16:31 GMT+08:00 Minchan Kim :
> Hi Hui,
>
> On Mon, Aug 14, 2017 at 02:34:46PM +0800, Hui Zhu wrote:
>> After commit e2846124f9a2 ("zsmalloc: zs_page_migrate: skip unnecessary
>> loops but not return -EBUSY if zspage is not inuse") zs_page_migrate
>
her zspage wake up free_work.
This patch lets zs_page_migrate wake up free_work if needed.
Fixes: e2846124f9a2 ("zsmalloc: zs_page_migrate: skip unnecessary loops but not
return -EBUSY if zspage is not inuse")
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 10 --
1 file changed
/21/10
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index d41edd2..c2c7ba9 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1997,8 +1997,11 @@ int zs_page_migrate(struct address_space *mapping
2017-07-20 16:47 GMT+08:00 Minchan Kim :
> Hi Hui,
>
> On Thu, Jul 20, 2017 at 02:39:17PM +0800, Hui Zhu wrote:
>> Hi Minchan,
>>
>> I am sorry for answer late.
>> I spent some time on ubuntu 16.04 with mmtests in an old laptop.
>>
>> 2017-07-17
Hi Minchan,
I am sorry for answer late.
I spent some time on ubuntu 16.04 with mmtests in an old laptop.
2017-07-17 13:39 GMT+08:00 Minchan Kim :
> Hello Hui,
>
> On Fri, Jul 14, 2017 at 03:51:07PM +0800, Hui Zhu wrote:
>> Got some -EBUSY from zs_page_migrate that will make mi
se if migrate_mode is not
MIGRATE_ASYNC.
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 66 +--
1 file changed, 37 insertions(+), 29 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index d41edd2..c298e5c 100644
--- a/mm/zsmalloc.c
+++
1126484928 bytes / 6555923 usecs = 167799 KB/s
1126484928 bytes / 6642291 usecs = 165617 KB/s
Hui Zhu (2):
SWAP: add interface to let disk close swap cache
ZRAM: add sysfs switch swap_cache_not_keep
This patch adds an interface to gendisk so that a SWAP device can use it to
control the swap cache rule.
Signed-off-by: Hui Zhu
---
include/linux/genhd.h | 3 +++
include/linux/swap.h | 8 ++
mm/Kconfig| 10 +++
mm/memory.c | 2 +-
mm/swapfile.c | 74
This patch adds a sysfs interface swap_cache_not_keep to control the swap
cache rule for a ZRAM disk.
Swap will never keep the swap cache if it is set to 1.
Signed-off-by: Hui Zhu
---
drivers/block/zram/zram_drv.c | 34 ++
1 file changed, 34 insertions(+)
diff
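A hedged sketch of what such a sysfs knob looks like; dev_to_zram() is the usual zram helper, while the stored field is the hypothetical one this series adds:

static ssize_t swap_cache_not_keep_store(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t len)
{
	struct zram *zram = dev_to_zram(dev);
	unsigned long val;

	if (kstrtoul(buf, 10, &val))
		return -EINVAL;
	zram->swap_cache_not_keep = !!val;	/* hypothetical field */
	return len;
}
static DEVICE_ATTR_WO(swap_cache_not_keep);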
On Mon, Sep 5, 2016 at 1:51 PM, Minchan Kim wrote:
> On Mon, Sep 05, 2016 at 01:12:05PM +0800, Hui Zhu wrote:
>> On Mon, Sep 5, 2016 at 10:18 AM, Minchan Kim wrote:
>> > On Thu, Aug 25, 2016 at 04:25:30PM +0800, Hui Zhu wrote:
>> >> On Thu, Aug 25, 2016
On Mon, Sep 5, 2016 at 10:18 AM, Minchan Kim wrote:
> On Thu, Aug 25, 2016 at 04:25:30PM +0800, Hui Zhu wrote:
>> On Thu, Aug 25, 2016 at 2:09 PM, Sergey Senozhatsky
>> wrote:
>> > Hello,
>> >
>> > On (08/22/16 16:25), Hui Zhu wrote:
>> >>
>
On Thu, Aug 25, 2016 at 2:09 PM, Sergey Senozhatsky
wrote:
> Hello,
>
> On (08/22/16 16:25), Hui Zhu wrote:
>>
>> Current ZRAM just can store all pages even if the compression rate
>> of a page is really low. So the compression rate of ZRAM is out of
>> control
Hi Minchan,
On Wed, Aug 24, 2016 at 9:04 AM, Minchan Kim wrote:
> Hi Hui,
>
> On Mon, Aug 22, 2016 at 04:25:05PM +0800, Hui Zhu wrote:
>> Current ZRAM just can store all pages even if the compression rate
>> of a page is really low. So the compression rate of ZRAM is out
the pages whose
compressed size is smaller than a value.
With these patches, I set the value to 2048 and did the same test as
before. The compression rate is about 20%. The number of lowmemorykiller
invocations also decreased.
Hui Zhu (4):
vmscan.c: shrink_page_list: unmap anon pages after pageout
Add non
The new option ZRAM_NON_SWAP adds an interface "non_swap" to zram.
The user can set an unsigned int value through it.
If the compressed size of a page is bigger than this limit, mark it as
non-swap. Then this page will be added to the unevictable LRU list.
This patch doesn't handle shmem file pages.
Sig
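A hedged sketch of the check this describes, placed after compression in the zram write path; the non_swap field and the page flag are from this series, not mainline:

	if (zram->non_swap && comp_len > zram->non_swap) {
		SetPageNonSwap(page);	/* hypothetical flag added by the series */
		/* the page is then moved to the unevictable LRU instead of
		 * being written to the compressed store */
	}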
After a page is marked with the non-swap flag in the swap driver, it will
be added to the unevictable LRU list.
The page will be kept in this state until its data is changed.
Signed-off-by: Hui Zhu
---
fs/proc/meminfo.c | 6 ++
include/linux/mm_inline.h | 20 ++--
include/linux
ge is a shmem file page. Then I separated the code for shmem file pages
into the last patch of this series.
Signed-off-by: Hui Zhu
---
include/linux/rmap.h | 5
mm/Kconfig | 4 +++
mm/page_io.c | 11 ---
mm/rmap.c| 28 ++
mm/vmscan.c
This patch adds the full non-swap support for shmem file pages.
To make sure a page is a shmem file page, check mapping->a_ops == &shmem_aops.
I think it is really a hacky way.
Not a lot of shmem file pages will be swapped out anyway.
Signed-off-by: Hui Zhu
---
drivers/block/zram/zra
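The check quoted above, wrapped as a hedged helper; shmem_aops is the real shmem address_space_operations, the helper name is ours:

static inline bool page_is_shmem(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	return mapping && mapping->a_ops == &shmem_aops;
}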
The idea of this patch is the same as the previous version [1], but it uses
the migration framework in [2].
[1] http://comments.gmane.org/gmane.linux.kernel.mm/140014
[2] https://lkml.org/lkml/2015/7/7/21
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 214
These patches are updated according to the review of the previous version [1].
So they are based on "[RFCv3 0/5] enable migration of driver pages" [2]
and "[RFC zsmalloc 0/4] meta diet" [3].
Hui Zhu (3):
zsmalloc: make struct can move
zsmalloc: mark its page "PageMobile"
zr
kml/2015/8/10/90
[2] https://lkml.org/lkml/2015/7/7/21
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 178 ++
1 file changed, 104 insertions(+), 74 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1b18144..57c91a5 100644
--- a/mm/
Change the flags when calling zs_create_pool to make zram allocate movable
zsmalloc pages.
Signed-off-by: Hui Zhu
---
drivers/block/zram/zram_drv.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 9fa15bb
On Thu, Oct 15, 2015 at 5:53 PM, Minchan Kim wrote:
> On Thu, Oct 15, 2015 at 11:27:15AM +0200, Vlastimil Babka wrote:
>> On 10/15/2015 11:09 AM, Hui Zhu wrote:
>> >I got that add function interfaces is really not a good idea.
>> >So I add a new struct migration to p
Thanks. I will post a new version later.
Best,
Hui
On Tue, Oct 13, 2015 at 4:00 PM, Sergey Senozhatsky
wrote:
> On (10/13/15 14:31), Hui Zhu wrote:
>> Signed-off-by: Hui Zhu
>
> s/unless/useless/
>
> other than that
>
> Reviewed-by: Sergey Senozhatsky
>
Signed-off-by: Hui Zhu
Reviewed-by: Sergey Senozhatsky
---
mm/zsmalloc.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f135b1b..c7338f0 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1428,8 +1428,6 @@ static void obj_free(struct zs_pool *pool
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f135b1b..c7338f0 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1428,8 +1428,6 @@ static void obj_free(struct zs_pool *pool, struct
size_class *class
reason why there is no problem until now is that a huge-class page is
born with ZS_FULL so it couldn't be migrated.
Therefore, it shouldn't be a real bug in practice.
However, we need this patch for the future work "VM-aware zsmalloced
page migration" to reduce external fragmentation.
Sign
On Tue, Oct 6, 2015 at 9:54 PM, Minchan Kim wrote:
> Hello,
>
> On Mon, Oct 05, 2015 at 04:23:01PM +0800, Hui Zhu wrote:
>> In function obj_malloc:
>> if (!class->huge)
>> /* record handle in the header of allocated chunk */
>>
Use page_private(page) as a value rather than a pointer
in obj_to_head.
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f135b1b..e881d4f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -824,7 +824,7 @@ sta
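The one-line fix, reconstructed as a sketch: for a huge-class object the handle is stored in page_private(page) itself, so it must be returned as a value:

	if (class->huge) {
		VM_BUG_ON_PAGE(!is_first_page(page), page);
		return page_private(page);
		/* was: return *(unsigned long *)page_private(page); */
	}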
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f135b1b..f62f2fb 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -38,6 +38,7 @@
* page->lru: links together first pages of various zspa
Signed-off-by: Hui Zhu
---
mm/zsmalloc.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f135b1b..1f66d5b 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -38,6 +38,7 @@
* page->lru: links together first pages of various zspa
*
Call for Topics and Sponsors
Workshop on Open Source Development Tools 2015
Beijing, China
Sep. 12, 2015 (TBD)
HelloGCC Work Group (www.hellogcc.org)
*
On Wed, May 6, 2015 at 2:28 PM, Joonsoo Kim wrote:
> On Tue, May 05, 2015 at 11:22:59AM +0800, Hui Zhu wrote:
>> Change pfn_present to pfn_valid_within according to the review of Laura.
>>
>> I got a issue:
>> [ 214.294917] Unable to handle kernel NULL pointer derefer
if (!is_migrate_isolate_page(buddy)) {
But the beginning address of this part of the CMA memory is very close to
a part of memory that is reserved at boot time (not in the buddy system).
So add a check before accessing it.
Suggested-by: Laura Abbott
Suggested-by: Joonsoo Kim
Signed-off-by: Hui Zhu
---
mm/page_is
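A hedged sketch of the added check, in the style of mm/page_isolation.c: verify the buddy pfn has a valid struct page before looking at it:

	if (pfn_valid_within(page_to_pfn(buddy)) &&
	    !is_migrate_isolate_page(buddy)) {
		/* safe: the buddy is in the buddy system, not in the
		 * boot-time reserved region next to this CMA range */
	}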
On Wed, May 6, 2015 at 5:29 AM, Andrew Morton wrote:
> On Tue, 5 May 2015 11:22:59 +0800 Hui Zhu wrote:
>
>> Change pfn_present to pfn_valid_within according to the review of Laura.
>>
>> I got a issue:
>> [ 214.294917] Unable to handle kernel NULL pointer dere
emory is very close to a part of
memory that is reserved at boot time (not in the buddy system).
So add a check before accessing it.
Signed-off-by: Hui Zhu
---
mm/page_isolation.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 755
On Tue, May 5, 2015 at 2:34 AM, Laura Abbott wrote:
> On 05/04/2015 02:41 AM, Hui Zhu wrote:
>>
>> I got a issue:
>> [ 214.294917] Unable to handle kernel NULL pointer dereference at virtual
>> address 082a
>> [ 214.303013] pgd = cc97
>>
(not in the buddy system).
So add a check before accessing it.
Signed-off-by: Hui Zhu
---
mm/page_isolation.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 755a42c..434730b 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@
On Mon, Jan 19, 2015 at 2:55 PM, Minchan Kim wrote:
> Hello,
>
> On Sun, Jan 18, 2015 at 04:32:59PM +0800, Hui Zhu wrote:
>> From: Hui Zhu
>>
>> The original of this patch [1] is part of Joonsoo's CMA patch series.
>> I made a patch [2] to fix the iss
On Sun, Jan 18, 2015 at 6:19 PM, Vlastimil Babka wrote:
> On 18.1.2015 10:17, Hui Zhu wrote:
>>
>> From: Hui Zhu
>>
>> To test the patch [1], I use KGTP and a script [2] to show
>> NR_FREE_CMA_PAGES
>> and gross of cma_nr_free. The values are always not s
From: Hui Zhu
To test the patch [1], I used KGTP and a script [2] to show NR_FREE_CMA_PAGES
and the gross of cma_nr_free. The values were never the same.
I checked the page alloc and free code and found race conditions on
getting the migratetype in buffered_rmqueue.
Then I moved the code of
From: Hui Zhu
The original of this patch [1] is part of Joonsoo's CMA patch series.
I made a patch [2] to fix an issue in it. Joonsoo reminded me that this
issue affects the current kernel too, so I made a new one for upstream.
The current code treats free CMA pages as non-free if not ALLO
On Wed, Jan 7, 2015 at 4:45 PM, Vlastimil Babka wrote:
> On 12/30/2014 11:17 AM, Hui Zhu wrote:
>> The original of this patch [1] is used to fix the issue in Joonsoo's CMA
>> patch
>> "CMA: always treat free cma pages as non-free on watermark checking" [2].
data_breakpoint.ko can be inserted successfully but cannot catch any change
of the data on my side, because kallsyms_lookup_name returns 0 each time.
So add code to check the return value of kallsyms_lookup_name.
Signed-off-by: Hui Zhu
---
samples/hw_breakpoint/data_breakpoint.c | 17
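A hedged sketch of the check, following the style of samples/hw_breakpoint/data_breakpoint.c:

	attr.bp_addr = kallsyms_lookup_name(ksym_name);
	if (!attr.bp_addr) {
		printk(KERN_INFO "symbol %s not found, cannot set the "
		       "hardware breakpoint\n", ksym_name);
		return -ENXIO;
	}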
of this order is subtracted twice. This bug will make
__zone_watermark_ok return false more often.
This patch adds cma_free_area to struct free_area to record just the number
of CMA pages, and adds it back in the order loop to handle the
double-subtraction issue.
[1] https://lkml.org/lkml/2014/12/25/43
[2] h
On Tue, Dec 30, 2014 at 12:48 PM, Joonsoo Kim wrote:
> On Thu, Dec 25, 2014 at 05:43:26PM +0800, Hui Zhu wrote:
>> In Joonsoo's CMA patch "CMA: always treat free cma pages as non-free on
>> watermark checking" [1], it changes __zone_watermark_ok to substruct CMA
>
patches to
handle the issues.
And I merged cma_alloc_counter from [2] to make cma_alloc work better.
This patchset is based on aa39477b5692611b91ac9455ae588738852b3f60 and [1].
[1] https://lkml.org/lkml/2014/5/28/64
[2] https://lkml.org/lkml/2014/10/15/623
Hui Zhu (3):
CMA: Fix the bug that CMA's
__rmqueue doesn't call __rmqueue_cma when
cma_alloc works.
[1] https://lkml.org/lkml/2014/10/24/26
Signed-off-by: Hui Zhu
---
include/linux/cma.h | 2 ++
mm/cma.c| 6 ++
mm/page_alloc.c | 8 +++-
3 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/include/
e double-subtraction
issue.
[1] https://lkml.org/lkml/2014/5/28/110
Signed-off-by: Hui Zhu
Signed-off-by: Weixing Liu
---
include/linux/mmzone.h | 3 +++
mm/page_alloc.c| 29 -
2 files changed, 31 insertions(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b
the others. This device allocating a lot of
MIGRATE_UNMOVABLE memory affects the behavior of this zone's memory allocation.
This patch changes __rmqueue to let nr_try_movable record all the memory
allocations of normal memory.
[1] https://lkml.org/lkml/2014/5/28/64
Signed-off-by: Hui Zhu
Signed-off-
On Fri, Oct 24, 2014 at 1:28 PM, Joonsoo Kim wrote:
> On Thu, Oct 16, 2014 at 11:35:51AM +0800, Hui Zhu wrote:
>> If page alloc function __rmqueue try to get pages from MIGRATE_MOVABLE and
>> conditions (cma_alloc_counter, cma_aggressive_free_min, cma_alloc_counter)
>> allow
On Tue, Nov 4, 2014 at 3:53 PM, Minchan Kim wrote:
> Hello,
>
> On Wed, Oct 29, 2014 at 03:43:33PM +0100, Vlastimil Babka wrote:
>> On 10/16/2014 10:55 AM, Laura Abbott wrote:
>> >On 10/15/2014 8:35 PM, Hui Zhu wrote:
>> >
>> >It's good to see anoth
On Wed, Oct 29, 2014 at 10:43 PM, Vlastimil Babka wrote:
> On 10/16/2014 10:55 AM, Laura Abbott wrote:
>>
>> On 10/15/2014 8:35 PM, Hui Zhu wrote:
>>
>> It's good to see another proposal to fix CMA utilization. Do you have
>> any data about the success r
On Mon, Nov 3, 2014 at 4:22 PM, Heesub Shin wrote:
> Hello,
>
>
> On 10/31/2014 04:25 PM, Joonsoo Kim wrote:
>>
>> In free_pcppages_bulk(), we use cached migratetype of freepage
>> to determine type of buddy list where freepage will be added.
>> This information is stored when freepage is added to
On Fri, Oct 24, 2014 at 1:25 PM, Joonsoo Kim wrote:
> On Thu, Oct 16, 2014 at 11:35:47AM +0800, Hui Zhu wrote:
>> In fallbacks of page_alloc.c, MIGRATE_CMA is the fallback of
>> MIGRATE_MOVABLE.
>> MIGRATE_MOVABLE will use MIGRATE_CMA when it doesn't have a page in
>&
On Wed, May 28, 2014 at 3:04 PM, Joonsoo Kim wrote:
> CMA is introduced to provide physically contiguous pages at runtime.
> For this purpose, it reserves memory at boot time. Although it reserve
> memory, this reserved memory can be used for movable memory allocation
> request. This usecase is be
tiguous area, increase the value of
cma_alloc_counter; CMA_AGGRESSIVE will then not work in the page alloc code.
After the reserve-custom-contiguous-area function returns, decrease the
value of cma_alloc_counter.
Signed-off-by: Hui Zhu
---
include/linux/cma.h | 7 +++
kernel/sysctl.c | 27 ++
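A hedged sketch of the counter's lifecycle as described; the atomic type and the call site are assumptions, not the patch's exact code:

	atomic_inc(&cma_alloc_counter);	/* page alloc sees CMA_AGGRESSIVE off */
	page = cma_alloc(cma, count, align);	/* reserve the contiguous area */
	atomic_dec(&cma_alloc_counter);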
Update this patch according to the comments from Rafael.
Function shrink_all_memory_for_cma tries to free `nr_to_reclaim' pages of
memory.
The CMA aggressive shrink function will call this function to free
`nr_to_reclaim' pages of memory.
Signed-off-by: Hui Zhu
---
mm/vms
In the fallbacks of page_alloc.c, MIGRATE_CMA is the fallback of
MIGRATE_MOVABLE.
MIGRATE_MOVABLE will use MIGRATE_CMA when it doesn't have a page of the
order that the Linux kernel wants.
If a system runs a lot of user-space programs, for instance an Android
board, most of the memory is in MIGRATE_