[PATCH v2 15/15] slab: rename slab_bufctl to slab_freelist

2013-10-16 Thread Joonsoo Kim
Now, bufctl is not a proper name for this array, so change it. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index fbb594f..af2db76 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -2550,7 +2550,7 @@ static struct freelist *alloc_slabmgmt(struct kmem_cache *cachep, return

[PATCH v2 13/15] slab: use struct page for slab management

2013-10-16 Thread Joonsoo Kim
mechanical ones and there is no functional change. Signed-off-by: Joonsoo Kim diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 8b85d8c..4e17190 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -42,18 +42,22 @@ struct page { /* First double
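For illustration only, a hedged sketch of the idea behind this patch: slab management fields are overlaid on existing struct page members through unions, so no separate struct slab needs to be allocated. The field grouping below is invented for the example and does not match the real mm_types.h diff.

struct address_space;

/* toy layout, not the real struct page */
struct toy_page {
	unsigned long flags;
	union {
		struct address_space *mapping;	/* page cache: backing mapping */
		void *s_mem;			/* slab: address of the first object */
	};
	union {
		unsigned long index;		/* page cache: offset in the mapping */
		void *freelist;			/* slab: array of free-object indexes */
	};
	union {
		unsigned long counters;
		unsigned int active;		/* slab: number of objects in use */
	};
};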

[PATCH v2 05/15] slab: remove cachep in struct slab_rcu

2013-10-16 Thread Joonsoo Kim
We can get cachep using page in struct slab_rcu, so remove it. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 71ba8f5..7e1aabe 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -204,7 +204,6 @@ typedef unsigned int kmem_bufctl_t; */ struct slab_rcu

[PATCH v2 07/15] slab: use well-defined macro, virt_to_slab()

2013-10-16 Thread Joonsoo Kim
This is trivial change, just use well-defined macro. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 84c4ed6..f9e676e 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -2865,7 +2865,6 @@ static inline void verify_redzone_free(struct kmem_cache *cache

[PATCH v2 14/15] slab: remove useless statement for checking pfmemalloc

2013-10-16 Thread Joonsoo Kim
Now, virt_to_page(page->s_mem) is the same as the page, because the slab uses this structure for management. So remove the useless statement. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 0e7f2e7..fbb594f 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -750,9 +750,7 @@ static str

[PATCH v2 12/15] slab: replace free and inuse in struct slab with newly introduced active

2013-10-16 Thread Joonsoo Kim
Now, free in struct slab has the same meaning as inuse. So remove both and replace them with active. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index c271d5b..2ec2336 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -174,8 +174,7 @@ struct slab { struct { struct

[PATCH v2 11/15] slab: remove SLAB_LIMIT

2013-10-16 Thread Joonsoo Kim
It's useless now, so remove it. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 6ced1cc..c271d5b 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -163,8 +163,6 @@ */ static bool pfmemalloc_active __read_mostly; -#define SLAB_LIMIT (((unsigned int)(~0

[PATCH v2 06/15] slab: overloading the RCU head over the LRU for RCU free

2013-10-16 Thread Joonsoo Kim
: Christoph Lameter Signed-off-by: Joonsoo Kim diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index d9851ee..8b85d8c 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -130,6 +130,9 @@ struct page { struct list_head list; /* slobs list of pages

[PATCH v2 08/15] slab: use __GFP_COMP flag for allocating slab pages

2013-10-16 Thread Joonsoo Kim
If we use the 'struct page' of the first page as 'struct slab', there is no advantage in not using __GFP_COMP. So use the __GFP_COMP flag in all cases. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index f9e676e..75c6082 100644 --- a/mm/slab.c +++ b/mm/slab.c

[PATCH v2 09/15] slab: change the management method of free objects of the slab

2013-10-16 Thread Joonsoo Kim
is method. struct slab's free = 0 kmem_bufctl_t array: 6 3 7 2 5 4 0 1 To get free objects, we access this array with following pattern. 0 -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 This may help cache line footprint if slab has many objects, and, in addition, this makes code much
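As a rough illustration of the scheme described above, here is a minimal userspace sketch (invented names, not the actual mm/slab.c code) in which the freelist is a plain array of object indexes and 'active' counts how many objects are in use:

#include <stdio.h>

#define NR_OBJS 8

struct toy_slab {
	unsigned int active;			/* objects currently allocated */
	unsigned char freelist[NR_OBJS];	/* indexes of free objects */
};

static int toy_alloc(struct toy_slab *s)
{
	if (s->active == NR_OBJS)
		return -1;			/* slab is full */
	/* allocation walks the array sequentially: 0 -> 1 -> 2 -> ... */
	return s->freelist[s->active++];
}

static void toy_free(struct toy_slab *s, unsigned char obj_idx)
{
	/* the freed index is stored where the next allocation will look */
	s->freelist[--s->active] = obj_idx;
}

int main(void)
{
	struct toy_slab s = { 0, { 6, 3, 7, 2, 5, 4, 0, 1 } };

	for (int i = 0; i < 3; i++)
		printf("allocated object %d\n", toy_alloc(&s));
	toy_free(&s, 7);
	printf("next allocation reuses object %d\n", toy_alloc(&s));
	return 0;
}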

[PATCH v2 10/15] slab: remove kmem_bufctl_t

2013-10-16 Thread Joonsoo Kim
Now, we have changed the management method of free objects of the slab, and there is no need for the special values BUFCTL_END, BUFCTL_FREE and BUFCTL_ACTIVE. So remove them. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 05fe37e..6ced1cc 100644 --- a/mm/slab.c +++ b/mm/slab.c

[PATCH v2 04/15] slab: remove nodeid in struct slab

2013-10-16 Thread Joonsoo Kim
We can get nodeid using address translation, so this field is not useful. Therefore, remove it. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 34eb115..71ba8f5 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -222,7 +222,6 @@ struct slab { void *s_mem

[PATCH v2 00/15] slab: overload struct slab over struct page to reduce memory usage

2013-10-16 Thread Joonsoo Kim
s are also improved by 3.1% over baseline. I think that this patchset deserves to be merged, since it reduces memory usage and also improves performance. :) Please let me know the experts' opinion. Thanks. This patchset is based on v3.12-rc5. Joonsoo Kim (15): slab: correct pfmemalloc ch

[PATCH v2 02/15] slab: change return type of kmem_getpages() to struct page

2013-10-16 Thread Joonsoo Kim
b1f9 mm/slab.o * After text data bss dec hex filename 22074 23434 4 45512 b1c8 mm/slab.o And this helps the following patch to remove struct slab's colouroff. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 0b

[PATCH v2 03/15] slab: remove colouroff in struct slab

2013-10-16 Thread Joonsoo Kim
Now there is no user of colouroff, so remove it. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 7d79bd7..34eb115 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -219,7 +219,6 @@ struct slab { union { struct

Re: [PATCH v2 01/15] slab: correct pfmemalloc check

2013-10-16 Thread Joonsoo Kim
On Wed, Oct 16, 2013 at 03:27:54PM +, Christoph Lameter wrote: > On Wed, 16 Oct 2013, Joonsoo Kim wrote: > > > --- a/mm/slab.c > > +++ b/mm/slab.c > > @@ -930,7 +930,8 @@ static void *__ac_put_obj(struct kmem_cache *cachep, > > struct array_cache *a

Re: [PATCH v2 00/15] slab: overload struct slab over struct page to reduce memory usage

2013-10-16 Thread Joonsoo Kim
On Wed, Oct 16, 2013 at 01:34:57PM -0700, Andrew Morton wrote: > On Wed, 16 Oct 2013 17:43:57 +0900 Joonsoo Kim wrote: > > > There is two main topics in this patchset. One is to reduce memory usage > > and the other is to change a management method of free objects of a slab.

[PATCH v2 1/5] slab: factor out calculate nr objects in cache_estimate

2013-10-16 Thread Joonsoo Kim
This logic is not simple to understand, so factor it out into a separate function to help readability. Additionally, we can use this change in the following patch, which implements a differently sized freelist index according to nr objects. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm

[PATCH v2 5/5] slab: make more slab management structure off the slab

2013-10-16 Thread Joonsoo Kim
bytes so that 97 bytes, that is, more than 75% of object size, are wasted. In a 64 byte sized slab case, no space is wasted if we use on-slab. So set off-slab determining constraint to 128 bytes. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index

[PATCH v2 4/5] slab: introduce byte sized index for the freelist of a slab

2013-10-16 Thread Joonsoo Kim
9837 seconds time elapsed ( +- 0.21% ) cache-misses are reduced by this patchset, roughly 5%. And elapsed times are improved by 1%. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 3cee122..2f379ba 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -627,8 +627,8 @@ s

[PATCH v2 0/5] slab: implement byte sized indexes for the freelist of a slab

2013-10-16 Thread Joonsoo Kim
https://lkml.org/lkml/2013/8/23/315 Patches are on top of my previous posting named as "slab: overload struct slab over struct page to reduce memory usage" https://lkml.org/lkml/2013/10/16/155 Thanks. Joonsoo Kim (5): slab: factor out calculate nr objects in cache_estimate slab: intr

[PATCH v2 3/5] slab: restrict the number of objects in a slab

2013-10-16 Thread Joonsoo Kim
that the number of objects in a slab is less than or equal to 256 for a slab with 1 page. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index ec197b9..3cee122 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -157,6 +157,10 @@ #define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN #endif +/* We
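For context, a back-of-the-envelope sketch of why 256 is the bound (macro names invented, 4KB page assumed): the smallest kmalloc object is 16 bytes, so a one-page slab can never hold more than 4096 / 16 = 256 objects, and a byte-wide freelist index (values 0..255) can then name every object in such a slab.

#define TOY_PAGE_SIZE		4096
#define TOY_MIN_OBJ_SIZE	16
/* worst case for a one-page slab: 4096 / 16 = 256 objects */
#define TOY_MAX_OBJS_PER_SLAB	(TOY_PAGE_SIZE / TOY_MIN_OBJ_SIZE)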

[PATCH v2 2/5] slab: introduce helper functions to get/set free object

2013-10-16 Thread Joonsoo Kim
In the following patches, the way to get/set free objects from the freelist is changed so that simple casting doesn't work for it. Therefore, introduce helper functions. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index cb0a734..ec197b9 100644 ---
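A hedged sketch of what such helpers can look like once the index type is no longer a plain unsigned int; the names and types below are illustrative, not necessarily the ones the patch adds. Centralizing the cast in two helpers means only these two places change when the index width changes.

typedef unsigned char toy_freelist_idx_t;	/* later patches shrink the index to one byte */

static inline toy_freelist_idx_t toy_get_free_obj(void *freelist, unsigned int idx)
{
	return ((toy_freelist_idx_t *)freelist)[idx];
}

static inline void toy_set_free_obj(void *freelist, unsigned int idx,
				    toy_freelist_idx_t val)
{
	((toy_freelist_idx_t *)freelist)[idx] = val;
}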

Re: [PATCH v2 13/15] slab: use struct page for slab management

2013-10-18 Thread JoonSoo Kim
2013/10/18 Christoph Lameter : > On Wed, 16 Oct 2013, Joonsoo Kim wrote: > >> - * see PAGE_MAPPING_ANON below. >> - */ >> + union { >> + struct address_space *mapping;

Re: [PATCH v2 3/5] slab: restrict the number of objects in a slab

2013-10-18 Thread JoonSoo Kim
2013/10/18 Christoph Lameter : > n Thu, 17 Oct 2013, Joonsoo Kim wrote: > >> To prepare to implement byte sized index for managing the freelist >> of a slab, we should restrict the number of objects in a slab to be less >> or equal to 256, since byte only represent 256 diff

Re: [PATCH v2 08/15] slab: use __GFP_COMP flag for allocating slab pages

2013-10-18 Thread JoonSoo Kim
2013/10/18 Christoph Lameter : > On Wed, 16 Oct 2013, Joonsoo Kim wrote: > >> If we use 'struct page' of first page as 'struct slab', there is no >> advantage not to use __GFP_COMP. So use __GFP_COMP flag for all the cases. > > Yes this is going to ma

Re: [PATCH v2 13/15] slab: use struct page for slab management

2013-10-30 Thread Joonsoo Kim
n_area callsite, since it is useless now. 2. %s/'struct freelist *'/'void *' -8<--- From 6d11304824a3b8c3bf7574323a3e55471cc26937 Mon Sep 17 00:00:00 2001 From: Joonsoo Kim Date: Wed, 28 Aug 2013 16:30:27 +0900 Subje

[PATCH v2 17/15] slab: replace non-existing 'struct freelist *' with 'void *'

2013-10-30 Thread Joonsoo Kim
There is no 'struct freelist', but the code uses pointers to 'struct freelist'. Although the compiler doesn't complain about this wrong usage and the code works fine, fixing it is better. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index a8a9349..a983e

[PATCH v2 16/15] slab: fix to calm down kmemleak warning

2013-10-30 Thread Joonsoo Kim
After using struct page for slab management, we should not call kmemleak_scan_area(), since struct page isn't a tracking object of kmemleak. Without this patch, if CONFIG_DEBUG_KMEMLEAK is enabled, many kmemleak warnings are printed. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c

Re: [PATCH v2 13/15] slab: use struct page for slab management

2013-10-30 Thread Joonsoo Kim
On Wed, Oct 30, 2013 at 10:42:14AM +0200, Pekka Enberg wrote: > On 10/30/2013 10:28 AM, Joonsoo Kim wrote: > >If you want an incremental patch against original patchset, > >I can do it. Please let me know what you want. > > Yes, please. Incremental is much easier to deal with

[PATCH v3 5/5] slab: make more slab management structure off the slab

2013-12-02 Thread Joonsoo Kim
bytes so that 97 bytes, that is, more than 75% of object size, are wasted. In a 64 byte sized slab case, no space is wasted if we use on-slab. So set off-slab determining constraint to 128 bytes. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index

[PATCH v3 1/5] slab: factor out calculate nr objects in cache_estimate

2013-12-02 Thread Joonsoo Kim
This logic is not simple to understand, so factor it out into a separate function to help readability. Additionally, we can use this change in the following patch, which implements a differently sized freelist index according to nr objects. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim

[PATCH v3 4/5] slab: introduce byte sized index for the freelist of a slab

2013-12-02 Thread Joonsoo Kim
.21% ) cache-misses are reduced by this patchset, roughly 5%. And elapsed times are improved by 1%. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 7c3c132..7fab788 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -634,8 +634,8 @@ static void cache_estimate(unsigned long gfporde

[PATCH v3 0/5] slab: implement byte sized indexes for the freelist of a slab

2013-12-02 Thread Joonsoo Kim
https://lkml.org/lkml/2013/8/23/315 Patches are on top of v3.13-rc1. Thanks. Joonsoo Kim (5): slab: factor out calculate nr objects in cache_estimate slab: introduce helper functions to get/set free object slab: restrict the number of objects in a slab slab: introduce byte sized index for

[PATCH v3 3/5] slab: restrict the number of objects in a slab

2013-12-02 Thread Joonsoo Kim
and give up this optimization. Signed-off-by: Joonsoo Kim diff --git a/include/linux/slab.h b/include/linux/slab.h index c2bba24..23e1fa1 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -201,6 +201,17 @@ struct kmem_cache { #ifndef KMALLOC_SHIFT_LOW #define KMALLOC_SHIFT_LOW

[PATCH v3 2/5] slab: introduce helper functions to get/set free object

2013-12-02 Thread Joonsoo Kim
In the following patches, the way to get/set free objects from the freelist is changed so that simple casting doesn't work for it. Therefore, introduce helper functions. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index e749f75..77f9eae 100644 ---

Re: [PATCH 1/9] mm/rmap: recompute pgoff for huge page

2013-12-02 Thread Joonsoo Kim
On Mon, Dec 02, 2013 at 02:44:34PM -0800, Andrew Morton wrote: > On Thu, 28 Nov 2013 16:48:38 +0900 Joonsoo Kim wrote: > > > We have to recompute pgoff if the given page is huge, since result based > > on HPAGE_SIZE is not approapriate for scanning the vma interval tree, as &

Re: [PATCH 4/9] mm/rmap: make rmap_walk to get the rmap_walk_control argument

2013-12-02 Thread Joonsoo Kim
On Mon, Dec 02, 2013 at 02:51:05PM -0800, Andrew Morton wrote: > On Mon, 02 Dec 2013 15:09:33 -0500 Naoya Horiguchi > wrote: > > > > --- a/include/linux/rmap.h > > > +++ b/include/linux/rmap.h > > > @@ -235,11 +235,16 @@ struct anon_vma *page_lock_anon_vma_read(struct > > > page *page); > > >

Re: [PATCH 5/9] mm/rmap: extend rmap_walk_xxx() to cope with different cases

2013-12-02 Thread Joonsoo Kim
On Mon, Dec 02, 2013 at 03:09:42PM -0500, Naoya Horiguchi wrote: > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h > > index 0f65686..58624b4 100644 > > --- a/include/linux/rmap.h > > +++ b/include/linux/rmap.h > > @@ -239,6 +239,12 @@ struct rmap_walk_control { > > int (*main)(struc

Re: [PATCH 6/9] mm/rmap: use rmap_walk() in try_to_unmap()

2013-12-02 Thread Joonsoo Kim
On Mon, Dec 02, 2013 at 03:01:07PM -0800, Andrew Morton wrote: > On Thu, 28 Nov 2013 16:48:43 +0900 Joonsoo Kim wrote: > > > Now, we have an infrastructure in rmap_walk() to handle difference > > from variants of rmap traversing functions. > > > > So, just use it

Re: [PATCH v3 5/5] slab: make more slab management structure off the slab

2013-12-02 Thread Joonsoo Kim
On Mon, Dec 02, 2013 at 02:58:41PM +, Christoph Lameter wrote: > On Mon, 2 Dec 2013, Joonsoo Kim wrote: > > > Now, the size of the freelist for the slab management diminish, > > so that the on-slab management structure can waste large space > > if the object of the

Re: [PATCH v3 4/5] slab: introduce byte sized index for the freelist of a slab

2013-12-02 Thread Joonsoo Kim
On Mon, Dec 02, 2013 at 05:49:42PM +0900, Joonsoo Kim wrote: > Currently, the freelist of a slab consist of unsigned int sized indexes. > Since most of slabs have less number of objects than 256, large sized > indexes is needless. For example, consider the minimum kmalloc slab. It's

Re: Slab BUG with DEBUG_* options

2013-12-03 Thread Joonsoo Kim
2013/12/3 Pekka Enberg : > On 11/30/2013 01:42 PM, Meelis Roos wrote: >> >> I am debugging a reboot problem on Sun Ultra 5 (sparc64) with 512M RAM >> and turned on DEBUG_PAGEALLOC DEBUG_SLAB and DEBUG_SLAB_LEAK (and most >> other debug options) and got the following BUG and hang on startup. This >>

[PATCH v2 6/9] mm/rmap: use rmap_walk() in try_to_unmap()

2013-12-03 Thread Joonsoo Kim
(). Reviewed-by: Naoya Horiguchi Signed-off-by: Joonsoo Kim diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 616aa4d..2462458 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -190,7 +190,7 @@ int page_referenced_one(struct page *, struct vm_area_struct *, int

[PATCH v2 0/9] mm/rmap: unify rmap traversing functions through rmap_walk

2013-12-03 Thread Joonsoo Kim
. text data bss dec hex filename 10640 1 16 10657 29a1 mm/rmap.o 10047 1 16 10064 2750 mm/rmap.o 13823 705 8288 22816 5920 mm/ksm.o 13199 705 8288 22192 56b0 mm/ksm.o Thanks. Joonsoo Kim (9): mm/rmap: recompute

[PATCH v2 9/9] mm/rmap: use rmap_walk() in page_mkclean()

2013-12-03 Thread Joonsoo Kim
use rmap_walk() in page_mkclean(). Reviewed-by: Naoya Horiguchi Signed-off-by: Joonsoo Kim diff --git a/mm/rmap.c b/mm/rmap.c index 7944d4b..d792e71 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -808,12 +808,13 @@ int page_referenced(struct page *page, } static int page_mkclean_one(struct p

[PATCH v2 7/9] mm/rmap: use rmap_walk() in try_to_munlock()

2013-12-03 Thread Joonsoo Kim
non, try_to_unmap_file 2. mechanical change to use rmap_walk() in try_to_munlock(). 3. copy and paste comments. Reviewed-by: Naoya Horiguchi Signed-off-by: Joonsoo Kim diff --git a/include/linux/ksm.h b/include/linux/ksm.h index 0eef8cb..91b9719 100644 --- a/include/linux/ksm.h +++ b/include/li

[PATCH v2 8/9] mm/rmap: use rmap_walk() in page_referenced()

2013-12-03 Thread Joonsoo Kim
-by: Naoya Horiguchi Signed-off-by: Joonsoo Kim diff --git a/include/linux/ksm.h b/include/linux/ksm.h index 91b9719..3be6bb1 100644 --- a/include/linux/ksm.h +++ b/include/linux/ksm.h @@ -73,8 +73,6 @@ static inline void set_page_stable_node(struct page *page, struct page *ksm_might_need_to_c

[PATCH v2 4/9] mm/rmap: make rmap_walk to get the rmap_walk_control argument

2013-12-03 Thread Joonsoo Kim
separate, because it clarifies the changes. Reviewed-by: Naoya Horiguchi Signed-off-by: Joonsoo Kim diff --git a/include/linux/ksm.h b/include/linux/ksm.h index 45c9b6a..0eef8cb 100644 --- a/include/linux/ksm.h +++ b/include/linux/ksm.h @@ -76,8 +76,7 @@ struct page *ksm_might_need_to_copy(struct

[PATCH v2 5/9] mm/rmap: extend rmap_walk_xxx() to cope with different cases

2013-12-03 Thread Joonsoo Kim
this patch, I introduce 4 function pointers to handle above differences. Signed-off-by: Joonsoo Kim diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 6a456ce..616aa4d 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -235,10 +235,25 @@ struct anon_vma
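A hedged sketch of the control structure the changelog describes: a few hooks let one generic rmap_walk() serve callers such as page_referenced(), try_to_unmap() and page_mkclean(). The field names follow the idea, not necessarily this exact patch revision.

#include <stdbool.h>

struct page;
struct vm_area_struct;
struct anon_vma;

struct toy_rmap_walk_control {
	void *arg;				/* caller-private state */
	/* work to do for each mapping of the page */
	int (*rmap_one)(struct page *page, struct vm_area_struct *vma,
			unsigned long addr, void *arg);
	/* lets the walk stop early once the caller is satisfied */
	int (*done)(struct page *page);
	/* caller-specific locking of the anon_vma */
	struct anon_vma *(*anon_lock)(struct page *page);
	/* skip VMAs the caller is not interested in */
	bool (*invalid_vma)(struct vm_area_struct *vma, void *arg);
};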

[PATCH v2 1/9] mm/rmap: recompute pgoff for huge page

2013-12-03 Thread Joonsoo Kim
mpute pgoff for unmapping huge page"). To handle both the cases, normal page for page cache and hugetlb page, by same way, we can use compound_page(). It returns 0 on non-compound page and it also returns proper value on compound page. Signed-off-by: Joonsoo Kim diff --git a/mm/rmap.c b/mm/

[PATCH v2 3/9] mm/rmap: factor lock function out of rmap_walk_anon()

2013-12-03 Thread Joonsoo Kim
oring lock function for anon_lock out of rmap_walk_anon(). It will be used in case of removing migration entry and in default of rmap_walk_anon(). Reviewed-by: Naoya Horiguchi Signed-off-by: Joonsoo Kim diff --git a/mm/rmap.c b/mm/rmap.c index a387c44..91c73c4 100644 --- a/mm/rmap.c +++ b/mm/r

[PATCH v2 2/9] mm/rmap: factor nonlinear handling out of try_to_unmap_file()

2013-12-03 Thread Joonsoo Kim
of it. Therefore it is better to factor nonlinear handling out of try_to_unmap_file() in order to merge all kinds of rmap traverse functions easily. Reviewed-by: Naoya Horiguchi Signed-off-by: Joonsoo Kim diff --git a/mm/rmap.c b/mm/rmap.c index 20c1a0d..a387c44 100644 --- a/mm/rmap.c +++ b/mm/r

Re: [patch 2/2] fs: buffer: move allocation failure loop into the allocator

2013-12-03 Thread Joonsoo Kim
> of the page allocator. > > Can we please make slab stop doing this? > > radix_tree_nodes are 560 bytes and the kernel often allocates them in > times of extreme memory stress. We really really want them to be > backed by order=0 pages. Hello, Andrew. Following patch would fix this problem. Thanks.

Re: [patch 2/2] fs: buffer: move allocation failure loop into the allocator

2013-12-03 Thread Joonsoo Kim
On Tue, Dec 03, 2013 at 06:07:17PM -0800, Andrew Morton wrote: > On Wed, 4 Dec 2013 10:52:18 +0900 Joonsoo Kim wrote: > > > SLUB already try to allocate high order page with clearing __GFP_NOFAIL. > > But, when allocating shadow page for kmemcheck, it missed clearing > >

Re: [patch 2/2] fs: buffer: move allocation failure loop into the allocator

2013-12-04 Thread Joonsoo Kim
2013/12/5 Christoph Lameter : > On Tue, 3 Dec 2013, Andrew Morton wrote: > >> > page = alloc_slab_page(alloc_gfp, node, oo); >> > if (unlikely(!page)) { >> > oo = s->min; >> >> What is the value of s->min? Please tell me it's zero. > > It usually is. > >> > @@ -1349,7 +1350,7 @

Re: possible regression on 3.13 when calling flush_dcache_page

2013-12-12 Thread Joonsoo Kim
On Thu, Dec 12, 2013 at 03:36:19PM +0100, Ludovic Desroches wrote: > fix mmc mailing list address error > > On Thu, Dec 12, 2013 at 03:31:50PM +0100, Ludovic Desroches wrote: > > Hi, > > > > With v3.13-rc3 I have an error when the atmel-mci driver calls > > flush_dcache_page (log at the end of th

Re: [PATCH V2 0/6] Memory compaction efficiency improvements

2013-12-12 Thread Joonsoo Kim
> >>stress-highalloc > >> 3.13-rc2 3.13-rc2 > >> 3.13-rc2 3.13-rc2 3.13-rc2 > >> 2-thp 3-thp > >> 4-thp 5-thp 6-thp > >>

[PATCH v3 4/6] mm/migrate: remove putback_lru_pages, fix comment on putback_movable_pages

2013-12-12 Thread Joonsoo Kim
now, so fix it. Reviewed-by: Wanpeng Li Signed-off-by: Joonsoo Kim diff --git a/include/linux/migrate.h b/include/linux/migrate.h index f5096b5..e4671f9 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -35,7 +35,6 @@ enum migrate_reason { #ifdef CONFIG_MIGRATION -extern

[PATCH v3 0/6] correct and clean-up migration related stuff

2013-12-12 Thread Joonsoo Kim
s() on 4th patch - Add Acked-by and Reviewed-by Here is the patchset for correcting and cleaning up migration-related stuff. These are random corrections and clean-ups, so please see each patch ;) Thanks. Naoya Horiguchi (1): mm/migrate: add comment about permanent failure path Joonsoo Kim (5):

[PATCH v3 3/6] mm/mempolicy: correct putback method for isolate pages if failed

2013-12-12 Thread Joonsoo Kim
guchi Reviewed-by: Wanpeng Li Signed-off-by: Joonsoo Kim diff --git a/mm/mempolicy.c b/mm/mempolicy.c index eca4a31..6d04d37 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -1318,7 +1318,7 @@ static long do_mbind(unsigned long start, unsigned long len, if (nr_failed &

[PATCH v3 6/6] mm/migrate: remove unused function, fail_migrate_page()

2013-12-12 Thread Joonsoo Kim
fail_migrate_page() isn't used anywhere, so remove it. Acked-by: Christoph Lameter Reviewed-by: Naoya Horiguchi Reviewed-by: Wanpeng Li Signed-off-by: Joonsoo Kim diff --git a/include/linux/migrate.h b/include/linux/migrate.h index e4671f9..4308018 100644 --- a/include/linux/migrate.h

[PATCH v3 5/6] mm/compaction: respect ignore_skip_hint in update_pageblock_skip

2013-12-12 Thread Joonsoo Kim
it on update_pageblock_skip() to prevent setting the wrong information. Cc: # 3.7+ Acked-by: Vlastimil Babka Reviewed-by: Naoya Horiguchi Reviewed-by: Wanpeng Li Signed-off-by: Joonsoo Kim diff --git a/mm/compaction.c b/mm/compaction.c index 805165b..f58bcd0 100644 --- a/mm/compaction.c +++ b/mm/compact

[PATCH v3 2/6] mm/migrate: correct failure handling if !hugepage_migration_support()

2013-12-12 Thread Joonsoo Kim
put back the new hugepage if !hugepage_migration_support(). If not, we would leak hugepage memory. Acked-by: Christoph Lameter Reviewed-by: Wanpeng Li Signed-off-by: Joonsoo Kim diff --git a/mm/migrate.c b/mm/migrate.c index c6ac87a..b1cfd01 100644 --- a/mm/migrate.c +++ b/mm/migrate.c

[PATCH v3 1/6] mm/migrate: add comment about permanent failure path

2013-12-12 Thread Joonsoo Kim
From: Naoya Horiguchi Let's add a comment about where the failed page goes to, which makes code more readable. Acked-by: Christoph Lameter Reviewed-by: Wanpeng Li Signed-off-by: Naoya Horiguchi Signed-off-by: Joonsoo Kim diff --git a/mm/migrate.c b/mm/migrate.c index 3747fcd..c6

Re: [patch 2/2] fs: buffer: move allocation failure loop into the allocator

2013-12-12 Thread Joonsoo Kim
On Wed, Dec 04, 2013 at 10:52:18AM +0900, Joonsoo Kim wrote: > On Tue, Dec 03, 2013 at 04:59:10PM -0800, Andrew Morton wrote: > > On Tue, 8 Oct 2013 16:58:10 -0400 Johannes Weiner > > wrote: > > > > > Buffer allocation has a very crude indefinite loop around wa

Re: [PATCH v3 5/5] slab: make more slab management structure off the slab

2013-12-12 Thread Joonsoo Kim
On Tue, Dec 03, 2013 at 11:13:08AM +0900, Joonsoo Kim wrote: > On Mon, Dec 02, 2013 at 02:58:41PM +, Christoph Lameter wrote: > > On Mon, 2 Dec 2013, Joonsoo Kim wrote: > > > > > Now, the size of the freelist for the slab management diminish, > > > so that the

Re: [patch 2/2] fs: buffer: move allocation failure loop into the allocator

2013-12-16 Thread Joonsoo Kim
On Fri, Dec 13, 2013 at 04:40:58PM +, Christoph Lameter wrote: > On Fri, 13 Dec 2013, Joonsoo Kim wrote: > > > Could you review this patch? > > I think that we should merge it to fix the problem reported by Christian. > > I'd be fine with clearing __GFP_NOFAI

Re: [PATCH 2/2] mm/compaction: cleanup isolate_freepages()

2014-04-22 Thread Joonsoo Kim
to stop compaction. And your [1/2] patch in this patchset > >>> always makes free page scanner start on pageblock boundary so when the > >>> loop in isolate_freepages is finished and pfn is lower low_pfn, the pfn > >>> would be lower than migration scanner so compact

Re: [PATCH] zram: correct offset usage in zram_bio_discard

2014-04-22 Thread Joonsoo Kim
it. As far as I understand, there is no end-user visible effect, because the request size is always PAGE_SIZE aligned and if n < PAGE_SIZE, no real operation happens. Am I missing something? Anyway, Acked-by: Joonsoo Kim Thanks. -- To unsubscribe from this list: send the line "unsubscribe linux-kernel"

Re: [PATCH] zram: correct offset usage in zram_bio_discard

2014-04-22 Thread Joonsoo Kim
On Wed, Apr 23, 2014 at 11:52:08AM +0800, Weijie Yang wrote: > On Wed, Apr 23, 2014 at 11:08 AM, Joonsoo Kim wrote: > > On Wed, Apr 23, 2014 at 10:32:30AM +0800, Weijie Yang wrote: > >> On Wed, Apr 23, 2014 at 3:55 AM, Andrew Morton > >> wrote: > >> > On

Re: [PATCH 2/2] mm/compaction: cleanup isolate_freepages()

2014-04-23 Thread Joonsoo Kim
2014-04-23 16:30 GMT+09:00 Vlastimil Babka : > On 04/23/2014 04:58 AM, Joonsoo Kim wrote: >> On Tue, Apr 22, 2014 at 03:17:30PM +0200, Vlastimil Babka wrote: >>> On 04/22/2014 08:52 AM, Minchan Kim wrote: >>>> On Tue, Apr 22, 2014 at 08:33:35AM +0200, Vlastimil Babka

Re: [PATCH v5 14/14] mm, compaction: try to capture the just-created high-order freepage

2014-07-30 Thread Joonsoo Kim
On Tue, Jul 29, 2014 at 05:34:37PM +0200, Vlastimil Babka wrote: > >>@@ -570,6 +572,14 @@ isolate_migratepages_block(struct compact_control *cc, > >>unsigned long low_pfn, > >>unsigned long flags; > >>bool locked = false; > >>struct page *page = NULL, *valid_page = NULL; > >>+ unsign

Re: [PATCH v5 14/14] mm, compaction: try to capture the just-created high-order freepage

2014-07-30 Thread Joonsoo Kim
Oops... resend because of omitting everyone on CC. 2014-07-30 18:56 GMT+09:00 Vlastimil Babka : > On 07/30/2014 10:39 AM, Joonsoo Kim wrote: >> >> On Tue, Jul 29, 2014 at 05:34:37PM +0200, Vlastimil Babka wrote: >>> >>> Could do it in isolate_migratepage

Re: [PATCH 0/2] new API to allocate buffer-cache for superblock in non-movable area

2014-08-01 Thread Joonsoo Kim
On Thu, Jul 31, 2014 at 02:21:14PM +0200, Jan Kara wrote: > On Thu 31-07-14 09:37:15, Gioh Kim wrote: > > > > > > 2014-07-31 9:03 AM, Jan Kara wrote: > > >On Thu 31-07-14 08:54:40, Gioh Kim wrote: > > >>2014-07-30 7:11 PM, Jan Kara wrote: > > >>>On Wed 30-07-14 16:44:24, Gioh Kim wrote: > > 2014-

Re: + mm-slab_common-commonize-slab-merge-logic.patch added to -mm tree

2014-09-21 Thread Joonsoo Kim
ntation/SubmitChecklist when testing your code *** > > The -mm tree is included into linux-next and is updated > there every 3-4 working days > > -- > From: Joonsoo Kim > Subject: mm/slab_common: commonize slab merge logic

Re: + mm-slab_common-commonize-slab-merge-logic.patch added to -mm tree

2014-09-22 Thread Joonsoo Kim
On Mon, Sep 22, 2014 at 12:48:41AM -0700, Andrew Morton wrote: > On Mon, 22 Sep 2014 09:32:45 +0900 Joonsoo Kim wrote: > > > Hello, Andrew. > > > > This patch has build failure problem if CONFIG_SLUB. > > Detailed information and fix is in following patch. >

[PATCH] zsmalloc: merge size_class to reduce fragmentation

2014-09-23 Thread Joonsoo Kim
head ratio, 5th column) and uses less memory (mem_used_total, 3rd column). Signed-off-by: Joonsoo Kim --- mm/zsmalloc.c | 41 + 1 file changed, 29 insertions(+), 12 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index c4a9157..36484f4 10064
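To illustrate the merging idea, a small userspace sketch (invented helper names; the real zsmalloc criterion also accounts for multi-page zspages): adjacent 16-byte classes that store the same number of objects per zspage gain nothing from being separate, so they can share one size_class.

#include <stdio.h>

#define TOY_PAGE_SIZE 4096

/* simplified: assume a one-page zspage */
static int toy_objs_per_zspage(int size)
{
	return TOY_PAGE_SIZE / size;
}

int main(void)
{
	/* sizes 1040..1360 all hold 3 objects per zspage, so those classes
	 * could be merged into one; 1024 (4 objects) and 1376 (2 objects)
	 * mark the boundaries of that merged group. */
	for (int size = 1024; size <= 1376; size += 16)
		printf("class size %4d -> %d objects per zspage\n",
		       size, toy_objs_per_zspage(size));
	return 0;
}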

Re: [PATCH] zsmalloc: merge size_class to reduce fragmentation

2014-09-23 Thread Joonsoo Kim
On Tue, Sep 23, 2014 at 03:25:55PM -0700, Andrew Morton wrote: > On Tue, 23 Sep 2014 17:30:11 +0900 Joonsoo Kim wrote: > > > zsmalloc has many size_classes to reduce fragmentation and they are > > in 16 bytes unit, for example, 16, 32, 48, etc., if PAGE_SIZE is 4096. >

[PATCH v2] zsmalloc: merge size_class to reduce fragmentation

2014-09-23 Thread Joonsoo Kim
sn't need after initialization. - Rename __size_class to size_class, size_class to merged_size_class. - Add code comment for merged_size_class of struct zs_pool. - Add code comment how merging works in zs_create_pool(). Signed-off-by: Joonsoo Kim ---

Re: [RFC PATCH v3 1/4] mm/page_alloc: fix incorrect isolation behavior by rechecking migratetype

2014-09-24 Thread Joonsoo Kim
On Wed, Sep 24, 2014 at 03:30:26PM +0200, Vlastimil Babka wrote: > On 09/15/2014 04:31 AM, Joonsoo Kim wrote: > >On Mon, Sep 08, 2014 at 10:31:29AM +0200, Vlastimil Babka wrote: > >>On 08/26/2014 10:08 AM, Joonsoo Kim wrote: > >> > >>>diff --git a/mm/pa

Re: [PATCH v2] zsmalloc: merge size_class to reduce fragmentation

2014-09-24 Thread Joonsoo Kim
On Wed, Sep 24, 2014 at 05:12:20PM +0900, Minchan Kim wrote: > Hi Joonsoo, > > On Wed, Sep 24, 2014 at 03:03:46PM +0900, Joonsoo Kim wrote: > > zsmalloc has many size_classes to reduce fragmentation and they are > > in 16 bytes unit, for example, 16, 32, 48, etc., if PAGE_

Re: [PATCH v2] zsmalloc: merge size_class to reduce fragmentation

2014-09-24 Thread Joonsoo Kim
On Wed, Sep 24, 2014 at 12:24:14PM -0400, Dan Streetman wrote: > On Wed, Sep 24, 2014 at 2:03 AM, Joonsoo Kim wrote: > > zsmalloc has many size_classes to reduce fragmentation and they are > > in 16 bytes unit, for example, 16, 32, 48, etc., if PAGE_SIZE is 4096. > > And, zs

[PATCH v3] zsmalloc: merge size_class to reduce fragmentation

2014-09-25 Thread Joonsoo Kim
or merged_size_class of struct zs_pool. - Add code comment how merging works in zs_create_pool(). Changes from v2: - Add more commit description (Dan) - dynamically allocate size_class structure (Dan) - rename objs_per_zspage to get_maxobj_per_zspage (Minchan) Signed-off-by: Joonsoo

[RFC PATCH 2/2] zram: make afmalloc as zram's backend memory allocator

2014-09-25 Thread Joonsoo Kim
Signed-off-by: Joonsoo Kim --- drivers/block/zram/Kconfig|2 +- drivers/block/zram/zram_drv.c | 40 drivers/block/zram/zram_drv.h |4 ++-- 3 files changed, 15 insertions(+), 31 deletions(-) diff --git a/drivers/block/zram/Kconfig b/drivers

[RFC PATCH 1/2] mm/afmalloc: introduce anti-fragmentation memory allocator

2014-09-25 Thread Joonsoo Kim
o we don't have to worry too much about the overhead metric in afmalloc. Anyway, the overhead metric is also better in afmalloc, by 4% ~ 26%. As a result, I think that afmalloc is better than zsmalloc in terms of memory efficiency. But I could be wrong, so any comments are welcome. :) Signed-off-by: Joonsoo K

Re: [REGRESSION] [PATCH 1/3] mm/slab: use percpu allocator for cpu cache

2014-09-29 Thread Joonsoo Kim
On Sat, Sep 27, 2014 at 11:24:49PM -0700, Jeremiah Mahler wrote: > On Thu, Aug 21, 2014 at 05:11:13PM +0900, Joonsoo Kim wrote: > > Because of the chicken-and-egg problem, initialization of SLAB is really > > complicated. We need to allocate cpu cache through SLAB to make > > the

Re: [PATCH v3] zsmalloc: merge size_class to reduce fragmentation

2014-09-29 Thread Joonsoo Kim
On Fri, Sep 26, 2014 at 04:48:45PM -0400, Dan Streetman wrote: > On Fri, Sep 26, 2014 at 2:27 AM, Joonsoo Kim wrote: > > zsmalloc has many size_classes to reduce fragmentation and they are > > in 16 bytes unit, for example, 16, 32, 48, etc., if PAGE_SIZE is 4096. > > And, zs

[PATCH v4] zsmalloc: merge size_class to reduce fragmentation

2014-09-29 Thread Joonsoo Kim
g logic in zs_create_pool (Dan) Signed-off-by: Joonsoo Kim --- mm/zsmalloc.c | 84 +++-- 1 file changed, 70 insertions(+), 14 deletions(-) diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index c4a9157..11556ae 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmal

Re: [PATCH v6 05/13] mm, compaction: move pageblock checks up from isolate_migratepages_range()

2014-09-29 Thread Joonsoo Kim
turn immediately > on failed pageblock checks, but keeps going until isolate_migratepages_range() > gets called once. Similarily to isolate_freepages(), the function periodically > checks if it needs to reschedule or abort async compaction. > > Signed-off-by: Vlastimil Babka > C

Re: [PATCHv4 0/3] new APIs to allocate buffer-cache with user specific flag

2014-09-14 Thread Joonsoo Kim
On Fri, Sep 05, 2014 at 10:14:16AM -0400, Theodore Ts'o wrote: > On Fri, Sep 05, 2014 at 04:32:48PM +0900, Joonsoo Kim wrote: > > I also test another approach, such as allocate freepage in CMA > > reserved region as late as possible, which is also similar to your > > s

Re: [PATCH -mmotm] mm: fix kmemcheck.c build errors

2014-09-14 Thread Joonsoo Kim
On Fri, Sep 05, 2014 at 01:01:02PM -0700, Andrew Morton wrote: > On Fri, 5 Sep 2014 16:28:06 +0900 Joonsoo Kim wrote: > > > mm-slab_common-move-kmem_cache-definition-to-internal-header.patch > > in mmotm makes following build failure. > > > > ../mm/kmemchec

Re: [PATCH] slab: implement kmalloc guard

2014-09-14 Thread Joonsoo Kim
On Thu, Sep 11, 2014 at 10:32:52PM -0400, Mikulas Patocka wrote: > > > On Mon, 8 Sep 2014, Christoph Lameter wrote: > > > On Mon, 8 Sep 2014, Mikulas Patocka wrote: > > > > > I don't know what you mean. If someone allocates 1 objects with sizes > > > from 1 to 1, you can't have 1 sl

Re: [RFC PATCH v3 1/4] mm/page_alloc: fix incorrect isolation behavior by rechecking migratetype

2014-09-14 Thread Joonsoo Kim
On Mon, Sep 08, 2014 at 10:31:29AM +0200, Vlastimil Babka wrote: > On 08/26/2014 10:08 AM, Joonsoo Kim wrote: > > >diff --git a/mm/page_alloc.c b/mm/page_alloc.c > >index f86023b..51e0d13 100644 > >--- a/mm/page_alloc.c > >+++ b/mm/page_alloc.c > >@@ -740,9

[PATCH v2 1/3] mm/slab_common: commonize slab merge logic

2014-09-14 Thread Joonsoo Kim
Signed-off-by: Joonsoo Kim --- Documentation/kernel-parameters.txt | 14 -- mm/slab.h | 15 ++ mm/slab_common.c | 91 +++ mm/slub.c | 91 +-- 4

[PATCH v2 2/3] mm/slab: support slab merge

2014-09-14 Thread Joonsoo Kim
debug flag and object size change on these functions. v2: add commit description for the reason to implement SLAB specific functions. Signed-off-by: Joonsoo Kim --- mm/slab.c | 20 mm/slab.h |2 +- 2 files changed, 21 insertions(+), 1 deletion(-) diff --git a/mm

[PATCH v2 3/3] mm/slab: use percpu allocator for cpu cache

2014-09-14 Thread Joonsoo Kim
to alloc_kmem_cache_cpus(). Add possible problem from this patch. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim --- include/linux/slab_def.h | 20 +--- mm/slab.c | 235 ++ mm/slab.h | 1 - 3 files changed, 75 inserti

Re: oops in slab/leaks_show

2014-03-10 Thread Joonsoo Kim
On Fri, Mar 07, 2014 at 11:18:30AM -0600, Christoph Lameter wrote: > Joonsoo recently changed the handling of the freelist in SLAB. CCing him. > > On Thu, 6 Mar 2014, Dave Jones wrote: > > > I pretty much always use SLUB for my fuzzing boxes, but thought I'd give > > SLAB a try > > for a change.

Re: oops in slab/leaks_show

2014-03-10 Thread Joonsoo Kim
On Tue, Mar 11, 2014 at 09:35:00AM +0900, Joonsoo Kim wrote: > On Fri, Mar 07, 2014 at 11:18:30AM -0600, Christoph Lameter wrote: > > Joonsoo recently changed the handling of the freelist in SLAB. CCing him. > > > > On Thu, 6 Mar 2014, Dave Jones wrote: > > > >

Re: oops in slab/leaks_show

2014-03-10 Thread Joonsoo Kim
On Mon, Mar 10, 2014 at 09:24:55PM -0400, Dave Jones wrote: > On Tue, Mar 11, 2014 at 10:01:35AM +0900, Joonsoo Kim wrote: > > On Tue, Mar 11, 2014 at 09:35:00AM +0900, Joonsoo Kim wrote: > > > On Fri, Mar 07, 2014 at 11:18:30AM -0600, Christoph Lameter wrote: > > >
