Now, bufctl is not a proper name for this array.
So change it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index fbb594f..af2db76 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2550,7 +2550,7 @@ static struct freelist *alloc_slabmgmt(struct kmem_cache *cachep,
return
mechanical ones and there is no functional change.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8b85d8c..4e17190 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -42,18 +42,22 @@ struct page {
/* First double
We can get cachep from the page in struct slab_rcu, so remove it.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 71ba8f5..7e1aabe 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -204,7 +204,6 @@ typedef unsigned int kmem_bufctl_t;
*/
struct slab_rcu
This is a trivial change; just use the well-defined macro.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 84c4ed6..f9e676e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2865,7 +2865,6 @@ static inline void verify_redzone_free(struct kmem_cache *cache
Now, virt_to_page(page->s_mem) is the same as the page,
because the slab uses this structure for management.
So remove the useless statement.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 0e7f2e7..fbb594f 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -750,9 +750,7 @@ static str
Now, free in struct slab has the same meaning as inuse.
So remove both and replace them with active.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index c271d5b..2ec2336 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -174,8 +174,7 @@ struct slab {
struct {
struct
It's useless now, so remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 6ced1cc..c271d5b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -163,8 +163,6 @@
*/
static bool pfmemalloc_active __read_mostly;
-#define SLAB_LIMIT (((unsigned int)(~0
: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index d9851ee..8b85d8c 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -130,6 +130,9 @@ struct page {
struct list_head list; /* slobs list of pages
If we use the 'struct page' of the first page as 'struct slab', there is no
advantage in not using __GFP_COMP. So use the __GFP_COMP flag in all cases.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index f9e676e..75c6082 100644
--- a/mm/slab.c
+++ b/mm/slab.c
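As a rough illustration of what "use __GFP_COMP for all the cases" means in
practice, here is a minimal sketch (not the actual patch; the function name is
illustrative, while alloc_pages_node() is the standard page allocator entry point):

/*
 * Minimal sketch: always allocate the slab's pages as a compound page.
 * With __GFP_COMP, every tail page points back at the head page, so the
 * head page's struct page can safely represent the whole slab.
 */
static struct page *alloc_slab_pages_sketch(gfp_t flags, int node, int order)
{
	flags |= __GFP_COMP;

	return alloc_pages_node(node, flags, order);
}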
is method.
struct slab's free = 0
kmem_bufctl_t array: 6 3 7 2 5 4 0 1
To get free objects, we access this array with the following pattern.
0 -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7
This may help the cache line footprint if the slab has many objects, and,
in addition, this makes the code much
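To make the pattern above concrete, here is a minimal sketch of stack-like free
object management, assuming an index array ('freelist') and an 'active' counter
used as the stack top; the field and helper names are illustrative, not the
exact upstream ones:

/*
 * Sketch of stack-like free object management. 'freelist' is an array
 * of object indexes; 'active' counts objects currently handed out and
 * doubles as the stack top into that array.
 */
struct slab_sketch {
	unsigned int active;	/* objects in use == stack top */
	unsigned int *freelist;	/* array of free object indexes */
	char *s_mem;		/* address of the first object */
};

/* Allocate: read the index at the stack top, then advance the top. */
static void *sketch_get_obj(struct slab_sketch *slab, size_t obj_size)
{
	unsigned int objnr = slab->freelist[slab->active++];

	return slab->s_mem + objnr * obj_size;
}

/* Free: push the object's index back onto the top of the stack. */
static void sketch_put_obj(struct slab_sketch *slab, void *objp, size_t obj_size)
{
	unsigned int objnr = ((char *)objp - slab->s_mem) / obj_size;

	slab->freelist[--slab->active] = objnr;
}

Because allocation always reads the next array slot (0, 1, 2, ...), consecutive
allocations touch adjacent freelist entries, which is where the cache line
footprint benefit mentioned above comes from.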
Now that we have changed the management method of free objects of the slab,
there is no need to use the special values BUFCTL_END, BUFCTL_FREE and
BUFCTL_ACTIVE. So remove them.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 05fe37e..6ced1cc 100644
--- a/mm/slab.c
+++ b/mm/slab.c
We can get nodeid using address translation, so this field is not useful.
Therefore, remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 34eb115..71ba8f5 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -222,7 +222,6 @@ struct slab {
void *s_mem
s are also improved by 3.1% over the baseline.
I think that this patchset deserves to be merged, since it reduces memory usage
and also improves performance. :)
Please let me know the experts' opinion.
Thanks.
This patchset is based on v3.12-rc5.
Joonsoo Kim (15):
slab: correct pfmemalloc ch
b1f9 mm/slab.o
* After
   text    data     bss     dec     hex filename
  22074   23434       4   45512    b1c8 mm/slab.o
And this helps a following patch remove struct slab's colouroff.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 0b
Now there is no user of colouroff, so remove it.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 7d79bd7..34eb115 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -219,7 +219,6 @@ struct slab {
union {
struct
On Wed, Oct 16, 2013 at 03:27:54PM +, Christoph Lameter wrote:
> On Wed, 16 Oct 2013, Joonsoo Kim wrote:
>
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -930,7 +930,8 @@ static void *__ac_put_obj(struct kmem_cache *cachep,
> > struct array_cache *a
On Wed, Oct 16, 2013 at 01:34:57PM -0700, Andrew Morton wrote:
> On Wed, 16 Oct 2013 17:43:57 +0900 Joonsoo Kim wrote:
>
> > There is two main topics in this patchset. One is to reduce memory usage
> > and the other is to change a management method of free objects of a slab.
This logic is not simple to understand, so make a separate function to
help readability. Additionally, we can use this change in a following
patch which implements a differently sized index for the freelist
according to the number of objects.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm
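A sketch of the kind of helper being factored out, under the assumption that
each object needs one freelist index of 'idx_size' bytes and that the freelist
as a whole is aligned to 'align'; parameter names are illustrative:

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/*
 * How many objects of 'buffer_size' bytes fit into 'slab_size' bytes
 * when each object also needs one freelist index of 'idx_size' bytes?
 */
static int calculate_nr_objs(size_t slab_size, size_t buffer_size,
			     size_t idx_size, size_t align)
{
	int nr_objs;
	size_t remaining;

	/* First guess, ignoring the alignment padding of the freelist. */
	nr_objs = slab_size / (buffer_size + idx_size);

	/* The guess can be one too high once padding is accounted for. */
	remaining = slab_size - nr_objs * buffer_size;
	if (remaining < ALIGN_UP(nr_objs * idx_size, align))
		nr_objs--;

	return nr_objs;
}

Once the index size becomes a parameter, a later byte sized index patch only
has to pass a different 'idx_size' instead of duplicating this arithmetic.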
bytes
so that 97 bytes, that is, more than 75% of the object size, are wasted.
In a 64 byte sized slab case, no space is wasted if we use on-slab.
So set the off-slab determining constraint to 128 bytes.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index
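A quick back-of-the-envelope check of the 97-byte figure, assuming a 4096 byte
page and one byte of freelist index per object (these numbers are only a
consistency check, not taken from the patch itself):

#include <stdio.h>

int main(void)
{
	const int page_size = 4096, obj_size = 128, idx_size = 1;

	int nr_objs = page_size / (obj_size + idx_size);	/* 31 objects */
	int freelist_size = nr_objs * idx_size;			/* 31 bytes   */
	int wasted = page_size - nr_objs * obj_size - freelist_size;

	/* Prints: objs=31 freelist=31 wasted=97 (97/128 > 75%) */
	printf("objs=%d freelist=%d wasted=%d\n",
	       nr_objs, freelist_size, wasted);
	return 0;
}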
9837 seconds time elapsed
( +- 0.21% )
cache-misses are reduced by this patchset by roughly 5%,
and elapsed times are improved by 1%.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 3cee122..2f379ba 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -627,8 +627,8 @@ s
https://lkml.org/lkml/2013/8/23/315
Patches are on top of my previous posting named as
"slab: overload struct slab over struct page to reduce memory usage"
https://lkml.org/lkml/2013/10/16/155
Thanks.
Joonsoo Kim (5):
slab: factor out calculate nr objects in cache_estimate
slab: intr
that the number of objects in a slab is less than or
equal to 256 for a slab with 1 page.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index ec197b9..3cee122 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -157,6 +157,10 @@
#define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
#endif
+/* We
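A hedged sketch of how such a restriction can be expressed: derive the cap from
the width of the freelist index type rather than hard-coding 256 (the type and
macro names here are illustrative):

#define BITS_PER_BYTE	8

/* Freelist index type; one byte can name 2^8 = 256 distinct objects. */
typedef unsigned char freelist_idx_t;

/* Upper bound on the number of objects a single slab may contain. */
#define SLAB_OBJ_MAX_NUM \
	(1U << (sizeof(freelist_idx_t) * BITS_PER_BYTE))

static inline unsigned int clamp_nr_objs(unsigned int nr_objs)
{
	return nr_objs > SLAB_OBJ_MAX_NUM ? SLAB_OBJ_MAX_NUM : nr_objs;
}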
In the following patches, the way to get/set free objects from the freelist
is changed so that simple casting doesn't work for it. Therefore,
introduce helper functions.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index cb0a734..ec197b9 100644
---
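A minimal sketch of the helpers described above, assuming the freelist is a raw
index array attached to the slab; the signatures are illustrative, and the point
is simply that later patches can change the index type behind these two
functions without touching their callers:

typedef unsigned int freelist_idx_t;	/* shrunk to a byte in later patches */

static inline freelist_idx_t get_free_obj(void *freelist, unsigned int idx)
{
	return ((freelist_idx_t *)freelist)[idx];
}

static inline void set_free_obj(void *freelist, unsigned int idx,
				freelist_idx_t val)
{
	((freelist_idx_t *)freelist)[idx] = val;
}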
2013/10/18 Christoph Lameter :
> On Wed, 16 Oct 2013, Joonsoo Kim wrote:
>
>> - * see PAGE_MAPPING_ANON below.
>> - */
>> + union {
>> + struct address_space *mapping;
2013/10/18 Christoph Lameter :
> n Thu, 17 Oct 2013, Joonsoo Kim wrote:
>
>> To prepare to implement byte sized index for managing the freelist
>> of a slab, we should restrict the number of objects in a slab to be less
>> or equal to 256, since byte only represent 256 diff
2013/10/18 Christoph Lameter :
> On Wed, 16 Oct 2013, Joonsoo Kim wrote:
>
>> If we use 'struct page' of first page as 'struct slab', there is no
>> advantage not to use __GFP_COMP. So use __GFP_COMP flag for all the cases.
>
> Yes this is going to ma
n_area callsite, since it is useless now.
2. %s/'struct freelist *'/'void *'
-8<---
>From 6d11304824a3b8c3bf7574323a3e55471cc26937 Mon Sep 17 00:00:00 2001
From: Joonsoo Kim
Date: Wed, 28 Aug 2013 16:30:27 +0900
Subje
There is no 'struct freelist', but the code uses a pointer to 'struct freelist'.
Although the compiler doesn't complain about this wrong usage and the
code works fine, fixing it is better.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index a8a9349..a983e
After using struct page for slab management, we should not call
kmemleak_scan_area(), since struct page isn't the tracking object of
kmemleak. Without this patch, if CONFIG_DEBUG_KMEMLEAK is enabled,
many kmemleak warnings are printed.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c
On Wed, Oct 30, 2013 at 10:42:14AM +0200, Pekka Enberg wrote:
> On 10/30/2013 10:28 AM, Joonsoo Kim wrote:
> >If you want an incremental patch against original patchset,
> >I can do it. Please let me know what you want.
>
> Yes, please. Incremental is much easier to deal with
bytes
so that 97 bytes, that is, more than 75% of the object size, are wasted.
In a 64 byte sized slab case, no space is wasted if we use on-slab.
So set the off-slab determining constraint to 128 bytes.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index
This logic is not simple to understand, so make a separate function to
help readability. Additionally, we can use this change in a following
patch which implements a differently sized index for the freelist
according to the number of objects.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
.21% )
cache-misses are reduced by this patchset by roughly 5%,
and elapsed times are improved by 1%.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 7c3c132..7fab788 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -634,8 +634,8 @@ static void cache_estimate(unsigned long gfporde
https://lkml.org/lkml/2013/8/23/315
Patches are on top of v3.13-rc1.
Thanks.
Joonsoo Kim (5):
slab: factor out calculate nr objects in cache_estimate
slab: introduce helper functions to get/set free object
slab: restrict the number of objects in a slab
slab: introduce byte sized index for
and give up this optimization.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/slab.h b/include/linux/slab.h
index c2bba24..23e1fa1 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -201,6 +201,17 @@ struct kmem_cache {
#ifndef KMALLOC_SHIFT_LOW
#define KMALLOC_SHIFT_LOW
In the following patches, the way to get/set free objects from the freelist
is changed so that simple casting doesn't work for it. Therefore,
introduce helper functions.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index e749f75..77f9eae 100644
---
On Mon, Dec 02, 2013 at 02:44:34PM -0800, Andrew Morton wrote:
> On Thu, 28 Nov 2013 16:48:38 +0900 Joonsoo Kim wrote:
>
> > We have to recompute pgoff if the given page is huge, since result based
> > on HPAGE_SIZE is not approapriate for scanning the vma interval tree, as
&
On Mon, Dec 02, 2013 at 02:51:05PM -0800, Andrew Morton wrote:
> On Mon, 02 Dec 2013 15:09:33 -0500 Naoya Horiguchi
> wrote:
>
> > > --- a/include/linux/rmap.h
> > > +++ b/include/linux/rmap.h
> > > @@ -235,11 +235,16 @@ struct anon_vma *page_lock_anon_vma_read(struct
> > > page *page);
> > >
On Mon, Dec 02, 2013 at 03:09:42PM -0500, Naoya Horiguchi wrote:
> > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > index 0f65686..58624b4 100644
> > --- a/include/linux/rmap.h
> > +++ b/include/linux/rmap.h
> > @@ -239,6 +239,12 @@ struct rmap_walk_control {
> > int (*main)(struc
On Mon, Dec 02, 2013 at 03:01:07PM -0800, Andrew Morton wrote:
> On Thu, 28 Nov 2013 16:48:43 +0900 Joonsoo Kim wrote:
>
> > Now, we have an infrastructure in rmap_walk() to handle difference
> > from variants of rmap traversing functions.
> >
> > So, just use it
On Mon, Dec 02, 2013 at 02:58:41PM +, Christoph Lameter wrote:
> On Mon, 2 Dec 2013, Joonsoo Kim wrote:
>
> > Now, the size of the freelist for the slab management diminish,
> > so that the on-slab management structure can waste large space
> > if the object of the
On Mon, Dec 02, 2013 at 05:49:42PM +0900, Joonsoo Kim wrote:
> Currently, the freelist of a slab consist of unsigned int sized indexes.
> Since most of slabs have less number of objects than 256, large sized
> indexes is needless. For example, consider the minimum kmalloc slab. It's
2013/12/3 Pekka Enberg :
> On 11/30/2013 01:42 PM, Meelis Roos wrote:
>>
>> I am debugging a reboot problem on Sun Ultra 5 (sparc64) with 512M RAM
>> and turned on DEBUG_PAGEALLOC DEBUG_SLAB and DEBUG_SLAB_LEAK (and most
>> other debug options) and got the following BUG and hang on startup. This
>>
().
Reviewed-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 616aa4d..2462458 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -190,7 +190,7 @@ int page_referenced_one(struct page *, struct vm_area_struct *,
int
.
   text    data     bss     dec     hex filename
  10640       1      16   10657    29a1 mm/rmap.o
  10047       1      16   10064    2750 mm/rmap.o
  13823     705    8288   22816    5920 mm/ksm.o
  13199     705    8288   22192    56b0 mm/ksm.o
Thanks.
Joonsoo Kim (9):
mm/rmap: recompute
use rmap_walk() in page_mkclean().
Reviewed-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/mm/rmap.c b/mm/rmap.c
index 7944d4b..d792e71 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -808,12 +808,13 @@ int page_referenced(struct page *page,
}
static int page_mkclean_one(struct p
non, try_to_unmap_file
2. mechanical change to use rmap_walk() in try_to_munlock().
3. copy and paste comments.
Reviewed-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 0eef8cb..91b9719 100644
--- a/include/linux/ksm.h
+++ b/include/li
-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 91b9719..3be6bb1 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -73,8 +73,6 @@ static inline void set_page_stable_node(struct page *page,
struct page *ksm_might_need_to_c
separate, because it clarifies the changes.
Reviewed-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 45c9b6a..0eef8cb 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -76,8 +76,7 @@ struct page *ksm_might_need_to_copy(struct
this patch, I introduce 4 function pointers to
handle the above differences.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6a456ce..616aa4d 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -235,10 +235,25 @@ struct anon_vma
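A hedged sketch of the kind of control structure the text describes, with
callbacks for the parts that differ between page_referenced(), page_mkclean(),
try_to_unmap() and try_to_munlock(); the field names and signatures below are
illustrative, not necessarily the upstream ones:

struct page;
struct vm_area_struct;
struct anon_vma;

struct rmap_walk_control_sketch {
	void *arg;	/* caller-private data passed to the callbacks */

	/* per-VMA work; the return value can end the walk early */
	int (*rmap_one)(struct page *page, struct vm_area_struct *vma,
			unsigned long addr, void *arg);
	/* check whether the walk is already finished for this page */
	int (*done)(struct page *page);
	/* how to take the anon_vma lock (differs for migration) */
	struct anon_vma *(*anon_lock)(struct page *page);
	/* skip VMAs the caller is not interested in */
	int (*invalid_vma)(struct vm_area_struct *vma, void *arg);
};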
mpute pgoff
for unmapping huge page").
To handle both cases, a normal page-cache page and a hugetlb page, in
the same way, we can use compound_page(). It returns 0 on a non-compound
page and also returns a proper value on a compound page.
Signed-off-by: Joonsoo Kim
diff --git a/mm/rmap.c b/mm/
oring the lock function for anon_lock out
of rmap_walk_anon(). It will be used in the case of removing migration
entries and in the default case of rmap_walk_anon().
Reviewed-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/mm/rmap.c b/mm/rmap.c
index a387c44..91c73c4 100644
--- a/mm/rmap.c
+++ b/mm/r
of it. Therefore it is better
to factor nonlinear handling out of try_to_unmap_file() in order to
merge all kinds of rmap traversal functions easily.
Reviewed-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/mm/rmap.c b/mm/rmap.c
index 20c1a0d..a387c44 100644
--- a/mm/rmap.c
+++ b/mm/r
> of the page allocator.
>
> Can we please make slab stop doing this?
>
> radix_tree_nodes are 560 bytes and the kernel often allocates them in
> times of extreme memory stress. We really really want them to be
> backed by order=0 pages.
Hello, Andrew.
Following patch would fix this problem.
Thanks.
On Tue, Dec 03, 2013 at 06:07:17PM -0800, Andrew Morton wrote:
> On Wed, 4 Dec 2013 10:52:18 +0900 Joonsoo Kim wrote:
>
> > SLUB already try to allocate high order page with clearing __GFP_NOFAIL.
> > But, when allocating shadow page for kmemcheck, it missed clearing
> >
2013/12/5 Christoph Lameter :
> On Tue, 3 Dec 2013, Andrew Morton wrote:
>
>> > page = alloc_slab_page(alloc_gfp, node, oo);
>> > if (unlikely(!page)) {
>> > oo = s->min;
>>
>> What is the value of s->min? Please tell me it's zero.
>
> It usually is.
>
>> > @@ -1349,7 +1350,7 @
On Thu, Dec 12, 2013 at 03:36:19PM +0100, Ludovic Desroches wrote:
> fix mmc mailing list address error
>
> On Thu, Dec 12, 2013 at 03:31:50PM +0100, Ludovic Desroches wrote:
> > Hi,
> >
> > With v3.13-rc3 I have an error when the atmel-mci driver calls
> > flush_dcache_page (log at the end of th
> >>stress-highalloc
> >> 3.13-rc2 3.13-rc2
> >> 3.13-rc2 3.13-rc2 3.13-rc2
> >> 2-thp 3-thp
> >> 4-thp 5-thp 6-thp
> >>
now, so fix it.
Reviewed-by: Wanpeng Li
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f5096b5..e4671f9 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -35,7 +35,6 @@ enum migrate_reason {
#ifdef CONFIG_MIGRATION
-extern
s() on 4th patch
- Add Acked-by and Reviewed-by
Here is the patchset for correcting and cleaning up migration
related stuff. These are random corrections and clean-ups, so
please see each patch ;)
Thanks.
Naoya Horiguchi (1):
mm/migrate: add comment about permanent failure path
Joonsoo Kim (5):
guchi
Reviewed-by: Wanpeng Li
Signed-off-by: Joonsoo Kim
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eca4a31..6d04d37 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1318,7 +1318,7 @@ static long do_mbind(unsigned long start, unsigned long len,
if (nr_failed &
fail_migrate_page() isn't used anywhere, so remove it.
Acked-by: Christoph Lameter
Reviewed-by: Naoya Horiguchi
Reviewed-by: Wanpeng Li
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index e4671f9..4308018 100644
--- a/include/linux/migrate.h
it on
update_pageblock_skip() to prevent setting the wrong information.
Cc: # 3.7+
Acked-by: Vlastimil Babka
Reviewed-by: Naoya Horiguchi
Reviewed-by: Wanpeng Li
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index 805165b..f58bcd0 100644
--- a/mm/compaction.c
+++ b/mm/compact
put back the new hugepage if
!hugepage_migration_support(). If not, we would leak hugepage memory.
Acked-by: Christoph Lameter
Reviewed-by: Wanpeng Li
Signed-off-by: Joonsoo Kim
diff --git a/mm/migrate.c b/mm/migrate.c
index c6ac87a..b1cfd01 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
From: Naoya Horiguchi
Let's add a comment about where the failed page goes, which makes
the code more readable.
Acked-by: Christoph Lameter
Reviewed-by: Wanpeng Li
Signed-off-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/mm/migrate.c b/mm/migrate.c
index 3747fcd..c6
On Wed, Dec 04, 2013 at 10:52:18AM +0900, Joonsoo Kim wrote:
> On Tue, Dec 03, 2013 at 04:59:10PM -0800, Andrew Morton wrote:
> > On Tue, 8 Oct 2013 16:58:10 -0400 Johannes Weiner
> > wrote:
> >
> > > Buffer allocation has a very crude indefinite loop around wa
On Tue, Dec 03, 2013 at 11:13:08AM +0900, Joonsoo Kim wrote:
> On Mon, Dec 02, 2013 at 02:58:41PM +, Christoph Lameter wrote:
> > On Mon, 2 Dec 2013, Joonsoo Kim wrote:
> >
> > > Now, the size of the freelist for the slab management diminish,
> > > so that the
On Fri, Dec 13, 2013 at 04:40:58PM +, Christoph Lameter wrote:
> On Fri, 13 Dec 2013, Joonsoo Kim wrote:
>
> > Could you review this patch?
> > I think that we should merge it to fix the problem reported by Christian.
>
> I'd be fine with clearing __GFP_NOFAI
to stop compaction. And your [1/2] patch in this patchset
> >>> always makes free page scanner start on pageblock boundary so when the
> >>> loop in isolate_freepages is finished and pfn is lower low_pfn, the pfn
> >>> would be lower than migration scanner so compact
it.
As far as I understand, there is no end-user visible effect, because
the request size is always PAGE_SIZE aligned and if n < PAGE_SIZE,
no real operation happens. Am I missing something?
Anyway,
Acked-by: Joonsoo Kim
Thanks.
On Wed, Apr 23, 2014 at 11:52:08AM +0800, Weijie Yang wrote:
> On Wed, Apr 23, 2014 at 11:08 AM, Joonsoo Kim wrote:
> > On Wed, Apr 23, 2014 at 10:32:30AM +0800, Weijie Yang wrote:
> >> On Wed, Apr 23, 2014 at 3:55 AM, Andrew Morton
> >> wrote:
> >> > On
2014-04-23 16:30 GMT+09:00 Vlastimil Babka :
> On 04/23/2014 04:58 AM, Joonsoo Kim wrote:
>> On Tue, Apr 22, 2014 at 03:17:30PM +0200, Vlastimil Babka wrote:
>>> On 04/22/2014 08:52 AM, Minchan Kim wrote:
>>>> On Tue, Apr 22, 2014 at 08:33:35AM +0200, Vlastimil Babka
On Tue, Jul 29, 2014 at 05:34:37PM +0200, Vlastimil Babka wrote:
> >>@@ -570,6 +572,14 @@ isolate_migratepages_block(struct compact_control *cc,
> >>unsigned long low_pfn,
> >>unsigned long flags;
> >>bool locked = false;
> >>struct page *page = NULL, *valid_page = NULL;
> >>+ unsign
Oops... resend because of omitting everyone on CC.
2014-07-30 18:56 GMT+09:00 Vlastimil Babka :
> On 07/30/2014 10:39 AM, Joonsoo Kim wrote:
>>
>> On Tue, Jul 29, 2014 at 05:34:37PM +0200, Vlastimil Babka wrote:
>>>
>>> Could do it in isolate_migratepage
On Thu, Jul 31, 2014 at 02:21:14PM +0200, Jan Kara wrote:
> On Thu 31-07-14 09:37:15, Gioh Kim wrote:
> >
> >
> > On 2014-07-31 9:03 AM, Jan Kara wrote:
> > >On Thu 31-07-14 08:54:40, Gioh Kim wrote:
> > >>On 2014-07-30 7:11 PM, Jan Kara wrote:
> > >>>On Wed 30-07-14 16:44:24, Gioh Kim wrote:
> > 2014-
ntation/SubmitChecklist when testing your code ***
>
> The -mm tree is included into linux-next and is updated
> there every 3-4 working days
>
> --
> From: Joonsoo Kim
> Subject: mm/slab_common: commonize slab merge logic
On Mon, Sep 22, 2014 at 12:48:41AM -0700, Andrew Morton wrote:
> On Mon, 22 Sep 2014 09:32:45 +0900 Joonsoo Kim wrote:
>
> > Hello, Andrew.
> >
> > This patch has build failure problem if CONFIG_SLUB.
> > Detailed information and fix is in following patch.
>
head
ratio, 5th column) and uses less memory (mem_used_total, 3rd column).
Signed-off-by: Joonsoo Kim
---
mm/zsmalloc.c | 41 +
1 file changed, 29 insertions(+), 12 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c4a9157..36484f4 10064
On Tue, Sep 23, 2014 at 03:25:55PM -0700, Andrew Morton wrote:
> On Tue, 23 Sep 2014 17:30:11 +0900 Joonsoo Kim wrote:
>
> > zsmalloc has many size_classes to reduce fragmentation and they are
> > in 16 bytes unit, for example, 16, 32, 48, etc., if PAGE_SIZE is 4096.
>
sn't need after initialization.
- Rename __size_class to size_class, size_class to merged_size_class.
- Add code comment for merged_size_class of struct zs_pool.
- Add code comment how merging works in zs_create_pool().
Signed-off-by: Joonsoo Kim
---
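For reference, a hedged sketch of the merging criterion these notes describe:
two size classes can share one structure when a zspage for either class would
use the same number of pages and hold the same number of objects (names are
illustrative, not the exact zsmalloc ones):

struct size_class_sketch {
	int size;		/* object size served by this class */
	int pages_per_zspage;	/* pages grouped into one zspage    */
	int objs_per_zspage;	/* objects that fit in one zspage   */
};

/*
 * If the previous (smaller) class packs zspages exactly the same way,
 * the new class gains nothing from its own structure and can simply
 * reuse the previous one.
 */
static int can_merge(const struct size_class_sketch *prev,
		     int pages_per_zspage, int objs_per_zspage)
{
	return prev->pages_per_zspage == pages_per_zspage &&
	       prev->objs_per_zspage == objs_per_zspage;
}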
On Wed, Sep 24, 2014 at 03:30:26PM +0200, Vlastimil Babka wrote:
> On 09/15/2014 04:31 AM, Joonsoo Kim wrote:
> >On Mon, Sep 08, 2014 at 10:31:29AM +0200, Vlastimil Babka wrote:
> >>On 08/26/2014 10:08 AM, Joonsoo Kim wrote:
> >>
> >>>diff --git a/mm/pa
On Wed, Sep 24, 2014 at 05:12:20PM +0900, Minchan Kim wrote:
> Hi Joonsoo,
>
> On Wed, Sep 24, 2014 at 03:03:46PM +0900, Joonsoo Kim wrote:
> > zsmalloc has many size_classes to reduce fragmentation and they are
> > in 16 bytes unit, for example, 16, 32, 48, etc., if PAGE_
On Wed, Sep 24, 2014 at 12:24:14PM -0400, Dan Streetman wrote:
> On Wed, Sep 24, 2014 at 2:03 AM, Joonsoo Kim wrote:
> > zsmalloc has many size_classes to reduce fragmentation and they are
> > in 16 bytes unit, for example, 16, 32, 48, etc., if PAGE_SIZE is 4096.
> > And, zs
or merged_size_class of struct zs_pool.
- Add code comment how merging works in zs_create_pool().
Changes from v2:
- Add more commit description (Dan)
- dynamically allocate size_class structure (Dan)
- rename objs_per_zspage to get_maxobj_per_zspage (Minchan)
Signed-off-by: Joonsoo
Signed-off-by: Joonsoo Kim
---
drivers/block/zram/Kconfig    |  2 +-
drivers/block/zram/zram_drv.c | 40
drivers/block/zram/zram_drv.h |  4 ++--
3 files changed, 15 insertions(+), 31 deletions(-)
diff --git a/drivers/block/zram/Kconfig b/drivers
o we don't
have to worry much about the overhead metric in afmalloc. Anyway, the overhead
metric is also better in afmalloc, 4% ~ 26%.
As a result, I think that afmalloc is better than zsmalloc in terms of
memory efficiency. But I could be wrong, so any comments are welcome. :)
Signed-off-by: Joonsoo K
On Sat, Sep 27, 2014 at 11:24:49PM -0700, Jeremiah Mahler wrote:
> On Thu, Aug 21, 2014 at 05:11:13PM +0900, Joonsoo Kim wrote:
> > Because of chicken and egg problem, initializaion of SLAB is really
> > complicated. We need to allocate cpu cache through SLAB to make
> > the
On Fri, Sep 26, 2014 at 04:48:45PM -0400, Dan Streetman wrote:
> On Fri, Sep 26, 2014 at 2:27 AM, Joonsoo Kim wrote:
> > zsmalloc has many size_classes to reduce fragmentation and they are
> > in 16 bytes unit, for example, 16, 32, 48, etc., if PAGE_SIZE is 4096.
> > And, zs
g logic in zs_create_pool (Dan)
Signed-off-by: Joonsoo Kim
---
mm/zsmalloc.c | 84 +++--
1 file changed, 70 insertions(+), 14 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c4a9157..11556ae 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmal
turn immediately
> on failed pageblock checks, but keeps going until isolate_migratepages_range()
> gets called once. Similarily to isolate_freepages(), the function periodically
> checks if it needs to reschedule or abort async compaction.
>
> Signed-off-by: Vlastimil Babka
> C
On Fri, Sep 05, 2014 at 10:14:16AM -0400, Theodore Ts'o wrote:
> On Fri, Sep 05, 2014 at 04:32:48PM +0900, Joonsoo Kim wrote:
> > I also test another approach, such as allocate freepage in CMA
> > reserved region as late as possible, which is also similar to your
> > s
On Fri, Sep 05, 2014 at 01:01:02PM -0700, Andrew Morton wrote:
> On Fri, 5 Sep 2014 16:28:06 +0900 Joonsoo Kim wrote:
>
> > mm-slab_common-move-kmem_cache-definition-to-internal-header.patch
> > in mmotm makes following build failure.
> >
> > ../mm/kmemchec
On Thu, Sep 11, 2014 at 10:32:52PM -0400, Mikulas Patocka wrote:
>
>
> On Mon, 8 Sep 2014, Christoph Lameter wrote:
>
> > On Mon, 8 Sep 2014, Mikulas Patocka wrote:
> >
> > > I don't know what you mean. If someone allocates 1 objects with sizes
> > > from 1 to 1, you can't have 1 sl
On Mon, Sep 08, 2014 at 10:31:29AM +0200, Vlastimil Babka wrote:
> On 08/26/2014 10:08 AM, Joonsoo Kim wrote:
>
> >diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >index f86023b..51e0d13 100644
> >--- a/mm/page_alloc.c
> >+++ b/mm/page_alloc.c
> >@@ -740,9
Signed-off-by: Joonsoo Kim
---
Documentation/kernel-parameters.txt | 14 --
mm/slab.h | 15 ++
mm/slab_common.c | 91 +++
mm/slub.c | 91 +--
4
debug flag and object size
change on these functions.
v2: add commit description for the reason to implement SLAB specific
functions.
Signed-off-by: Joonsoo Kim
---
mm/slab.c | 20
mm/slab.h | 2 +-
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/mm
to alloc_kmem_cache_cpus().
Add possible problem from this patch.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
include/linux/slab_def.h | 20 +---
mm/slab.c | 235 ++
mm/slab.h | 1 -
3 files changed, 75 inserti
On Fri, Mar 07, 2014 at 11:18:30AM -0600, Christoph Lameter wrote:
> Joonsoo recently changed the handling of the freelist in SLAB. CCing him.
>
> On Thu, 6 Mar 2014, Dave Jones wrote:
>
> > I pretty much always use SLUB for my fuzzing boxes, but thought I'd give
> > SLAB a try
> > for a change.
On Tue, Mar 11, 2014 at 09:35:00AM +0900, Joonsoo Kim wrote:
> On Fri, Mar 07, 2014 at 11:18:30AM -0600, Christoph Lameter wrote:
> > Joonsoo recently changed the handling of the freelist in SLAB. CCing him.
> >
> > On Thu, 6 Mar 2014, Dave Jones wrote:
> >
> >
On Mon, Mar 10, 2014 at 09:24:55PM -0400, Dave Jones wrote:
> On Tue, Mar 11, 2014 at 10:01:35AM +0900, Joonsoo Kim wrote:
> > On Tue, Mar 11, 2014 at 09:35:00AM +0900, Joonsoo Kim wrote:
> > > On Fri, Mar 07, 2014 at 11:18:30AM -0600, Christoph Lameter wrote:
> > >