identical case as migrate_pages()
Signed-off-by: Joonsoo Kim
Cc: Christoph Lameter
Acked-by: Christoph Lameter
Acked-by: Michal Nazarewicz
---
[Patch 2/4]: add "Acked-by: Michal Nazarewicz "
[Patch 3/4]: commit log is changed according to Michal Nazarewicz's suggestion.
There is
Additionally, correct the comment above do_migrate_pages().
Signed-off-by: Joonsoo Kim
Cc: Sasha Levin
Cc: Christoph Lameter
Acked-by: Michal Nazarewicz
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1d771e4..0732729 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -948,7 +948,7 @@ static int
The move_pages() syscall may return success when do_move_page_to_node_array()
returns a positive value, which means that migration failed.
This patch changes the return value of do_move_page_to_node_array()
so that it does not return a positive value. This fixes the problem.
Signed-off-by: Joonsoo Kim
Cc: Brice Goglin
re that all the cpu partial slabs are removed
from the cpu partial list. At this point, we can expect that
this_cpu_cmpxchg() mostly succeeds.
Signed-off-by: Joonsoo Kim
Cc: Christoph Lameter
Cc: David Rientjes
Acked-by: Christoph Lameter
---
Change log: Just add "Acked-by: Christoph Lameter
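A rough sketch of the pattern being referred to, modeled on put_cpu_partial() in mm/slub.c of that era (simplified, not the exact hunk): once unfreeze_partials() has emptied the per-cpu partial list, the this_cpu_cmpxchg() below rarely has to retry.

	do {
		oldpage = this_cpu_read(s->cpu_slab->partial);

		if (oldpage && drain && oldpage->pobjects > s->cpu_partial) {
			local_irq_save(flags);
			unfreeze_partials(s);	/* drain every cpu partial slab */
			local_irq_restore(flags);
			oldpage = NULL;
		}

		/* link the new page in front of whatever is left */
		page->next = oldpage;
	} while (this_cpu_cmpxchg(s->cpu_slab->partial, oldpage, page) != oldpage);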
release a lock first, and re-take a lock if necessary" policy is
helpful here.
Signed-off-by: Joonsoo Kim
Cc: Christoph Lameter
---
This is v2 of "slub: release a lock if freeing object with a lock is failed in
__slab_free()"
Subject and commit log are changed from v1.
Code is
2012/7/28 Christoph Lameter :
> On Sat, 28 Jul 2012, Joonsoo Kim wrote:
>
>> move_pages() syscall may return success in case that
>> do_move_page_to_node_array return positive value which means migration
>> failed.
>
> Nope. It only means that the migration for s
2012/7/28 Christoph Lameter :
> On Sat, 28 Jul 2012, Joonsoo Kim wrote:
>
>> do_migrate_pages() can return the number of pages not migrated.
>> Because migrate_pages() syscall return this value directly,
>> migrate_pages() syscall may return the number of pages not migr
2012/7/28 Christoph Lameter :
> On Sat, 28 Jul 2012, Joonsoo Kim wrote:
>
>> Subject and commit log are changed from v1.
>
> That looks a bit better. But the changelog could use more cleanup and
> clearer expression.
>
>> @@ -2490,25 +2492,17 @@ static void __slab_fre
2012/7/31 Christoph Lameter :
> On Sat, 28 Jul 2012, JoonSoo Kim wrote:
>
>> 2012/7/28 Christoph Lameter :
>> > On Sat, 28 Jul 2012, Joonsoo Kim wrote:
>> >
>> >> move_pages() syscall may return success in case that
>> >> do_move_page_to_
Hi Glauber.
2012/9/18 Glauber Costa :
> diff --git a/mm/slub.c b/mm/slub.c
> index 0b68d15..9d79216 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2602,6 +2602,7 @@ redo:
> } else
> __slab_free(s, page, x, addr);
>
> + kmem_cache_verify_dead(s);
> }
As far as u kn
>> "*_memcg = memcg" should be executed when "memcg_charge_kmem" succeeds.
>> "memcg_charge_kmem" returns 0 on success in charging.
>> Therefore, I think this code is wrong.
>> If I am right, it is a serious bug that affects the behavior of the whole patchset.
>
> Which is precisely what it does. ret is
Hi, Glauber.
>> 2012/9/18 Glauber Costa :
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 0b68d15..9d79216 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2602,6 +2602,7 @@ redo:
>>> } else
>>> __slab_free(s, page, x, addr);
>>>
>>> + kmem_cache_verify_dead(s)
code for checking this.
Signed-off-by: Joonsoo Kim
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index a1135c6..1a65132 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -739,8 +739,10 @@ void wq_worker_waking_up(struct task_struct *task,
unsigned int cpu)
{
struct w
2012/10/19 Joonsoo Kim :
> This patchset introduces setup_timer_deferrable() macro.
> Using it makes code simple and understandable.
>
> This patchset doesn't make any functional difference.
> It is just for clean-up.
>
> It is based on v3.7-rc1
>
>
2012/10/25 Christoph Lameter :
> On Wed, 24 Oct 2012, Pekka Enberg wrote:
>
>> So I hate this patch with a passion. We don't have any fastpaths in
>> mm/slab_common.c nor should we. Those should be allocator specific.
>
> I have similar thoughts on the issue. Lets keep the fast paths allocator
> sp
To calculate the index of a pkmap, using PKMAP_NR() is more understandable
and maintainable, so change it.
Cc: Mel Gorman
Signed-off-by: Joonsoo Kim
diff --git a/mm/highmem.c b/mm/highmem.c
index d517cd1..b3b3d68 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -99,7 +99,7 @@ struct page
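A hedged sketch of the change described above, since the hunk itself is truncated; kmap_to_page() in mm/highmem.c is assumed to be the spot, with the open-coded shift replaced by the existing PKMAP_NR() helper.

struct page *kmap_to_page(void *vaddr)
{
	unsigned long addr = (unsigned long)vaddr;

	if (addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP)) {
		int i = PKMAP_NR(addr);	/* was: (addr - PKMAP_ADDR(0)) >> PAGE_SHIFT */
		return pte_page(pkmap_page_table[i]);
	}

	return virt_to_page(addr);
}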
We can find a free page_address_map instance without the page_address_pool.
So remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/highmem.c b/mm/highmem.c
index 017bad1..731cf9a 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -323,11 +323,7 @@ struct page_address_map {
void *virtual
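A hedged sketch of how a free page_address_map can be found without the pool (assumed shape, not the actual hunk): keep one statically allocated entry per pkmap slot and select it by index, so no free list is needed.

static struct page_address_map page_address_maps[LAST_PKMAP];

/* inside set_page_address(), when installing a mapping: */
pam = &page_address_maps[PKMAP_NR((unsigned long)virtual)];
pam->page = page;
pam->virtual = virtual;
list_add_tail(&pam->list, &pas->lh);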
flush_all_zero_pkmaps()
and return the index of the last flushed entry.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ef788b5..0683869 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -32,6 +32,7 @@ static inline void invalidate_kernel_vmap_range
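A hedged sketch of the reworked flush_all_zero_pkmaps() the log describes (simplified; PKMAP_INVALID_INDEX is an assumed sentinel name, presumably the one-line addition to include/linux/highmem.h shown above).

static unsigned int flush_all_zero_pkmaps(void)
{
	unsigned int i, index = PKMAP_INVALID_INDEX;

	flush_cache_kmaps();

	for (i = 0; i < LAST_PKMAP; i++) {
		struct page *page;

		/* count == 1 means the slot is free but its TLB entry is stale */
		if (pkmap_count[i] != 1)
			continue;
		pkmap_count[i] = 0;

		page = pte_page(pkmap_page_table[i]);
		pte_clear(&init_mm, PKMAP_ADDR(i), &pkmap_page_table[i]);
		set_page_address(page, NULL);
		index = i;
	}
	if (index != PKMAP_INVALID_INDEX)
		flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP));

	return index;	/* the caller can restart its pkmap scan from here */
}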
The pool_lock protects the page_address_pool from concurrent access,
but access to the page_address_pool is already protected by kmap_lock.
So remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/highmem.c b/mm/highmem.c
index b3b3d68..017bad1 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
kmaps().
Joonsoo Kim (5):
mm, highmem: use PKMAP_NR() to calculate an index of pkmap
mm, highmem: remove useless pool_lock
mm, highmem: remove page_address_pool list
mm, highmem: makes flush_all_zero_pkmaps() return index of last
flushed entry
mm, highmem: get virtual address of the page
In flush_all_zero_pkmaps(), we have the index of the pkmap associated with the page.
Using this index, we can simply get the virtual address of the page.
So change it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/highmem.c b/mm/highmem.c
index 65beb9a..1417f4f 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
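A hedged illustration of the point above: with the pkmap index i at hand inside the flush loop, the page's kernel virtual address follows directly from the index, so no reverse lookup through page_address() is needed.

unsigned long vaddr = PKMAP_ADDR(i);	/* virtual address of pkmap slot i */
set_page_address(page, NULL);		/* the same vaddr identifies the mapping being torn down */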
2012/10/29 Minchan Kim :
> On Mon, Oct 29, 2012 at 04:12:55AM +0900, Joonsoo Kim wrote:
>> In current code, after flush_all_zero_pkmaps() is invoked,
>> then re-iterate all pkmaps. It can be optimized if flush_all_zero_pkmaps()
>> return index of flushed entry. With
Hi, Minchan.
2012/10/29 Minchan Kim :
> Hi Joonsoo,
>
> On Mon, Oct 29, 2012 at 04:12:51AM +0900, Joonsoo Kim wrote:
>> This patchset clean-up and optimize highmem related code.
>>
>> [1] is just clean-up and doesn't introduce any functional change.
>> [
Without defining ARCH=arm, building perf for Android ARM will fail,
because it needs architecture-specific files.
So add the related information to the documentation.
Signed-off-by: Joonsoo Kim
Cc: Irina Tirdea
Cc: David Ahern
Cc: Ingo Molnar
Cc: Jiri Olsa
Cc: Namhyung Kim
Cc: Paul Mackerras
Commit 099a19d9 ('allow limited allocation before slab is online') changes the
method of allocating a chunk from kzalloc to pcpu_mem_alloc.
But it missed changing the matching free operation.
It may not be a problem for now, but fix it for consistency.
Signed-off-by: Joonsoo Kim
Cc: Christo
Hi, Glauber.
2012/10/19 Glauber Costa :
> We are able to match a cache allocation to a particular memcg. If the
> task doesn't change groups during the allocation itself - a rare event,
> this will give us a good picture about who is the first group to touch a
> cache page.
>
> This patch uses th
2012/10/19 Glauber Costa :
> +void kmem_cache_destroy_memcg_children(struct kmem_cache *s)
> +{
> + struct kmem_cache *c;
> + int i;
> +
> + if (!s->memcg_params)
> + return;
> + if (!s->memcg_params->is_root_cache)
> + return;
> +
> + /*
>
On Mon, Aug 26, 2013 at 06:31:35PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > On Thu, Aug 22, 2013 at 12:38:12PM +0530, Aneesh Kumar K.V wrote:
> >> Joonsoo Kim writes:
> >>
> >> > Hello, Aneesh.
> >> >
> >> >
On Mon, Aug 26, 2013 at 06:39:35PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > Currently, we have two variable to represent whether we can use reserved
> > page or not, chg and avoid_reserve, respectively. With aggregating these,
> > we can have more c
> >> @@ -2504,6 +2498,8 @@ static int hugetlb_cow(struct mm_struct *mm, struct
> >> vm_area_struct *vma,
> >>struct hstate *h = hstate_vma(vma);
> >>struct page *old_page, *new_page;
> >>int outside_reserve = 0;
> >> + long chg;
> >> + bool use_reserve;
> >>unsigned long mmun_sta
Hello,
On Tue, Aug 27, 2013 at 04:06:04PM -0600, Jonathan Corbet wrote:
> On Thu, 22 Aug 2013 17:44:16 +0900
> Joonsoo Kim wrote:
>
> > With build-time size checking, we can overload the RCU head over the LRU
> > of struct page to free pages of a slab in rcu context
On Thu, Jul 04, 2013 at 12:00:44PM +0200, Michal Hocko wrote:
> On Thu 04-07-13 13:24:50, Joonsoo Kim wrote:
> > On Thu, Jul 04, 2013 at 12:01:43AM +0800, Zhang Yanfei wrote:
> > > On 07/03/2013 11:51 PM, Zhang Yanfei wrote:
> > > > On 07/03/2013 11:28 PM, Michal Hock
On Wed, Jul 10, 2013 at 11:17:03AM +0200, Michal Hocko wrote:
> On Wed 10-07-13 09:31:42, Joonsoo Kim wrote:
> > On Thu, Jul 04, 2013 at 12:00:44PM +0200, Michal Hocko wrote:
> > > On Thu 04-07-13 13:24:50, Joonsoo Kim wrote:
> > > > On Thu, Jul 04, 2013 at 12:01:
On Wed, Jul 10, 2013 at 09:20:27AM +0800, Zhang Yanfei wrote:
> On 2013/7/10 8:31, Joonsoo Kim wrote:
> > On Thu, Jul 04, 2013 at 12:00:44PM +0200, Michal Hocko wrote:
> >> On Thu 04-07-13 13:24:50, Joonsoo Kim wrote:
> >>> On Thu, Jul 04, 2013 at 12:01:43AM +0800, Zhang
On Wed, Jul 10, 2013 at 04:19:06PM -0300, Arnaldo Carvalho de Melo wrote:
> From: Joonsoo Kim
>
> Currently, lib lk doesn't use CROSS_COMPILE environment variable, so
> cross build always fails.
Hello, Arnaldo.
The fix for the lib lk cross build has already been merged into mainline.
It
On Wed, Jul 10, 2013 at 03:52:42PM -0700, Dave Hansen wrote:
> On 07/03/2013 01:34 AM, Joonsoo Kim wrote:
> > - if (page)
> > + do {
> > + page = buffered_rmqueue(preferred_zone, zone, order,
> > +
On Wed, Jul 10, 2013 at 01:27:37PM +0200, Michal Hocko wrote:
> On Wed 10-07-13 18:55:33, Joonsoo Kim wrote:
> > On Wed, Jul 10, 2013 at 11:17:03AM +0200, Michal Hocko wrote:
> > > On Wed 10-07-13 09:31:42, Joonsoo Kim wrote:
> > > > On Thu, Jul 04, 2013 at 12:00:
On Wed, Jul 10, 2013 at 10:38:20PM -0700, Dave Hansen wrote:
> On 07/10/2013 06:02 PM, Joonsoo Kim wrote:
> > On Wed, Jul 10, 2013 at 03:52:42PM -0700, Dave Hansen wrote:
> >> On 07/03/2013 01:34 AM, Joonsoo Kim wrote:
> >>> - if (page)
> >>> +
On Wed, Sep 04, 2013 at 11:38:04AM +0800, Wanpeng Li wrote:
> Hi Joonsoo,
> On Fri, Aug 23, 2013 at 03:35:39PM +0900, Joonsoo Kim wrote:
> >On Thu, Aug 22, 2013 at 04:47:25PM +, Christoph Lameter wrote:
> >> On Thu, 22 Aug 2013, Joonsoo Kim wrote:
> >
> [..
On Wed, Sep 04, 2013 at 10:17:46AM +0800, Wanpeng Li wrote:
> Hi Joonsoo,
> On Mon, Sep 02, 2013 at 05:38:54PM +0900, Joonsoo Kim wrote:
> >This patchset implements byte sized indexes for the freelist of a slab.
> >
> >Currently, the freelist of a slab consist of un
On Tue, Sep 03, 2013 at 02:15:42PM +, Christoph Lameter wrote:
> On Mon, 2 Sep 2013, Joonsoo Kim wrote:
>
> > This patchset implements byte sized indexes for the freelist of a slab.
> >
> > Currently, the freelist of a slab consist of unsigned int sized indexes.
>
On Fri, Aug 09, 2013 at 06:26:37PM +0900, Joonsoo Kim wrote:
> If parallel fault occur, we can fail to allocate a hugepage,
> because many threads dequeue a hugepage to handle a fault of same address.
> This makes reserved pool shortage just for a little while and this cause
> faultin
Hello, David.
First of all, thanks for review!
On Thu, Sep 05, 2013 at 11:15:53AM +1000, David Gibson wrote:
> On Fri, Aug 09, 2013 at 06:26:37PM +0900, Joonsoo Kim wrote:
> > If parallel fault occur, we can fail to allocate a hugepage,
> > because many threads dequeue a hugep
On Wed, Sep 04, 2013 at 05:33:05PM +0900, Joonsoo Kim wrote:
> On Tue, Sep 03, 2013 at 02:15:42PM +, Christoph Lameter wrote:
> > On Mon, 2 Sep 2013, Joonsoo Kim wrote:
> >
> > > This patchset implements byte sized indexes for the freelist of a slab.
> > >
&
ould check subpool counter
for a new hugepage. This patch implements it.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
---
Replenishing commit message and adding reviewed-by tag.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 12b6581..ea1ae0a 100644
--- a/mm/hugetlb.c
+++ b/mm
n't use the
LRU mechanism, so there is no other user of this page except us.
Therefore, we can use this flag safely.
Signed-off-by: Joonsoo Kim
---
Replenishing commit message only.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6c8eec2..3f834f1 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetl
On Thu, Sep 05, 2013 at 02:33:56PM +, Christoph Lameter wrote:
> On Thu, 5 Sep 2013, Joonsoo Kim wrote:
>
> > I think that all patchsets deserve to be merged, since it reduces memory
> > usage and
> > also improves performance. :)
>
> Could you clean th
likely branch to functions used for setting/getting
objects to/from the freelist, but we may get more benefits from
this change.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index a0e49bb..bd366e5 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -565,8 +565,16 @@ static inline struct
bytes
so that 97 bytes, that is, more than 75% of the object size, are wasted.
In the 64 byte sized slab case, no space is wasted if we use on-slab.
So set the off-slab determining constraint to 128 bytes.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index bd366e5..d01a2f0 100644
--- a/mm
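A hedged sketch of the described change (the location is assumed to be the off-slab decision in __kmem_cache_create() in mm/slab.c): lower the threshold from PAGE_SIZE >> 3 (512 bytes) to PAGE_SIZE >> 5 (128 bytes).

	if ((size >= (PAGE_SIZE >> 5)) && !slab_early_init &&
	    !(flags & SLAB_NOLEAKTRACE))
		/*
		 * Size is large; placing the slab management structure
		 * off-slab should allow better packing of the objects.
		 */
		flags |= CFLGS_OFF_SLAB;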
hes are on top of my previous posting.
https://lkml.org/lkml/2013/8/22/137
Joonsoo Kim (4):
slab: factor out calculate nr objects in cache_estimate
slab: introduce helper functions to get/set free object
slab: introduce byte sized index for the freelist of a slab
slab: make more sl
This logic is not simple to understand, so factor it out into a separate
function to help readability. Additionally, we can use this change in the
following patch, which lets the freelist have a differently sized index
according to the number of objects.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm
In the following patches, getting/setting free objects from the freelist
is changed so that a simple cast no longer works. Therefore,
introduce helper functions.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 9d4bad5..a0e49bb 100644
--- a/mm/slab.c
+++ b/mm/s
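A hedged sketch of the kind of helpers the log describes (names and layout are assumptions, not the real hunk): hide the index type behind get/set accessors so a later patch can shrink it to one byte without touching the callers.

typedef unsigned int freelist_idx_t;	/* a later patch can make this one byte */

static inline freelist_idx_t get_free_obj(void *freelist, unsigned int idx)
{
	return ((freelist_idx_t *)freelist)[idx];
}

static inline void set_free_obj(void *freelist, unsigned int idx,
				freelist_idx_t val)
{
	((freelist_idx_t *)freelist)[idx] = val;
}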
On Sun, Sep 08, 2013 at 10:46:00AM -0400, Sasha Levin wrote:
> Hi all,
>
> While fuzzing with trinity inside a KVM tools guest, running latest -next
> kernel, I've
> stumbled on the following:
>
> [ 998.281867] BUG: unable to handle kernel NULL pointer dereference at
> 0274
> [ 99
On Fri, Sep 06, 2013 at 03:58:18PM +, Christoph Lameter wrote:
> On Fri, 6 Sep 2013, Joonsoo Kim wrote:
>
> > Currently, the freelist of a slab consist of unsigned int sized indexes.
> > Most of slabs have less number of objects than 256, since restriction
> > for pa
On Fri, Sep 06, 2013 at 03:48:04PM +, Christoph Lameter wrote:
> On Fri, 6 Sep 2013, Joonsoo Kim wrote:
>
> > }
> > *num = nr_objs;
> > - *left_over = slab_size - nr_objs*buffer_size - mgmt_size;
> > + *left_over = slab_size - (nr_objs * buffer_size)
On Fri, Sep 06, 2013 at 02:23:16PM +0900, Joonsoo Kim wrote:
> If we fail with a reserved page, just calling put_page() is not sufficient,
> because put_page() invoke free_huge_page() at last step and it doesn't
> know whether a page comes from a reserved pool or not. So it doesn
me a number about 30% lower than I expected - ~180k files/s
> when I was expecting somewhere around 250k files/s.
>
> I did a bisect, and the bisect landed on this commit:
>
> commit 23f0d2093c789e612185180c468fa09063834e87
> Author: Joonsoo Kim
> Date: Tue Aug 6 17:36:4
On Mon, Sep 09, 2013 at 02:44:03PM +, Christoph Lameter wrote:
> On Mon, 9 Sep 2013, Joonsoo Kim wrote:
>
> > 32 bytes is not the minimum object size, but the minimum *kmalloc* object size
> > in the default configuration. There are some slabs whose object size is
> > less than
On Tue, Sep 10, 2013 at 04:15:20PM +1000, Dave Chinner wrote:
> On Tue, Sep 10, 2013 at 01:47:59PM +0900, Joonsoo Kim wrote:
> > On Tue, Sep 10, 2013 at 02:02:54PM +1000, Dave Chinner wrote:
> > > Hi folks,
> > >
> > > I just updated my performance test VM to t
On Tue, Sep 10, 2013 at 10:31:41AM +0100, Mel Gorman wrote:
> @@ -5045,15 +5038,50 @@ static int need_active_balance(struct lb_env *env)
>
> static int active_load_balance_cpu_stop(void *data);
>
> +static int should_we_balance(struct lb_env *env)
> +{
> + struct sched_group *sg = env->sd-
On Tue, Sep 10, 2013 at 09:25:05PM +, Christoph Lameter wrote:
> On Tue, 10 Sep 2013, Joonsoo Kim wrote:
>
> > On Mon, Sep 09, 2013 at 02:44:03PM +, Christoph Lameter wrote:
> > > On Mon, 9 Sep 2013, Joonsoo Kim wrote:
> > >
> > > > 32 byte is
On Wed, Sep 11, 2013 at 02:30:03PM +, Christoph Lameter wrote:
> On Thu, 22 Aug 2013, Joonsoo Kim wrote:
>
> > And, therefore we should check pfmemalloc in page flag of first page,
> > but current implementation don't do that. virt_to_head_page(obj) just
> >
On Wed, Sep 11, 2013 at 02:22:25PM +, Christoph Lameter wrote:
> On Wed, 11 Sep 2013, Joonsoo Kim wrote:
>
> > Anyway, could you review my previous patchset, that is, 'overload struct
> > slab
> > over struct page to reduce memory usage'? I'm not sure
On Wed, Sep 11, 2013 at 02:39:22PM +, Christoph Lameter wrote:
> On Thu, 22 Aug 2013, Joonsoo Kim wrote:
>
> > With build-time size checking, we can overload the RCU head over the LRU
> > of struct page to free pages of a slab in rcu context. This really help to
> > i
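A hedged sketch of the quoted idea (the function name and placement are assumptions; kmem_rcu_free is the slab's existing RCU callback): a struct rcu_head is overlaid on page->lru, guarded by a build-time size check.

static void slab_free_pages_rcu(struct page *page)
{
	struct rcu_head *head;

	/* reusing page->lru is only safe if an rcu_head actually fits there */
	BUILD_BUG_ON(sizeof(struct rcu_head) > sizeof(page->lru));

	head = (struct rcu_head *)&page->lru;
	call_rcu(head, kmem_rcu_free);
}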
lkml.org/lkml/2013/8/22/137
Joonsoo Kim (4):
slab: factor out calculate nr objects in cache_estimate
slab: introduce helper functions to get/set free object
slab: introduce byte sized index for the freelist of a slab
slab: make more slab management structure off the sla
On Tue, Sep 03, 2013 at 03:01:46PM +0800, Wanpeng Li wrote:
> There is a race window between vmap_area free and show vmap_area information.
>
> A					B
>
> remove_vm_area
> spin_lock(&vmap_area_lock);
> va->flags &= ~VM_VM_AREA;
> spin_unlock(&vmap
On Tue, Sep 03, 2013 at 03:51:39PM +0800, Wanpeng Li wrote:
> On Tue, Sep 03, 2013 at 04:42:21PM +0900, Joonsoo Kim wrote:
> >On Tue, Sep 03, 2013 at 03:01:46PM +0800, Wanpeng Li wrote:
> >> There is a race window between vmap_area free and show vmap_area
> >> i
On Mon, Sep 16, 2013 at 10:09:09PM +1000, David Gibson wrote:
> > >
> > > > + *do_dequeue = false;
> > > > spin_unlock(&hugetlb_lock);
> > > > page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
> > > > if (!page) {
> > >
> > > I think the
We should clear the page's private flag when returning the page to
the page allocator or the hugepage pool. This patch fixes it.
Signed-off-by: Joonsoo Kim
---
Hello, Andrew.
I sent the new version of the commit ('07443a8') before you sent the pull request,
but it isn't included. It m
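A hedged illustration of the fix's intent (the exact placement, presumably in free_huge_page(), is an assumption): note whether the page carried the reserved marker, then clear the flag before the page goes back to either pool.

	bool restore_reserve = PagePrivate(page);	/* remembered so the reserve count can be given back */

	ClearPagePrivate(page);	/* the flag must not leak into the page's next user */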
On Mon, Sep 30, 2013 at 02:35:14PM -0700, Andrew Morton wrote:
> On Mon, 30 Sep 2013 16:59:44 +0900 Joonsoo Kim wrote:
>
> > We should clear the page's private flag when returning the page to
> > the page allocator or the hugepage pool. This patch fixes it.
> >
&
unately it
> is possible that __ac_put_obj() checks SlabPfmemalloc on a tail page
> and while this is harmless, it is sloppy. This patch ensures that the head
> page is always used.
>
> This problem was originally identified by Joonsoo Kim.
>
> [js1...@gmail.com: Original i
it is expected that order-0 pages are in use. Unfortunately it
> is possible that __ac_put_obj() checks SlabPfmemalloc on a tail page
> and while this is harmless, it is sloppy. This patch ensures that the head
> page is always used.
>
> This problem was originally identifie
2012/8/25 JoonSoo Kim :
> 2012/8/16 Joonsoo Kim :
>> When we try to free an object, there are some cases where we need
>> to take a node lock. This is a necessary step for preventing a race.
>> After taking a lock, then we try to cmpxchg_double_slab().
>> But, there
2012/9/7 Mel Gorman :
> This churns code a lot more than is necessary. How about this as a
> replacement patch?
>
> ---8<---
> From: Joonsoo Kim
> Subject: [PATCH] slab: do ClearSlabPfmemalloc() for all pages of slab
>
> Right now, we call ClearSlabPfmemalloc() for
våg
CC: Dima Zavin
CC: Robert Love
Signed-off-by: Joonsoo Kim
diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
index 634b9ae..2fde9df 100644
--- a/drivers/staging/android/ashmem.c
+++ b/drivers/staging/android/ashmem.c
@@ -49,6 +49,7 @@ struct ashmem_area {
Hello, Dan.
2012/12/2 Dan Carpenter :
> On Sat, Dec 01, 2012 at 02:45:57AM +0900, Joonsoo Kim wrote:
>> @@ -614,21 +616,35 @@ static int ashmem_pin_unpin(struct ashmem_area *asma,
>> unsigned long cmd,
>> pgstart = pin.offset / PAGE_SIZE;
>> pgend = pgs
2012/12/3 Dan Carpenter :
> On Mon, Dec 03, 2012 at 09:09:59AM +0900, JoonSoo Kim wrote:
>> Hello, Dan.
>>
>> 2012/12/2 Dan Carpenter :
>> > On Sat, Dec 01, 2012 at 02:45:57AM +0900, Joonsoo Kim wrote:
>> >> @@ -614,21 +616,35 @@ static int ashmem
"make cscope O=. SRCARCH=arm SUBARCH=xxx"
Signed-off-by: Joonsoo Kim
diff --git a/scripts/tags.sh b/scripts/tags.sh
index 79fdafb..a400c88 100755
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -48,13 +48,14 @@ find_arch_sources()
for i in $archincludedir; do
d after building the kernel.
Signed-off-by: Joonsoo Kim
diff --git a/scripts/tags.sh b/scripts/tags.sh
index a400c88..ef9668c 100755
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -96,6 +96,29 @@ all_sources()
find_other_sources '*.[chS]'
}
+all_compiled_sources()
+{
+
"make cscope O=. SRCARCH=arm SUBARCH=xxx"
Signed-off-by: Joonsoo Kim
---
v2: change bash specific '[[]]' to 'case in' statement.
diff --git a/scripts/tags.sh b/scripts/tags.sh
index 79fdafb..38483f4 100755
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -48,13 +48,14 @
cuted after building the kernel.
Signed-off-by: Joonsoo Kim
---
v2: change bash specific '[[]]' to 'case in' statement.
use COMPILED_SOURCE env var, instead of abusing SUBARCH
diff --git a/scripts/tags.sh b/scripts/tags.sh
index 38483f4..9c02921 100755
--- a/scripts/tags.
2012/12/4 Michal Marek :
> On 3.12.2012 17:22, Joonsoo Kim wrote:
>> We usually have interest in compiled files only,
>> because they are strongly related to an individual's work.
>> Current tags.sh can't select compiled files, so support it.
>>
>> We can
Now we have a handy macro for initializing a deferrable timer.
Using it makes the code clean and easy to understand.
Additionally, in some driver code, use setup_timer() instead of init_timer().
This patch doesn't make any functional difference.
Signed-off-by: Joonsoo Kim
Cc: Len Brown
Cc:
.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/timer.h b/include/linux/timer.h
index 8c5a197..5950276 100644
--- a/include/linux/timer.h
+++ b/include/linux/timer.h
@@ -151,6 +151,8 @@ static inline void init_timer_on_stack_key(struct
timer_list *timer,
#define setup_timer(timer, fn, data
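A minimal sketch of what the added macro could look like, modeled on the existing setup_timer()/init_timer_deferrable() pair of the v3.7-era timer API (the actual hunk is truncated above):

#define setup_timer_deferrable(timer, fn, data)			\
	do {							\
		init_timer_deferrable(timer);			\
		(timer)->function = (fn);			\
		(timer)->data = (data);				\
	} while (0)

A caller then replaces an init_timer_deferrable() call plus two field assignments with a single setup_timer_deferrable() invocation.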
This patchset introduces the setup_timer_deferrable() macro.
Using it makes the code simple and understandable.
This patchset doesn't make any functional difference.
It is just a clean-up.
It is based on v3.7-rc1.
Joonsoo Kim (2):
timer: add setup_timer_deferrable() macro
timer: us
Hello, Eric.
Thank you very much for the kind comment on my question.
I have one more question related to the network subsystem.
Please let me know what I am misunderstanding.
2012/10/14 Eric Dumazet :
> In latest kernels, skb->head no longer use kmalloc()/kfree(), so SLAB vs
> SLUB is less a concern for n
r:
    text     data      bss       dec      hex filename
10022627  1443136  5722112  17187875  1064423 vmlinux
Cc: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
With Christoph's patchset (common kmalloc caches:
'[15/15] Common Kmalloc cache determination') which is not merged
kmalloc() and kmalloc_node() of SLUB aren't inlined when @flags = __GFP_DMA.
This patch optimizes this case,
so when @flags = __GFP_DMA, they will be inlined into the generic code.
Cc: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/slub_def.h b/include/linux/slub_
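A hedged sketch of the idea (assumed shape, not the merged hunk): when the size is a compile-time constant, pick the DMA cache at compile time as well, instead of always falling back to __kmalloc(). This assumes kmalloc_dma_caches[], which mm/slub.c keeps under CONFIG_ZONE_DMA, is made visible to the header as part of the change.

static __always_inline void *kmalloc(size_t size, gfp_t flags)
{
	if (__builtin_constant_p(size) && size <= SLUB_MAX_SIZE) {
		int index = kmalloc_index(size);
		struct kmem_cache *s;

		if (!index)
			return ZERO_SIZE_PTR;

		s = (flags & __GFP_DMA) ? kmalloc_dma_caches[index] :
					  kmalloc_caches[index];
		return kmem_cache_alloc_trace(s, flags, size);
	}
	return __kmalloc(size, flags);
}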
This patchset does minor cleanup of the workqueue code.
The first patch makes a minor behavior change, but it is trivial.
The others don't make any functional difference.
These are based on v3.7-rc1.
Joonsoo Kim (3):
workqueue: optimize mod_delayed_work_on() when @delay == 0
workqueue: trivia
After try_to_grab_pending(), __queue_delayed_work() is invoked
in mod_delayed_work_on(). When @delay == 0, we can call __queue_work()
directly in order to avoid setting a useless timer.
Signed-off-by: Joonsoo Kim
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index d951daa..c57358e 100644
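A hedged sketch of the described shortcut, simplified from the v3.7-era mod_delayed_work_on() (not the exact hunk): once the pending work has been grabbed, queue it immediately when no delay was requested, instead of arming a zero-length timer.

	do {
		ret = try_to_grab_pending(&dwork->work, true, &flags);
	} while (unlikely(ret == -EAGAIN));

	if (likely(ret >= 0)) {
		if (!delay)
			__queue_work(cpu, wq, &dwork->work);
		else
			__queue_delayed_work(cpu, wq, dwork, delay);
		local_irq_restore(flags);
	}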
Commit 63d95a91 ('workqueue: use @pool instead of @gcwq or @cpu where
applicable') changes the approach to accessing nr_running.
Thus, wq_worker_waking_up() doesn't use @cpu anymore.
Remove it, and remove the comment related to it.
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/
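A hedged sketch of the resulting function, simplified from the v3.7-era kernel/workqueue.c: the @cpu parameter and the comment about it simply disappear, since nr_running is reached through the worker's pool (the caller in kernel/sched/core.c loses the argument accordingly).

void wq_worker_waking_up(struct task_struct *task)
{
	struct worker *worker = kthread_data(task);

	if (!(worker->flags & WORKER_NOT_RUNNING))
		atomic_inc(get_pool_nr_running(worker->pool));
}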
The return type of work_busy() is unsigned int,
but there is a return statement in work_busy() returning the boolean value 'false'.
It is not a problem, because 'false' is treated as '0'.
However, fixing it makes the code more robust.
Signed-off-by: Joonsoo Kim
diff --git
Hello, Glauber.
2012/10/23 Glauber Costa :
> On 10/22/2012 06:45 PM, Christoph Lameter wrote:
>> On Mon, 22 Oct 2012, Glauber Costa wrote:
>>
>>> + * kmem_cache_free - Deallocate an object
>>> + * @cachep: The cache the allocation was from.
>>> + * @objp: The previously allocated object.
>>> + *
>
2012/10/21 Tejun Heo :
> On Sun, Oct 21, 2012 at 01:30:07AM +0900, Joonsoo Kim wrote:
>> Commit 63d95a91 ('workqueue: use @pool instead of @gcwq or @cpu where
>> applicable') changes an approach to access nr_running.
>> Thus, wq_worker_waking_up() doesn't use @
2012/10/22 Christoph Lameter :
> On Sun, 21 Oct 2012, Joonsoo Kim wrote:
>
>> kmalloc() and kmalloc_node() of the SLUB isn't inlined when @flags =
>> __GFP_DMA.
>> This patch optimize this case,
>> so when @flags = __GFP_DMA, it will be inlined into generic c