On Fri, Jul 04, 2014 at 02:03:10PM +0200, Vlastimil Babka wrote:
> On 07/04/2014 09:57 AM, Joonsoo Kim wrote:
> > Isolation is a really rare case, so !is_migrate_isolate() is the
> > likely case. Remove the unlikely macro.
>
> Good catch. Why not replace it with likely then?
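A minimal sketch of the swap being suggested, assuming a free-path call
site (the exact site isn't shown in this excerpt):

	/* isolation is rare, so the common case is taking this branch */
	if (likely(!is_migrate_isolate(migratetype)))
		__mod_zone_freepage_state(zone, 1 << order, migratetype);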
posed to CONFIG_DEBUG_SLAB_LEAK
which is mainly used for debugging, so memory overhead isn't a big
problem.
Signed-off-by: Joonsoo Kim
Reported-by: Dave Jones
Reported-by: Tetsuo Handa
Reviewed-by: Vladimir Davydov
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
On Wed, Apr 16, 2014 at 10:44:11AM -0700, Steven King wrote:
> On Wednesday 16 April 2014 9:06:57 am Geert Uytterhoeven wrote:
> > Hi Steven,
> >
> > On Wed, Apr 16, 2014 at 5:47 PM, Steven King wrote:
> > > --- a/mm/slab.c
> > > +++ b/mm/slab.c
> > > @@ -2572,13 +2572,13 @@ static void *alloc_sla
Signed-off-by: Joonsoo Kim
---
Hello, Pekka.
Could you send this for v3.15-rc2?
Without this patch, many architectures using a 2-byte freelist index cannot
work properly, I guess.
This patch is based on v3.15-rc1.
Thanks.
diff --git a/mm/slab.c b/mm/slab.c
index 388cb1a..d7f9f44 100644
--- a/mm/slab.c
+++ b
On Thu, Apr 17, 2014 at 12:09:43PM -0700, Steven King wrote:
> On Wednesday 16 April 2014 6:49:11 pm Joonsoo Kim wrote:
> > On Wed, Apr 16, 2014 at 10:44:11AM -0700, Steven King wrote:
> > > On Wednesday 16 April 2014 9:06:57 am Geert Uytterhoeven wrote:
> > > > Hi
On Tue, Jun 03, 2014 at 08:56:00AM +0200, Michal Nazarewicz wrote:
> On Tue, Jun 03 2014, Joonsoo Kim wrote:
> > Currently, there are two users of CMA functionality: one is the DMA
> > subsystem and the other is kvm on powerpc. They have their own code
> > to manage CMA
On Tue, Jun 03, 2014 at 09:00:48AM +0200, Michal Nazarewicz wrote:
> On Tue, Jun 03 2014, Joonsoo Kim wrote:
> > Now, we have a general CMA reserved area management framework,
> > so use it for future maintainability. There is no functional change.
> >
> > Signed-off-by:
On Thu, Jun 05, 2014 at 11:09:05PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > Currently, there are two users of CMA functionality: one is the DMA
> > subsystem and the other is kvm on powerpc. They have their own code
> > to manage CMA reserved area e
On Mon, Jun 09, 2014 at 04:09:38PM -0700, David Rientjes wrote:
> On Mon, 9 Jun 2014, Dave Jones wrote:
>
> > Kernel based on v3.15-7257-g963649d735c8
> >
> > Dave
> >
> > Oops: [#1] PREEMPT SMP
> > Modules linked in: dlci 8021q garp snd_seq_dummy bnep llc2 af_key bridge
> > stp fuse
On Fri, Jun 06, 2014 at 05:22:45PM +0400, Vladimir Davydov wrote:
> Since a dead memcg cache is destroyed only after the last slab allocated
> to it is freed, we must disable caching of empty slabs for such caches,
> otherwise they will be hanging around forever.
>
> This patch makes SLAB discard
On Fri, Jun 06, 2014 at 05:22:40PM +0400, Vladimir Davydov wrote:
> This will be used by the next patches.
>
> Signed-off-by: Vladimir Davydov
> Acked-by: Christoph Lameter
> ---
> include/linux/slab.h |  2 ++
> mm/memcontrol.c      |  1 +
> mm/slab.h            | 10 ++
> 3 fil
rink, which is always called on memcg offline (see
> memcg_unregister_all_caches).
>
> Signed-off-by: Vladimir Davydov
> Thanks-to: Joonsoo Kim
> ---
> mm/slub.c | 20
> 1 file changed, 20 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
>
			va->flags |= VM_LAZY_FREEING;
			va->flags &= ~VM_LAZY_FREE;
		}
	}
	rcu_read_unlock();
v2: add more commit description from Eric
[eduma...@google.com: add more commit description]
Reported-by: Richard Yao
Acked-by: Eric Du
From: Peter Hurley
> User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101
> Thunderbird/24.5.0
> MIME-Version: 1.0
> To: Joonsoo Kim , Andrew Morton
>
> CC: Zhang Yanfei , Johannes Weiner
> ,
> Andi Kleen , linux...@kvack.org,
> linux-kernel@vger.kernel.or
On Tue, Jun 10, 2014 at 07:18:34PM +0400, Vladimir Davydov wrote:
> On Tue, Jun 10, 2014 at 09:26:19AM -0500, Christoph Lameter wrote:
> > On Tue, 10 Jun 2014, Vladimir Davydov wrote:
> >
> > > Frankly, I incline to shrinking dead SLAB caches periodically from
> > > cache_reap too, because it look
ing success rates.
> >
> > Signed-off-by: Vlastimil Babka
> Reviewed-by: Minchan Kim
>
> Below is a nitpick.
>
> > Cc: Minchan Kim
> > Cc: Mel Gorman
> > Cc: Joonsoo Kim
> > Cc: Michal Nazarewicz
> > Cc: Naoya Horiguchi
> > Cc: Chris
We should free the memory for the bitmap when we find a zone mismatch;
otherwise this memory will leak.
Additionally, I copied the code comment from ppc kvm's cma code to explain
why we need to check for a zone mismatch.
Signed-off-by: Joonsoo Kim
diff --git a/drivers/base/dma-contiguous.c b/drivers/bas
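A minimal sketch of the fix described above, with assumed surrounding
code; the point is the kfree() of the bitmap on the error path:

	/*
	 * Pages in this range sit in a zone other than the one the CMA
	 * area was declared in, so the area can't be activated. Free the
	 * bitmap here; before this fix it was leaked.
	 */
	if (page_zone(pfn_to_page(pfn)) != zone) {
		kfree(cma->bitmap);
		return -EINVAL;
	}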
debug log on cma_activate_area().
Signed-off-by: Joonsoo Kim
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 83969f8..bd0bb81 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -144,7 +144,7 @@ void __init dma_contig
h DMA APIs while extending
core functions.
Signed-off-by: Joonsoo Kim
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index fb0cdce..8a44c82 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -231,9 +231,9 @@ core_initcall(cma_init_reserved_
ppc kvm's cma area management needs an alignment constraint on the
cma region, so support it to prepare for generalization of the cma area
management functionality.
Additionally, add some comments explaining why the alignment
constraint is needed on the cma region.
Signed-off-by: Joonsoo Kim
diff --git a/dr
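A hedged sketch of what such an alignment constraint looks like; the
exact computation is an assumption, not copied from the patch:

	/* sanitize alignment: at least pageblock/MAX_ORDER granularity */
	alignment = max(alignment, (phys_addr_t)PAGE_SIZE <<
			max(MAX_ORDER - 1, pageblock_order));
	base = ALIGN(base, alignment);
	size = ALIGN(size, alignment);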
Now, we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Acked-by: Michal Nazarewicz
Acked-by: Paolo Bonzini
Signed-off-by: Joonsoo Kim
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c
b/arch/powerpc/kvm
ff-by: Joonsoo Kim
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index bc4c171..9bc9340 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -38,6 +38,7 @@ struct cma {
unsigned long base_pfn;
unsigned long
agement and now it's time to do it. This patch
moves the core functions to mm/cma.c and changes the DMA APIs to use
these functions.
There is no functional change in the DMA APIs.
v2: There is no big change from v1 in mm/cma.c. Mostly renaming.
Acked-by: Michal Nazarewicz
Signed-off-by: Joonsoo Kim
Currently, we must take the mutex to manipulate the bitmap.
This job is really simple and short, so we don't need to sleep
if contended. So change it to a spinlock.
Signed-off-by: Joonsoo Kim
diff --git a/mm/cma.c b/mm/cma.c
index 22a5b23..3085e8c 100644
--- a/mm/cma.c
+++ b/mm/
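A minimal sketch of the conversion; the field and helper names are
assumptions:

	/* bitmap work is short and never sleeps, so a spinlock is enough */
	spin_lock(&cma->lock);
	bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
	spin_unlock(&cma->lock);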
m/cma.c is the same as v1's, so I carry the Acks to patches 6-7.
Patches 1-5 prepare some features to cover ppc kvm's requirements.
Patches 6-7 generalize the CMA reserved area management code and change users
to use it.
Patches 8-10 clean up minor things.
Joonsoo Kim (10):
DMA, CMA: clean-up log m
Conventionally, we put the output parameter at the end of the parameter
list. cma_declare_contiguous() doesn't follow that, so change it.
Additionally, move the cma_areas reference code down to the position
where it is really needed.
Signed-off-by: Joonsoo Kim
diff --git a/arch/powerpc/kvm/book3s_hv_buil
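A sketch of the convention being applied; the parameter list is
illustrative, not copied from the patch:

	/* before: output parameter in the middle of the list */
	int cma_declare_contiguous(phys_addr_t base, phys_addr_t size,
				   struct cma **res_cma, phys_addr_t limit);

	/* after: output parameter moved to the end */
	int cma_declare_contiguous(phys_addr_t base, phys_addr_t size,
				   phys_addr_t limit, struct cma **res_cma);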
We can remove one call site for clear_cma_bitmap() if we call it
first, before checking the error number.
Signed-off-by: Joonsoo Kim
diff --git a/mm/cma.c b/mm/cma.c
index 1e1b017..01a0713 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -282,11 +282,12 @@ struct page *cma_alloc(struct cma *cma, int count
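A sketch of the restructuring: clearing the bitmap once, before
inspecting the error code, collapses the two call sites into one. The
control flow is assumed from context:

	ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
	if (ret == 0) {
		page = pfn_to_page(pfn);
		break;
	}
	/* single call site: clear first, then branch on the error */
	clear_cma_bitmap(cma, pfn, count);
	if (ret != -EBUSY)
		break;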
On Thu, Jun 12, 2014 at 10:11:19AM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > We don't need an explicit 'CMA:' prefix, since we already define the prefix
> > 'cma:' in pr_fmt. So remove it.
> >
> > And, some logs print function
On Thu, Jun 12, 2014 at 02:18:53PM +0900, Minchan Kim wrote:
> Hi Joonsoo,
>
> On Thu, Jun 12, 2014 at 12:21:38PM +0900, Joonsoo Kim wrote:
> > We don't need an explicit 'CMA:' prefix, since we already define the prefix
> > 'cma:' in pr_fmt. So remove it.
>
On Thu, Jun 12, 2014 at 02:25:43PM +0900, Minchan Kim wrote:
> On Thu, Jun 12, 2014 at 12:21:39PM +0900, Joonsoo Kim wrote:
> > We should free the memory for the bitmap when we find a zone mismatch;
> > otherwise this memory will leak.
>
> Then, -stable stuff?
I don't think so.
On Thu, Jun 12, 2014 at 02:52:20PM +0900, Minchan Kim wrote:
> On Thu, Jun 12, 2014 at 12:21:41PM +0900, Joonsoo Kim wrote:
> > ppc kvm's cma area management needs an alignment constraint on the
> > cma region, so support it to prepare for generalization of the cma area
> >
On Thu, Jun 12, 2014 at 03:06:10PM +0900, Minchan Kim wrote:
> On Thu, Jun 12, 2014 at 12:21:42PM +0900, Joonsoo Kim wrote:
> > ppc kvm's cma region management requires arbitrary bitmap granularity,
> > since they want to reserve very large memory and manage this region
>
On Thu, Jun 12, 2014 at 01:24:34AM +0400, Vladimir Davydov wrote:
> On Tue, Jun 10, 2014 at 07:18:34PM +0400, Vladimir Davydov wrote:
> > On Tue, Jun 10, 2014 at 09:26:19AM -0500, Christoph Lameter wrote:
> > > On Tue, 10 Jun 2014, Vladimir Davydov wrote:
> > >
> > > > Frankly, I incline to shrink
On Fri, Jun 06, 2014 at 05:22:42PM +0400, Vladimir Davydov wrote:
> Since per memcg cache destruction is scheduled when the last slab is
> freed, to avoid use-after-free in kmem_cache_free we should either
> rearrange code in kmem_cache_free so that it won't dereference the cache
> ptr after freein
On Thu, Jun 12, 2014 at 04:08:11PM +0900, Minchan Kim wrote:
> On Thu, Jun 12, 2014 at 12:21:42PM +0900, Joonsoo Kim wrote:
> > ppc kvm's cma region management requires arbitrary bitmap granularity,
> > since they want to reserve very large memory and manage this region
>
On Thu, Jun 12, 2014 at 04:13:11PM +0900, Minchan Kim wrote:
> On Thu, Jun 12, 2014 at 12:21:43PM +0900, Joonsoo Kim wrote:
> > Currently, there are two users of CMA functionality: one is the DMA
> > subsystem and the other is kvm on powerpc. They have their own code
> > t
On Thu, Jun 12, 2014 at 04:19:31PM +0900, Minchan Kim wrote:
> On Thu, Jun 12, 2014 at 12:21:46PM +0900, Joonsoo Kim wrote:
> > Conventionally, we put the output parameter at the end of the parameter
> > list. cma_declare_contiguous() doesn't follow that, so change it.
>
> If you
On Thu, Jun 12, 2014 at 04:40:29PM +0900, Minchan Kim wrote:
> On Thu, Jun 12, 2014 at 12:21:47PM +0900, Joonsoo Kim wrote:
> > Currently, we must take the mutex to manipulate the bitmap.
> > This job is really simple and short, so we don't need to sleep
> > if co
On Wed, May 07, 2014 at 03:06:10PM +0900, Joonsoo Kim wrote:
> This patchset does some clean-up and tries to remove lockdep annotation.
>
> Patches 1~3 are just for really really minor improvement.
> Patches 4~10 are for clean-up and removing lockdep annotation.
>
> There
On Fri, May 23, 2014 at 05:57:58PM -0700, Laura Abbott wrote:
> On 5/12/2014 10:04 AM, Laura Abbott wrote:
> >
> > I'm going to see about running this through tests internally for comparison.
> > Hopefully I'll get useful results in a day or so.
> >
> > Thanks,
> > Laura
> >
>
> We ran some tes
ing
kswapd. Now, the previous patch changes the allocator's behaviour so that
movable allocations use pages in the cma reserved region aggressively,
so this watermark hack isn't needed anymore. Therefore remove it.
Acked-by: Michal Nazarewicz
Signed-off-by: Joonsoo Kim
diff --git a/mm/co
case, however, current __alloc_contig_migrate_range() does. But
I think that this isn't a problem, because in this case we may fail
again for the same reason.
Acked-by: Michal Nazarewicz
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5dba293..674ade7 100644
e free CMA pages aren't
used easily. But, with this patch, free CMA pages are used easily, so
this problem becomes possible. I will handle it in another patchset
after some investigation.
v2: In the fastpath, just replenish counters. Calculation is done whenever
the cma area is varied
Acked-by:
imple optimization which removes useless retrying, and patch 3
is for removing a useless alloc flag, so these are not important.
See patch 2 for more detailed description.
This patchset is based on v3.15-rc7.
Joonsoo Kim (3):
CMA: remove redundant retrying code in __alloc_contig_migrate_range
CMA: aggress
I haven't heard that this alien cache lock is contended, but reducing
the chance of contention is generally better. And with this change,
we can simplify the complex lockdep annotation in the slab code.
It will be implemented in the following patch.
Acked-by: Christoph Lameter
Signed-off-by: Jo
The BAD_ALIEN_MAGIC value isn't used anymore, so remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 4030a89..8476ffc 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -437,8 +437,6 @@ static struct kmem_cache kmem_cache_boot = {
.name = "kmem_cache",
ption.
Directly return the return value of clear_obj_pfmemalloc().
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 1fede40..e2c80df 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -215,9 +215,9 @@ static inline void set_obj_pfmemalloc(void **objp)
return;
}
-static inline
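A sketch of the change: make the helper return the cleaned pointer so
callers can return it directly. The flag name follows the diff context
above:

	static inline void *clear_obj_pfmemalloc(void **objp)
	{
		*objp = (void *)((unsigned long)*objp & ~SLAB_OBJ_PFMEMALLOC);
		return *objp;
	}

	/* a caller can now simply do: */
	return clear_obj_pfmemalloc(&objp);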
node isn't changed, so we don't need to retrieve this structure
every time we move the object. Maybe the compiler does this optimization,
but making it explicit is better.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index e2c80df..92d08e3 10
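A sketch of the hoisting described above; the loop body is illustrative:

	/* the node doesn't change, so look the structure up once */
	struct kmem_cache_node *n = get_node(cachep, node);

	list_for_each_entry(page, &n->slabs_partial, lru) {
		/* move objects; reuse 'n' instead of re-fetching it */
	}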
performance effect of this, but we'd better avoid holding the lock
as much as possible.
Commented by Christoph:
This is also good because kmem_cache_free is no longer called while
holding the node lock. So we avoid one case of recursion.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
es
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 25317fd..1fede40 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2993,7 +2993,7 @@ static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
static bool slab_should_failslab(struct kmem_cache *cachep, gfp_t flags)
{
Factor out the initialization of the array cache so it can be used in the following patch.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 7647728..755fb57 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -741,13 +741,8 @@ static void start_cpu_timer(int cpu
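A sketch of the factored-out helper; the field names are assumed from
struct array_cache:

	static void init_arraycache(struct array_cache *ac, int limit, int batch)
	{
		ac->avail = 0;
		ac->limit = limit;
		ac->batchcount = batch;
		ac->touched = 0;
	}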
d, so removing it would be better. This patch prepares for it by
introducing alien_cache and using it. In the following patch,
we remove the spinlock in array_cache.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 755fb57..41b7651 100644
--- a/mm/slab.c
+++
Now that we have a separate alien_cache structure, it'd be better to hold
the lock on the alien_cache while manipulating it. After that,
we don't need the lock on the array_cache, so remove it.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/sl
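A sketch of the scheme described above: the lock lives in the
alien_cache wrapper, not in array_cache. Names are assumed from the
series:

	struct alien_cache {
		spinlock_t lock;	/* protects 'ac' below */
		struct array_cache ac;
	};

	spin_lock(&alc->lock);
	__drain_alien_cache(cachep, &alc->ac, node);
	spin_unlock(&alc->lock);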
Now there is no code that holds two locks simultaneously, since
we don't call slab_destroy() while holding any lock. So the lockdep
annotation is useless now. Remove it.
v2: don't remove BAD_ALIEN_MAGIC in this patch. It will be removed
in the following patch.
Signed-off-by: Joonsoo
tation. As the diffstat shows, this makes the SLAB code much simpler.
Many patches in this series got an Ack from Christoph Lameter on the previous
iteration, but 1, 2, 9 and 10 still need one. There is no big change from the
previous iteration; it is just rebased on current linux-next.
Thanks.
Joonsoo Kim (10):
>> The most popular use of zram is in-memory swap for small embedded systems,
>> so I don't want to increase the memory footprint without good reason, even
>> if it helps a synthetic benchmark. Although it's 1M for 1G, it isn't small
>> if we consider the compression ratio and real free memory after boot
We c
imple optimization which removes useless retrying, and patch 3
is for removing a useless alloc flag, so these are not important.
See patch 2 for more detailed description.
This patchset is based on v3.15-rc4.
Thanks.
Joonsoo Kim (3):
CMA: remove redundant retrying code in __alloc_contig_migrate_range
tant for some system, so I can say that
this patch has advantages and disadvantages in terms of latency.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fac5509..3ff24d4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -389,6 +
ing
kswapd. Now, the previous patch changes the allocator's behaviour so that
movable allocations use pages in the cma reserved region aggressively,
so this watermark hack isn't needed anymore. Therefore remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index 6
case, however, current __alloc_contig_migrate_range() does. But
I think that this isn't a problem, because in this case we may fail
again for the same reason.
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5dba293..674ade7 100644
--- a/mm/page_alloc.c
+++ b/mm/pa
On Tue, May 06, 2014 at 07:22:52PM -0700, David Rientjes wrote:
> Async compaction terminates prematurely when need_resched(), see
> compact_checklock_irqsave(). This can never trigger, however, if the
> cond_resched() in isolate_migratepages_range() always takes care of the
> scheduling.
>
> I
On Wed, May 07, 2014 at 02:09:10PM +0200, Vlastimil Babka wrote:
> The compaction free scanner in isolate_freepages() currently remembers PFN of
> the highest pageblock where it successfully isolates, to be used as the
> starting pageblock for the next invocation. The rationale behind this is that
>>
> >>>>>>>> This is ensured by setting the freepage_migratetype appropriately
> >>>>>>>> when placing pages on pcp lists, and using the information when
> >>>>>>>> releasing them
> >>>>>>>
Signed-off-by: Joonsoo Kim
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f64632b..fdbb116 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2690,14 +2690,14 @@ void get_vmalloc_info(struct vmalloc_info *vmi)
prev_end = VMALLOC_START;
- spin_lock(&vmap_area_lock);
+
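A sketch of the direction the truncated diff suggests: walk
vmap_area_list under RCU instead of taking the spinlock. The loop body
is an assumption:

	rcu_read_lock();
	list_for_each_entry_rcu(va, &vmap_area_list, list) {
		/* accumulate vmalloc usage without holding vmap_area_lock */
	}
	rcu_read_unlock();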
_light, but it could in the future, so change it back to how it was.
And pass cc->mode to migrate_pages(), instead of passing MIGRATE_SYNC
to migrate_pages().
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7f97767..97c4185 100644
--- a/mm/page_alloc.c
+++ b/mm/page_all
'cma: Remove potential deadlock situation' introduces a per-cma-area mutex
for bitmap management. It is good, but there is one mistake: when we
can't find an appropriate area in the bitmap, we release the global
cma_mutex rather than cma->lock, and this is a bug. So fix it.
Signed-o
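A sketch of the one-line fix; the surrounding code is illustrative:

	if (bitmap_no >= bitmap_maxno) {
		mutex_unlock(&cma->lock);	/* was: mutex_unlock(&cma_mutex) */
		break;
	}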
On Thu, May 29, 2014 at 03:35:05PM +0900, Minchan Kim wrote:
> On Thu, May 29, 2014 at 03:25:50PM +0900, Joonsoo Kim wrote:
> > Before commit 'mm, compaction: embed migration mode in compact_control'
> > from David was merged, alloc_contig_range() used sync migration,
On Thu, May 29, 2014 at 04:24:58PM +0900, Gioh Kim wrote:
> I haven't fully understood your code. Please let me ask some silly questions.
>
> On 2014-05-28 4:04 PM, Joonsoo Kim wrote:
> > CMA is introduced to provide physically contiguous pages at runtime.
> > For this purp
On Thu, May 29, 2014 at 02:23:08PM -0700, Eric Dumazet wrote:
> On Thu, 2014-05-29 at 13:05 -0700, Andrew Morton wrote:
> > On Thu, 29 May 2014 15:22:34 +0900 Joonsoo Kim
> > wrote:
> >
> > > Richard Yao reported a month ago that his system has trouble
> >
On Thu, May 29, 2014 at 05:09:34PM +0900, Gioh Kim wrote:
>
> >>>+
> >>> /*
> >>>* Do the hard work of removing an element from the buddy allocator.
> >>>* Call me with the zone->lock already held.
> >>>@@ -1143,10 +1223,15 @@ __rmqueue_fallback(struct zone *zone, int order,
> >>>int st
2014-05-30 16:53 GMT+09:00 Gioh Kim :
> Joonsoo,
>
> I'm attaching a patch that combines __rmqueue and __rmqueue_cma.
> I didn't test it fully, but my board boots and works well if there are no
> frequent memory allocations.
>
> I'm sorry to send untested code.
> I just want to report this durin
ong.
>
> commit 026b08147923142e925a7d0aaa39038055ae0156
> Author: Tomasz Stanislawski
> Date: Wed Jun 12 14:05:02 2013 -0700
Hello, Ritesh.
Thanks for pointing that out.
>
> On Wed, May 28, 2014 at 12:34 PM, Joonsoo Kim wrote:
>> commit d95ea5d1('cma: fix watermark checking') introduces ALL
On Sat, May 31, 2014 at 03:04:58PM +0400, Vladimir Davydov wrote:
> On Fri, May 30, 2014 at 09:57:10AM -0500, Christoph Lameter wrote:
> > On Fri, 30 May 2014, Vladimir Davydov wrote:
> >
> > > (3) is a bit more difficult, because slabs are added to per-cpu partial
> > > lists lock-less. Fortunate
On Fri, May 30, 2014 at 05:51:11PM +0400, Vladimir Davydov wrote:
> There is no use in keeping free objects/slabs on dead memcg caches,
> because they will never be allocated. So let's make cache_reap() shrink
> as many free objects from such caches as possible.
>
> Note the difference between SLA
On Sat, May 31, 2014 at 09:02:51AM +0900, Michal Nazarewicz wrote:
> > On Thu, May 29, 2014 at 05:09:34PM +0900, Gioh Kim wrote:
> >> Is IS_ENABLED(CONFIG_CMA) necessary?
> >> What about if (migratetype == MIGRATE_MOVABLE && zone->managed_cma_pages) ?
>
>
On Mon, Jun 02, 2014 at 02:54:30PM +0900, Gioh Kim wrote:
> I found 2 problems on my platform.
>
> The 1st occurred when I set the CMA size to 528MB with 960MB total memory.
> I printed some values in adjust_managed_cma_page_count();
> the total value becomes 105439 and the cma value 131072.
> Finally movable
2014-06-02 21:10 GMT+09:00 Vladimir Davydov :
> On Mon, Jun 02, 2014 at 01:41:55PM +0900, Joonsoo Kim wrote:
>> According to my code reading, slabs_to_free() doesn't return the number of
>> free slabs. This bug was introduced by 0fa8103b. I think that it is
>> better to fi
2014-06-02 20:47 GMT+09:00 Vladimir Davydov :
> Hi Joonsoo,
>
> On Mon, Jun 02, 2014 at 01:24:36PM +0900, Joonsoo Kim wrote:
>> On Sat, May 31, 2014 at 03:04:58PM +0400, Vladimir Davydov wrote:
>> > On Fri, May 30, 2014 at 09:57:10AM -0500, Christoph Lameter wrote:
>
2014-06-02 19:47 GMT+09:00 Bartlomiej Zolnierkiewicz :
>
> Hi,
>
> On Monday, June 02, 2014 09:37:49 AM Ritesh Harjani wrote:
>> Hi Joonsoo,
>>
>> CC'ing the developer of the patch (Tomasz Stanislawski)
>>
>>
>> On Fri, May 30, 2014 at 8:16 PM
Now, we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Signed-off-by: Joonsoo Kim
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c
b/arch/powerpc/kvm/book3s_hv_builtin.c
index 8cd0dae..43c3f81 100644
--- a/arch/powerpc
from people who are involved with this stuff before actually
trying to merge this patchset. If all agree with this change, I will
resend it after rc1.
Thanks.
Joonsoo Kim (3):
CMA: generalize CMA reserved area management functionality
DMA, CMA: use general CMA reserved area management framework
P
Now, we have a general CMA reserved area management framework,
so use it for future maintainability. There is no functional change.
Signed-off-by: Joonsoo Kim
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index b3fe1cc..4eac559 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base
ead through
this patch.
This change could also help developers who want to use CMA in their
new feature development, since they can use CMA easily without
copying & pasting this reserved area management code.
Signed-off-by: Joonsoo Kim
diff --git a/drivers/base/Kconfig b/drivers/base/Kconfi
2014-06-03 17:16 GMT+09:00 Vladimir Davydov :
> On Mon, Jun 02, 2014 at 11:03:51PM +0900, Joonsoo Kim wrote:
>> 2014-06-02 20:47 GMT+09:00 Vladimir Davydov :
>> > Hi Joonsoo,
>> >
>> > On Mon, Jun 02, 2014 at 01:24:36PM +0900, Joonsoo Kim wrote:
>> >
the glob to include
>> files like mm/slab_common.c
>>
>> Signed-off-by: Christoph Lameter
>
> Acked-by: David Rientjes
Acked-by: Joonsoo Kim
Thanks.
On Thu, Jun 12, 2014 at 11:53:16AM +0200, Michal Nazarewicz wrote:
> On Thu, Jun 12 2014, Michal Nazarewicz wrote:
> > I used “function(arg1, arg2, …)” at the *beginning* of functions when
> > the arguments passed to the function were included in the message. In
> > all other cases I left it at j
On Thu, Jun 12, 2014 at 12:02:38PM +0200, Michal Nazarewicz wrote:
> On Thu, Jun 12 2014, Joonsoo Kim wrote:
> > ppc kvm's cma area management needs an alignment constraint on
>
> I've noticed it earlier and cannot seem to get to terms with this. It
> should IMO be PPC,
On Thu, Jun 12, 2014 at 12:19:54PM +0200, Michal Nazarewicz wrote:
> On Thu, Jun 12 2014, Joonsoo Kim wrote:
> > ppc kvm's cma region management requires arbitrary bitmap granularity,
> > since they want to reserve very large memory and manage this region
> > with bitma
On Thu, Jun 12, 2014 at 02:37:43PM +0900, Minchan Kim wrote:
> On Thu, Jun 12, 2014 at 12:21:40PM +0900, Joonsoo Kim wrote:
> > To prepare future generalization work on cma area management code,
> > we need to separate the core cma management code from the DMA APIs.
> > We w
On Sat, Jun 14, 2014 at 03:46:44PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > Currently, there are two users of CMA functionality: one is the DMA
> > subsystem and the other is kvm on powerpc. They have their own code
> > to manage CMA reserved area e
On Sat, Jun 14, 2014 at 03:35:33PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > Now, we have a general CMA reserved area management framework,
> > so use it for future maintainability. There is no functional change.
> >
> > Acked-by: Michal Nazarewic
On Sat, Jun 14, 2014 at 12:55:39PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > Currently, there are two users of CMA functionality: one is the DMA
> > subsystem and the other is kvm on powerpc. They have their own code
> > to manage CMA reserved area e
On Sat, Jun 14, 2014 at 02:23:59PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > Now, we have a general CMA reserved area management framework,
> > so use it for future maintainability. There is no functional change.
> >
> > Acked-by: Michal Nazarewic
We can remove one call site for clear_cma_bitmap() if we call it
first, before checking the error number.
Acked-by: Minchan Kim
Reviewed-by: Michal Nazarewicz
Reviewed-by: Zhang Yanfei
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/mm/cma.c b/mm/cma.c
index 0cf50da
y Ack to patch 6-7.
This patchset is based on linux-next 20140610.
Patches 1-4 prepare some features to cover PPC KVM's requirements.
Patches 5-6 generalize the CMA reserved area management code and change users
to use it.
Patches 7-9 clean up minor things.
Joonsoo Kim (9):
DMA, CMA: fix possible
h DMA APIs while extending
core functions.
v3: move descriptions to exported APIs (Minchan)
pass aligned base and size to dma_contiguous_early_fixup() (Minchan)
Acked-by: Michal Nazarewicz
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/drivers/base/dma-contiguou
order->alignment (Minchan)
clarify code documentation per Minchan's comment (Minchan)
Acked-by: Michal Nazarewicz
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 9021762..5f62c28 100644
--- a/dri
found this possibility during code review and, IMO,
this patch isn't suitable for the stable tree.
Acked-by: Zhang Yanfei
Reviewed-by: Michal Nazarewicz
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 8
Nazarewicz
Acked-by: Paolo Bonzini
Tested-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c
b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 8056107..a41e625 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -37,8
base' ahead of 'size' (Minchan)
Acked-by: Michal Nazarewicz
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c
b/arch/powerpc/kvm/book3s_hv_builtin.c
index 3960e0b..6cf498a 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin