ome accessor functions to pass aligned base and size to
dma_contiguous_early_fixup() function
move MAX_CMA_AREAS to cma.h
Acked-by: Michal Nazarewicz
Acked-by: Zhang Yanfei
Acked-by: Minchan Kim
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/mm/dma-mapping.
We don't need an explicit 'CMA:' prefix, since we already define the
'cma:' prefix in pr_fmt. So remove it.
Acked-by: Michal Nazarewicz
Reviewed-by: Zhang Yanfei
Signed-off-by: Joonsoo Kim
diff --git a/mm/cma.c b/mm/cma.c
index 9961120..4b251b0 100644
--- a/mm/cma.c
+++ b/mm
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 5f62c28..c6eeb2c 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -38,6 +38,7 @@ struct cma {
unsigned long bas
ones
Reported-by: Tetsuo Handa
Signed-off-by: Joonsoo Kim
---
Hello,
Please review this one. :)
There were some bug reports on this issue 2~3 months ago, and
at that time I sent a patch to fix the problem, but it didn't get reviewed.
So this issue has been pending until now.
https://lkml.org/lkml/2
On Mon, Jun 16, 2014 at 11:11:35AM +0200, Marek Szyprowski wrote:
> Hello,
>
> On 2014-06-16 07:40, Joonsoo Kim wrote:
> >Currently, there are two users on CMA functionality, one is the DMA
> >subsystem and the other is the KVM on powerpc. They have their own code
> >
On Mon, Jun 16, 2014 at 03:27:19PM +0900, Minchan Kim wrote:
> Hi, Joonsoo
>
> On Mon, Jun 16, 2014 at 02:40:43PM +0900, Joonsoo Kim wrote:
> > We should free memory for bitmap when we find zone mis-match,
> > otherwise this memory will leak.
> >
> > Additiona
on, calculate_slab_order(), that wants to know
how much space per object is spent for freelist management.
Cc:
Reported-by: Dave Jones
Reported-by: Tetsuo Handa
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 9ca3b87..3070b92 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@
On Tue, Jun 17, 2014 at 11:29:33AM +0400, Vladimir Davydov wrote:
> Hi,
>
> On Tue, Jun 17, 2014 at 10:09:52AM +0900, Joonsoo Kim wrote:
> [...]
> > To fix the problem, I introduces object status buffer on each slab.
> > With this, we can track object status precisel
On Tue, Jun 24, 2014 at 04:36:47PM +1000, Michael Ellerman wrote:
> Commit e58e263 "PPC, KVM, CMA: use general CMA reserved area management
> framework" in next-20140624 removed arch/powerpc/kvm/book3s_hv_cma.c but
> neglected to update the Makefile, thus breaking the build.
>
> Signed-off-by: Mic
On Fri, Jun 13, 2014 at 12:38:22AM +0400, Vladimir Davydov wrote:
> Since a dead memcg cache is destroyed only after the last slab allocated
> to it is freed, we must disable caching of free objects/slabs for such
> caches, otherwise they will be hanging around forever.
>
> For SLAB that means we
. The current implementation missed the equal case, so if min_partial
is set to 0, at least one slab could remain cached. This is a critical
problem for the kmemcg destruction logic, which doesn't work properly
if any slabs stay cached. This patch fixes the problem.
Signed-off-by: Joonsoo Kim
diff
rink, which is always called on memcg offline (see
> memcg_unregister_all_caches).
>
> Signed-off-by: Vladimir Davydov
> Thanks-to: Joonsoo Kim
> ---
> mm/slub.c | 11 +++
> 1 file changed, 11 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> i
On Wed, Jun 18, 2014 at 01:51:44PM -0700, Andrew Morton wrote:
> On Tue, 17 Jun 2014 10:25:07 +0900 Joonsoo Kim wrote:
>
> > > >v2:
> > > > - Although this patchset looks very different with v1, the end result,
> > > > that is, mm/cma.c is same
ompaction for the Normal zone,
> and DMA32 zones on both nodes were thus not considered for compaction.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christo
hedule
> or abort async compaction.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
> Cc: David Rientjes
> ---
> mm/co
On Tue, Jun 24, 2014 at 05:42:50PM +0200, Vlastimil Babka wrote:
> On 06/24/2014 10:33 AM, Joonsoo Kim wrote:
> >On Fri, Jun 20, 2014 at 05:49:34PM +0200, Vlastimil Babka wrote:
> >>isolate_migratepages_range() is the main function of the compaction scanner,
> >>called e
On Tue, Jun 24, 2014 at 05:29:27PM +0200, Vlastimil Babka wrote:
> On 06/24/2014 10:23 AM, Joonsoo Kim wrote:
> >On Fri, Jun 20, 2014 at 05:49:32PM +0200, Vlastimil Babka wrote:
> >>When direct sync compaction is often unsuccessful, it may become deferred
> >>for
>
On Mon, Jun 23, 2014 at 07:59:17PM +0900, Tetsuo Handa wrote:
> Joonsoo Kim wrote:
> > On Thu, Apr 10, 2014 at 08:54:37PM +0900, Tetsuo Handa wrote:
> > > Joonsoo Kim wrote:
> > > > There was another report about this problem and I have already fixed
> > >
On Fri, Jan 31, 2014 at 09:36:46AM -0800, Davidlohr Bueso wrote:
> From: Davidlohr Bueso
>
> The kernel can currently only handle a single hugetlb page fault at a time.
> This is due to a single mutex that serializes the entire path. This lock
> protects from spurious OOM errors under conditions
On Wed, Jan 29, 2014 at 05:52:41PM +0100, Vlastimil Babka wrote:
> On 01/10/2014 09:48 AM, Joonsoo Kim wrote:
> >On Thu, Jan 09, 2014 at 09:27:20AM +, Mel Gorman wrote:
> >>On Thu, Jan 09, 2014 at 04:04:40PM +0900, Joonsoo Kim wrote:
> >>>Hello,
> >>>
On Mon, Feb 03, 2014 at 02:49:32AM -0800, David Rientjes wrote:
> On Mon, 3 Feb 2014, Mel Gorman wrote:
>
> > > Page migration will fail for memory that is pinned in memory with, for
> > > example, get_user_pages(). In this case, it is unnecessary to take
> > > zone->lru_lock or isolating the pag
On Mon, Feb 03, 2014 at 05:20:46PM -0800, David Rientjes wrote:
> On Tue, 4 Feb 2014, Joonsoo Kim wrote:
>
> > I think that you need more code to skip this type of page correctly.
> > Without page_mapped() check, this code makes migratable pages be skipped,
> > sin
On Mon, Feb 03, 2014 at 06:00:56PM -0800, David Rientjes wrote:
> On Tue, 4 Feb 2014, Joonsoo Kim wrote:
>
> > Okay. It can't fix your situation. Anyway, *normal* anon pages may be mapped
> > and have positive page_count(), so your code such as
> > '!page_mappin
On Wed, Feb 05, 2014 at 12:56:40PM -0800, Hugh Dickins wrote:
> On Tue, 4 Feb 2014, David Rientjes wrote:
>
> > Page migration will fail for memory that is pinned in memory with, for
> > example, get_user_pages(). In this case, it is unnecessary to take
> > zone->lru_lock or isolating the page an
On Fri, Dec 06, 2013 at 01:37:26PM -0500, Naoya Horiguchi wrote:
> On Fri, Dec 06, 2013 at 03:42:16PM +0100, Vlastimil Babka wrote:
> > On 12/06/2013 09:41 AM, Joonsoo Kim wrote:
> > >migrate_pages() should return number of pages not migrated or error code.
> > >When un
On Fri, Dec 06, 2013 at 10:21:43AM -0200, Rafael Aquini wrote:
> On Fri, Dec 06, 2013 at 05:53:31PM +0900, Joonsoo Kim wrote:
> > Hello, Rafael.
> >
> > I looked at some compaction code and found that some oddity about
> > balloon compaction. In isolate_migra
should put back the new hugepage if
!hugepage_migration_support(). If not, we would leak hugepage memory.
Signed-off-by: Joonsoo Kim
diff --git a/mm/migrate.c b/mm/migrate.c
index c6ac87a..b1cfd01 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1011,7 +1011,7 @@ static int unmap_and_move_huge_page
age_node(), we always
try to allocate the page on the exact node by referencing pm->node. So it is
sufficient to set the node id of the new page in new_page_node(), instead of
in unmap_and_move().
These two changes make the result argument useless, so we can remove it
entirely. It makes the code more understandable.
From: Naoya Horiguchi
Let's add a comment about where the failed page goes to, which makes
code more readable.
Acked-by: Christoph Lameter
Signed-off-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/mm/migrate.c b/mm/migrate.c
index 3747fcd..c6ac87a 100644
--- a/mm/migr
it on
update_pageblock_skip() to prevent from setting the wrong information.
Cc: # 3.7+
Acked-by: Vlastimil Babka
Reviewed-by: Naoya Horiguchi
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index 805165b..f58bcd0 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -134,6 +1
Here is the patchset for correcting and cleaning up migration
related stuff. These are random corrections and clean-ups, so
please see each patch ;)
Thanks.
Naoya Horiguchi (1):
mm/migrate: add comment about permanent failure path
Joonsoo Kim (6):
mm/migrate: correct failure handling if
fail_migrate_page() isn't used anywhere, so remove it.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index e4671f9..4308018 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -41,9 +41,6 @@ extern int migrate_page(s
now, so fix it.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f5096b5..e4671f9 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -35,7 +35,6 @@ enum migrate_reason {
#ifdef CONFIG_MIGRATION
-extern void putback_lru_pages
queue_pages_range() isolates hugetlbfs pages, and putback_lru_pages() can't
handle these. We should change it to putback_movable_pages().
Naoya said that it is worth going into stable, because it can break the
in-use hugepage list.
Cc: # 3.12
Reviewed-by: Naoya Horiguchi
Signed-off-by: Joonso
On Mon, Dec 09, 2013 at 08:36:23AM -0800, Davidlohr Bueso wrote:
> On Mon, 2013-09-30 at 16:47 +0900, Joonsoo Kim wrote:
> > On Mon, Sep 16, 2013 at 10:09:09PM +1000, David Gibson wrote:
> > > > >
> > > > > > + *do_dequeue = false;
> &g
On Mon, Dec 09, 2013 at 04:17:32PM +, Christoph Lameter wrote:
> On Mon, 9 Dec 2013, Joonsoo Kim wrote:
>
> > We should remove the page from the list if we fail without ENOSYS,
> > since migrate_pages() consider error cases except -ENOMEM and -EAGAIN
> > as permanen
On Tue, Dec 10, 2013 at 10:17:56AM +0800, Wanpeng Li wrote:
> Hi Joonsoo,
> On Mon, Dec 09, 2013 at 06:10:43PM +0900, Joonsoo Kim wrote:
> >We should remove the page from the list if we fail without ENOSYS,
> >since migrate_pages() consider error cases except -ENOMEM and -EAGA
> >@@ -1704,6 +1688,12 @@ int migrate_misplaced_page(struct page *page, struct
> >vm_area_struct *vma,
> > nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page,
> > node, MIGRATE_ASYNC, MR_NUMA_MISPLACED);
> > if (nr_remaining) {
> >+
On Tue, Dec 10, 2013 at 05:51:47PM +0900, Joonsoo Kim wrote:
> > >@@ -1704,6 +1688,12 @@ int migrate_misplaced_page(struct page *page,
> > >struct vm_area_struct *vma,
> > > nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page,
> >
On Mon, Dec 09, 2013 at 04:40:06PM +, Christoph Lameter wrote:
> On Mon, 9 Dec 2013, Joonsoo Kim wrote:
>
> > First, we don't use error number in fail case. Call-path related to
> > new_page_node() is shown in the following.
> >
> > do_move_pa
On Wed, Dec 11, 2013 at 04:00:56PM +, Christoph Lameter wrote:
> On Wed, 11 Dec 2013, Joonsoo Kim wrote:
>
> > In do_move_pages(), if error occurs, 'goto out_pm' is executed and the
> > page status doesn't back to userspace. So we don't need to store err
On Wed, Dec 11, 2013 at 11:24:31AM +0100, Vlastimil Babka wrote:
> Changelog since V1 (thanks to the reviewers!)
> o Included "trace compaction being and end" patch in the series (mgorman)
> o Changed variable names and comments in patches 2 and 5(mgorman)
> o More thorough measurem
To change the protection method for region tracking to a fine-grained one,
we pass the resv_map, instead of a list_head, to the region manipulation
functions. This doesn't introduce any functional change; it just prepares
for the next step.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonso
Now, alloc_huge_page() only returns -ENOSPC on failure,
so we don't need to worry about other return values.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d960f46..0f56bbf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2621,7 +2
.V
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cf0eaff..ef70b6f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2282,15 +2282,6 @@ static void hugetlb_vm_op_open(struct vm_area_struct
*vma)
kref_get(&resv->refs);
}
-static void re
Just move the outside_reserve check down, and don't check
vma_needs_reservation() when outside_reserve is true. It is a slightly
optimized implementation.
This makes the code more readable.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0f
re to fine grained lock, and this difference hinders it.
So, before changing it, unify the region structure handling.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index d19b30a..2040275 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/huge
Now we have the infrastructure to remove this awkward mutex
which serializes all faulting tasks, so remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 843c554..6edf423 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2595,9 +2595,7 @@ static int
get a SIGBUS signal until there is no
concurrent user, and so we can ensure that no one gets a SIGBUS if there
are enough hugepages.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ee304d1..daca347 100644
--- a/include/linux/hugetlb.h
+++ b/include
If we fail with an allocated hugepage, it takes some effort to recover
properly, so it is better to avoid allocating a hugepage as long as possible.
So move up anon_vma_prepare(), which can fail in an OOM situation.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c
.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9927407..d960f46 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1177,13 +1177,11 @@ static void vma_commit_reservation(struct hstate *h,
}
static struct page *alloc_huge_page(struct
The current code includes 'Caller expects lock to be held' in every error
path. We can clean this up by doing the error handling in one place.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1817720..a9ae7d3 100644
--- a/mm/hugetlb.
er, please let me know!
Thanks.
[1] http://lwn.net/Articles/558863/
"[PATCH] mm/hugetlb: per-vma instantiation mutexes"
[2] https://lkml.org/lkml/2013/9/4/630
Joonsoo Kim (14):
mm, hugetlb: unify region structure handling
mm, hugetlb: region manipulation functions tak
Currently, we have two variables to represent whether we can use a reserved
page or not: chg and avoid_reserve. By aggregating these, we can have
cleaner code. This makes no functional difference.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/mm
same as vma_has_reserves(), so remove vma_has_reserves().
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f394454..9d456d4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -469,39 +469,6 @@ void reset_vma_resv_huge_pages(struct vm_area_struct
ure, so it can be modified by two processes concurrently.
To solve this, I introduce a lock in resv_map and make the region
manipulation functions grab the lock before they do the actual work. This
makes region tracking safe.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/hugetlb.h b/include/linux
Until now, we got a resv_map in two ways according to the mapping type.
This makes the code dirty and unreadable, so unify it.
Reviewed-by: Aneesh Kumar K.V
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ef70b6f..f394454 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
On Mon, Dec 16, 2013 at 03:43:43PM +0100, Ludovic Desroches wrote:
> Hello,
>
> On Fri, Dec 13, 2013 at 10:59:09AM +0900, Joonsoo Kim wrote:
> > On Thu, Dec 12, 2013 at 03:36:19PM +0100, Ludovic Desroches wrote:
> > > fix mmc mailing list address error
> > >
&g
Hello, Andrew.
On Wed, Dec 18, 2013 at 04:28:58PM -0800, Andrew Morton wrote:
> On Thu, 19 Dec 2013 08:16:35 +0800 Wanpeng Li
> wrote:
>
> > page_get_anon_vma() called in page_referenced_anon() will lock and
> > increase the refcount of anon_vma, page won't be locked for anonymous
> > page. T
On Wed, Dec 18, 2013 at 05:04:29PM -0800, Andrew Morton wrote:
> On Thu, 19 Dec 2013 09:58:05 +0900 Joonsoo Kim wrote:
>
> > On Wed, Dec 18, 2013 at 04:28:58PM -0800, Andrew Morton wrote:
> > > On Thu, 19 Dec 2013 08:16:35 +0800 Wanpeng Li
> > > wrote:
> > &
> 0b 66 0f
> 1f 44 00 0
> 0 eb fe 66 0f 1f 44 00 00 f6 47 08 01 74
> [ 588.707515] RIP [] rmap_walk+0x10/0x50
> [ 588.707515] RSP
>
> Reported-by: Sasha Levin
> Signed-off-by: Wanpeng Li
Reviewed-by: Joonsoo Kim
Thanks to all relevant people. :)
--
To
On Thu, Dec 19, 2013 at 02:55:10PM +0900, Joonsoo Kim wrote:
> On Thu, Dec 19, 2013 at 01:41:55PM +0800, Wanpeng Li wrote:
> > This bug is introduced by commit 37f093cdf(mm/rmap: use rmap_walk() in
> > page_referenced()). page_get_anon_vma() called in page_referenced_anon()
&g
On Thu, Dec 19, 2013 at 05:02:02PM -0800, Andrew Morton wrote:
> On Wed, 18 Dec 2013 15:53:59 +0900 Joonsoo Kim wrote:
>
> > If parallel fault occur, we can fail to allocate a hugepage,
> > because many threads dequeue a hugepage to handle a fault of same address.
> >
Hello, Davidlohr.
On Thu, Dec 19, 2013 at 06:31:21PM -0800, Davidlohr Bueso wrote:
> On Thu, 2013-12-19 at 17:02 -0800, Andrew Morton wrote:
> > On Wed, 18 Dec 2013 15:53:59 +0900 Joonsoo Kim
> > wrote:
> >
> > > If parallel fault occur, we can fail to allocate
On Thu, Dec 19, 2013 at 06:15:20PM -0800, Andrew Morton wrote:
> On Fri, 20 Dec 2013 10:58:10 +0900 Joonsoo Kim wrote:
>
> > On Thu, Dec 19, 2013 at 05:02:02PM -0800, Andrew Morton wrote:
> > > On Wed, 18 Dec 2013 15:53:59 +0900 Joonsoo Kim
> > > wrote:
> >
ystem: 1645 s
* After
time :: stress-highalloc 3225.51 user 732.40 system 1542.76 elapsed
time :: stress-highalloc 3524.31 user 749.63 system 1512.88 elapsed
time :: stress-highalloc 3610.55 user 757.20 system 1505.70 elapsed
avg system: 1519 s
That is 7% reduced system time.
Thanks.
Joonsoo K
isolating, retry to acquire the lock.
I think that it is better to use every SWAP_CLUSTER_MAX-th pfn as the
criterion for dropping the lock. This does no harm for pfn 0x0 because,
at that time, the locked variable would be false.
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
a for highorder is pageblock order, so calling it once
within a pageblock range is no problem.
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index bbe1260..0d821a2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -245,6 +245,7 @@ static unsigned
.
Additionally, clean up the logic in suitable_migration_target() to simplify
it. There are no functional changes from this clean-up.
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index 3a91a2e..bbe1260 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -217,21 +217,12 @@ static
h fix this situation by updating last_pageblock_nr.
Additionally, move the PageBuddy() check after the pageblock unit check,
since the pageblock check is the first thing we should do, and this makes
things simpler.
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index b1ba297..985b782 10
This is just a clean-up to reduce code size and improve readability.
There is no functional change.
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index 985b782..7a4e3b7 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -554,11 +554,7 @@ isolate_migratepages_range
On Fri, Feb 07, 2014 at 10:14:26AM +0100, Vlastimil Babka wrote:
> On 02/07/2014 06:08 AM, Joonsoo Kim wrote:
> > This patchset is related to the compaction.
> >
> > patch 1 fixes contrary implementation of the purpose of compaction.
> > patch 2~4 are for optimizati
On Fri, Feb 07, 2014 at 10:36:13AM +0100, Vlastimil Babka wrote:
> On 02/07/2014 06:08 AM, Joonsoo Kim wrote:
> > suitable_migration_target() checks that pageblock is suitable for
> > migration target. In isolate_freepages_block(), it is called on every
> > page and this is in
On Fri, Feb 07, 2014 at 11:30:02AM +0100, Vlastimil Babka wrote:
> On 02/07/2014 06:08 AM, Joonsoo Kim wrote:
> > isolation_suitable() and migrate_async_suitable() is used to be sure
> > that this pageblock range is fine to be migragted. It isn't needed to
> > call it o
On Wed, Dec 04, 2013 at 04:33:43PM +, Christoph Lameter wrote:
> On Thu, 5 Dec 2013, Joonsoo Kim wrote:
>
> > Now we have cpu partial slabs facility, so I think that slowpath isn't
> > really
> > slow. And it doesn't much increase the management over
queue_pages_range() isolates hugetlbfs pages, and putback_lru_pages() can't
handle these. We should change it to putback_movable_pages().
Signed-off-by: Joonsoo Kim
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index eca4a31..6d04d37 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1
migrate_pages() should return the number of pages not migrated, or an error
code. When unmap_and_move() returns -EAGAIN, the outer loop re-executes
without initialising nr_failed, which makes nr_failed over-counted.
So this patch corrects it by initialising nr_failed in the outer loop.
Signed-off-by: Joonsoo Kim
it on
update_pageblock_skip() to prevent from setting the wrong information.
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index 805165b..f58bcd0 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -134,6 +134,10 @@ static void update_pageblock_skip(struct compact_control
now, so fix it.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f5096b5..7782b74 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -35,7 +35,6 @@ enum migrate_reason {
#ifdef CONFIG_MIGRATION
-extern void putback_lru_pages
Hello, Rafael.
I looked at some compaction code and found some oddity about
balloon compaction. In isolate_migratepages_range(), if we meet
!PageLRU(), we check whether this page is for balloon compaction.
In this case, the code needs the lock held. Is the lock really needed? I
can't find any relationsh
On Thu, Dec 05, 2013 at 06:50:50PM +, Christoph Lameter wrote:
> On Thu, 5 Dec 2013, Joonsoo Kim wrote:
>
> > I could try. But my trial would not figure this out, since my machine has
> > just 4 cores which normally cannot produce heavy contention.
>
> I think that is
2013/12/6 Zhang Yanfei :
> Hello
>
> On 12/06/2013 04:41 PM, Joonsoo Kim wrote:
>> Some part of putback_lru_pages() and putback_movable_pages() is
>> duplicated, so it could confuse us what we should use.
>> We can remove putback_lru_pages() since it is not really n
On Tue, Dec 24, 2013 at 11:00:12PM +1100, David Gibson wrote:
> On Mon, Dec 23, 2013 at 10:05:17AM +0900, Joonsoo Kim wrote:
> > On Sun, Dec 22, 2013 at 12:58:19AM +1100, David Gibson wrote:
> > > On Wed, Dec 18, 2013 at 03:53:49PM +0900, Joonsoo Kim wrote:
> > > > T
On Fri, Jan 03, 2014 at 11:55:45AM -0800, Davidlohr Bueso wrote:
> Hi Joonsoo,
>
> Sorry about the delay...
>
> On Mon, 2013-12-23 at 11:11 +0900, Joonsoo Kim wrote:
> > On Mon, Dec 23, 2013 at 09:44:38AM +0900, Joonsoo Kim wrote:
> > > On Fri, Dec 20, 2013 at 10:
On Fri, Jan 03, 2014 at 03:54:04PM +0100, Ludovic Desroches wrote:
> Hi,
>
> On Tue, Dec 24, 2013 at 03:38:37PM +0900, Joonsoo Kim wrote:
>
> [...]
>
> > > > > > > I think that this commit may not introduce a bug. This patch
> > > > > &g
On Sun, Jan 05, 2014 at 05:04:56PM +0800, fengguang...@intel.com wrote:
> Hi Joonsoo,
>
> We noticed the below changes for commit 23f0d2093c ("sched: Factor out
> code to should_we_balance()") in test vm-scalability/300s-lru-file-readtwice
Hello, Fengguang.
There was a mistake in this patch and
On Fri, Jan 03, 2014 at 02:18:16PM -0800, Andrew Morton wrote:
> On Fri, 03 Jan 2014 10:01:47 -0800 Dave Hansen wrote:
>
> > This is a minor update from the last version. The most notable
> > thing is that I was able to demonstrate that maintaining the
> > cmpxchg16 optimization has _some_ value
On Sat, Jan 04, 2014 at 12:45:45PM -0500, Mikulas Patocka wrote:
> The patch 8456a648cf44f14365f1f44de90a3da2526a4776 causes crash in the
> LVM2 testsuite on PA-RISC (the crashing test is fsadm.sh). The testsuite
> doesn't crash on 3.12, crashes on 3.13-rc1 and later.
>
> Bad Address (null pointe
On Mon, Jan 06, 2014 at 03:10:07PM +0800, Fengguang Wu wrote:
> Hi Joonsoo,
>
> On Mon, Jan 06, 2014 at 09:30:52AM +0900, Joonsoo Kim wrote:
> > On Sun, Jan 05, 2014 at 05:04:56PM +0800, fengguang...@intel.com wrote:
> > > Hi Joonsoo,
> > >
> > &g
On Mon, Jan 06, 2014 at 12:54:22PM -0500, Mikulas Patocka wrote:
> Hi
>
> On Mon, 6 Jan 2014, Joonsoo Kim wrote:
>
> > Hello,
> >
> > I'm surprised that this VM_BUG_ON() has not been triggered until now. It was
> > introduced in 2007 by commit (b5
On Mon, Jan 06, 2014 at 04:19:05AM -0800, Davidlohr Bueso wrote:
> On Mon, 2014-01-06 at 09:19 +0900, Joonsoo Kim wrote:
> > On Fri, Jan 03, 2014 at 11:55:45AM -0800, Davidlohr Bueso wrote:
> > > Hi Joonsoo,
> > >
> > > Sorry about the delay...
> >
On Fri, Dec 20, 2013 at 10:48:17PM -0800, Davidlohr Bueso wrote:
> On Fri, 2013-12-20 at 14:01 +, Mel Gorman wrote:
> > On Thu, Dec 19, 2013 at 05:02:02PM -0800, Andrew Morton wrote:
> > > On Wed, 18 Dec 2013 15:53:59 +0900 Joonsoo Kim
> > > wrote:
> > >
On Sun, Dec 22, 2013 at 12:58:19AM +1100, David Gibson wrote:
> On Wed, Dec 18, 2013 at 03:53:49PM +0900, Joonsoo Kim wrote:
> > There is a race condition if we map a same file on different processes.
> > Region tracking is protected by mmap_sem and hugetlb_instantiation_mutex.
>
On Mon, Dec 23, 2013 at 09:44:38AM +0900, Joonsoo Kim wrote:
> On Fri, Dec 20, 2013 at 10:48:17PM -0800, Davidlohr Bueso wrote:
> > On Fri, 2013-12-20 at 14:01 +, Mel Gorman wrote:
> > > On Thu, Dec 19, 2013 at 05:02:02PM -0800, Andrew Morton wrote:
> > > > On W
On Mon, Dec 23, 2013 at 12:24:02PM -0500, Sasha Levin wrote:
> Ping?
>
> I've also Cc'ed the "this page shouldn't be locked at all" team.
Hello,
I can't find the cause of this problem.
If it is reproducible, how about bisecting?
Thanks.
>
> On 12/18/2013 10:37 AM, Sasha Levin wrote:
> >Hi al
On Mon, Dec 23, 2013 at 10:01:10PM -0500, Sasha Levin wrote:
> On 12/23/2013 09:51 PM, Joonsoo Kim wrote:
> >On Mon, Dec 23, 2013 at 12:24:02PM -0500, Sasha Levin wrote:
> >>>Ping?
> >>>
> >>>I've also Cc'ed the "this page shouldn't
On Mon, Dec 23, 2013 at 11:44:35PM +0100, Ludovic Desroches wrote:
> On Fri, Dec 20, 2013 at 09:08:51AM +0100, Ludovic Desroches wrote:
> > Hello,
> >
> > On Wed, Dec 18, 2013 at 04:21:17PM +0900, Joonsoo Kim wrote:
> > > On Mon, Dec 16, 2013 at 03:43:43PM
On Tue, Dec 24, 2013 at 03:07:05PM +0900, Joonsoo Kim wrote:
> On Mon, Dec 23, 2013 at 10:01:10PM -0500, Sasha Levin wrote:
> > On 12/23/2013 09:51 PM, Joonsoo Kim wrote:
> > >On Mon, Dec 23, 2013 at 12:24:02PM -0500, Sasha Levin wrote:
> > >>>Ping?
> > &g
On Thu, Aug 21, 2014 at 09:22:35AM -0500, Christoph Lameter wrote:
> On Thu, 21 Aug 2014, Joonsoo Kim wrote:
>
> > Slab merge is good feature to reduce fragmentation. Now, it is only
> > applied to SLUB, but, it would be good to apply it to SLAB. This patch
> > is prepar