time this worked? Was 4.0 ok? As it can be
easily reproduced, can the problem be bisected please?
--
Mel Gorman
SUSE Labs
in time to see if there was a point where
this was ever working. It could be a ppc64-specific bug but right now,
I'm still drawing a blank.
--
Mel Gorman
SUSE Labs
On Mon, Nov 17, 2014 at 01:56:19PM +0530, Aneesh Kumar K.V wrote:
> Mel Gorman writes:
>
> > This is follow up from the "pipe/page fault oddness" thread.
> >
> > Automatic NUMA balancing depends on being able to protect PTEs to trap a
> > fault and gat
not using WARN_ON_ONCE. I was also not
> sure whether we want to enable that always. The reason for keeping that
> within CONFIG_DEBUG_VM is to make sure that nobody ends up depending on
> PROTFAULT outside the vma check covered. So the expectation is that
> developers working on feature w
patch closes the race.
Signed-off-by: Mel Gorman
---
include/linux/migrate.h | 4
mm/huge_memory.c | 3 ++-
mm/migrate.c | 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 01aad3e..a3edcdf 1
V1 failed while running under kvm-tools very quickly and a second report
indicated that it happens on bare metal as well. This version survived
an overnight run of trinity running under kvm-tools here but verification
from Sasha would be appreciated.
Changelog since V1
o ppc64 paranoia checks and
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/pgtable.h | 11 +++
arch/x86/include/asm/pgtable.h | 16
include
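For anyone skimming, the idea behind the helpers can be sketched in plain C.
This is an illustration only: the bit names below are invented stand-ins, not
the real x86 or ppc64 definitions. A protnone entry keeps the hardware-visible
present bit clear so accesses trap, while a software bit records that a page
is still mapped behind it.

#include <stdbool.h>
#include <stdio.h>

/* Simplified model of a PTE; real kernels use arch-specific layouts. */
#define PAGE_PRESENT  (1UL << 0)   /* hardware valid bit (stand-in) */
#define PAGE_PROTNONE (1UL << 8)   /* software "no access" bit (stand-in) */

/* A protnone entry is one the hardware faults on but that still maps a
 * page: not present to hardware, present as far as software is concerned. */
static bool model_pte_protnone(unsigned long pte)
{
        return (pte & (PAGE_PRESENT | PAGE_PROTNONE)) == PAGE_PROTNONE;
}

int main(void)
{
        unsigned long normal = PAGE_PRESENT;   /* ordinary mapped page */
        unsigned long hinting = PAGE_PROTNONE; /* NUMA hinting candidate */
        unsigned long notthere = 0;            /* swapped or never mapped */

        printf("normal=%d hinting=%d notthere=%d\n",
               model_pte_protnone(normal),
               model_pte_protnone(hinting),
               model_pte_protnone(notthere));
        return 0;
}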
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh
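Roughly, the conversion changes the shape of the fault-path check from a
dedicated pte_numa() test to "protnone entry in a VMA that is otherwise
accessible". A minimal model of that shape, with invented names and flag
values (not the kernel's own), would be:

#include <stdbool.h>

/* Invented bit and flag values, for illustration only. */
#define PAGE_PRESENT   (1UL << 0)
#define PAGE_PROTNONE  (1UL << 8)
#define VM_ACCESS_BITS 0x7UL            /* VM_READ|VM_WRITE|VM_EXEC analogue */

struct vma_model { unsigned long flags; };

static bool pte_protnone_model(unsigned long pte)
{
        return (pte & (PAGE_PRESENT | PAGE_PROTNONE)) == PAGE_PROTNONE;
}

/* Before the series: if (pte_numa(pte)) -> handle a NUMA hinting fault.
 * After: a protnone entry only counts as a hinting fault when the VMA is
 * otherwise accessible; a genuine PROT_NONE mapping falls through to the
 * normal access-error handling instead. */
static bool numa_hinting_fault_model(const struct vma_model *vma, unsigned long pte)
{
        return pte_protnone_model(pte) && (vma->flags & VM_ACCESS_BITS);
}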
ppc64 should not be depending on DSISR_PROTFAULT and it is unexpected
if such faults are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman
Acked-by: Aneesh Kumar K.V
---
arch/powerpc/mm/copro_fault.c | 8 ++--
arch/power
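The pattern of the warnings is essentially "report once if a protection fault
shows up where we no longer expect one, then carry on". A rough userspace
model of that pattern follows; the macro mimics the kernel's WARN_ON_ONCE()
using a GNU C statement expression, and the DSISR bit value is illustrative
rather than quoted from the patch.

#include <stdio.h>

/* Minimal stand-in for the kernel's WARN_ON_ONCE(): complain the first
 * time an unexpected condition is seen, then stay quiet. */
#define WARN_ON_ONCE_MODEL(cond) ({                                  \
        static int warned;                                           \
        int c = !!(cond);                                            \
        if (c && !warned) {                                          \
                warned = 1;                                          \
                fprintf(stderr, "unexpected: %s\n", #cond);          \
        }                                                            \
        c;                                                           \
})

#define DSISR_PROTFAULT_MODEL 0x08000000UL  /* illustrative bit value */

static void do_fault_model(unsigned long dsisr)
{
        /* With NUMA hinting using PROT_NONE, this path is not supposed to
         * see protection faults any more; flag it if one shows up, then
         * continue with the normal handling as the real code does. */
        WARN_ON_ONCE_MODEL(dsisr & DSISR_PROTFAULT_MODEL);
}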
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar
---
include/linux/huge_mm.h | 3 +--
mm/huge_memory.c | 33 +++--
mm/memory.c | 10
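The reason no special NUMA setters are needed is that marking an entry for a
hinting fault becomes an ordinary protection change. A simplified model,
assuming the same invented bit layout as the earlier sketch:

/* Invented bits again; the point is that marking for a hinting fault is
 * an ordinary protection change, so no NUMA-specific setter is needed. */
#define PAGE_PRESENT   (1UL << 0)
#define PAGE_PROTNONE  (1UL << 8)
#define PROT_BITS      (PAGE_PRESENT | PAGE_PROTNONE)

static unsigned long pte_modify_model(unsigned long pte, unsigned long newprot)
{
        return (pte & ~PROT_BITS) | (newprot & PROT_BITS);
}

/* The old pte_mknuma() then has no reason to exist: */
static unsigned long mark_for_hinting_fault(unsigned long pte)
{
        return pte_modify_model(pte, PAGE_PROTNONE);   /* PAGE_NONE analogue */
}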
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c | 13 -
mm/memory.c
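The shape of the check is simply "never mark an entry backed by the huge zero
page". Sketched below with invented types, not the actual mm/huge_memory.c
code:

#include <stdbool.h>

/* Illustrative scanner step: entries backed by the shared huge zero page
 * are never marked. Faulting on the zero page yields no useful placement
 * information and it cannot be migrated, so marking it is pure overhead. */
struct pmd_model { unsigned long pfn; bool present; };

static unsigned long huge_zero_pfn_model;  /* set once at init in this model */

static bool should_mark_for_hinting(const struct pmd_model *pmd)
{
        if (!pmd->present)
                return false;
        if (pmd->pfn == huge_zero_pfn_model)
                return false;               /* the check being reintroduced */
        return true;
}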
accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman
---
arch/powerpc/include/asm/pgtable.h
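The ordering being enforced can be modelled as clear, flush, then write the
new value, so no CPU keeps using a stale translation across the update. The
helpers below are stand-ins; the real code uses ptep_get_and_clear() and the
arch TLB flush primitives.

#define PAGE_PRESENT   (1UL << 0)
#define PAGE_PROTNONE  (1UL << 8)
#define PROT_BITS      (PAGE_PRESENT | PAGE_PROTNONE)

/* Stand-in for ptep_get_and_clear(): once the entry is cleared, other
 * CPUs fault instead of racing against a half-updated entry. */
static unsigned long ptep_get_and_clear_model(unsigned long *ptep)
{
        unsigned long old = *ptep;
        *ptep = 0;
        return old;
}

static void flush_tlb_page_model(void) { /* arch-specific in reality */ }

static void change_protection_one_model(unsigned long *ptep, unsigned long newprot)
{
        unsigned long old = ptep_get_and_clear_model(ptep);

        flush_tlb_page_model();  /* stale translations gone before... */
        *ptep = (old & ~PROT_BITS) | (newprot & PROT_BITS);  /* ...the new PTE lands */
}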
e original
pte_special behaviour.
Signed-off-by: Mel Gorman
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index f8799e0..5241332 100644
--- a/arch/x86/include/asm/pgtable.h
+++
on
so do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions(+)
diff
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 14 --
mm/mprotect.c | 4
2
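The scan-time filter amounts to "skip entries that are already protnone". A
small model of that loop, using the same invented bit names as the earlier
sketches:

#include <stdbool.h>

#define PAGE_PRESENT   (1UL << 0)
#define PAGE_PROTNONE  (1UL << 8)

static bool pte_protnone_model(unsigned long pte)
{
        return (pte & (PAGE_PRESENT | PAGE_PROTNONE)) == PAGE_PROTNONE;
}

/* Returns the number of entries actually changed; a TLB flush is only
 * needed when this is non-zero. Entries already marked are skipped. */
static int scan_range_model(unsigned long *ptes, int nr, bool prot_numa)
{
        int updated = 0;

        for (int i = 0; i < nr; i++) {
                if (prot_numa && pte_protnone_model(ptes[i]))
                        continue;            /* already marked: nothing to do */
                ptes[i] = (ptes[i] & ~PAGE_PRESENT) | PAGE_PROTNONE;
                updated++;
        }
        return updated;
}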
On Thu, Nov 20, 2014 at 10:38:56AM +, David Laight wrote:
> From: Mel Gorman
> > Convert existing users of pte_numa and friends to the new helper. Note
> > that the kernel is broken after this patch is applied until the other
> > page table modifiers are also altered. Th
On Thu, Nov 20, 2014 at 04:50:25PM -0500, Sasha Levin wrote:
> On 11/20/2014 05:19 AM, Mel Gorman wrote:
> > V1 failed while running under kvm-tools very quickly and a second report
> > indicated that it happens on bare metal as well. This version survived
> > an overnight r
On Thu, Nov 20, 2014 at 11:54:06AM -0800, Linus Torvalds wrote:
> On Thu, Nov 20, 2014 at 2:19 AM, Mel Gorman wrote:
> > This is a preparatory patch that introduces protnone helpers for automatic
> > NUMA balancing.
>
> Oh, I hadn't noticed that you had renamed these
patch closes the race.
Signed-off-by: Mel Gorman
---
include/linux/migrate.h | 4
mm/huge_memory.c | 3 ++-
mm/migrate.c | 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 01aad3e..a3edcdf 1
The main change here is to rebase on mmotm-20141119 as the series had
significant conflicts that were non-obvious to resolve. The main blockers
for merging are independent testing from Sasha (trinity), independent
testing from Aneesh (ppc64 support) and acks from Ben and Paul on the
powerpc patches
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/pgtable.h | 15 +++
arch/x86/include/asm/pgtable.h | 16
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh
ppc64 should not be depending on DSISR_PROTFAULT and it is unexpected
if such faults are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman
Acked-by: Aneesh Kumar K.V
---
arch/powerpc/mm/copro_fault.c | 8 ++--
arch/power
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar
---
include/linux/huge_mm.h | 3 +--
mm/huge_memory.c | 33 +++--
mm/memory.c | 10
accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman
---
arch/powerpc/include/asm/pgtable.h
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c | 13 -
mm/memory.c
e original
pte_special behaviour.
Signed-off-by: Mel Gorman
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cf428a7..0dd5be3 100644
--- a/arch/x86/include/asm/pgtable.h
+++
on
so do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions(+)
diff
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 14 --
mm/mprotect.c | 4
2
On Tue, Dec 02, 2014 at 09:38:39AM +1100, Benjamin Herrenschmidt wrote:
> On Fri, 2014-11-21 at 13:57 +0000, Mel Gorman wrote:
>
> > #ifdef CONFIG_NUMA_BALANCING
> > +/*
> > + * These work without NUMA balancing but the kernel does not care. See the
> > + *
itten. I modified it to run
> with selftest.
>
It's implied but can I assume it passed? If so, Ben and Paul, can I
consider the series to be acked by you other than the minor comment
updates?
Thanks.
--
Mel Gorman
SUSE Labs
On Thu, Dec 04, 2014 at 08:01:57AM +1100, Benjamin Herrenschmidt wrote:
> On Wed, 2014-12-03 at 15:52 +0000, Mel Gorman wrote:
> >
> > It's implied but can I assume it passed? If so, Ben and Paul, can I
> > consider the series to be acked by you other than the minor com
On Wed, Dec 03, 2014 at 10:50:35PM +0530, Aneesh Kumar K.V wrote:
> Mel Gorman writes:
>
> > On Wed, Dec 03, 2014 at 08:53:37PM +0530, Aneesh Kumar K.V wrote:
> >> Benjamin Herrenschmidt writes:
> >>
> >> > On Tue, 2014-12-02 at 12:57 +0530, Aneesh
There are no functional changes here and I kept the mmotm-20141119 baseline
as that is what got tested but it rebases cleanly to current mmotm. The
series makes architectural changes but splitting this on a per-arch basis
would cause bisect-related brain damage. I'm hoping this can go through
Andre
patch closes the race.
Signed-off-by: Mel Gorman
---
include/linux/migrate.h | 4
mm/huge_memory.c | 3 ++-
mm/migrate.c | 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 01aad3e..a3edcdf 1
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar K.V
Tested-by: Sasha Levin
---
arch/powerpc/include/asm/pgtable.h | 16
arch/x86/include/asm/pgtable.h
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh
ppc64 should not be depending on DSISR_PROTFAULT and it is unexpected
if such faults are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman
Acked-by: Aneesh Kumar K.V
Tested-by: Sasha Levin
---
arch/powerpc/mm/copro_fault.c
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar
Tested-by: Sasha Levin
---
include/linux/huge_mm.h | 3 +--
mm/huge_memory.c | 33 +++--
mm/memory.c
accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman
Tested-by: Sasha Levin
---
arch/powerpc/includ
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c | 13 -
mm/memory.c
e original
pte_special behaviour.
Signed-off-by: Mel Gorman
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cf428a7..0dd5be3 100644
--- a/arch/x86/include/asm/pgtable.h
+++
on
so do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions(+)
diff
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 14 --
mm/mprotect.c | 4
2
patch closes the race.
Signed-off-by: Mel Gorman
---
include/linux/migrate.h | 4
mm/huge_memory.c | 3 ++-
mm/migrate.c | 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index fab9b32..78baed5 1
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar K.V
Tested-by: Sasha Levin
---
arch/powerpc/include/asm/pgtable.h | 16
arch/x86/include/asm/pgtable.h
Changelog since V4
o Rebase to 3.19-rc2 (mel)
Changelog since V3
o Minor comment update (benh)
o Add ack'ed bys
Changelog since V2
o Rename *_protnone_numa to _protnone and extend docs (linus)
o Rebase t
ppc64 should not be depending on DSISR_PROTFAULT and it is unexpected
if such faults are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman
Acked-by: Aneesh Kumar K.V
Tested-by: Sasha Levin
---
arch/powerpc/mm/copro_fault.c
accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman
Tested-by: Sasha Levin
---
arch/powerpc/includ
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar
Tested-by: Sasha Levin
---
include/linux/huge_mm.h | 3 +--
mm/huge_memory.c | 33 +++--
mm/memory.c
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c | 13 -
mm/memory.c
e original
pte_special behaviour.
Signed-off-by: Mel Gorman
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b9a13e9..4673d6e 100644
--- a/arch/x86/include/asm/pgtable.h
+++
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 14 --
mm/mprotect.c | 4
2
on
so do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions(+)
diff
ysical pages are ordered
* properly.
*/
It will probably be true early in the lifetime of the system but the mileage
will vary on systems with a lot of uptime. If you depend on this behaviour
for correctness then you will have a bad day.
High-order
On Thu, Jul 10, 2014 at 03:17:16PM +0200, Alexander Graf wrote:
>
> On 10.07.14 15:07, Mel Gorman wrote:
> >On Thu, Jul 10, 2014 at 01:05:47PM +0200, Alexander Graf wrote:
> >>On 09.07.14 00:59, Stewart Smith wrote:
> >>>Hi!
> >>>
> >>>
th their ppc64 equivalent.
This necessitated creating a PTE bit mask that identifies the bits
that distinguish present from NUMA pte entries, but it is expected this
will only differ between arches based on _PAGE_PROTNONE. The naming for
the generic helpers was taken from x86 originally but ppc6
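To illustrate the mask idea with invented bit positions (not the real arch
layouts): the arch supplies one mask naming the bits that decide
present-versus-protnone and one value identifying a hinting entry within that
mask, so the generic predicate can stay shared.

#include <stdbool.h>

/* Invented bit positions; the real masks live in the arch pgtable headers. */
#define X_PAGE_PRESENT       (1UL << 0)
#define X_PAGE_PROTNONE      (1UL << 8)
#define X_PAGE_PROTNONE_MASK (X_PAGE_PRESENT | X_PAGE_PROTNONE)

/* Generic predicate: the arch only supplies the mask of deciding bits and
 * the value that identifies a hinting entry within that mask. */
static bool generic_protnone(unsigned long pte, unsigned long mask,
                             unsigned long protnone_val)
{
        return (pte & mask) == protnone_val;
}

static bool arch_pte_protnone_model(unsigned long pte)
{
        return generic_protnone(pte, X_PAGE_PROTNONE_MASK, X_PAGE_PROTNONE);
}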
ks and
the rest will fall out during testing so it's ok to remove.
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
On Tue, Feb 11, 2014 at 04:04:54PM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V"
>
> So move it within the if loop
>
> Signed-off-by: Aneesh Kumar K.V
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
ith double
> hash. We also want
> to keep set_pte_at simpler by not requiring them to do hash flush for
> performance reason.
> Hence cannot use them while updating _PAGE_NUMA bit. Add new functions for
> marking pte/pmd numa
>
> Signed-
On Tue, Feb 11, 2014 at 04:04:53PM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V"
>
> We will use this later to set the _PAGE_NUMA bit.
>
> Signed-off-by: Aneesh Kumar K.V
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
s the necessary check and also fixes the type
for LAST__PID_MASK and LAST__CPU_MASK which are currently signed instead
of unsigned integers.
Signed-off-by: Mel Gorman
Cc: sta...@vger.kernel.org
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index da52366.
On Tue, Mar 04, 2014 at 12:45:19AM +0530, Aneesh Kumar K.V wrote:
> Mel Gorman writes:
>
> > On Wed, Feb 19, 2014 at 11:32:00PM +0530, Srikar Dronamraju wrote:
> >>
> >> On a powerpc machine with CONFIG_NUMA_BALANCING=y and
> >> CONFIG_SPARSEMEM_V
places.
>
> This does make hugetlbfs not supported (not registered at all) in this
> environment. I believe this is fine, as there are no valid hugepages and
> that won't change at runtime.
>
> Signed-off-by: Nishanth Aravamudan
Acked-by: Mel Gorman
This patch looks ok
Dave Chinner reported a problem due to excessive NUMA balancing activity and
bisected it. These are two patches that address two major issues with that
series. The first patch is almost certainly unrelated to what he saw due
to fact his vmstats showed no huge page activity but the fix is important.
The wrong value is being returned by change_huge_pmd since commit
10c1045f28e8 ("mm: numa: avoid unnecessary TLB flushes when setting
NUMA hinting entries") which allows a fallthrough that tries to adjust
non-existent PTEs. This patch corrects it.
Signed-off-by: Mel Gorman
---
mm/hug
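The class of bug here can be modelled without any mm details: the huge-PMD
path has to tell its caller that no PTE-level work remains, otherwise the
caller falls through and tries to adjust PTEs that do not exist under a huge
mapping. The names below are illustrative, not the real change_pmd_range():

#include <stdbool.h>

/* Illustrative control flow only. */
enum huge_result { HUGE_HANDLED, HUGE_NOT_HUGE };

static enum huge_result change_huge_pmd_model(bool is_huge, bool already_marked)
{
        if (!is_huge)
                return HUGE_NOT_HUGE;   /* caller should walk the PTEs */
        if (already_marked)
                return HUGE_HANDLED;    /* nothing to change, but still
                                         * "handled": must not fall through */
        /* ...update the huge PMD here... */
        return HUGE_HANDLED;
}

static void change_pmd_range_model(bool is_huge, bool already_marked)
{
        if (change_huge_pmd_model(is_huge, already_marked) != HUGE_NOT_HUGE)
                return;                 /* no PTE table exists underneath */
        /* ...PTE-level protection changes would happen here... */
}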
58
NUMA hint local faults 511530 314936 371571
NUMA hint local percent 69 52 61
NUMA pages migrated 26366701 5424102 7073177
Signed-off-by: Mel Gorman
---
arch/powerpc/include/asm/pgtable-ppc64.h | 16
arch/
This code is dead since commit 9e645ab6d089 ("sched/numa: Continue PTE
scanning even if migrate rate limited") so remove it.
Signed-off-by: Mel Gorman
---
include/linux/migrate.h | 5 -
mm/migrate.c | 20
2 files changed, 25 deletions(-)
di
ecessary the threshold at which we
start throttling migrations can be lowered.
Signed-off-by: Mel Gorman
---
include/linux/sched.h | 9 +
kernel/sched/fair.c | 8 ++--
mm/huge_memory.c | 3 ++-
mm/memory.c | 3 ++-
4 files changed, 15 insertions(+), 8 deletions(
addressed but beyond the scope of this series, which is aimed at Dave
Chinner's shrink workload that is unlikely to be affected by this issue.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 194c0f019774
Dave Chinner reported a problem due to excessive NUMA balancing activity
and bisected it. The first patch in this series corrects a major problem
that is unlikely to affect Dave but is still serious. Patch 2 is a minor
cleanup that was spotted while looking at scan rate control. Patch 3 is
minor an
The wrong value is being returned by change_huge_pmd since commit
10c1045f28e8 ("mm: numa: avoid unnecessary TLB flushes when setting
NUMA hinting entries") which allows a fallthrough that tries to adjust
non-existent PTEs. This patch corrects it.
Signed-off-by: Mel Gorman
---
mm/hug
On Sat, Mar 07, 2015 at 05:36:58PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman wrote:
>
> > Dave Chinner reported the following on https://lkml.org/lkml/2015/3/1/226
> >
> > Across the board the 4.0-rc1 numbers are much slower, and the
> > degradation is far
On Sat, Mar 07, 2015 at 12:31:03PM -0800, Linus Torvalds wrote:
> On Sat, Mar 7, 2015 at 7:20 AM, Mel Gorman wrote:
> >
> > if (!prot_numa || !pmd_protnone(*pmd)) {
> > - ret = 1;
> > entry = pmdp_get_and
On Sun, Mar 08, 2015 at 11:02:23AM +0100, Ingo Molnar wrote:
>
> * Linus Torvalds wrote:
>
> > On Sat, Mar 7, 2015 at 8:36 AM, Ingo Molnar wrote:
> > >
> > > And the patch Dave bisected to is a relatively simple patch. Why
> > > not simply revert it to see whether that cures much of the
> > >
On Sun, Mar 08, 2015 at 08:40:25PM +, Mel Gorman wrote:
> > Because if the answer is 'yes', then we can safely say: 'we regressed
> > performance because correctness [not dropping dirty bits] comes before
> > performance'.
> >
> > If the a
On Mon, Mar 09, 2015 at 09:02:19PM +, Mel Gorman wrote:
> On Sun, Mar 08, 2015 at 08:40:25PM +0000, Mel Gorman wrote:
> > > Because if the answer is 'yes', then we can safely say: 'we regressed
> > > performance because correctness [not dropping dirty b
On Tue, Mar 10, 2015 at 04:55:52PM -0700, Linus Torvalds wrote:
> On Mon, Mar 9, 2015 at 12:19 PM, Dave Chinner wrote:
> > On Mon, Mar 09, 2015 at 09:52:18AM -0700, Linus Torvalds wrote:
> >>
> >> What's your virtual environment setup? Kernel config, and
> >> virtualization environment to actually
On Thu, Mar 12, 2015 at 09:20:36AM -0700, Linus Torvalds wrote:
> On Thu, Mar 12, 2015 at 6:10 AM, Mel Gorman wrote:
> >
> > I believe you're correct and it matches what was observed. I'm still
> > travelling and wireless is dirt but managed to queue a test
On Wed, Mar 18, 2015 at 10:31:28AM -0700, Linus Torvalds wrote:
> > - something completely different that I am entirely missing
>
> So I think there's something I'm missing. For non-shared mappings, I
> still have the idea that pte_dirty should be the same as pte_write.
> And yet, your testing of
off a node (possible migration) with NUMA balancing trying to pull it back
(another possible migration). Small bugs there can result in excessive
migration.
--
Mel Gorman
SUSE Labs
ugh to WP checks after
trapping a NUMA fault.
--
Mel Gorman
SUSE Labs
cross the protection update and page fault. I'll post
it later when I stick a changelog on it.
--
Mel Gorman
SUSE Labs
flushes and sync also affect placement. This is unpredictable behaviour
which is impossible to reason about so this patch makes grouping decisions
based on the VMA flags.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 13 ++---
mm/memory.c | 19 +++
2 files changed, 13
These are three follow-on patches based on the xfsrepair workload Dave
Chinner reported was problematic in 4.0-rc1 due to changes in page table
management -- https://lkml.org/lkml/2015/3/1/226.
Much of the problem was reduced by commit 53da3bc2ba9e ("mm: fix up numa
read-only thread grouping logic
h was discarded.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 9 -
mm/memory.c | 8 +++-
mm/mprotect.c | 3 +++
3 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2f12e9fcf1a2..0a42d1521aa4 100644
--- a/mm/huge_me
en the PTE scanner may scan faster if the faults continue
to be remote. This means there is higher system CPU overhead and fault
trapping at exactly the time we know that migrations cannot happen. This
patch tracks when migration failures occur and slows the PTE scanner.
Signed-off-by: Mel Gorman
--
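The rate-control idea can be sketched with invented field names and
thresholds; the real logic lives in the NUMA scan period handling in
kernel/sched/fair.c. When most recent migrations fail, lengthen the scan
period instead of paying for faults that cannot lead to migrations.

struct scan_state_model {
        unsigned int scan_period_ms;
        unsigned int migrate_attempts;
        unsigned int migrate_failures;
};

#define SCAN_PERIOD_MIN_MS  1000u
#define SCAN_PERIOD_MAX_MS 60000u

static void update_scan_period_model(struct scan_state_model *s)
{
        /* If at least half of the recent migration attempts failed (task
         * pinned, target nodes full, ...), back off rather than scan harder. */
        if (s->migrate_attempts &&
            s->migrate_failures * 2 >= s->migrate_attempts) {
                s->scan_period_ms *= 2;
                if (s->scan_period_ms > SCAN_PERIOD_MAX_MS)
                        s->scan_period_ms = SCAN_PERIOD_MAX_MS;
        } else if (s->scan_period_ms > SCAN_PERIOD_MIN_MS) {
                s->scan_period_ms -= s->scan_period_ms / 4;
                if (s->scan_period_ms < SCAN_PERIOD_MIN_MS)
                        s->scan_period_ms = SCAN_PERIOD_MIN_MS;
        }
        s->migrate_attempts = 0;
        s->migrate_failures = 0;
}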
On Tue, Mar 24, 2015 at 10:51:41PM +1100, Dave Chinner wrote:
> On Mon, Mar 23, 2015 at 12:24:00PM +0000, Mel Gorman wrote:
> > These are three follow-on patches based on the xfsrepair workload Dave
> > Chinner reported was problematic in 4.0-rc1 due to changes in page table
s patch allocates 1GB for 0.25TB/node for large system
> as it is mentioned in https://lkml.org/lkml/2015/5/1/627
>
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
> For 4GB memory: 57% is improved
> For 50GB memory: 22% is improved
>
> Signed-off-by: Li Zhang
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
I'll respin and send a patch for review.
>
Given that CONFIG_NO_BOOTMEM is not supported and bootmem is meant to be
slowly retiring, I would suggest instead making deferred memory init
depend on NO_BOOTMEM.
--
Mel Gorman
SUSE Labs
ction_nr);
> goal = pfn << PAGE_SHIFT;
> - limit = section_nr_to_pfn(section_nr + 1) << PAGE_SHIFT;
> + if (size > BYTES_PER_SECTION)
> + limit = 0;
> + else
> + limit = section_nr_to_pfn(section_nr + 1) << PAGE_SHIF
On Wed, Feb 29, 2012 at 10:12:33AM -0800, Nishanth Aravamudan wrote:
>
>
> Signed-off-by: Nishanth Aravamudan
>
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
you have a think about this please? Can we just kill off
> pageblock_default_order() and fold its guts into
> set_pageblock_order(void)? Only ia64 and powerpc can define
> CONFIG_HUGETLB_PAGE_SIZE_VARIABLE.
>
This looks reasonable to me.
--
Mel Gorman
SUSE Labs
On Mon, Oct 11, 2010 at 02:00:39PM -0700, Andrew Morton wrote:
> (cc linuxppc-dev@lists.ozlabs.org)
>
> On Mon, 11 Oct 2010 15:30:22 +0100
> Mel Gorman wrote:
>
> > On Sat, Oct 09, 2010 at 04:57:18AM -0500, pac...@kosh.dhis.org wrote:
> > > (What a big Cc: list...
On Wed, Oct 13, 2010 at 12:52:05PM -0500, pac...@kosh.dhis.org wrote:
> Mel Gorman writes:
> >
> > On Mon, Oct 11, 2010 at 02:00:39PM -0700, Andrew Morton wrote:
> > >
> > > It's corruption of user memory, which is unusual. I'd be wondering
Anton Blanchard wrote on 22/09/2009 03:52:35:
> If we are using 1TB segments and we are allowed to randomise the heap, we can
> put it above 1TB so it is backed by a 1TB segment. Otherwise the heap will be
> in the bottom 1TB which always uses 256MB segments and this may result in a
> performance
Benjamin Herrenschmidt wrote on 22/09/2009 22:08:22:
> > Unfortunately, I am not sensitive to issues surrounding 1TB segments or how
> > they are currently being used. However, as this clearly helps performance
> > for large amounts of memory, is it worth providing an option to
> > libhugetlbfs t
f (page_count(pfn_to_page(iter)))
> + immobile++;
> +
> + if (arg.pages_found == immobile)
and here you compare a signed with an unsigned type. Probably harmless
but why do it?
> + ret = 0;
> + } while (
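For completeness, the conversion that makes a mixed signed/unsigned comparison
suspicious is easy to demonstrate in isolation; this is a generic illustration
of the pitfall, not the code under review:

#include <stdio.h>

int main(void)
{
        int immobile = -1;            /* imagine a bug drove the count negative */
        unsigned long pages_found = 5;

        /* The usual arithmetic conversions turn -1 into ULONG_MAX here, so
         * this prints "not fewer" even though -1 < 5 mathematically. Keeping
         * both counters the same signedness avoids the question (and the
         * -Wsign-compare warning). */
        if (immobile < pages_found)
                printf("fewer immobile pages than found\n");
        else
                printf("not fewer\n");
        return 0;
}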