> - * - Package (DIE)
> + * - Package (PKG)
>
> With that:
> Acked-by: Valentin Schneider
>
No objection either, PKG is less ambiguous than DIE
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
On Thu, Jan 26, 2023 at 08:18:31AM -0800, Suren Baghdasaryan wrote:
> On Thu, Jan 26, 2023 at 7:47 AM Mel Gorman
> wrote:
> >
> > On Wed, Jan 25, 2023 at 03:35:53PM -0800, Suren Baghdasaryan wrote:
> > > In cases when VMA flags are modified after VMA was isola
On Thu, Jan 26, 2023 at 08:10:26AM -0800, Suren Baghdasaryan wrote:
> On Thu, Jan 26, 2023 at 7:10 AM Mel Gorman
> wrote:
> >
> > On Wed, Jan 25, 2023 at 03:35:51PM -0800, Suren Baghdasaryan wrote:
> > > Replace direct modifications to vma->vm_flags with calls to
&vm_flags);
> if (ret)
> return ret;
> + reset_vm_flags(vma, vm_flags);
Same.
Not necessary as such, as there are few users of ksm_madvise and I doubt
it'll introduce new surprises.
With or without the comment;
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
chal Hocko
Acked-by: Mel Gorman
Minor comments that are safe to ignore.
I think a better name for mod_vm_flags is set_clear_vm_flags to hint that
the first flags are to be set and the second flags are to be cleared.
For this patch, it doesn't matter, but it might avoid acciden
On Wed, Jan 25, 2023 at 03:35:50PM -0800, Suren Baghdasaryan wrote:
> To simplify the usage of VM_LOCKED_CLEAR_MASK in clear_vm_flags(),
> replace it with VM_LOCKED_MASK bitmask and convert all users.
>
> Signed-off-by: Suren Baghdasaryan
> Acked-by: Michal Hocko
Acked-by: Mel G
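
For illustration, the change described in that patch amounts to something
like the sketch below; the exact definition may differ in the posted series,
and the caller function here is purely hypothetical (clear_vm_flags() is the
helper name mentioned in the changelog above).

/* Sketch only: VM_LOCKED_CLEAR_MASK is the negated form, which reads
 * awkwardly at call sites. Defining the positive mask lets callers clear
 * the mlock-related bits through the helper directly.
 */
#define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)

/* Hypothetical caller, for illustration: */
static void vma_clear_mlock(struct vm_area_struct *vma)
{
	/* before: vma->vm_flags &= VM_LOCKED_CLEAR_MASK; */
	clear_vm_flags(vma, VM_LOCKED_MASK);
}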
with such
> operations. Introduce modifier functions for vm_flags to be used whenever
> flags are updated. This way we can better check and control correct
> locking behavior during these updates.
>
> Signed-off-by: Suren Baghdasaryan
With or without the suggested rename;
Acked-by: Mel
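
To make the naming suggestion above concrete, the helper being discussed
(mod_vm_flags, proposed rename set_clear_vm_flags) could be sketched roughly
as follows; the signature and the mmap_lock assertion are assumptions rather
than the final API from the series.

/* Hedged sketch: the first argument names flags to set, the second flags
 * to clear, which is what the set_clear_vm_flags() name is meant to convey.
 */
static inline void set_clear_vm_flags(struct vm_area_struct *vma,
				      vm_flags_t set, vm_flags_t clear)
{
	mmap_assert_write_locked(vma->vm_mm);	/* assumed locking rule */
	vma->vm_flags |= set;
	vma->vm_flags &= ~clear;
}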
*/
- *new = data_race(*orig);
+ data_race(memcpy(new, orig, sizeof(*new)));
INIT_LIST_HEAD(&new->anon_vma_chain);
dup_anon_vma_name(orig, new);
}
I don't see how memcpy could automagically figure out whether the memcpy
is prone to races or not in an arbitrary context.
Assuming using data_race this way is ok then
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
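
For context on the data_race() question: data_race() is the kernel's KCSAN
annotation, not an analysis. It marks the wrapped expression (here the
memcpy) as an intentional, tolerated data race so the race detector does not
report it; nothing is "figured out" at runtime. A minimal illustration of the
usual pattern, with a hypothetical function name:

/* Illustrative only: a benign lockless read. data_race() only suppresses
 * the KCSAN report for this access; the read itself is a plain access.
 */
static unsigned long read_counter_lockless(const unsigned long *counter)
{
	return data_race(*counter);
}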
On Mon, Jan 24, 2022 at 11:12:07AM -0500, Zi Yan wrote:
> On 24 Jan 2022, at 9:02, Mel Gorman wrote:
>
> > On Wed, Jan 19, 2022 at 02:06:17PM -0500, Zi Yan wrote:
> >> From: Zi Yan
> >>
> >> This is done in addition to MIGRATE_ISOLATE pageblock merge avoid
ATE_UNMOVABLE,
> MIGRATE_TYPES },
> [MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE,
> MIGRATE_TYPES },
> + [MIGRATE_HIGHATOMIC] = { MIGRATE_TYPES }, /* Never used */
> #ifdef CONFIG_CMA
> [MIGRATE_CMA] = { MIGRATE_TYPES }, /* Never used */
> #endif
lock to get
the accounting right.
However, there does not appear to be any special protection against a
page in a highatomic pageblock getting merged with a buddy of another
pageblock type. The pageblock would still have the right setting but on
allocation, the pages could split to the wrong free list and be lost
until the pages belonging to MIGRATE_HIGHATOMIC were freed again.
Not sure how much of a problem that is in practice; it's been a while
since I've heard of high-order atomic allocation failures.
--
Mel Gorman
SUSE Labs
differ slightly between each generation
of Zen. The common pattern is that a single NUMA node can have multiple
L3 caches and at one point I thought it might be reasonable to allow
spillover to select a local idle CPU instead of stacking multiple tasks
on a CPU sharing cache. I never got as far as thinking how it could be
done in a way that multiple architectures would be happy with.
--
Mel Gorman
SUSE Labs
On Mon, Apr 12, 2021 at 11:06:19AM +0100, Valentin Schneider wrote:
> On 12/04/21 10:37, Mel Gorman wrote:
> > On Mon, Apr 12, 2021 at 11:54:36AM +0530, Srikar Dronamraju wrote:
> >> * Gautham R. Shenoy [2021-04-02 11:07:54]:
> >>
> >> >
> >> >
th
allows within the node with the LLC CPUs masked out. While there would be
a latency hit because cache is not shared, it would still be a CPU local
to memory that is idle. That would potentially be beneficial on Zen*
as well without having to introduce new domains in the topology hierarchy.
--
Mel Gorman
SUSE Labs
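
A rough sketch of the spillover idea described above, purely illustrative:
the function, its placement, and the per-CPU LLC domain lookup are
assumptions, not code from any posted patch.

/* Hypothetical sketch: prefer an idle CPU in the local NUMA node once the
 * CPUs sharing the target's LLC are masked out. Such a CPU is cache-cold
 * but still local to memory.
 */
static int select_idle_node_cpu(struct task_struct *p, int target)
{
	struct sched_domain *llc = rcu_dereference(per_cpu(sd_llc, target));
	struct cpumask mask;
	int cpu;

	if (!llc)
		return -1;

	cpumask_andnot(&mask, cpumask_of_node(cpu_to_node(target)),
		       sched_domain_span(llc));
	cpumask_and(&mask, &mask, p->cpus_ptr);

	for_each_cpu_wrap(cpu, &mask, target) {
		if (available_idle_cpu(cpu))
			return cpu;
	}
	return -1;
}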
her
> hard to debug I would be tempted to mark this for stable. It should also
> be merged separately from the rest of the series.
>
> I have just one nit below
> Acked-by: Michal Hocko
>
Agreed.
--
Mel Gorman
SUSE Labs
claim decisions have to use this function rather than
> * populated_zone(). If the whole zone is reserved then we can easily
> * end up with populated_zone() && !managed_zone().
> */
>
> What do you think?
>
This makes a lot of sense. I've updated the patch and will await a test
from Srikar before reposting.
--
Mel Gorman
SUSE Labs
populated_zone() as many existing users really
need to check for present_pages. This patch introduces a managed_zone()
helper and uses it in the few cases where it is critical that the check
is made for managed pages -- zonelist construction and page reclaim.
Signed-off-by: Mel Gorman
---
in
int populated_zone(struct zone *zone)
{
- return (!!zone->present_pages);
+ return (!!zone->managed_pages);
}
extern int movable_zone;
--
Mel Gorman
SUSE Labs
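
For reference, the helper described in the changelog can be sketched as
below; the field type and the exact definition in the posted patch may
differ.

/* Sketch of a managed_zone()-style check: reclaim and zonelist construction
 * only care about zones with pages actually managed by the buddy allocator,
 * not zones that merely have pages present (which may be fully reserved).
 */
static inline bool managed_zone(struct zone *zone)
{
	return zone->managed_pages != 0;
}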
hat the page allocator uses.
> > o Most importantly of all, reclaim from node 0 with multiple zones will
> > have similar aging and reclaiming characteristics as every
> > other node.
> >
> > Signed-off-by: Mel Gorman
> > Acked-by: Johannes Weiner
> > A
On Wed, Aug 10, 2016 at 12:59:40PM -0500, Reza Arbab wrote:
> On Thu, Aug 04, 2016 at 10:24:04AM +0100, Mel Gorman wrote:
> >[1.713998] Unable to handle kernel paging request for data at address
> >0xff7a1
> >[1.714164] Faulting instruction address: 0xc027
ut is later freed to the page allocator (eg.
> initrd).
>
It would be ideal if the amount of reserved memory that is freed later
in the normal case was estimated. If it's a small percentage of memory
then the difference is unlikely to be detectable and avoids ppc64 being
special.
--
Mel Gorman
SUSE Labs
of available pages then it really should be based on that and not
just a made-up number.
--
Mel Gorman
SUSE Labs
to identify if
> the current value needs to be incremented.
>
I think the parameter is ugly and it should have been just
inc_memory_reserve but at least it works.
--
Mel Gorman
SUSE Labs
n+0x20/0xa8
>
> Register the memory reserved by fadump, so that the cache sizes are
> calculated based on the free memory (i.e Total memory - reserved
> memory).
>
> Suggested-by: Mel Gorman
I didn't suggest this specifically. While it happens to be safe on ppc64,
it potential
fixes but one potentially misses checks and another
had redundant initialisations. This version initialises per_cpu_nodestats
on a per-pgdat basis instead of on a per-zone basis.
Reported-by: Paul Mackerras
Reported-by: Reza Arbab
Signed-off-by: Mel Gorman
---
This has been compile-tested and
node where it could have the same limitations as
ZONE_HIGHMEM if necessary. It was also safe to assume that zones never
overlapped as zones were about addressing limitations. If ZONE_CMA or
ZONE_DEVICE can overlap with other zones during initialisation time then
there
I'll respin and send a patch for review.
>
Given that CONFIG_NO_BOOTMEM is not supported and bootmem is meant to be
slowly retiring, I would suggest instead making deferred memory init
depend on NO_BOOTMEM.
--
Mel Gorman
SUSE Labs
> For 4GB memory: 57% is improved
> For 50GB memory: 22% is improve
>
> Signed-off-by: Li Zhang
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
s patch allocates 1GB for 0.25TB/node for large system
> as it is mentioned in https://lkml.org/lkml/2015/5/1/627
>
Acked-by: Mel Gorman
--
Mel Gorman
SUSE Labs
in time to see if there was a point where
this was ever working. It could be a ppc64-specific bug but right now,
I'm still drawing a blank.
--
Mel Gorman
SUSE Labs
time this worked? Was 4.0 ok? As it can be
easily reproduced, can the problem be bisected please?
--
Mel Gorman
SUSE Labs
On Tue, Mar 24, 2015 at 10:51:41PM +1100, Dave Chinner wrote:
> On Mon, Mar 23, 2015 at 12:24:00PM +0000, Mel Gorman wrote:
> > These are three follow-on patches based on the xfsrepair workload Dave
> > Chinner reported was problematic in 4.0-rc1 due to changes in page table
en the PTE scanner may scan faster if the faults continue
to be remote. This means there is higher system CPU overhead and fault
trapping at exactly the time we know that migrations cannot happen. This
patch tracks when migration failures occur and slows the PTE scanner.
Signed-off-by: Mel Gorman
--
h was discarded.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 9 -
mm/memory.c | 8 +++-
mm/mprotect.c| 3 +++
3 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2f12e9fcf1a2..0a42d1521aa4 100644
--- a/mm/huge_me
These are three follow-on patches based on the xfsrepair workload Dave
Chinner reported was problematic in 4.0-rc1 due to changes in page table
management -- https://lkml.org/lkml/2015/3/1/226.
Much of the problem was reduced by commit 53da3bc2ba9e ("mm: fix up numa
read-only thread grouping logic
flushes and sync also affect placement. This is unpredictable behaviour
which is impossible to reason about so this patch makes grouping decisions
based on the VMA flags.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 13 ++---
mm/memory.c | 19 +++
2 files changed, 13
cross the protection update and page fault. I'll post
it later when I stick a changelog on it.
--
Mel Gorman
SUSE Labs
ugh to WP checks after
trapping a NUMA fault.
--
Mel Gorman
SUSE Labs
off a node (possible migration) with NUMA balancing trying to pull it back
(another possible migration). Small bugs there can result in excessive
migration.
--
Mel Gorman
SUSE Labs
On Wed, Mar 18, 2015 at 10:31:28AM -0700, Linus Torvalds wrote:
> > - something completely different that I am entirely missing
>
> So I think there's something I'm missing. For non-shared mappings, I
> still have the idea that pte_dirty should be the same as pte_write.
> And yet, your testing of
On Thu, Mar 12, 2015 at 09:20:36AM -0700, Linus Torvalds wrote:
> On Thu, Mar 12, 2015 at 6:10 AM, Mel Gorman wrote:
> >
> > I believe you're correct and it matches what was observed. I'm still
> > travelling and wireless is dirt but managed to queue a test
On Tue, Mar 10, 2015 at 04:55:52PM -0700, Linus Torvalds wrote:
> On Mon, Mar 9, 2015 at 12:19 PM, Dave Chinner wrote:
> > On Mon, Mar 09, 2015 at 09:52:18AM -0700, Linus Torvalds wrote:
> >>
> >> What's your virtual environment setup? Kernel config, and
> >> virtualization environment to actually
On Mon, Mar 09, 2015 at 09:02:19PM +0000, Mel Gorman wrote:
> On Sun, Mar 08, 2015 at 08:40:25PM +0000, Mel Gorman wrote:
> > > Because if the answer is 'yes', then we can safely say: 'we regressed
> > > performance because correctness [not dropping dirty b
On Sun, Mar 08, 2015 at 08:40:25PM +0000, Mel Gorman wrote:
> > Because if the answer is 'yes', then we can safely say: 'we regressed
> > performance because correctness [not dropping dirty bits] comes before
> > performance'.
> >
> > If the a
On Sun, Mar 08, 2015 at 11:02:23AM +0100, Ingo Molnar wrote:
>
> * Linus Torvalds wrote:
>
> > On Sat, Mar 7, 2015 at 8:36 AM, Ingo Molnar wrote:
> > >
> > > And the patch Dave bisected to is a relatively simple patch. Why
> > > not simply revert it to see whether that cures much of the
> > >
On Sat, Mar 07, 2015 at 12:31:03PM -0800, Linus Torvalds wrote:
> On Sat, Mar 7, 2015 at 7:20 AM, Mel Gorman wrote:
> >
> > if (!prot_numa || !pmd_protnone(*pmd)) {
> > - ret = 1;
> > entry = pmdp_get_and
On Sat, Mar 07, 2015 at 05:36:58PM +0100, Ingo Molnar wrote:
>
> * Mel Gorman wrote:
>
> > Dave Chinner reported the following on https://lkml.org/lkml/2015/3/1/226
> >
> > Across the board the 4.0-rc1 numbers are much slower, and the
> > degradation is far
The wrong value is being returned by change_huge_pmd since commit
10c1045f28e8 ("mm: numa: avoid unnecessary TLB flushes when setting
NUMA hinting entries") which allows a fallthrough that tries to adjust
non-existent PTEs. This patch corrects it.
Signed-off-by: Mel Gorman
---
mm/hug
Dave Chinner reported a problem due to excessive NUMA balancing activity
and bisected it. The first patch in this series corrects a major problem
that is unlikely to affect Dave but is still serious. Patch 2 is a minor
cleanup that was spotted while looking at scan rate control. Patch 3 is
minor an
addressed but beyond the scope of this series which is aimed at Dave
Chinner's shrink workload that is unlikely to be affected by this issue.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 194c0f019774
ecessary the threshold at which we
start throttling migrations can be lowered.
Signed-off-by: Mel Gorman
---
include/linux/sched.h | 9 +
kernel/sched/fair.c | 8 ++--
mm/huge_memory.c | 3 ++-
mm/memory.c | 3 ++-
4 files changed, 15 insertions(+), 8 deletions(
This code is dead since commit 9e645ab6d089 ("sched/numa: Continue PTE
scanning even if migrate rate limited") so remove it.
Signed-off-by: Mel Gorman
---
include/linux/migrate.h | 5 -
mm/migrate.c| 20
2 files changed, 25 deletions(-)
di
58
NUMA hint local faults 511530 314936 371571
NUMA hint local percent 69 52 61
NUMA pages migrated 26366701 5424102 7073177
Signed-off-by: Mel Gorman
---
arch/powerpc/include/asm/pgtable-ppc64.h | 16
arch/
The wrong value is being returned by change_huge_pmd since commit
10c1045f28e8 ("mm: numa: avoid unnecessary TLB flushes when setting
NUMA hinting entries") which allows a fallthrough that tries to adjust
non-existent PTEs. This patch corrects it.
Signed-off-by: Mel Gorman
---
mm/hug
Dave Chinner reported a problem due to excessive NUMA balancing activity and
bisected it. These are two patches that address two major issues with that
series. The first patch is almost certainly unrelated to what he saw due
to the fact that his vmstats showed no huge page activity, but the fix is important.
on
so do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions(+)
diff
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 14 --
mm/mprotect.c| 4
2
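
The core of the optimisation can be sketched as follows; this is a
simplified illustration of the check, not the exact hunk from mm/mprotect.c
or mm/huge_memory.c, and it uses the protnone helper naming introduced
elsewhere in the series.

/* Sketch: during a NUMA-hinting protection update, entries that are already
 * protnone need no rewrite, so the PTE update and TLB flush can be skipped.
 */
static bool numa_update_needed(pte_t oldpte, bool prot_numa)
{
	if (prot_numa && pte_protnone(oldpte))
		return false;	/* already marked, leave it alone */
	return true;
}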
e original
pte_special behaviour.
Signed-off-by: Mel Gorman
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b9a13e9..4673d6e 100644
--- a/arch/x86/include/asm/pgtable.h
+++
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c| 13 -
mm/memory.c
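
The reintroduced check presumably amounts to something like the sketch
below; the helper name is hypothetical and the exact placement inside
change_huge_pmd() and its return handling may differ from the posted patch.

/* Sketch: in the prot_numa path, leave the huge zero page alone. It is
 * read-only and shared, so NUMA hinting faults on it tell us nothing, and
 * the fault path has a BUG_ON for it if it were marked PAGE_NONE.
 */
static bool skip_huge_zero_for_numa(pmd_t pmd, bool prot_numa)
{
	return prot_numa && is_huge_zero_pmd(pmd);
}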
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar
Tested-by: Sasha Levin
---
include/linux/huge_mm.h | 3 +--
mm/huge_memory.c| 33 +++--
mm/memory.c
accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman
Tested-by: Sasha Levin
---
arch/powerpc/includ
ppc64 should not be depending on DSISR_PROTFAULT and it's unexpected
if they are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman
Acked-by: Aneesh Kumar K.V
Tested-by: Sasha Levin
---
arch/powerpc/mm/copro_fault.c
Changelog since V4
o Rebase to 3.19-rc2(mel)
Changelog since V3
o Minor comment update (benh)
o Add ack'ed bys
Changelog since V2
o Rename *_protnone_numa to _protnone and extend docs (linus)
o Rebase t
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar K.V
Tested-by: Sasha Levin
---
arch/powerpc/include/asm/pgtable.h | 16
arch/x86/include/asm/pgtable.h
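
For illustration, the x86 variant of such a helper can be expressed roughly
as below; this is a sketch based on the description above and the exact bits
used in the posted patch may differ.

/* Sketch: a pte_protnone()-style helper on x86. A PROT_NONE/NUMA-hinting
 * entry keeps _PAGE_PROTNONE set while _PAGE_PRESENT is clear, so hardware
 * faults on access but the kernel can still recognise the mapping.
 */
static inline int pte_protnone(pte_t pte)
{
	return (pte_flags(pte) & (_PAGE_PROTNONE | _PAGE_PRESENT))
		== _PAGE_PROTNONE;
}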
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh
patch closes the race.
Signed-off-by: Mel Gorman
---
include/linux/migrate.h | 4
mm/huge_memory.c| 3 ++-
mm/migrate.c| 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index fab9b32..78baed5 1
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 14 --
mm/mprotect.c| 4
2
on
so do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions(+)
diff
e original
pte_special behaviour.
Signed-off-by: Mel Gorman
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cf428a7..0dd5be3 100644
--- a/arch/x86/include/asm/pgtable.h
+++
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c| 13 -
mm/memory.c
accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman
Tested-by: Sasha Levin
---
arch/powerpc/includ
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar
Tested-by: Sasha Levin
---
include/linux/huge_mm.h | 3 +--
mm/huge_memory.c| 33 +++--
mm/memory.c
ppc64 should not be depending on DSISR_PROTFAULT and it's unexpected
if they are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman
Acked-by: Aneesh Kumar K.V
Tested-by: Sasha Levin
---
arch/powerpc/mm/copro_fault.c
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar K.V
Tested-by: Sasha Levin
---
arch/powerpc/include/asm/pgtable.h | 16
arch/x86/include/asm/pgtable.h
patch closes the race.
Signed-off-by: Mel Gorman
---
include/linux/migrate.h | 4
mm/huge_memory.c| 3 ++-
mm/migrate.c| 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 01aad3e..a3edcdf 1
There are no functional changes here and I kept the mmotm-20141119 baseline
as that is what got tested but it rebases cleanly to current mmotm. The
series makes architectural changes but splitting this on a per-arch basis
would cause bisect-related brain damage. I'm hoping this can go through
Andre
On Wed, Dec 03, 2014 at 10:50:35PM +0530, Aneesh Kumar K.V wrote:
> Mel Gorman writes:
>
> > On Wed, Dec 03, 2014 at 08:53:37PM +0530, Aneesh Kumar K.V wrote:
> >> Benjamin Herrenschmidt writes:
> >>
> >> > On Tue, 2014-12-02 at 12:57 +0530, Aneesh
On Thu, Dec 04, 2014 at 08:01:57AM +1100, Benjamin Herrenschmidt wrote:
> On Wed, 2014-12-03 at 15:52 +0000, Mel Gorman wrote:
> >
> > It's implied but can I assume it passed? If so, Ben and Paul, can I
> > consider the series to be acked by you other than the minor com
itten. I modified it to ran
> with selftest.
>
It's implied but can I assume it passed? If so, Ben and Paul, can I
consider the series to be acked by you other than the minor comment
updates?
Thanks.
--
Mel Gorman
SUSE Labs
On Tue, Dec 02, 2014 at 09:38:39AM +1100, Benjamin Herrenschmidt wrote:
> On Fri, 2014-11-21 at 13:57 +0000, Mel Gorman wrote:
>
> > #ifdef CONFIG_NUMA_BALANCING
> > +/*
> > + * These work without NUMA balancing but the kernel does not care. See the
> > + *
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 14 --
mm/mprotect.c| 4
2
on
so do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions(+)
diff
e original
pte_special behaviour.
Signed-off-by: Mel Gorman
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index cf428a7..0dd5be3 100644
--- a/arch/x86/include/asm/pgtable.h
+++
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c| 13 -
mm/memory.c
accidentally depending on the
PTE not being cleared and flushed. If one is missed, it'll manifest as
corruption problems that start triggering shortly after this series is
merged and only happen when NUMA balancing is enabled.
Signed-off-by: Mel Gorman
---
arch/powerpc/include/asm/pgtable.h
With PROT_NONE, the traditional page table manipulation functions are
sufficient.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar
---
include/linux/huge_mm.h | 3 +--
mm/huge_memory.c| 33 +++--
mm/memory.c | 10
ppc64 should not be depending on DSISR_PROTFAULT and it's unexpected
if they are triggered. This patch adds warnings just in case they
are being accidentally depended upon.
Signed-off-by: Mel Gorman
Acked-by: Aneesh Kumar K.V
---
arch/powerpc/mm/copro_fault.c | 8 ++--
arch/power
Convert existing users of pte_numa and friends to the new helper. Note
that the kernel is broken after this patch is applied until the other
page table modifiers are also altered. This patch layout is to make
review easier.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh
This is a preparatory patch that introduces protnone helpers for automatic
NUMA balancing.
Signed-off-by: Mel Gorman
Acked-by: Linus Torvalds
Acked-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/pgtable.h | 15 +++
arch/x86/include/asm/pgtable.h | 16
The main change here is to rebase on mmotm-20141119 as the series had
significant conflicts that were non-obvious to resolve. The main blockers
for merging are independent testing from Sasha (trinity), independent
testing from Aneesh (ppc64 support) and acks from Ben and Paul on the
powerpc patches
patch closes the race.
Signed-off-by: Mel Gorman
---
include/linux/migrate.h | 4
mm/huge_memory.c| 3 ++-
mm/migrate.c| 6 --
3 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 01aad3e..a3edcdf 1
On Thu, Nov 20, 2014 at 11:54:06AM -0800, Linus Torvalds wrote:
> On Thu, Nov 20, 2014 at 2:19 AM, Mel Gorman wrote:
> > This is a preparatory patch that introduces protnone helpers for automatic
> > NUMA balancing.
>
> Oh, I hadn't noticed that you had renamed these
On Thu, Nov 20, 2014 at 04:50:25PM -0500, Sasha Levin wrote:
> On 11/20/2014 05:19 AM, Mel Gorman wrote:
> > V1 failed while running under kvm-tools very quickly and a second report
> > indicated that it happens on bare metal as well. This version survived
> > an overnight r
On Thu, Nov 20, 2014 at 10:38:56AM +0000, David Laight wrote:
> From: Mel Gorman
> > Convert existing users of pte_numa and friends to the new helper. Note
> > that the kernel is broken after this patch is applied until the other
> > page table modifiers are also altered. Th
If a PTE or PMD is already marked NUMA when scanning to mark entries
for NUMA hinting then it is not necessary to update the entry and
incur a TLB flush penalty. Avoid the overhead where possible.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 14 --
mm/mprotect.c| 4
2
on
so do not even try recovering. It would have been more comprehensive to
check VMA flags in pte_protnone_numa but it would have made the API ugly
just for a debugging check.
Signed-off-by: Mel Gorman
---
mm/huge_memory.c | 3 +++
mm/memory.c | 3 +++
2 files changed, 6 insertions(+)
diff
e original
pte_special behaviour.
Signed-off-by: Mel Gorman
---
arch/x86/include/asm/pgtable.h | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index f8799e0..5241332 100644
--- a/arch/x86/include/asm/pgtable.h
+++
Faults on the huge zero page are pointless and there is a BUG_ON
to catch them during fault time. This patch reintroduces a check
that avoids marking the zero page PAGE_NONE.
Signed-off-by: Mel Gorman
---
include/linux/huge_mm.h | 3 ++-
mm/huge_memory.c| 13 -
mm/memory.c