Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 27 +++
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b0762c7..749b6e1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4076,22 +4076,25 @@ static void
n zone_pcp_update() is called (they will
end up being shrunk, not completely drained, later when a 0-order page
is freed in free_hot_cold_page()).
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 33 +
1 file changed, 9 insertions(+), 24 deletions(-)
diff --git
Simply moves calculation of the new 'high' value outside the
for_each_possible_cpu() loop, as it does not depend on the cpu.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_all
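Roughly, the reshuffle amounts to the sketch below (the exact expression used to
derive the new 'high' from the zone size and percpu_pagelist_fraction may differ;
the point is that it depends only on the zone):

        for_each_populated_zone(zone) {
                unsigned long high;

                /* 'high' depends only on the zone, not on the cpu */
                high = zone->managed_pages / percpu_pagelist_fraction;

                for_each_possible_cpu(cpu)
                        setup_pagelist_highmark(
                                per_cpu_ptr(zone->pageset, cpu), high);
        }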
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5ee5ce9..038e9d2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4037,7 +4037,7 @@ static void pageset_update(struct
Creates pageset_set_batch() for use in setup_pageset().
pageset_set_batch() imitates the functionality of
setup_pagelist_highmark(), but uses the boot time
(percpu_pagelist_fraction == 0) calculations for determining ->high
based on ->batch.
Signed-off-by: Cody P Schafer
---
mm/page_a
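A minimal sketch of such a helper, assuming the usual boot-time rule of ->high
being 6 * ->batch, with ->batch clamped to at least 1:

static void pageset_set_batch(struct per_cpu_pageset *p, unsigned long batch)
{
        pageset_update(&p->pcp, 6 * batch, max(1UL, 1 * batch));
}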
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3447a4b..352c279a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4043,7 +4043,7 @@ static void pageset_set_batch(struct
ids ->batch ever rising above ->high.
Suggested by Gilad Ben-Yossef in these threads:
https://lkml.org/lkml/2013/4/9/23
https://lkml.org/lkml/2013/4/10/49
Also reproduces his proposed comment.
Reviewed-by: Gilad Ben-Yossef
Signed-off-by: Cody P Schafer
---
mm/page_all
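The ordering being described, in sketch form: drop ->batch to a safe value
first, publish the new ->high, and only then install the real ->batch, with
write barriers between the steps so a concurrent reader never sees ->batch
above ->high:

static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
                unsigned long batch)
{
        /* start with a fail-safe value for batch */
        pcp->batch = 1;
        smp_wmb();

        /* update high, then batch, in that order */
        pcp->high = high;
        smp_wmb();

        pcp->batch = batch;
}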
pcp->batch could change at any point, avoid relying on it being a stable value.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f2929df..9dd0dc0 100644
--- a/mm/page_allo
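Concretely, the free path should snapshot ->batch once rather than re-reading
it, roughly:

        if (pcp->count >= pcp->high) {
                unsigned long batch = ACCESS_ONCE(pcp->batch);

                free_pcppages_bulk(zone, batch, pcp);
                pcp->count -= batch;
        }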
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 352c279a..b0762c7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3683,12 +3683,12 @@ void __ref build_all_zonelists(pg_data_t
On 04/09/2013 11:22 PM, Gilad Ben-Yossef wrote:
On Wed, Apr 10, 2013 at 9:19 AM, Gilad Ben-Yossef wrote:
On Wed, Apr 10, 2013 at 2:28 AM, Cody P Schafer wrote:
In pageset_set_batch() and setup_pagelist_highmark(), ensure that batch
is always set to a safe value (1) prior to updating high
On 04/10/2013 02:23 PM, Andrew Morton wrote:
On Wed, 10 Apr 2013 11:23:28 -0700 Cody P Schafer
wrote:
"Problems" with the current code:
1. there is a lack of synchronization in setting ->high and ->batch in
percpu_pagelist_fraction_sysctl_handler()
2. s
On 04/09/2013 02:48 PM, Srivatsa S. Bhat wrote:
We need a way to decide when to trigger the worker threads to perform
region evacuation/compaction. So the strategy used is as follows:
Alloc path of page allocator:
This accurately tracks the allocations and detects t
The first 3 are simply comment fixes and clarifications on locking.
The last 1 adds additional locking when updating node_present_pages based on
the existing documentation.
Cody P Schafer (4):
mmzone: make holding lock_memory_hotplug() a requirement for updating
pgdat size
mm: fix
Signed-off-by: Cody P Schafer
---
include/linux/mmzone.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 09ac172..afd0aa5 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -719,7 +719,7 @@ typedef struct
mmzone.h documents node_size_lock (which pgdat_resize_lock() locks) as
guarding against changes to node_present_pages, so actually lock it when
we update node_present_pages to keep that promise.
Signed-off-by: Cody P Schafer
---
mm/memory_hotplug.c | 5 +
1 file changed, 5 insertions
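In sketch form, the update in the onlining path simply gains the documented
lock (the exact call site may differ):

        unsigned long flags;

        pgdat_resize_lock(zone->zone_pgdat, &flags);
        zone->zone_pgdat->node_present_pages += onlined_pages;
        pgdat_resize_unlock(zone->zone_pgdat, &flags);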
Signed-off-by: Cody P Schafer
---
include/linux/mmzone.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index afd0aa5..45be383 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -716,6 +716,8 @@ typedef struct pglist_data
can't lock a mutex.
Signed-off-by: Cody P Schafer
---
include/linux/mmzone.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5c76737..09ac172 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -716,6 +716,9 @@ ty
On 05/01/2013 03:26 PM, David Rientjes wrote:
On Wed, 1 May 2013, Cody P Schafer wrote:
All updaters of pgdat size (spanned_pages, start_pfn, and
present_pages) currently also hold lock_memory_hotplug() (in addition
to pgdat_resize_lock()).
Document this and make holding of that lock a
On 05/01/2013 03:29 PM, David Rientjes wrote:
On Wed, 1 May 2013, Cody P Schafer wrote:
Signed-off-by: Cody P Schafer
Nack, pgdat_resize_unlock() is unnecessary if irqs are known to be
disabled.
All this patch does is indicate that rather than using node_size_lock
directly (as it
On 05/01/2013 03:30 PM, David Rientjes wrote:
On Wed, 1 May 2013, Cody P Schafer wrote:
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a221fac..0bdca10 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -915,6 +915,7 @@ static void node_states_set_node(int node, struct
On 05/01/2013 03:39 PM, David Rientjes wrote:
On Wed, 1 May 2013, Cody P Schafer wrote:
They are also initialized at boot without pgdat_resize_lock(); if we consider
boot time, quite a few of the statements about when locking is required are
wrong.
That said, you are correct that it is not
On 05/01/2013 03:42 PM, David Rientjes wrote:
On Wed, 1 May 2013, Cody P Schafer wrote:
Signed-off-by: Cody P Schafer
Nack, pgdat_resize_unlock() is unnecessary if irqs are known to be
disabled.
All this patch does is indicate that rather than using node_size_lock
directly (as it
right now.
Signed-off-by: Cody P Schafer
---
fs/binfmt_misc.c | 5 +
1 file changed, 5 insertions(+)
---
If this is considered too terrible, even adding a hack to sysrq to let me
recover the system (in the future) without a system reset would be appreciated.
diff --git a/fs/binfmt_misc.c
On 05/01/2013 03:48 PM, David Rientjes wrote:
On Wed, 1 May 2013, Cody P Schafer wrote:
Guaranteed to be stable means that if I'm a reader and I take pgdat_resize_lock(),
node_present_pages had better not change at all until I call pgdat_resize_unlock().
If nothing needs this guarantee, we should c
() (rientjes)
- make comment regarding pgdat_resize_lock()/unlock() usage more clear
(rientjes)
--
Cody P Schafer (4):
mm: fix comment referring to non-existent size_seqlock, change to
span_seqlock
mmzone: note that node_size_lock should be manipulated via
pgdat_resize_lock
mmzone.h documents node_size_lock (which pgdat_resize_lock() locks) as
guarding against changes to node_present_pages, so actually lock it when
we update node_present_pages to keep that promise.
Signed-off-by: Cody P Schafer
---
mm/memory_hotplug.c | 5 +
1 file changed, 5 insertions
mmzone.h documents node_size_lock (which pgdat_resize_lock() locks) as
guarding against changes to node_present_pages, so actually lock it when
we update node_present_pages to keep that promise.
Signed-off-by: Cody P Schafer
---
mm/memory_hotplug.c | 4
1 file changed, 4 insertions
Signed-off-by: Cody P Schafer
---
include/linux/mmzone.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fc859a0c..41557be 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -716,6 +716,9 @@ typedef struct pglist_data
Signed-off-by: Cody P Schafer
---
include/linux/mmzone.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5c76737..fc859a0c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -716,7 +716,7 @@ typedef struct
On 04/10/2013 02:25 PM, Cody P Schafer wrote:
On 04/10/2013 02:23 PM, Andrew Morton wrote:
On Wed, 10 Apr 2013 11:23:28 -0700 Cody P Schafer
wrote:
"Problems" with the current code:
1. there is a lack of synchronization in setting ->high a
s refresh code
- remove holes for memlayouts to make iteration over them less of a chore.
Since v1: http://comments.gmane.org/gmane.linux.kernel.mm/95541
- Update watermarks.
- Update zone percpu pageset ->batch & ->high only when needed.
- Don't lazily adjust {pgdat,zone}->{present_pag
Create a new function grow_pgdat_and_zone() which handles locking +
growth of a zone & the pgdat which it is associated with.
Signed-off-by: Cody P Schafer
---
include/linux/memory_hotplug.h | 3 +++
mm/memory_hotplug.c| 17 +++--
2 files changed, 14 insertions(+
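Presumably the helper bundles the existing span-growth steps under a single
pgdat_resize_lock() section, along these lines (a sketch reusing the
grow_zone_span()/grow_pgdat_span() helpers already in mm/memory_hotplug.c):

void grow_pgdat_and_zone(struct zone *zone, unsigned long start_pfn,
                         unsigned long end_pfn)
{
        unsigned long flags;
        pg_data_t *pgdat = zone->zone_pgdat;

        pgdat_resize_lock(pgdat, &flags);
        grow_zone_span(zone, start_pfn, end_pfn);
        grow_pgdat_span(pgdat, start_pfn, end_pfn);
        pgdat_resize_unlock(pgdat, &flags);
}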
Signed-off-by: Cody P Schafer
---
include/linux/dnuma.h | 2 +-
include/linux/memlayout.h | 7 +
mm/Kconfig| 30
mm/Makefile | 1 +
mm/dnuma.c| 4 +-
mm/memlayout-debugfs.c| 339 ++
s are skipped &
the page is freed via free_one_page().
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f33f1bf..38a2161 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1358,6 +1358,
Provides similar functionality to
unregister_mem_block_section_under_nodes() (which was previously named
identically to the newly added function), but operates on all memory
sections included in the memory block, not just the specified one.
---
drivers/base/node.c | 53
eeds to be
avoided.
Signed-off-by: Cody P Schafer
---
include/linux/page-flags.h | 19 +++
mm/page_alloc.c| 3 +++
2 files changed, 22 insertions(+)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 6d53675..09dd94e 100644
--- a/include/
In free_pcppages_bulk(), check if a page needs to be moved to a new
node/zone & then perform the transplant (in a slightly deferred manner).
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 36 +++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff --g
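The shape of the check is roughly as below; the dnuma/memlayout helper names
and the need_move list are illustrative stand-ins, not an excerpt of the
actual patch:

        /* inside the bulk-free loop; helper names are illustrative only */
        int dest_nid = memlayout_pfn_to_nid(page_to_pfn(page));

        if (unlikely(dest_nid != NUMA_NO_NODE &&
                     dest_nid != page_to_nid(page))) {
                /* defer the transplant until this zone's lists are settled */
                list_add_tail(&page->lru, &need_move);
                continue;
        }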
__free_pages_ok() handles higher order (order != 0) pages. Transplant
hook is added here as this is where the struct zone to free to is
decided.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 14 +-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b
---
drivers/base/memory.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index 14f8a69..5247698 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -10,20 +10,20 @@
* SPARSEMEM should be contained here,
Signed-off-by: Cody P Schafer
---
include/linux/rbtree.h | 8
1 file changed, 8 insertions(+)
diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h
index 2879e96..1b239ca 100644
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -85,4 +85,12 @@ static inline void
Rename register_mem_sect_under_node() to register_mem_block_under_node() and
rename unregister_mem_sect_under_nodes() to unregister_mem_block_under_nodes()
to reflect that both of these functions are given memory_blocks instead of
mem_sections
---
drivers/base/memory.c | 4 ++--
drivers/base/n
to actually perform the
"transplant on free" are in later patches.
Signed-off-by: Cody P Schafer
---
include/linux/dnuma.h | 97 +++
include/linux/memlayout.h | 127 ++
mm/Makefile | 1 +
mm/dnuma.c| 430 +++
unregister_mem_block_under_nodes() only unregisters a single section in
the mem block under all nodes, not the entire mem block. Rename it to
unregister_mem_block_section_under_nodes(). Also rename the phys_index
param to indicate that it is a section number.
---
drivers/base/memory.c | 2 +-
dri
Properly update the sysfs info when memory blocks move between nodes
due to a Dynamic NUMA reconfiguration.
---
drivers/base/memory.c | 39 +++
include/linux/memory.h | 5 +
mm/memlayout.c | 3 +++
3 files changed, 47 insertions(+)
diff --git a/d
We need to add some functionality for use by Dynamic NUMA to pieces of
mm/, so provide the Kconfig prior to adding actual Dynamic NUMA
functionality. For details on Dynamic NUMA, see the later patch (which
adds baseline functionality):
"mm: add memlayout & dnuma to track pfn->nid & transplant page
Add return_pages_to_zone(), which uses return_page_to_zone().
It is a minimized version of __free_pages_ok() which handles adding
pages which have been removed from another zone into a new zone.
Signed-off-by: Cody P Schafer
---
mm/internal.h | 5 -
mm/page_alloc.c | 17
In dynamic numa, when onlining nodes, lock_memory_hotplug() is already
held when mem_online_node()'s functionality is needed.
Factor out the locking and create a new function __mem_online_node() to
allow reuse.
Signed-off-by: Cody P Schafer
---
include/linux/memory_hotplug.h | 1
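A sketch of the factoring, assuming mem_online_node() keeps its current body
minus the locking (details such as node_set_online() placement may differ):

int __mem_online_node(int nid)
{
        pg_data_t *pgdat;

        pgdat = hotadd_new_pgdat(nid, 0);
        if (!pgdat)
                return -ENOMEM;

        node_set_online(nid);
        return register_one_node(nid);
}

int mem_online_node(int nid)
{
        int ret;

        lock_memory_hotplug();
        ret = __mem_online_node(nid);
        unlock_memory_hotplug();
        return ret;
}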
vided by the hypervisor.
This option allows reserving some extra node ids as a percentage of the
boot time node ids. While not perfect (ideally nr_node_ids would be fully
dynamic), this allows decent functionality without invasive changes to
the SL{U,A}B allocators.
Signed-off-by: Cody P Sc
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3695ca5..4fe35b24 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -254,8 +254,11 @@ static int page_outside_zone_boundaries
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4fe35b24..cc7b332 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3514,8 +3514,8 @@ static int default_zonelist_order(void
Use the *_is_empty() helpers to be more clear about what we're actually
checking for.
Signed-off-by: Cody P Schafer
---
mm/memory_hotplug.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f4cb01a..a65235f 100644
---
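By way of example (not the actual hunks), the change is this kind of
substitution:

        /* before: open-coded test of the raw field */
        if (!zone->spanned_pages)
                return;

        /* after: the intent is explicit */
        if (zone_is_empty(zone))
                return;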
moves the VM_BUG_ON() (which detects a change in node)
so that it follows the pfn_valid_within() check.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 17 ++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 739b405..657f773
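After the move, the loop in move_freepages() looks roughly like this:

        for (page = start_page; page <= end_page;) {
                if (!pfn_valid_within(page_to_pfn(page))) {
                        page++;
                        continue;
                }

                /* only meaningful once the pfn is known to be valid */
                VM_BUG_ON(page_to_nid(page) != zone_to_nid(zone));

                if (!PageBuddy(page)) {
                        page++;
                        continue;
                }
                /* ... move the free page as before ... */
        }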
As there will be
no pages in the excess span that actually belong to the zone being
manipulated, I don't expect there to be issues.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
Signed-off-by: Cody P Schafer
---
mm/memory_hotplug.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8e6658d..320d914 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1071,6 +1071,8 @@ int __mem_online_node(int nid
Add postorder iteration functions for rbtree. These are useful for
safely freeing an entire rbtree without modifying the tree at all.
Signed-off-by: Cody P Schafer
---
include/linux/rbtree.h | 4
lib/rbtree.c | 40
2 files changed, 44
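For example, assuming the iterator takes the (pos, n, root, field) form that
eventually landed as rbtree_postorder_for_each_entry_safe() (struct thing and
thing_root are made up for the example), tearing down a whole tree becomes:

        struct thing {
                struct rb_node node;
                /* payload */
        };

        struct thing *pos, *n;

        /* postorder visits children before their parent, so each node can
         * be freed safely and the tree is never rebalanced or written */
        rbtree_postorder_for_each_entry_safe(pos, n, &thing_root, node)
                kfree(pos);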
Add nid_zone(), which returns the zone corresponding to a given nid & zonenum.
Signed-off-by: Cody P Schafer
---
include/linux/mm.h | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1a7f19e..2004713 100644
--- a/include/l
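Presumably something like the following, mirroring how page_zone() indexes
node_zones (a sketch, not the actual hunk):

static inline struct zone *nid_zone(int nid, enum zone_type zonenum)
{
        return &NODE_DATA(nid)->node_zones[zonenum];
}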
Signed-off-by: Cody P Schafer
---
mm/memlayout.c | 16
1 file changed, 16 insertions(+)
diff --git a/mm/memlayout.c b/mm/memlayout.c
index 8b9ba9a..3e89482 100644
--- a/mm/memlayout.c
+++ b/mm/memlayout.c
@@ -336,3 +336,19 @@ void memlayout_global_init(void
When a memlayout is tracked (ie: CONFIG_DYNAMIC_NUMA is enabled), rather
than iterate over numa_meminfo, a lookup can be done using memlayout.
Signed-off-by: Cody P Schafer
---
arch/x86/mm/numa.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch
memlayout_global_init() initializes the first memlayout, which is
assumed to match the initial page-flag nid settings.
This is done in start_kernel() as the initdata used to populate the
memlayout is purged from memory early in the boot process (XXX: When?).
Signed-off-by: Cody P Schafer
On x86, we have numa_info specifically to track the numa layout, which
is precisely the data memlayout needs, so use it to create an initial
memlayout.
Signed-off-by: Cody P Schafer
---
arch/x86/mm/numa.c | 28
1 file changed, 28 insertions(+)
diff --git a/arch/x86
Export ensure_zone_is_initialized() so that it can be used to initialize
new zones within the dynamic numa code.
Signed-off-by: Cody P Schafer
---
mm/internal.h | 8
mm/memory_hotplug.c | 2 +-
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/mm/internal.h b/mm
be
skipped.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 657f773..9de55a2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -960,13 +960,16 @@ int move_freepages
On 05/08/2013 12:29 PM, Sergei Shtylyov wrote:
Although, not necessarily: it also supports CONFIG_DYNAMIC_DEBUG --
look at how pr_debug() is defined.
So this doesn't seem to be an equivalent change, and I suggest not doing
it at all.
WBR, Sergei
pr_devel() should get the same behavior: n
---
mm/vmstat.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index e1d8ed1..2b93877 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -495,6 +495,10 @@ void refresh_cpu_vm_stats(int cpu)
atomic_long_add(global_diff[i], &vm_stat[i]);
}
+/*
moves the VM_BUG_ON() (which detects a change in node)
so that it follows the pfn_valid_within() check.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 17 ++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1fbf5f2..75192eb
In dynamic numa, when onlining nodes, lock_memory_hotplug() is already
held when mem_online_node()'s functionality is needed.
Factor out the locking and create a new function __mem_online_node() to
allow reuse.
Signed-off-by: Cody P Schafer
---
include/linux/memory_hotplug.h | 1
to actually perform the
"transplant on free" are in later patches.
Signed-off-by: Cody P Schafer
---
include/linux/dnuma.h | 97 ++
include/linux/memlayout.h | 126 +
mm/Kconfig| 24 +++
mm/Makefile | 1 +
mm/dnuma.c
As there will be
no pages in the excess span that actually belong to the zone being
manipulated, I don't expect there to be issues.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
vided by the hypervisor.
This option allows reserving some extra node ids as a percentage of the
boot time node ids. While not perfect (ideally nr_node_ids would be fully
dynamic), this allows decent functionality without invasive changes to
the SL{U,A}B allocators.
Signed-off-by: Cody P Sc
memlayout_global_init() initializes the first memlayout, which is
assumed to match the initial page-flag nid settings.
This is done in start_kernel() as the initdata used to populate the
memlayout is purged from memory early in the boot process (XXX: When?).
Signed-off-by: Cody P Schafer
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 20304cb..686d8f8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3488,8 +3488,8 @@ static int default_zonelist_order(void
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a54baa9..20304cb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -253,8 +253,11 @@ static int page_outside_zone_boundaries
Signed-off-by: Cody P Schafer
---
mm/memory_hotplug.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f5ea9b7..5fcd29e 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1063,6 +1063,8 @@ int __mem_online_node(int nid
When a memlayout is tracked (ie: CONFIG_DYNAMIC_NUMA is enabled), rather
than iterate over numa_meminfo, a lookup can be done using memlayout.
Signed-off-by: Cody P Schafer
---
arch/x86/mm/numa.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch
__free_pages_ok() handles higher order (order != 0) pages. Transplant
hook is added here as this is where the struct zone to free to is
decided.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 14 +-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b
Signed-off-by: Cody P Schafer
---
mm/memlayout.c | 16
1 file changed, 16 insertions(+)
diff --git a/mm/memlayout.c b/mm/memlayout.c
index 45e7df6..4dc6706 100644
--- a/mm/memlayout.c
+++ b/mm/memlayout.c
@@ -247,3 +247,19 @@ void memlayout_global_init(void
In free_pcppages_bulk(), check if a page needs to be moved to a new
node/zone & then perform the transplant (in a slightly deferred manner).
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 36 +++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff --g
On x86, we have numa_info specifically to track the numa layout, which
is precisely the data memlayout needs, so use it to create an initial
memlayout.
Signed-off-by: Cody P Schafer
---
arch/x86/mm/numa.c | 28
1 file changed, 28 insertions(+)
diff --git a/arch/x86
Signed-off-by: Cody P Schafer
---
include/linux/dnuma.h | 2 +-
include/linux/memlayout.h | 7 +
mm/Kconfig| 30
mm/Makefile | 1 +
mm/dnuma.c| 4 +-
mm/memlayout-debugfs.c| 339 ++
s are skipped &
the page is freed via free_one_page().
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f8ae178..98ac7c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1357,6 +1357,
be
skipped.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 75192eb..95e4a23 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -959,13 +959,16 @@ int move_freepages
Add nid_zone(), which returns the zone corresponding to a given nid & zonenum.
Signed-off-by: Cody P Schafer
---
include/linux/mm.h | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9ddae00..1b6abae 100644
--- a/include/l
eeds to be
avoided.
Signed-off-by: Cody P Schafer
---
include/linux/page-flags.h | 19 +++
mm/page_alloc.c| 3 +++
2 files changed, 22 insertions(+)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 6d53675..09dd94e 100644
--- a/include/
Add return_pages_to_zone(), which uses return_page_to_zone().
It is a minimized version of __free_pages_ok() which handles adding
pages which have been removed from another zone into a new zone.
Signed-off-by: Cody P Schafer
---
mm/internal.h | 5 -
mm/page_alloc.c | 17
Add postorder iteration functions for rbtree. These are useful for
safely freeing an entire rbtree without modifying the tree at all.
Signed-off-by: Cody P Schafer
---
include/linux/rbtree.h | 4
lib/rbtree.c | 40
2 files changed, 44
Create a new function grow_pgdat_and_zone() which handles locking +
growth of a zone & the pgdat which it is associated with.
Signed-off-by: Cody P Schafer
---
include/linux/memory_hotplug.h | 3 +++
mm/memory_hotplug.c| 17 +++--
2 files changed, 14 insertions(+
Export ensure_zone_is_initialized() so that it can be used to initialize
new zones within the dynamic numa code.
Signed-off-by: Cody P Schafer
---
mm/internal.h | 8
mm/memory_hotplug.c | 2 +-
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/mm/internal.h b/mm
Use the *_is_empty() helpers to be more clear about what we're actually
checking for.
Signed-off-by: Cody P Schafer
---
mm/memory_hotplug.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index df04c36..deea8c2 100644
---
Signed-off-by: Cody P Schafer
---
include/linux/rbtree.h | 8
1 file changed, 8 insertions(+)
diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h
index 2879e96..1b239ca 100644
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -85,4 +85,12 @@ static inline void
potentially) propagation of updated layout knowledge into kmem_caches
(SL*B).
--
Since v1: http://comments.gmane.org/gmane.linux.kernel.mm/95541
- Update watermarks.
- Update zone percpu pageset ->batch & ->high only when needed.
- Don't lazily adjust {pgdat,zone}->{present_pag
The solution to this would be to disallow creation of files and
folders on NTFS drives containing illegal characters.
Illegal characters with respect to Windows & the like are different
from illegal characters with respect to the NTFS filesystem structure.
Looking at ntfs-3g(8) [yes, I'm aw
On Tue 18 Sep 2012 03:02:42 AM PDT, Venu Byravarasu wrote:
-Original Message-
From: Shubhrajyoti Datta [mailto:omaplinuxker...@gmail.com]
Sent: Tuesday, September 18, 2012 3:30 PM
To: Venu Byravarasu
Cc: Shubhrajyoti D; linux-me...@vger.kernel.org; linux-
ker...@vger.kernel.org; julia.law
y, start & end define an inclusive range.
Signed-off-by: Cody P Schafer
---
tools/perf/util/symbol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 07e7bd6..df4736d 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/
If .dynsym exists but .dynstr is empty (NO_BITS or size==0), a segfault
occurs. Avoid this by checking that .dynstr is not empty.
Signed-off-by: Cody P Schafer
---
tools/perf/util/symbol.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
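The guard is roughly of this shape (variable and label names approximate); the
point is to bail out before any string lookups dereference an empty .dynstr:

        symstrs = elf_getdata(symstrs_scn, NULL);
        if (symstrs == NULL || symstrs->d_size == 0)
                goto out_elf_end;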
Previously, symtab_type would have been left at 0, or KALLSYMS, which is not
quite accurate.
Introduce DSO_SYMTAB_TYPE__VMLINUX[_GUEST].
Signed-off-by: Cody P Schafer
---
tools/perf/util/symbol.c | 9 +
tools/perf/util/symbol.h | 2 ++
2 files changed, 11 insertions(+)
diff --git a
exists in the
runtime image) is the same in both the runtime and debug/symbols
image.
Both of these are true on RHEL, but it is unclear how accurate they are
in general (on platforms with function descriptors in opd sections).
Signed-off-by: Cody P Schafer
---
tools/perf/util
map__objdump_2ip was introduced in:
ee11b90b12 perf top: Fix annotate for userspace
And its last user was removed in:
36532461a0 perf top: Ditch private annotation code, share perf annotate's
Remove it.
Signed-off-by: Cody P Schafer
---
tools/perf/util/map.c | 8
tools/perf/
duplicate code for looking up the same sections and checking for
the existence of an important section wouldn't be as clean.
Additionally, move dso__swap_init() so we can use it.
Utilized by the later patch
"perf symbol: use both runtime and debug images"
Signed-off-by: Cody P Schaf
The only site that jumps to out_fixup has (kallsyms_filename == NULL).
And all paths that reach 'if (err > 0)' without 'goto out_fixup' have
kallsyms_filename != NULL.
So skip over both the check & dso__set_long_name(), and remove the
check.
Signed-off-by: Cody P Sc
1-4,6,7 are small cleanups.
5 fixes a potential segfault.
8 fixes a use after free for dso->long_name
9 avoids a segfault in elfutils when a truncated elf is loaded.
10 properly tracks that a dso had symbols loaded from a vmlinux image
11-16 fix handling of the '.opd' section in the presence o