, the include of stdlib.h was changed
to stddef.h (as again, only NULL is needed).
Build tested on powerpc and x86.
Signed-off-by: Cody P Schafer
---
tools/perf/arch/powerpc/util/dwarf-regs.c | 5 +
tools/perf/arch/s390/util/dwarf-regs.c | 2 +-
tools/perf/arch/sh/util/dwarf-regs.c
On 02/01/2013 04:29 PM, Andrew Morton wrote:
On Fri, 1 Feb 2013 16:28:48 -0800
Andrew Morton wrote:
+ if (ret)
+ pr_debug("page %lu outside zone [ %lu - %lu ]\n",
+ pfn, start_pfn, start_pfn + sp);
+
return ret;
}
As this condition leads to
On 02/01/2013 04:20 PM, Andrew Morton wrote:
On Thu, 17 Jan 2013 14:52:53 -0800
Cody P Schafer wrote:
Instead of directly utilizing a combination of config options to determine this,
add a macro to specifically address it.
...
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -625,6
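For reference, the macro in question ends up looking something like this
(a sketch; the name SECTION_IN_PAGE_FLAGS and its exact placement are
assumptions here):

/* One macro capturing the config combination, instead of repeating the
 * defined() test at every use site. */
#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
#define SECTION_IN_PAGE_FLAGS
#endif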
On 02/01/2013 04:39 PM, Andrew Morton wrote:
On Thu, 17 Jan 2013 14:52:52 -0800
Cody P Schafer wrote:
Summaries:
1 - avoid repeating checks for section in page flags by adding a define.
2 - add & switch to zone_end_pfn() and zone_spans_pfn()
3 - add zone_is_initialized() & zone_is_e
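A sketch of the helpers named in summary 2, assuming the usual
zone_start_pfn/spanned_pages semantics (bodies reconstructed from the
description, not quoted from the patch):

static inline unsigned long zone_end_pfn(const struct zone *zone)
{
        return zone->zone_start_pfn + zone->spanned_pages;
}

static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
{
        return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
}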
Creates pageset_set_batch() for use in setup_pageset().
pageset_set_batch() imitates the functionality of
setup_pagelist_highmark(), but uses the boot time
(percpu_pagelist_fraction == 0) calculations for determining ->high
based on ->batch.
Signed-off-by: Cody P Schafer
---
mm/page_a
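Per the description above, the new helper amounts to something like the
following sketch (the 6 * batch rule and the pageset_update() name are
assumptions drawn from the percpu_pagelist_fraction == 0 path):

/* Derive ->high from ->batch the way boot-time setup does. */
static void pageset_set_batch(struct per_cpu_pageset *p, unsigned long batch)
{
        pageset_update(&p->pcp, 6 * batch, max(1UL, batch));
}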
en a 0-order page
is freed in free_hot_cold_page()).
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 30 ++
1 file changed, 10 insertions(+), 20 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5877cf0..48f2faa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
In free_hot_cold_page(), we rely on pcp->batch remaining stable.
Updating it without being on the cpu owning the percpu pageset
potentially destroys this stability.
Change for_each_cpu() to on_each_cpu() to fix.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 21 +++--
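The shape of the fix, as described (pageset_update_fn is an illustrative
name, not taken from the patch):

/* Runs on every CPU in turn, so this_cpu_ptr() resolves to the owning
 * CPU's pageset and pcp->batch is only written by its owner. */
static void pageset_update_fn(void *data)
{
        struct zone *zone = data;

        pageset_set_batch(this_cpu_ptr(zone->pageset), zone_batchsize(zone));
}

/* At the update site, replacing the for_each_cpu() loop: */
on_each_cpu(pageset_update_fn, zone, 1);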
In one case while modifying the ->high and ->batch fields of per cpu pagesets
we're needlessly using stop_machine() (patches 1 & 2), and in another we don't
have any synchronization at all (patch 3).
This patchset fixes both of them.
Note that it results in a change to the behavior of zone_pcp_up
These patches allow the NUMA memory layout (meaning the mapping of a page to a
node) to be changed at runtime in place (without hotplugging).
= Why/when is this useful? =
In virtual machines (VMs) running on NUMA systems both [a] if/when the
hypervisor decides to move their backing memory around
On 02/18/2013 11:24 AM, Seth Jennings wrote:
On 02/15/2013 10:04 PM, Ric Mason wrote:
On 02/14/2013 02:38 AM, Seth Jennings wrote:
+/* invalidates all pages for the given swap type */
+static void zswap_frontswap_invalidate_area(unsigned type)
+{
+struct zswap_tree *tree = zswap_trees[type];
On Wed, Feb 20, 2013 at 04:04:41PM -0600, Seth Jennings wrote:
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> +#define MAX(a, b) ((a) >= (b) ? (a) : (b))
> +/* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
> +#define ZS_MIN_ALLOC_SIZE \
> + MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >>
, the include of stdlib.h was changed
to stddef.h (as again, only NULL is needed).
Signed-off-by: Cody P Schafer
---
tools/perf/arch/arm/util/dwarf-regs.c | 5 +
tools/perf/arch/powerpc/util/dwarf-regs.c | 5 +
tools/perf/arch/s390/util/dwarf-regs.c | 2 +-
tools/perf/arch/sh/util
@@ -1802,11 +1802,11 @@ static inline unsigned interleave_nid(struct mempolicy *pol,
/*
* Return the bit number of a random bit set in the nodemask.
- * (returns -1 if nodemask is empty)
+ * (returns NUMA_NO_NOD if nodemask is empty)
s/NUMA_NO_NOD/NUMA_NO_NODE/
*/
int node_random
On 09/11/2013 03:08 PM, Dave Hansen wrote:
I really don't know where the:
batch /= 4; /* We effectively *= 4 below */
...
batch = rounddown_pow_of_two(batch + batch/2) - 1;
came from. The round down code at *MOST* does a *= 1.5, but
*averages* out to be just
On 09/11/2013 04:08 PM, Cody P Schafer wrote:
On 09/11/2013 03:08 PM, Dave Hansen wrote:
I really don't know where the:
batch /= 4; /* We effectively *= 4 below */
...
batch = rounddown_pow_of_two(batch + batch/2) - 1;
came from. The round down code at *MOST* d
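Working the arithmetic with a couple of illustrative values (not from the
thread) makes Dave's point:

/* batch = 64:  64/4 = 16;  rounddown_pow_of_two(16 + 8)  - 1 = 16 - 1 = 15
 * batch = 96:  96/4 = 24;  rounddown_pow_of_two(24 + 12) - 1 = 32 - 1 = 31
 * Relative to batch/4, the rounding step yields between ~*0.75 and *1.5,
 * nowhere near the *= 4 the comment claims to be compensating for. */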
On 09/27/2013 06:16 AM, Kirill A. Shutemov wrote:
With split page table lock for PMD level we can't hold
mm->page_table_lock while updating nr_ptes.
Let's convert it to atomic_t to avoid races.
---
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 84e0c56e1e..99f19e85
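The shape of the conversion being described (whether the final type is
atomic_t or atomic_long_t is an assumption here):

-       int nr_ptes;            /* protected by mm->page_table_lock */
+       atomic_long_t nr_ptes;  /* updated locklessly */

/* with call sites becoming: */
atomic_long_inc(&mm->nr_ptes);
atomic_long_dec(&mm->nr_ptes);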
Small grammar fix in the rcutree comment regarding the
'rcu_scheduler_active' variable.
---
kernel/rcutree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index e441b77..bfb8972 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -105,7 +105,7 @@ int
The only persistent change made by this loop is calling
memblock_set_node() once for each memblock, which is not useful (and has
no effect) as memblock_set_node() is not called with any
memblock-specific parameters.
Substitute a single memblock_set_node() call.
---
arch/powerpc/mm/mem.c | 9 ++---
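In other words, something along these lines (a sketch assuming the
2013-era memblock_set_node(base, size, nid) signature):

/* One call covering all memory replaces the per-memblock loop, since the
 * loop passed no memblock-specific arguments anyway. */
memblock_set_node(0, (phys_addr_t)ULLONG_MAX, 0);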
On 06/12/2013 02:20 PM, Andrew Morton wrote:
On Tue, 11 Jun 2013 15:12:59 -0700 Cody P Schafer
wrote:
Factor pageset_set_high_and_batch() (which contains all the needed logic to
set a pageset's ->high and ->batch irrespective of system state) out of
zone_pageset_init(), which avoids
stem state) from zone_pageset_init() and using the new
pageset_set_high_and_batch() instead of zone_pageset_init() in
zone_pcp_update().
Signed-off-by: Cody P Schafer
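A sketch of the factored-out helper, per the description (the
percpu_pagelist_fraction branch is an assumption based on the sysctl it
names):

static void pageset_set_high_and_batch(struct zone *zone,
                                       struct per_cpu_pageset *pcp)
{
        if (percpu_pagelist_fraction)
                pageset_set_high(pcp,
                        zone->managed_pages / percpu_pagelist_fraction);
        else
                pageset_set_batch(pcp, zone_batchsize(zone));
}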
On 05/28/2013 10:17 PM, Nathan Fontenot wrote:
Update the sysfs memory code to create/delete files at the time of device
and subsystem registration.
The current code creates files in the root memory directory explicitly
through
the use of init_* routines. The files for each memory block are crea
Instead of leaving a trap for the next person who comes along and wants
to add something to mem_section, add an __aligned() and remove the
manual padding added for MEMCG.
Signed-off-by: Cody P Schafer
---
include/linux/mmzone.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
---
Also
On 05/29/2013 05:54 PM, Jiang Liu wrote:
On Thu 30 May 2013 07:14:39 AM CST, Cody P Schafer wrote:
Also, does anyone know what causes this alignment to be required here? I found
this was breaking things in a patchset I'm working on (WARNs in sysfs code
about duplicate filenames when in
2"
http://lkml.indiana.edu/hypermail/linux/kernel/1205.2/03077.html
Signed-off-by: Cody P Schafer
---
Dave: Consider it resurrected.
---
include/linux/mmzone.h | 4
mm/sparse.c | 3 +++
2 files changed, 7 insertions(+)
diff --git a/include/linux/mmzone.h b/include/linux/mm
On 05/31/2013 01:31 PM, Nathan Fontenot wrote:
Update the sysfs memory code to create/delete files at the time of device
and subsystem registration.
The current code creates files in the root memory directory explicitly through
the use of init_* routines. The files for each memory block are crea
some funky allocations would be the result) when memory
hotplug is triggered.
Signed-off-by: Cody P Schafer
---
Unless memory hotplug is being triggered on boot, this should *not* be the
cause of Valdis Kletnieks' reported bug in -next:
"next-20130607 BUG: Bad page state in proce
On 06/24/2013 12:28 AM, Eliezer Tamir wrote:
select/poll busy-poll support.
...
diff --git a/fs/select.c b/fs/select.c
index 8c1c96c..79b876e 100644
--- a/fs/select.c
+++ b/fs/select.c
@@ -400,6 +402,8 @@ int do_select(int n, fd_set_bits *fds, struct timespec *end_time)
poll_table *wai
On 06/27/2013 05:25 PM, Cody P Schafer wrote:
On 06/24/2013 12:28 AM, Eliezer Tamir wrote:
select/poll busy-poll support.
...
I'm seeing warnings about using smp_processor_id() while preemptable
(log included below) due to this patch. I expect the use of
ll_end_time() -> sched_clock()
Signed-off-by: Cody P Schafer
---
include/linux/mmzone.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index c74092e..5c76737 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -450,7 +450,7 @@ struct zone
powerpc and x86 were open-coding copies of setup_nr_node_ids(), which
page_alloc provides but makes static. Make it available to the archs in
linux/mm.h.
Signed-off-by: Cody P Schafer
---
include/linux/mm.h | 6 ++
mm/page_alloc.c | 6 +-
2 files changed, 7 insertions(+), 5 deletions
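The function being shared looks roughly like this (reconstructed; treat
the body as a sketch):

/* Derive nr_node_ids from the highest set bit in node_possible_map. */
void __init setup_nr_node_ids(void)
{
        unsigned int node;
        unsigned int highest = 0;

        for_each_node_mask(node, node_possible_map)
                highest = node;
        nr_node_ids = highest + 1;
}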
The code to set up nr_node_ids based on node_possible_map is duplicated in
arch/powerpc, arch/x86, and mm/page_alloc.
This patchset switches those copies to calling the function provided by
page_alloc.
Signed-off-by: Cody P Schafer
---
arch/x86/mm/numa.c | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 72fe01e..a71c4e2 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -114,14 +114,11 @@ void numa_clear_node(int
Signed-off-by: Cody P Schafer
---
arch/powerpc/mm/numa.c | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index bba87ca..7574ae3 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -62,14 +62,11 @@ static
On 08/14/2013 02:14 PM, Seth Jennings wrote:
>An existing tool would not work
>with this patch (plus boot option) since it would not know how to
>show/hide things. It lets _part_ of those existing tools get reused
>since they only have to be taught how to show/hide things.
>
>I'd find this reall
Just check that we examine all nodes in the tree for the postorder iteration.
Signed-off-by: Cody P Schafer
---
lib/rbtree_test.c | 12
1 file changed, 12 insertions(+)
diff --git a/lib/rbtree_test.c b/lib/rbtree_test.c
index 122f02f..31dd4cc 100644
--- a/lib/rbtree_test.c
+++ b
Signed-off-by: Cody P Schafer
---
mm/zswap.c | 15 ++-
1 file changed, 2 insertions(+), 13 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index deda2b6..98d99c4 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -791,25 +791,14 @@ static void zswap_frontswap_invalidate_area(unsigned
Because deletion (of the entire tree) is a relatively common use of the
rbtree_postorder iteration, and because doing it safely means fiddling
with temporary storage, provide a helper to simplify postorder rbtree
iteration.
Signed-off-by: Cody P Schafer
---
include/linux/rbtree.h | 17
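The helper being described ends up looking roughly like this
(reconstructed; the details are assumptions):

/* Cache the next node before the body runs, so the body may free 'pos'. */
#define rbtree_postorder_for_each_entry_safe(pos, n, root, field)           \
        for (pos = rb_entry_safe(rb_first_postorder(root),                  \
                                 typeof(*pos), field);                      \
             pos && ({ n = rb_entry_safe(rb_next_postorder(&pos->field),    \
                                         typeof(*pos), field); 1; });       \
             pos = n)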
No reason to require the rbtree test code to be a module; allow it to be
builtin (this streamlines my development process).
Signed-off-by: Cody P Schafer
---
lib/Kconfig.debug | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1501aa5..606e3c8
ode (most notably in the filesystem drivers) use a hand rolled
postorder iteration that NULLs child links as it traverses the tree. Each of
those instances could be replaced with this common implementation.
Cody P Schafer (5):
rbtree: add postorder iteration func
Add postorder iteration functions for rbtree. These are useful for
safely freeing an entire rbtree without modifying the tree at all.
Signed-off-by: Cody P Schafer
---
include/linux/rbtree.h | 4
lib/rbtree.c | 40
2 files changed, 44
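Roughly, the two iteration functions work like this (a sketch reconstructed
from the description; rb_left_deepest_node is an internal helper name
assumed here):

/* Postorder starts at the leftmost-deepest node ... */
static struct rb_node *rb_left_deepest_node(const struct rb_node *node)
{
        for (;;) {
                if (node->rb_left)
                        node = node->rb_left;
                else if (node->rb_right)
                        node = node->rb_right;
                else
                        return (struct rb_node *)node;
        }
}

struct rb_node *rb_first_postorder(const struct rb_root *root)
{
        if (!root->rb_node)
                return NULL;
        return rb_left_deepest_node(root->rb_node);
}

/* ... and a node's successor is either the deepest node of its parent's
 * right subtree (if we were the left child) or the parent itself. */
struct rb_node *rb_next_postorder(const struct rb_node *node)
{
        const struct rb_node *parent;

        if (!node)
                return NULL;
        parent = rb_parent(node);

        if (parent && node == parent->rb_left && parent->rb_right)
                return rb_left_deepest_node(parent->rb_right);

        return (struct rb_node *)parent;
}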
On 07/29/2013 08:01 AM, Seth Jennings wrote:
On Fri, Jul 26, 2013 at 02:13:39PM -0700, Cody P Schafer wrote:
diff --git a/lib/rbtree.c b/lib/rbtree.c
index c0e31fe..65f4eff 100644
--- a/lib/rbtree.c
+++ b/lib/rbtree.c
@@ -518,3 +518,43 @@ void rb_replace_node(struct rb_node *victim, struct
On 07/29/2013 08:06 AM, Seth Jennings wrote:
On Fri, Jul 26, 2013 at 02:13:40PM -0700, Cody P Schafer wrote:
Because deletion (of the entire tree) is a relatively common use of the
rbtree_postorder iteration, and because doing it safely means fiddling
with temporary storage, provide a helper to
Add postorder iteration functions for rbtree. These are useful for
safely freeing an entire rbtree without modifying the tree at all.
Signed-off-by: Cody P Schafer
Reviewed-by: Seth Jennings
---
include/linux/rbtree.h | 4
lib/rbtree.c | 40
Signed-off-by: Cody P Schafer
Reviewed-by: Seth Jennings
---
mm/zswap.c | 16 ++--
1 file changed, 2 insertions(+), 14 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index deda2b6..5c853b2 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -790,26 +790,14 @@ static void
ree runtime tests
4 allows building the rbtree runtime tests as builtins
5 updates zswap.
--
since v1:
- spacing
- s/it's/its/
- remove now unused var in zswap code.
- Reviewed-by: Seth Jennings
Cody P Schafer (5):
rbtree: add postorder iteration
Just check that we examine all nodes in the tree for the postorder iteration.
Signed-off-by: Cody P Schafer
Reviewed-by: Seth Jennings
---
lib/rbtree_test.c | 12
1 file changed, 12 insertions(+)
diff --git a/lib/rbtree_test.c b/lib/rbtree_test.c
index 122f02f..31dd4cc 100644
Because deletion (of the entire tree) is a relatively common use of the
rbtree_postorder iteration, and because doing it safely means fiddling
with temporary storage, provide a helper to simplify postorder rbtree
iteration.
Signed-off-by: Cody P Schafer
Reviewed-by: Seth Jennings
---
include
No reason to require the rbtree test code to be a module; allow it to be
builtin (this streamlines my development process).
Signed-off-by: Cody P Schafer
Reviewed-by: Seth Jennings
---
lib/Kconfig.debug | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/Kconfig.debug b/lib
On 08/01/2013 02:18 AM, Xishi Qiu wrote:
__offline_pages()
start_isolate_page_range()
set_migratetype_isolate()
set_pageblock_migratetype() -> this pageblock will be marked as
MIGRATE_ISOLATE
move_freepages_block() -> pages in PageBuddy will be moved into
MIGRATE_
On 08/06/2013 01:22 PM, Chris Metcalf wrote:
[...]
/**
+ * schedule_on_each_cpu - execute a function synchronously on each online CPU
+ * @func: the function to call
+ *
+ * schedule_on_each_cpu() executes @func on each online CPU using the
+ * system workqueue and blocks until all CPUs have
On 07/19/2013 12:59 AM, Tang Chen wrote:
This patch introduces early_acpi_firmware_srat() to find the
phys addr of the SRAT provided by firmware, and calls it in
reserve_hotpluggable_memory().
Since we have initialized acpi_gbl_root_table_list earlier,
and store all the tables' phys addrs and signatur
ntion (Gilad).
- add missing ACCESS_ONCE() on ->batch
Since v1: https://lkml.org/lkml/2013/4/5/444
- instead of using on_each_cpu(), use memory barriers (Gilad) and an update
side mutex.
- add "Problem" #3 above, and fix.
- rename function to match naming style of similar function
-
off-by: Cody P Schafer
---
mm/page_alloc.c | 8
1 file changed, 8 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9e556e6..cea883d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -65,6 +65,9 @@
#include
#include "internal.h"
+/* prevent >1 _upda
n zone_pcp_update() is called (they will
end up being shrunk, not completely drained, later when a 0-order page
is freed in free_hot_cold_page()).
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 33 +
1 file changed, 9 insertions(+), 24 deletions(-)
diff --git
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3583281..696ce96 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4069,7 +4069,7 @@ static void pageset_set_batch(struct
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 27 +++
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 53c62c5..0c3cdbb6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4102,22 +4102,25 @@ static void
Previously, zone_pcp_update() called pageset_set_batch() directly,
essentially assuming that percpu_pagelist_fraction == 0. Correct this by
calling zone_pageset_init(), which chooses the appropriate ->batch and
->high calculations.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 4 +
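So the corrected function becomes, roughly (a sketch; pcp_batch_high_lock
is an assumed name for the update-side mutex from earlier in the series):

void zone_pcp_update(struct zone *zone)
{
        unsigned int cpu;

        mutex_lock(&pcp_batch_high_lock);
        for_each_possible_cpu(cpu)
                zone_pageset_init(zone, cpu);
        mutex_unlock(&pcp_batch_high_lock);
}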
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 696ce96..53c62c5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3709,12 +3709,12 @@ void __ref build_all_zonelists(pg_data_t
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 251fb5f..b335c98 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4063,7 +4063,7 @@ static void pageset_update(struct
Simply moves calculation of the new 'high' value outside the
for_each_possible_cpu() loop, as it does not depend on the cpu.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_all
ids ->batch ever rising above ->high.
Suggested by Gilad Ben-Yossef in these threads:
https://lkml.org/lkml/2013/4/9/23
https://lkml.org/lkml/2013/4/10/49
Also reproduces his proposed comment.
Reviewed-by: Gilad Ben-Yossef
Signed-off-by: Cody P Schafer
---
mm/page_all
pcp->batch could change at any point, avoid relying on it being a stable value.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7e45b91..71d843d 100644
--- a/mm/page_allo
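The pattern being applied amounts to the following (a sketch; the context
is assumed to be the free path in free_hot_cold_page()):

if (pcp->count >= pcp->high) {
        /* Snapshot ->batch once and use only the local copy, since a
         * concurrent update may change pcp->batch at any point. */
        unsigned long batch = ACCESS_ONCE(pcp->batch);

        free_pcppages_bulk(zone, batch, pcp);
        pcp->count -= batch;
}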
On 05/13/2013 12:20 PM, Pekka Enberg wrote:
Hi Cody,
On Mon, May 13, 2013 at 10:08 PM, Cody P Schafer
wrote:
"Problems" with the current code:
1. there is a lack of synchronization in setting ->high and ->batch in
percpu_pagelist_fraction_sysctl_handler()
2. s
.org/gmane.linux.kernel.mm/99297):
- drop making lock_memory_hotplug() required (old patch #1)
- fix __offline_pages() in the same manner as online_pages() (rientjes)
- make comment regarding pgdat_resize_lock()/unlock() usage more clear
(rientjes)
Cody P Schafer (4):
mm: fix comment referring to
Signed-off-by: Cody P Schafer
Acked-by: David Rientjes
---
include/linux/mmzone.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 5c76737..fc859a0c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
Signed-off-by: Cody P Schafer
---
include/linux/mmzone.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fc859a0c..41557be 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -716,6 +716,9 @@ typedef struct pglist_data
-by: Cody P Schafer
---
mm/memory_hotplug.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a221fac..0bdca10 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -915,6 +915,7 @@ static void node_states_set_node(int node, struct
-off-by: Cody P Schafer
---
mm/memory_hotplug.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0bdca10..b59a695 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1582,7 +1582,11 @@ repeat:
/* removal success
d to get the desired
behavior.
Signed-off-by: Cody P Schafer
---
scripts/checkpatch.pl | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
index b954de5..ee24026 100755
--- a/scripts/checkpatch.pl
+++ b/scripts/checkpatch.pl
@@ -2
On 05/14/2013 09:45 AM, Cody P Schafer wrote:
Using Page*() triggers a camelcase warning, but shouldn't.
Introduced by be987d9f80354e2e919926349282facd74992f90, which added the
other Page flag users.
Pipe ('|') at the end of a grouping doesn't cause the grouping to match
Using Page*() triggers a camelcase warning, but shouldn't.
be987d9f80354e2e919926349282facd74992f90 added a spurious '"' (double
quote) breaking the regex.
Signed-off-by: Cody P Schafer
---
scripts/checkpatch.pl | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
On 05/14/2013 10:13 AM, Joe Perches wrote:
I think you need to delete the ", leave the | and remove the ?
Dunno where that " came from.
Ya. commit goof on my part, see v3
On 04/07/2013 08:23 AM, KOSAKI Motohiro wrote:
> (4/5/13 4:33 PM), Cody P Schafer wrote:
>> In one case while modifying the ->high and ->batch fields of per cpu pagesets
>> we're needlessly using stop_machine() (patches 1 & 2), and in another we
>> don't h
On 04/08/2013 05:20 AM, Gilad Ben-Yossef wrote:
On Fri, Apr 5, 2013 at 11:33 PM, Cody P Schafer wrote:
In free_hot_cold_page(), we rely on pcp->batch remaining stable.
Updating it without being on the cpu owning the percpu pageset
potentially destroys this stability.
Change for_each_cpu()
On 04/07/2013 08:39 AM, KOSAKI Motohiro wrote:
> (4/5/13 4:33 PM), Cody P Schafer wrote:
>> No off-cpu users of the percpu pagesets exist.
>>
>> zone_pcp_update()'s goal is to adjust the ->high and ->mark members of a
>> percpu pageset based on a zone
On 04/06/2013 06:56 PM, Simon Jeons wrote:
Hi Cody,
On 04/06/2013 04:33 AM, Cody P Schafer wrote:
In free_hot_cold_page(), we rely on pcp->batch remaining stable.
Updating it without being on the cpu owning the percpu pageset
potentially destroys this stability.
If cpu is off, can its
On 04/06/2013 06:37 PM, Simon Jeons wrote:
Hi Cody,
On 04/06/2013 04:33 AM, Cody P Schafer wrote:
Creates pageset_set_batch() for use in setup_pageset().
pageset_set_batch() imitates the functionality of
setup_pagelist_highmark(), but uses the boot time
(percpu_pagelist_fraction == 0
On 04/06/2013 06:32 PM, Simon Jeons wrote:
Hi Cody,
On 04/06/2013 04:33 AM, Cody P Schafer wrote:
In one case while modifying the ->high and ->batch fields of per cpu
pagesets
we're needlessly using stop_machine() (patches 1 & 2), and in another
we don't have any
syncroniza
On 04/08/2013 12:26 PM, KOSAKI Motohiro wrote:
(4/8/13 1:32 PM), Cody P Schafer wrote:
On 04/07/2013 08:39 AM, KOSAKI Motohiro wrote:
(4/5/13 4:33 PM), Cody P Schafer wrote:
No off-cpu users of the percpu pagesets exist.
zone_pcp_update()'s goal is to adjust the ->high and ->mark
On 04/08/2013 10:28 AM, Cody P Schafer wrote:
On 04/08/2013 05:20 AM, Gilad Ben-Yossef wrote:
On Fri, Apr 5, 2013 at 11:33 PM, Cody P Schafer
wrote:
In free_hot_cold_page(), we rely on pcp->batch remaining stable.
Updating it without being on the cpu owning the percpu pageset
potentia
On 04/08/2013 12:08 PM, KOSAKI Motohiro wrote:
> (4/8/13 1:16 PM), Cody P Schafer wrote:
>> On 04/07/2013 08:23 AM, KOSAKI Motohiro wrote:
>>> (4/5/13 4:33 PM), Cody P Schafer wrote:
>>>> In one case while modifying the ->high and ->batch fields of per cpu
On 04/08/2013 03:18 PM, KOSAKI Motohiro wrote:
(4/8/13 3:49 PM), Cody P Schafer wrote:
If this turns out to be an issue, schedule_on_each_cpu() could be an
alternative.
no way. schedule_on_each_cpu() is more problematic and it should be removed
in the future.
schedule_on_each_cpu() can o
On 04/08/2013 11:06 PM, Gilad Ben-Yossef wrote:
On Tue, Apr 9, 2013 at 9:03 AM, Gilad Ben-Yossef wrote:
I also wonder whether there could be unexpected interactions between ->high
and ->batch not changing together atomically. For example, could adjusting
this knob cause ->batch to rise enoug
o avoid refactoring setup_pageset() into 2
functions.
--
Changes since v1:
- instead of using on_each_cpu(), use memory barriers (Gilad) and an update
side mutex.
- add "Problem" #3 above, and fix.
- rename function to match naming style of similar function
- move unrelated comment
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a2f2207..6e52e67 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3680,12 +3680,12 @@ void __ref build_all_zonelists(pg_data_t
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 334387e..66b8bc2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4018,7 +4018,7 @@ static void pageset_update_prep
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 27 +++
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6e52e67..c663e62 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4069,22 +4069,25 @@ static void
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 50a277a..a2f2207 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4030,7 +4030,7 @@ static void pageset_set_batch(struct
off-by: Cody P Schafer
---
mm/page_alloc.c | 8
1 file changed, 8 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5877cf0..d259599 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -64,6 +64,9 @@
#include
#include "internal.h"
+/* prevent >1 _upda
Also reproduces his proposed comment.
Signed-off-by: Cody P Schafer
---
mm/page_alloc.c | 19 +++
1 file changed, 19 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d259599..a07bd4c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4007,11 +4007,26