ogic.
Agreed - I will continue to look into this.
Kind regards,
--
Aaron Tomlin
es) {
ret = true;
goto out;
}
Kind regards,
--
Aaron Tomlin
#define ___GFP_IO 0x40
#define ___GFP_FS 0x80
#define ___GFP_NOWARN 0x200
#define ___GFP_RETRY_MAYFAIL 0x400
#define ___GFP_COMP 0x4000
#define ___GFP_HARDWALL 0x2
#define ___GFP_DIRECT_RECLAIM 0x20
#define ___GFP_KSWAPD_RECLAIM 0x40
--
Aaron Tomlin
Hi Michal,
On Thu 2021-03-18 17:16 +0100, Michal Hocko wrote:
> On Mon 15-03-21 16:58:37, Aaron Tomlin wrote:
> > In the situation where direct reclaim is required to make progress for
> > compaction but no_progress_loops is already over the limit of
> > MAX_RECLAIM_RETRIES
In the situation where direct reclaim is required to make progress for
compaction but no_progress_loops is already over the limit of
MAX_RECLAIM_RETRIES, consider invoking the OOM killer.
Signed-off-by: Aaron Tomlin
---
mm/page_alloc.c | 22 ++
1 file changed, 18 insertions
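In rough terms the proposed change amounts to the following fragment of the
allocator slow path (illustrative only, not the literal diff;
compaction_wants_more_reclaim is a placeholder for the condition the patch
tests, while no_progress_loops and MAX_RECLAIM_RETRIES are the existing
mm/page_alloc.c symbols):

	/* illustrative fragment of the __alloc_pages_slowpath() retry logic */
	if (did_some_progress)
		no_progress_loops = 0;
	else
		no_progress_loops++;

	if (compaction_wants_more_reclaim &&
	    no_progress_loops > MAX_RECLAIM_RETRIES)
		goto oom;	/* stop retrying; invoke the OOM killer */

	goto retry;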
On Tue 2020-11-24 13:47 +, Aaron Tomlin wrote:
> On Tue, 24 Nov 2020 at 13:36, Michal Hocko wrote:
> > This, like any other user-visible interface, would be a much easier sell
> > if there was a clear usecase to justify it. I do not see anything
> > controversial about e
ful in an isolated situation. Having said this, I thought that the
aforementioned interface would be helpful to others, in particular, given the
known limitation.
Kind regards,
--
Aaron Tomlin
On Tue, 24 Nov 2020 at 11:26, Michal Hocko wrote:
>
> On Tue 24-11-20 10:58:36, Aaron Tomlin wrote:
> > Each memory-controlled cgroup is assigned a unique ID and the total
> > number of memory cgroups is limited to MEM_CGROUP_ID_MAX.
> >
> > This patch provides the a
children.
For example, the number of memory cgroups can be established by
reading the /sys/fs/cgroup/memory/memory.total_cnt file.
Signed-off-by: Aaron Tomlin
---
mm/memcontrol.c | 18 ++
1 file changed, 18 insertions(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index
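A speculative sketch of how such a read-only file could be wired up in
mm/memcontrol.c (this is not the posted patch; total_memcg_count is a
hypothetical counter maintained wherever memcg IDs are allocated and
released):

/* hypothetical counter of live memory cgroups */
static atomic_long_t total_memcg_count = ATOMIC_LONG_INIT(0);

static u64 mem_cgroup_total_cnt_read(struct cgroup_subsys_state *css,
				     struct cftype *cft)
{
	return atomic_long_read(&total_memcg_count);
}

/* new entry in mem_cgroup_legacy_files[] */
{
	.name = "total_cnt",
	.flags = CFTYPE_ONLY_ON_ROOT,
	.read_u64 = mem_cgroup_total_cnt_read,
},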
128 28-100
> Total: Before=20583249, After=20583085, chg -0.00%
>
> Signed-off-by: Matteo Croce
> ---
Nice idea.
Reviewed-by: Aaron Tomlin
--
Aaron Tomlin
04f5866e41fb70690e28397487d8bd8eea7d712a.
Yes - I'd say this is required here.
Thanks,
--
Aaron Tomlin
+
> +out:
> + return err ? err : count;
> +}
> +KSM_ATTR_WO(force_madvise);
> +
> static ssize_t sleep_millisecs_show(struct kobject *kobj,
> struct kobj_attribute *attr, char *buf)
> {
> @@ -3185,6 +3252,7 @@ static ssize_t full_scans_show(struct kobject *kobj,
> KSM_ATTR_RO(full_scans);
>
> static struct attribute *ksm_attrs[] = {
> + &force_madvise_attr.attr,
> &sleep_millisecs_attr.attr,
> &pages_to_scan_attr.attr,
> &run_attr.attr,
Looks fine to me.
Reviewed-by: Aaron Tomlin
--
Aaron Tomlin
size_t cmplen;
>
> - end = strchr(iter, ',');
> - if (!end)
> - end = iter + strlen(iter);
> + end = strchrnul(iter, ',');
>
> glob = strnchr(iter, end - iter, '*');
> if (glob)
Fair enough.
Acked-by: Aaron Tomlin
--
Aaron Tomlin
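The simplification works because strchrnul() returns a pointer to the
terminating NUL, rather than NULL, when the character is absent. A small
stand-alone illustration (strchrnul() is a GNU extension in userspace; the
kernel carries its own copy in lib/string.c):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *iter = "bio*";		/* no ',' present */

	/* old pattern: strchr() may return NULL, so a fallback is needed */
	const char *end = strchr(iter, ',');
	if (!end)
		end = iter + strlen(iter);

	/* new pattern: strchrnul() points at the NUL when ',' is absent */
	const char *end2 = strchrnul(iter, ',');

	printf("%td %td\n", end - iter, end2 - iter);	/* both print 4 */
	return 0;
}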
debug=,bio*,kmalloc*
Please note that a similar patch was posted by Iliyan Malchev some time ago but
was never merged:
https://marc.info/?l=linux-mm&m=131283905330474&w=2
Signed-off-by: Aaron Tomlin
---
Changes from v2 [2]:
- Add a function and kernel-doc comment
- Refact
On Fri 2018-09-21 16:34 -0700, Andrew Morton wrote:
> On Thu, 20 Sep 2018 21:00:16 +0100 Aaron Tomlin wrote:
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1283,9 +1283,37 @@ slab_flags_t kmem_cache_flags(unsigned int
> > object_size,
> > /*
> >
debug=,bio*,kmalloc*
Please note that a similar patch was posted by Iliyan Malchev some time ago but
was never merged:
https://marc.info/?l=linux-mm&m=131283905330474&w=2
Signed-off-by: Aaron Tomlin
---
Changes from v1 [1]:
- Add appropriate cast to address compiler w
debug=,bio*,kmalloc*
Please note that a similar patch was posted by Iliyan Malchev some time ago but
was never merged:
https://marc.info/?l=linux-mm&m=131283905330474&w=2
Signed-off-by: Aaron Tomlin
---
Documentation/vm/slub.rst | 12 +---
mm/slub.c
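For illustration, matching a cache name against such a comma-separated glob
list could be done with glob_match() from lib/glob.c; the helper below is a
simplified stand-in for the parsing added to kmem_cache_flags(), not the
patch itself:

#include <linux/glob.h>
#include <linux/kernel.h>
#include <linux/string.h>

/* sketch: does "name" match any pattern in e.g. "bio*,kmalloc*"? */
static bool cache_name_matches(const char *patterns, const char *name)
{
	const char *iter = patterns;

	while (*iter) {
		const char *end = strchrnul(iter, ',');
		char pat[64];
		size_t len = min_t(size_t, end - iter, sizeof(pat) - 1);

		memcpy(pat, iter, len);
		pat[len] = '\0';
		if (glob_match(pat, name))
			return true;
		iter = *end ? end + 1 : end;
	}
	return false;
}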
er --hex.
Regards,
--
Aaron Tomlin
+0x7d7
migration_thread +0x265
kthread +0x9e
child_rip +0xa
Signed-off-by: Aaron Tomlin
---
tools/perf/Documentation/perf-report.txt | 4
tools/perf/builtin-report.c | 2 ++
tools/perf/util/srcline.c| 6 --
tools/perf/util/src
c [memstick]
> [] process_one_work+0x1f3/0x4b0
> [] worker_thread+0x48/0x4e0
> [] kthread+0xc9/0xe0
> [] ret_from_fork+0x1f/0x40
> [] 0x
>
> Signed-off-by: Luis R. Rodriguez
> ---
> kernel/module.c | 1 +
> 1 file changed, 1 insertion(+)
Reviewed-by: Aaron Tomlin
--
Aaron Tomlin
> kernel/watchdog_hld.c | 3 +++
> 3 files changed, 13 insertions(+)
Looks fine to me.
Reviewed-by: Aaron Tomlin
--
Aaron Tomlin
On Thu 2016-10-27 09:49 -0400, Steven Rostedt wrote:
[ ... ]
> I also added Jessica to the Cc as I notice she will be the new module
> maintainer: http://lwn.net/Articles/704653/
Hi Jessica,
Any thoughts?
Thanks,
--
Aaron Tomlin
makes set_all_modules_text_ro() skip modules which are going
away too.
Signed-off-by: Aaron Tomlin
---
kernel/module.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/module.c b/kernel/module.c
index ff93ab8..2a383df 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1
to address this theoretical
race. Please let me know your thoughts.
Aaron Tomlin (2):
module: Ensure a module's state is set accordingly during module
coming cleanup code
module: When modifying a module's text ignore modules which are going
away too
kernel/mod
ingly to ensure anyone on the
module_notify_list waiting for a module going away notification will be
notified accordingly.
Signed-off-by: Aaron Tomlin
---
kernel/module.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/module.c b/kernel/module.c
index f57dd63..ff93ab8 100644
--- a/kernel
dules which are going away too.
Signed-off-by: Aaron Tomlin
---
kernel/module.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/module.c b/kernel/module.c
index ff93ab8..09c386b 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -1953,7 +1953,8 @@
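Roughly the shape of set_all_modules_text_ro() at the time, with the
additional MODULE_STATE_GOING test the patch is about (a sketch, not the
literal diff):

void set_all_modules_text_ro(void)
{
	struct module *mod;

	mutex_lock(&module_mutex);
	list_for_each_entry_rcu(mod, &modules, list) {
		/* skip modules not fully initialised, or already going away */
		if (mod->state == MODULE_STATE_UNFORMED ||
		    mod->state == MODULE_STATE_GOING)
			continue;

		frob_text(&mod->core_layout, set_memory_ro);
		frob_text(&mod->init_layout, set_memory_ro);
	}
	mutex_unlock(&module_mutex);
}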
4 +--
> arch/x86/include/asm/irq.h| 4 +--
> arch/x86/kernel/apic/hw_nmi.c | 6 ++---
> include/linux/nmi.h | 63
> ++-
> lib/nmi_backtrace.c | 15 +--
> 6 files changed, 65 insertions(+), 31 deletions(-)
Looks good to me.
Reviewed-by: Aaron Tomlin
--
Aaron Tomlin
current register state in all cases when regs == NULL is passed
> to nmi_cpu_backtrace().
>
> Signed-off-by: Chris Metcalf
> ---
> arch/arm/kernel/smp.c | 9 -
> lib/nmi_backtrace.c | 9 +
> 2 files changed, 9 insertions(+), 9 deletions(-)
Thanks Chris.
Acked-by: Aaron Tomlin
--
Aaron Tomlin
nt proc_watchdog_thresh(struct ctl_table *table, int
> write,
> /*
>* Update the sample period. Restore on failure.
>*/
> + new = ACCESS_ONCE(watchdog_thresh);
> + if (old == new)
> + goto out;
> +
> set_sample_period();
> err = proc_watchdog_update();
> if (err) {
Reviewed-by: Aaron Tomlin
tice.
This patch series adequately addresses the race conditions mentioned
above. Thanks.
Reviewed-by: Aaron Tomlin
't hold mmap_sem in
khugepaged when allocating THP") wouldn't be safe. So let's remove it.
Signed-off-by: Aaron Tomlin
---
mm/huge_memory.c | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index bbac913..490fa81 100644
The "vma" parameter to khugepaged_alloc_page() is unused.
It has to remain unused or the drop read lock 'mmap_sem' optimisation
introduced by commit 8b1645685acf ("thp: introduce khugepaged_prealloc_page
and khugepaged_alloc_page") wouldn't be possible. So let
On Wed 2015-10-28 17:05 +, Aaron Tomlin wrote:
> The "vma" parameter to khugepaged_alloc_page() is unused.
> It has to remain unused or the drop read lock 'mmap_sem' optimisation
> introduced by commit 8b1645685acf ("thp: introduce khugepaged_prealloc_page
>
The "vma" parameter to khugepaged_alloc_page() is unused.
It has to remain unused or the drop read lock 'mmap_sem' optimisation
introduced by commit 8b1645685acf ("mm, THP: don't hold mmap_sem in
khugepaged when allocating THP") wouldn't be possible. So let
.procname = "softlockup_all_cpu_backtrace",
> diff --git a/kernel/watchdog.c b/kernel/watchdog.c
> index f6b32b8..0a23125 100644
> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -112,7 +112,7 @@ static unsigned long soft_lockup_nmi_warn;
. RCU stall detector, and whoever else) to be
> aware of it as well, otherwise it wouldn't make too much sense.
>
> Something to add to TODO I guess.
This could indeed be worth further investigation.
--
Aaron Tomlin
* generating interleaving traces
> + */
> + if (sysctl_hardlockup_all_cpu_backtrace &&
> + !test_and_set_bit(0, &hardlockup_allcpu_dumped))
> + trigger_allbutself_cpu_backtrace();
How does this play when 'softlockup_all_cpu_backtrace' is enabled too?
> +
> + if (hardlockup_panic)
> + panic("Hard LOCKUP");
>
> __this_cpu_write(hard_watchdog_warn, true);
> return;
This does indeed appear similar to Linus commit ed235875
("kernel/watchdog.c: print traces for all cpus on lockup detection");
albeit for the hardlockup detector.
Looks fine to me. Thanks!
Reviewed-by: Aaron Tomlin
f (ret) {
> - for_each_watchdog_cpu(cpu)
> - kthread_unpark(per_cpu(softlockup_watchdog, cpu));
> - }
> put_online_cpus();
>
> return ret;
Reviewed-by: Aaron Tomlin
sable_all_cpus();
> + pr_err("Failed to suspend lockup detectors, disabled\n");
> + watchdog_enabled = 0;
> + }
>
> mutex_unlock(&watchdog_proc_mutex);
>
Reviewed-by: Aaron Tomlin
'watchdog_enabled' as
> + * both lockup detectors are disabled if proc_watchdog_update()
> + * returns an error.
>*/
> err = proc_watchdog_update();
> - if (err)
> - watchdog_enabled = old;
> }
> out:
> mutex_unlock(&watchdog_proc_mutex);
Reviewed-by: Aaron Tomlin
> if (watchdog_running) {
> @@ -767,6 +767,8 @@ static void watchdog_disable_all_cpus(void)
> }
> }
>
> +#ifdef CONFIG_SYSCTL
> +
> /*
> * Update the run state of the lockup detectors.
> */
Reviewed-by: Aaron Tomlin
set_sample_period();
> err = proc_watchdog_update();
> - if (err)
> + if (err) {
> watchdog_thresh = old;
> + set_sample_period();
> + }
> out:
> mutex_unlock(&watchdog_proc_mutex);
> return err;
Reviewed-by: Aaron Tomlin
upts_saved) == hrint)
> - return 1;
> + return true;
>
> __this_cpu_write(hrtimer_interrupts_saved, hrint);
> - return 0;
> + return false;
> }
> #endif
>
Fair enough with regards to readability.
Reviewed-by: Aaron Tomlin
> - pr_info("failed to disable PMU erratum BJ122, BV98, HSD29
> workaround\n");
> + pr_debug("failed to disable PMU erratum BJ122, BV98, HSD29
> workaround\n");
> return 0;
> }
>
Reviewed-by: Aaron Tomlin
--
Aaron Tomlin
* request is active (see related changes in 'proc' handlers).
> + * request is active (see related code in 'proc' handlers).
>*/
> if (watchdog_running && !watchdog_suspended)
> ret = watchdog_park_threads();
> @@ -695,7 +713,7 @@ int watchdog_suspend(void)
> /*
> * Resume the hard and soft lockup detector by unparking the watchdog
> threads.
> */
> -void watchdog_resume(void)
> +void lockup_detector_resume(void)
> {
> mutex_lock(&watchdog_proc_mutex);
>
Reviewed-by: Aaron Tomlin
--
Aaron Tomlin
> watchdog: use park/unpark functions in update_watchdog_all_cpus()
> watchdog: use suspend/resume interface in fixup_ht_bug()
>
> arch/x86/kernel/cpu/perf_event_intel.c | 9 +-
> include/linux/nmi.h| 2 +
> include/linux/watchdog.h |
LL */
> + g = t->group_leader;
> + else if (g) /* continue the outer loop */
> + break;
> + else/* both dead */
> goto unlock;
>
K_UNINTERRUPTIBLE)
> + check_hung_task(t, timeout);
> }
> - /* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
> - if (t->state == TASK_UNINTERRUPTIBLE)
> - check_hung_task(t,
On Tue 2015-03-17 18:09 +0100, Oleg Nesterov wrote:
> On 03/17, Aaron Tomlin wrote:
> >
> > --- a/kernel/hung_task.c
> > +++ b/kernel/hung_task.c
> > @@ -169,7 +169,7 @@ static void check_hung_uninterruptible_tasks(unsigned
> > long timeout)
> >
ess_thread().
Signed-off-by: Aaron Tomlin
---
kernel/hung_task.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index 06db124..e0f90c2 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -169,7 +169,7 @@ static void check_h
Hi Andrew,
Further work is required to improve khungtaskd. I'll do this later
but for now let's start with this trivial clean up.
Aaron Tomlin (1):
hung_task: Change hung_task.c to use for_each_process_thread()
kernel/hung_task.c | 4 ++--
1 file changed, 2 insertions(+), 2
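Simplified shape of the loop after the conversion (batching and
rcu_lock_break() omitted); for_each_process_thread() walks every thread of
every process and, unlike the old do_each_thread()/while_each_thread()
pair, is safe to break out of:

	int max_count = sysctl_hung_task_check_count;
	struct task_struct *g, *t;

	rcu_read_lock();
	for_each_process_thread(g, t) {
		if (!max_count--)
			goto unlock;
		/* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
		if (t->state == TASK_UNINTERRUPTIBLE)
			check_hung_task(t, timeout);
	}
unlock:
	rcu_read_unlock();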
kernel thread executes. Hence, rename it to
> 'kthread_arg'.
>
> Signed-off-by: Alex Dowad
> ---
AFAICT this clean up looks OK and should improve readability. Thanks.
--
Aaron Tomlin
On Mon 2014-11-03 15:06 +0800, Fengguang Wu wrote:
> Hi Aaron,
>
> FYI your patch triggered a BUG on an existing old bug.
Oh right. Good to know :)
> Let's hope it provides more info to debug the problem.
Hopefully :)
--
Aaron Tomlin
p && len < maxlen - 1) {
> if (get_user(c, p++))
> return -EFAULT;
> - if (c == 0 || c == '\n')
> + if (c == 0 || c == '\n' || c == '\r')
>
the noise. This deadlock was produced on a kernel whose
workqueue implementation is significantly less sophisticated.
--
Aaron Tomlin
's
> patches:
>
> BUG: failure at kernel/sched/core.c:2664/schedule_debug()!
> Kernel panic - not syncing: BUG!
>
> Tested-by: James Hogan [metag]
> Acked-by: James Hogan
OK.
Acked-by: Aaron Tomlin
> Aaron: please can you try to get this patch applied before your patch
Commit-ID: d4311ff1a8da48d609db9500f121c15580dfeeb7
Gitweb: http://git.kernel.org/tip/d4311ff1a8da48d609db9500f121c15580dfeeb7
Author: Aaron Tomlin
AuthorDate: Fri, 12 Sep 2014 14:16:17 +0100
Committer: Ingo Molnar
CommitDate: Fri, 19 Sep 2014 12:35:22 +0200
init/main.c: Give
Commit-ID: 0d9e26329b0c9263d4d9e0422d80a0e73268c52f
Gitweb: http://git.kernel.org/tip/0d9e26329b0c9263d4d9e0422d80a0e73268c52f
Author: Aaron Tomlin
AuthorDate: Fri, 12 Sep 2014 14:16:19 +0100
Committer: Ingo Molnar
CommitDate: Fri, 19 Sep 2014 12:35:24 +0200
sched: Add default
Commit-ID: a70857e46dd13e87ae06bf0e64cb6a2d4f436265
Gitweb: http://git.kernel.org/tip/a70857e46dd13e87ae06bf0e64cb6a2d4f436265
Author: Aaron Tomlin
AuthorDate: Fri, 12 Sep 2014 14:16:18 +0100
Committer: Ingo Molnar
CommitDate: Fri, 19 Sep 2014 12:35:23 +0200
sched: Add helper for task
blocked waiting to acquire an sb's s_umount for reading
>
> OK,
>
> > - The umount task is the current owner of the s_umount in
> > question but is waiting for do_sync_work to continue.
> > Thus we hit a deadlock situation.
>
> I don't this
On Wed, Sep 17, 2014 at 08:22:02PM +0200, Oleg Nesterov wrote:
> On 09/17, Aaron Tomlin wrote:
> >
> > Since do_sync_work() is a deferred function it can block indefinitely by
> > design. At present do_sync_work() is added to the global system_wq.
> > As such a deadloc
to avoid
the described deadlock.
Signed-off-by: Aaron Tomlin
Reviewed-by: Alexander Viro
---
fs/sync.c | 13 -
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/fs/sync.c b/fs/sync.c
index bdc729d..df455d0 100644
--- a/fs/sync.c
+++ b/fs/sync.c
@@ -15,6 +15,7 @@
#inc
/?l=linux-kernel&m=127144305403241&w=2
Signed-off-by: Aaron Tomlin
Acked-by: Michael Ellerman
---
arch/powerpc/mm/fault.c| 3 +--
arch/x86/mm/fault.c| 3 +--
include/linux/sched.h | 2 ++
init/main.c| 1 +
kernel/fork.c | 12 +---
cannot be
handled.
This patch checks for a stack overrun and takes appropriate
action: since the damage is already done, there is no point
in continuing.
Signed-off-by: Aaron Tomlin
---
kernel/sched/core.c | 3 +++
lib/Kconfig.debug | 12
2 files changed, 15 insertions(+)
diff
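Condensed sketch of what the series boils down to, close to what was
eventually merged:

/* kernel/fork.c / init/main.c: write the canary when a stack is set up */
void set_task_stack_end_magic(struct task_struct *tsk)
{
	unsigned long *stackend;

	stackend = end_of_stack(tsk);
	*stackend = STACK_END_MAGIC;	/* for overflow detection */
}

/* include/linux/sched.h: the helper from patch 2/3 */
#define task_stack_end_corrupted(task) \
	(*(end_of_stack(task)) != STACK_END_MAGIC)

/* kernel/sched/core.c: the check from patch 3/3 */
static inline void schedule_debug(struct task_struct *prev)
{
#ifdef CONFIG_SCHED_STACK_END_CHECK
	BUG_ON(task_stack_end_corrupted(prev));
#endif
	/* ... existing preemption/atomic sanity checks ... */
}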
This facility is used in a few places so let's introduce
a helper function to improve code readability.
Signed-off-by: Aaron Tomlin
---
arch/powerpc/mm/fault.c| 4 +---
arch/x86/mm/fault.c| 4 +---
include/linux/sched.h | 2 ++
kernel/trace/trace_stack.c | 2 +-
4 files ch
r Zijlstra
Aaron Tomlin (3):
init/main.c: Give init_task a canary
sched: Add helper for task stack page overrun checking
sched: BUG when stack end location is over written
arch/powerpc/mm/fault.c| 5 +
arch/x86/mm/fault.c| 5 +
include/linux/sched.h | 4
i
On Fri, Sep 12, 2014 at 04:04:51PM +1000, Michael Ellerman wrote:
> On Thu, 2014-09-11 at 16:41 +0100, Aaron Tomlin wrote:
> > Currently in the event of a stack overrun a call to schedule()
> > does not check for this type of corruption. This corruption is
> > often silent
On Fri, Sep 12, 2014 at 02:06:57PM +1000, Michael Ellerman wrote:
> On Thu, 2014-09-11 at 16:41 +0100, Aaron Tomlin wrote:
> > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > index a285900..2a8280a 100644
> > --- a/lib/Kconfig.debug
> > +++ b/lib/Kconfig.
On Thu, Sep 11, 2014 at 04:02:45PM +, David Laight wrote:
> From: Aaron Tomlin
> > Currently in the event of a stack overrun a call to schedule()
> > does not check for this type of corruption. This corruption is
> > often silent and can go unnoticed. However once the co
On Thu, Sep 11, 2014 at 05:53:03PM +0200, Peter Zijlstra wrote:
>
> What's with the threading all versions together? Please don't do that --
> also don't post a new version just for this though.
Sorry about that. Noted.
--
Aaron Tomlin
cannot be
handled.
This patch checks for a stack overrun and takes appropriate
action: since the damage is already done, there is no point
in continuing.
Signed-off-by: Aaron Tomlin
---
kernel/sched/core.c | 3 +++
lib/Kconfig.debug | 12
2 files changed, 15 insertions(+)
diff
This facility is used in a few places so let's introduce
a helper function to improve code readability.
Signed-off-by: Aaron Tomlin
---
arch/powerpc/mm/fault.c| 4 +---
arch/x86/mm/fault.c| 4 +---
include/linux/sched.h | 2 ++
kernel/trace/trace_stack.c | 2 +-
4 files ch
_task - Oleg Nesterov
* Fix various code formatting issues - Peter Zijlstra
* Introduce Kconfig option - Peter Zijlstra
Aaron Tomlin (3):
init/main.c: Give init_task a canary
sched: Add helper for task stack page overrun checking
sched: BUG when stack end location is over written
arch/power
/?l=linux-kernel&m=127144305403241&w=2
Signed-off-by: Aaron Tomlin
---
arch/powerpc/mm/fault.c| 3 +--
arch/x86/mm/fault.c| 3 +--
include/linux/sched.h | 2 ++
init/main.c| 1 +
kernel/fork.c | 12 +---
kernel/trace/trace_stac
On Thu, Sep 11, 2014 at 07:23:45AM -0500, Chuck Ebbert wrote:
> On Wed, 10 Sep 2014 14:29:33 +0100
> Aaron Tomlin wrote:
>
> > On Wed, Sep 10, 2014 at 02:26:54AM -0500, Chuck Ebbert wrote:
> > > And has this been tested on parisc and metag, which use STACK_GROWSUP
On Wed, Sep 10, 2014 at 02:26:54AM -0500, Chuck Ebbert wrote:
> On Tue, 9 Sep 2014 10:42:27 +0100
> Aaron Tomlin wrote:
>
> > +void task_stack_end_magic(struct task_struct *tsk)
> > +{
> > + unsigned long *stackend;
> > +
> > + stackend
e the damage
is already done, there is no point in continuing.
Changes since v1:
* Rebased against v3.17-rc4
* Add a canary to init_task - Oleg Nesterov
* Fix various code formatting issues - Peter Zijlstra
* Introduce Kconfig option - Peter Zijlstra
Aaron Tomlin (3):
init/main.c: Give ini
/?l=linux-kernel&m=127144305403241&w=2
Signed-off-by: Aaron Tomlin
---
arch/powerpc/mm/fault.c| 3 +--
arch/x86/mm/fault.c| 3 +--
include/linux/sched.h | 2 ++
init/main.c| 1 +
kernel/fork.c | 12 +---
kernel/trace/trace_stac
cannot be
handled.
This patch checks for a stack overrun and takes appropriate
action: since the damage is already done, there is no point
in continuing.
Signed-off-by: Aaron Tomlin
---
kernel/sched/core.c | 4
lib/Kconfig.debug | 12
2 files changed, 16 insertions(+)
diff
This facility is used in a few places so let's introduce
a helper function to improve code readability.
Signed-off-by: Aaron Tomlin
---
arch/powerpc/mm/fault.c| 4 +---
arch/x86/mm/fault.c| 4 +---
include/linux/sched.h | 2 ++
kernel/trace/trace_stack.c | 2 +-
4 files ch
/?l=linux-kernel&m=127144305403241&w=2
Signed-off-by: Aaron Tomlin
---
arch/powerpc/mm/fault.c| 3 +--
arch/x86/mm/fault.c| 3 +--
include/linux/sched.h | 2 ++
init/main.c| 1 +
kernel/fork.c | 12 +---
kernel/trace/trace_stac
nuing.
Changes since v1:
* Rebased against v3.17-rc4
* Add a canary to init_task - Oleg Nesterov
* Fix various code formatting issues - Peter Zijlstra
* Introduce Kconfig option - Peter Zijlstra
Aaron Tomlin (3):
init/main.c: Give init_task a canary
sched: Add helper for task stack page ov
cannot be
handled.
This patch checks for a stack overrun and takes appropriate
action: since the damage is already done, there is no point
in continuing.
Signed-off-by: Aaron Tomlin
---
kernel/sched/core.c | 4
lib/Kconfig.debug | 12
2 files changed, 16 insertions(+)
diff
This facility is used in a few places so let's introduce
a helper function to improve code readability.
Signed-off-by: Aaron Tomlin
---
arch/powerpc/mm/fault.c| 4 +---
arch/x86/mm/fault.c| 4 +---
include/linux/sched.h | 2 ++
kernel/trace/trace_stack.c | 2 +-
4 files ch
On Thu, Sep 04, 2014 at 05:32:31PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 04, 2014 at 03:50:24PM +0100, Aaron Tomlin wrote:
> > Currently in the event of a stack overrun a call to schedule()
> > does not check for this type of corruption. This corruption is
> > of
On Thu, Sep 04, 2014 at 05:02:34PM +0200, Oleg Nesterov wrote:
> On 09/04, Aaron Tomlin wrote:
> >
> > +#define task_stack_end_corrupted(task) \
> > + (*(end_of_stack(task)) != STACK_END_MAGIC)
>
> and it is always used along with "tsk != init_task"
cannot be
handled.
The first patch provides a helper to determine the integrity
of the canary. While the second patch checks for a stack
overrun and takes appropriate action: since the damage is
already done, there is no point in continuing.
Aaron Tomlin (2):
sched: Add helper for task stack
cannot be
handled.
This patch checks for a stack overrun and takes appropriate
action: since the damage is already done, there is no point
in continuing.
Signed-off-by: Aaron Tomlin
---
kernel/sched/core.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched
This facility is used in a few places so let's introduce
a helper function to improve readability.
Signed-off-by: Aaron Tomlin
---
arch/powerpc/mm/fault.c| 6 ++
arch/x86/mm/fault.c| 5 +
include/linux/sched.h | 3 +++
kernel/trace/trace_stack.c | 5 ++---
4
> clear_bit(0, &soft_lockup_nmi_warn);
> /* Barrier to sync with other cpus */
> - smp_mb__after_clear_bit();
> + smp_mb__after_atomic();
> }
>
> if (sof
enabled, let's check
for this condition and take appropriate action.
Note: init_task doesn't get its stack end location
set to STACK_END_MAGIC.
Signed-off-by: Aaron Tomlin
---
kernel/trace/trace_stack.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/trace/trace_stack.
On Wed, Mar 19, 2014 at 06:56:34PM -0700, Andi Kleen wrote:
> >
> > Why are we fixing this?
>
> Also sysctl writes are root only anyways.
>
> Protecting root against root? Seems odd.
>
> -Andi
I agree. I don't see the point here.
Regards,
--
Aaron Tomlin
Commit-ID: 70e0ac5f3683f48a8174a6f231a0f3097217c189
Gitweb: http://git.kernel.org/tip/70e0ac5f3683f48a8174a6f231a0f3097217c189
Author: Aaron Tomlin
AuthorDate: Mon, 27 Jan 2014 09:00:57 +
Committer: Ingo Molnar
CommitDate: Fri, 31 Jan 2014 09:24:03 +0100
hung_task/Documentation
Commit-ID: 270750dbc18a71b23d660df110e433ff9616a2d4
Gitweb: http://git.kernel.org/tip/270750dbc18a71b23d660df110e433ff9616a2d4
Author: Aaron Tomlin
AuthorDate: Mon, 20 Jan 2014 17:34:13 +
Committer: Ingo Molnar
CommitDate: Sat, 25 Jan 2014 12:13:33 +0100
hung_task: Display every
Commit-ID: 2397efb1bb17595b35f31abb40d95074ebc04f1b
Gitweb: http://git.kernel.org/tip/2397efb1bb17595b35f31abb40d95074ebc04f1b
Author: Aaron Tomlin
AuthorDate: Mon, 20 Jan 2014 17:34:12 +
Committer: Ingo Molnar
CommitDate: Sat, 25 Jan 2014 08:59:53 +0100
sysctl: Add neg_one as a
s to 0 has since changed.
When set, the reclaim code does not initiate swap until the
amount of free pages and file-backed pages is less than the
high water mark in a zone.
Let's update the documentation to reflect this.
Signed-off-by: Aaron Tomlin
Acked-by: Rik van Riel
Acked-by:
Add neg_one to the list of standard constraints.
Signed-off-by: Aaron Tomlin
Acked-by: Rik van Riel
Acked-by: David Rientjes
---
kernel/sysctl.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 34a6047..dd531a6 100644
--- a/kernel/sysctl.c
+++ b
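A sketch of the idea as kernel/sysctl.c looked at the time (the named
constants have since been replaced by the SYSCTL_* pointer macros): a
shared "-1" constraint that any ctl_table entry can point extra1 at.

static int neg_one = -1;

/* consumer, from the second patch in the series */
{
	.procname	= "hung_task_warnings",
	.data		= &sysctl_hung_task_warnings,
	.maxlen		= sizeof(int),
	.mode		= 0644,
	.proc_handler	= proc_dointvec_minmax,
	.extra1		= &neg_one,	/* -1 means "warn without limit" */
},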
documentation on hung_task_warnings
Changes since v3:
- Simplify the commit message (Rik van Riel and David Rientjes)
- Document hung_task_* sysctl parameters (David Rientjes)
Aaron Tomlin (2):
sysctl: Make neg_one a standard constraint
hung_task: Display every hung task warning
Documentation
possible for hung_task_warnings
to accept a special value to print an unlimited
number of backtraces when khungtaskd detects hung
tasks.
The special value is -1. To use this value it is
necessary to change types from ulong to int.
Signed-off-by: Aaron Tomlin
Reviewed-by: Rik van Riel
Acked-by
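The consumer side then reduces to something like the following sketch (the
function name stands in for the logic inside check_hung_task()): -1 keeps
warning forever, any positive value counts down to zero.

int __read_mostly sysctl_hung_task_warnings = 10;

static void report_hung_task(struct task_struct *t, unsigned long timeout)
{
	if (!sysctl_hung_task_warnings)
		return;		/* warning budget exhausted */

	if (sysctl_hung_task_warnings > 0)
		sysctl_hung_task_warnings--;	/* -1 is never decremented */

	pr_err("INFO: task %s:%d blocked for more than %lu seconds.\n",
	       t->comm, t->pid, timeout);
	sched_show_task(t);
}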
assed to cache_alloc_node
If the nodeid is > num_online_nodes() this can cause an
Oops and a panic(). The purpose of this patch is to assert
when this condition is true, to aid debugging, rather than fail
later with a random NULL pointer dereference or page fault.
Signed-off-by: Aaron Tomlin
Reviewed-
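A minimal sketch of the intent (the posted patch checked against
num_online_nodes(); the MAX_NUMNODES bound and the use of VM_BUG_ON() below
are illustrative choices, not the submitted code):

static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
				  int nodeid)
{
	void *obj = NULL;

	/* fail loudly on a bogus node id instead of oopsing later */
	VM_BUG_ON(nodeid < 0 || nodeid >= MAX_NUMNODES);

	/* ... existing per-node allocation path fills in obj ... */
	return obj;
}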