Hello, James.
Thank you for the meticulous test and review.
On Fri, 2017-08-11 at 18:02 +0100, James Morse wrote:
> Hi Hoeun,
>
> On 07/08/17 06:09, Hoeun Ryu wrote:
> >
> > Commit 0ee5941 : (x86/panic: replace smp_send_stop() with kdump friendly
> > version in panic path) introduced crash_smp_send_stop()
      crash_smp_send_stop() <= save crash dump for nonpanic cores
* crash_kexec_post_notifiers : true
panic()
  crash_smp_send_stop() <= save crash dump for nonpanic cores
  __crash_kexec()
    machine_crash_shutdown()
      crash_smp_send_stop() <= just return.
Signed-off-by: Hoeun Ryu
Revie
Hello, All.
Would you please review this patch?
I haven't had any response to this patch.
Thank you.
On Tue, 2017-08-08 at 10:22 +0900, Hoeun Ryu wrote:
> Commit 0ee5941 : (x86/panic: replace smp_send_stop() with kdump friendly
> version in panic path) introduced crash_smp_send_stop
+CC
sergey.senozhatsky.w...@gmail.com
pmla...@suse.com
Please review this patch.
> -Original Message-
> From: Hoeun Ryu [mailto:hoeun@lge.com]
> Sent: Tuesday, June 05, 2018 11:19 AM
> To: Andrew Morton ; Kees Cook
> ; Andi Kleen ; Borislav Petkov
> ; Thomas
From: Hoeun Ryu
Use printk_safe_flush_on_panic() in nmi_trigger_cpumask_backtrace().
nmi_trigger_cpumask_backtrace() can be called in NMI context. For example, the
function is called from watchdog_overflow_callback() if the hardlockup
backtrace flag (sysctl_hardlockup_all_cpu_backtrace) is true
From: Hoeun Ryu
Some SoCs like i.MX6DL/QL have only one muxed SPI for the multi-core system.
On these systems, a CPU can be interrupted by the overflow irq, but it is
possible that the overflow actually occurred on another CPU.
This patch broadcasts the irq using smp_call_function_single_async() so that
08:20:49AM +0900, Hoeun Ryu wrote:
> > Thank you for the reply.
> >
> > > -Original Message-
> > > From: Mark Rutland [mailto:mark.rutl...@arm.com]
> > > Sent: Thursday, May 10, 2018 7:21 PM
> > > To: Hoeun Ryu
> > > Cc: Will Deacon ; Hoe
I appreciate the detailed correction.
I will reflect the corrections in the next patch.
Also, the explanation in the code will be fixed.
> -Original Message-
> From: Petr Mladek [mailto:pmla...@suse.com]
> Sent: Wednesday, May 30, 2018 5:32 PM
> To: Sergey Senozhatsky
>
> -Original Message-
> From: Petr Mladek [mailto:pmla...@suse.com]
> Sent: Wednesday, May 30, 2018 5:32 PM
> To: Sergey Senozhatsky
> Cc: Hoeun Ryu ; Sergey Senozhatsky
> ; Steven Rostedt ;
> Hoeun Ryu ; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH] printk
From: Hoeun Ryu
Make printk_safe_flush() safe in NMI context.
nmi_trigger_cpumask_backtrace() can be called in NMI context. For example, the
function is called from watchdog_overflow_callback() if the hardlockup
backtrace flag (sysctl_hardlockup_all_cpu_backtrace) is true and
From: Hoeun Ryu
Many console device drivers hold the uart_port->lock spinlock with irq enabled
(using spin_lock()) while the device drivers are writing characters to their
devices, but the device drivers just try to hold the spin lock (using
spin_trylock()) if "oops_in_progress"
From: Hoeun Ryu
Make printk_safe_flush() safe in NMI context. printk_safe_flush_on_panic() is
folded into this function. The prototype of printk_safe_flush() is changed to
"void printk_safe_flush(bool panic)".
nmi_trigger_cpumask_backtrace() can be called in NMI context. For e
From: Hoeun Ryu
Make printk_safe_flush() safe in NMI context.
nmi_trigger_cpumask_backtrace() can be called in NMI context. For example, the
function is called from watchdog_overflow_callback() if the hardlockup
backtrace flag (sysctl_hardlockup_all_cpu_backtrace) is true and
> -Original Message-
> From: Sergey Senozhatsky [mailto:sergey.senozhatsky.w...@gmail.com]
> Sent: Tuesday, May 29, 2018 9:13 PM
> To: Hoeun Ryu
> Cc: Petr Mladek ; Sergey Senozhatsky
> ; Steven Rostedt ;
> Hoeun Ryu ; linux-kernel@vger.kernel.org
> Subject:
From: Hoeun Ryu
Many console device drivers hold the uart_port->lock spinlock with irq disabled
(using spin_lock_irqsave()) while the device drivers are writing characters to
their devices, but the device drivers just try to hold the spin lock (using
spin_trylock_irqsave()) instead
I misunderstood the cause of the deadlock.
I sent v2, which fixes the commit message describing the cause of the deadlock.
Please ignore this version and review v2.
Thank you.
> -Original Message-
> From: Steven Rostedt [mailto:rost...@goodmis.org]
> Sent: Tuesday, June 05, 2018 10:44 AM
> T
From: Hoeun Ryu
Some SoCs like i.MX6DL/QL have only one muxed SPI for the multi-core system.
On these systems, a CPU can be interrupted by the overflow irq, but it is
possible that the overflow actually occurred on another CPU.
This patch broadcasts the irq using smp_call_function() so that other CPUs
> -Original Message-
> From: Peter Zijlstra [mailto:pet...@infradead.org]
> Sent: Thursday, May 10, 2018 7:17 PM
> To: Hoeun Ryu
> Cc: mi...@kernel.org; aaron...@intel.com; adobri...@gmail.com;
> frede...@kernel.org; ying.hu...@intel.com; linux-kernel@vger.kernel.org
> Subject: Re: smp_call_f
> On Mar 3, 2017, at 1:02 PM, Kees Cook wrote:
>
>> On Thu, Mar 2, 2017 at 7:00 AM, Hoeun Ryu wrote:
>> This RFC is a quick and dirty arm64 implementation for Kees Cook's RFC for
>> rare_write infrastructure [1].
>
> Awesome! :)
>
>> This implement
> On Mar 4, 2017, at 5:50 AM, Andy Lutomirski wrote:
>
>> On Thu, Mar 2, 2017 at 7:00 AM, Hoeun Ryu wrote:
>> +unsigned long __rare_write_rw_alias_start = TASK_SIZE_64 / 4;
>> +
>> +__always_inline unsigned long __arch_rare_write_map(void)
>> +{
>> +
It passes LKDTM's rare write test.
[1] : http://www.openwall.com/lists/kernel-hardening/2017/02/27/5
[2] : https://lkml.org/lkml/2017/2/22/254
Signed-off-by: Hoeun Ryu
---
arch/Kconfig | 4 ++
arch/arm64/Kconfig | 2 +
arch/arm64/include/asm/pgtable.h | 12
9, 2017 at 07:04:06PM +0900, Hoeun Ryu wrote:
>
>>>> @@ -3396,8 +3399,11 @@ static noinline int do_init_module(struct module
>>>> *mod)
>>>>
>>>> do_mod_ctors(mod);
>>>> /* Start the module */
>>>> -if (mod->ini
> On Feb 15, 2017, at 5:36 AM, Kees Cook wrote:
>
>> On Mon, Feb 13, 2017 at 5:44 PM, Hoeun Ryu wrote:
>>
>>
>>>> On Feb 14, 2017, at 4:24 AM, Kees Cook wrote:
>>>>
>>>>> On Mon, Feb 13, 2017 at 10:33 AM, Kees Cook wrote:
>
On Fri, Feb 10, 2017 at 9:05 PM, Michal Hocko wrote:
> On Fri 10-02-17 17:32:07, Hoeun Ryu wrote:
> [...]
>> +static int free_vm_stack_cache(unsigned int cpu)
>> +{
>> + struct vm_struct **cached_vm_stacks = per_cpu_ptr(cached_stacks, cpu);
>> + int
On Fri, Feb 10, 2017 at 11:41 PM, Michal Hocko wrote:
> On Fri 10-02-17 23:31:41, Hoeun Ryu wrote:
>> On Fri, Feb 10, 2017 at 9:05 PM, Michal Hocko wrote:
>> > On Fri 10-02-17 17:32:07, Hoeun Ryu wrote:
> [...]
>> >> static unsigned long *alloc_thread_s
On Sat, Feb 11, 2017 at 12:32 AM, Thomas Gleixner wrote:
> On Fri, 10 Feb 2017, Michal Hocko wrote:
>> On Fri 10-02-17 23:31:41, Hoeun Ryu wrote:
>> > On Fri, Feb 10, 2017 at 9:05 PM, Michal Hocko wrote:
>> > > On Fri 10-02-17 17:32:07, Hoeun Ryu wrote:
>>
On Sat, Feb 11, 2017 at 2:51 AM, Thomas Gleixner wrote:
> On Sat, 11 Feb 2017, Hoeun Ryu wrote:
>> On Sat, Feb 11, 2017 at 12:32 AM, Thomas Gleixner wrote:
>> > On Fri, 10 Feb 2017, Michal Hocko wrote:
>> >> On Fri 10-02-17 23:31:41, Hoeun Ryu wrote:
>>
cpu hotplug callback to free the cached stacks when a cpu
goes offline, the pages of the cached stacks are not wasted.
Signed-off-by: Hoeun Ryu
---
v4:
use CPUHP_BP_PREPARE_DYN state for cpuhp setup
fix minor coding style
v3:
fix misuse of per-cpu api
fix location of function definition within
> On Feb 11, 2017, at 5:31 PM, Thomas Gleixner wrote:
>
>> On Sat, 11 Feb 2017, Hoeun Ryu wrote:
>> #define NR_CACHED_STACKS 2
>> static DEFINE_PER_CPU(struct vm_struct *, cached_stacks[NR_CACHED_STACKS]);
>> +
>> +static int free_vm_stack_cache(unsigned int
cpu hotplug callback to free the cached stacks when a cpu
goes offline, the pages of the cached stacks are not wasted.
Signed-off-by: Hoeun Ryu
---
v5:
- wrap cpuhp_setup_state() in a new function, vm_stack_cache_init() which
actually do nothing when !CONFIG_VMAP_STACK
- add __maybe_unused to
On Sat, Feb 11, 2017 at 6:56 PM, Hoeun Ryu wrote:
>
>> On Feb 11, 2017, at 5:31 PM, Thomas Gleixner wrote:
>>
>>> On Sat, 11 Feb 2017, Hoeun Ryu wrote:
>>> #define NR_CACHED_STACKS 2
>>> static DEFINE_PER_CPU(struct vm_struct *, cached_stacks
applied to the wrong git tree, please drop us a note to
>> help improve the system]
>>
>> url:
>> https://github.com/0day-ci/linux/commits/Hoeun-Ryu/fork-free-vmapped-stacks-in-cache-when-cpus-are-offline/20170211-183401
>> config: ia64-allmodconfig (attached as
Signed-off-by: Hoeun Ryu
---
lib/test_user_copy.c | 17 +
1 file changed, 17 insertions(+)
diff --git a/lib/test_user_copy.c b/lib/test_user_copy.c
index 0ecef3e..54bd898 100644
--- a/lib/test_user_copy.c
+++ b/lib/test_user_copy.c
@@ -41,11 +41,18 @@ static int __init
cpu hotplug callback to free the cached stacks when a cpu
goes offline, the pages of the cached stacks are not wasted.
Signed-off-by: Hoeun Ryu
Acked-by: Michal Hocko
---
v6:
- rollback to v4, completely identical.
v5:
- wrap cpuhp_setup_state() in a new function, vm_stack_cache_init() which
> On Feb 14, 2017, at 4:24 AM, Kees Cook wrote:
>
>> On Mon, Feb 13, 2017 at 10:33 AM, Kees Cook wrote:
>>> On Sat, Feb 11, 2017 at 10:13 PM, Hoeun Ryu wrote:
>>> In the hardened usercopy, the destination buffer will be zeroed if
>>> copy_from_user/ge
cpu hotplug callback to free the cached stacks when a cpu
goes offline, the pages of the cached stacks are not wasted.
Signed-off-by: Hoeun Ryu
Acked-by: Michal Hocko
Reviewed-by: Thomas Gleixner
---
v7:
- identical to v6.
- add Reviewed-by: Thomas Gleixner
v6:
- rollback to v4, completely
in module_init/exit.
0004 patch is an example for dynamic init/deinit of a subsystem.
0005 patch is an example for __ro_mostly_after_init section modified during
module_init/exit.
0006/0007 patches are fixes for arm64 kernel mapping.
Hoeun Ryu (7):
arch: add __ro_mostly_after_init
and writable temporarily only during module_init/exit and dynamic
de/registration for a subsystem.
Signed-off-by: Hoeun Ryu
---
include/asm-generic/sections.h| 1 +
include/asm-generic/vmlinux.lds.h | 10 ++
include/linux/cache.h | 11 +++
3 files changed, 22
e via the set_ro_mostly_after_init_rw/ro pair. Now
they can be read-only except during the procedure.
Signed-off-by: Hoeun Ryu
---
security/selinux/hooks.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index 9a
`__ro_mostly_after_init` is almost like `__ro_after_init`. The section is
read-only after kernel init, the same as `__ro_after_init`. This patch makes
the `__ro_mostly_after_init` section read-write temporarily only during
module_init/module_exit.
Signed-off-by: Hoeun Ryu
---
kernel/module.c | 10
It would be good to mark cpuhp state objects as `__ro_mostly_after_init`.
They cannot simply be marked `__ro_after_init` because they should be
writable during module_init/exit. Now they can be read-only except during
module_init/exit.
Signed-off-by: Hoeun Ryu
---
kernel/cpu.c
.
Signed-off-by: Hoeun Ryu
---
include/linux/init.h | 6 ++
init/main.c | 24
2 files changed, 30 insertions(+)
diff --git a/include/linux/init.h b/include/linux/init.h
index 79af096..d68e4f7 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -131,6
Map rodata sections separately for the new __ro_mostly_after_init section.
The memory attribute of the __ro_mostly_after_init section can be changed
later, so we need a dedicated vmalloced region for the set_memory_rw/ro API.
Signed-off-by: Hoeun Ryu
---
arch/arm64/mm/mmu.c | 30
The memory attribute of the `__ro_mostly_after_init` section should be changed
via set_memory_rw/ro, which does not work on vm areas that lack VM_ALLOC.
Add this function to map the `__ro_mostly_after_init` section with the
VM_ALLOC flag set in map_kernel.
Signed-off-by: Hoeun Ryu
---
arc
cpu hotplug callback to free the cached stacks when a cpu
goes offline, the pages of the cached stacks are not wasted.
Signed-off-by: Hoeun Ryu
---
Changes in v2:
remove cpuhp callback for `starup`, only `teardown` callback is installed.
kernel/fork.c | 21 +
1 file changed
Introduce NR_VMAP_STACK_CACHE so that the number of cached stacks for
virtually mapped kernel stacks is configurable through the Kbuild system.
The default value is 2.
Signed-off-by: Hoeun Ryu
---
arch/Kconfig | 8
kernel/fork.c | 2 +-
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git
On Thu, Feb 9, 2017 at 1:22 PM, Eric Biggers wrote:
> Hi Hoeun,
>
> On Thu, Feb 09, 2017 at 01:03:46PM +0900, Hoeun Ryu wrote:
>> +static int free_vm_stack_cache(unsigned int cpu)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < NR_CACHED_STACKS
On Thu, Feb 9, 2017 at 5:38 PM, Michal Hocko wrote:
> On Thu 09-02-17 13:03:46, Hoeun Ryu wrote:
>> Using virtually mapped stack, kernel stacks are allocated via vmalloc.
>> In the current implementation, two stacks per cpu can be cached when
>> tasks are freed and the c
On Thu, Feb 9, 2017 at 5:40 PM, Michal Hocko wrote:
> On Thu 09-02-17 13:03:47, Hoeun Ryu wrote:
>> Introducing NR_VMAP_STACK_CACHE, the number of cached stacks for virtually
>> mapped kernel stack can be configurable using Kbuild system.
>> default value is 2.
>
>
cpu hotplug callback to free the cached stacks when a cpu
goes offline, the pages of the cached stacks are not wasted.
Signed-off-by: Hoeun Ryu
---
Changes in v3:
fix misuse of per-cpu api
fix location of function definition within CONFIG_VMAP_STACK
Changes in v2:
remove cpuhp callback for
In this new implementation, the array for the cached stacks is dynamically
allocated and freed by cpu hotplug callbacks, and the cached stacks are freed
when a cpu goes down. Setup for cpu hotplug is established in fork_init().
Signed-off-by: Hoeun Ryu
---
kernel/fork.c | 81
When a cpu comes up, a predefined number of stacks is allocated and cached
immediately.
Signed-off-by: Hoeun Ryu
---
kernel/fork.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/kernel/fork.c b/kernel/fork.c
index 50de6cf..ee4067d 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
Introduce NR_VMAP_STACK_CACHE so that the number of cached stacks for
virtually mapped kernel stacks is configurable through the Kbuild system.
The default value is 2.
Signed-off-by: Hoeun Ryu
---
arch/Kconfig | 8
kernel/fork.c | 2 +-
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git
On Sat, Feb 4, 2017 at 12:39 AM, Michal Hocko wrote:
> On Sat 04-02-17 00:30:05, Hoeun Ryu wrote:
>> Using virtually mapped stack, kernel stacks are allocated via vmalloc.
>> In the current implementation, two stacks per cpu can be cached when
>> tasks are freed and the c
On Sat, Feb 4, 2017 at 2:52 AM, Andy Lutomirski wrote:
> On Fri, Feb 3, 2017 at 8:42 AM, Hoeun Ryu wrote:
>> On Sat, Feb 4, 2017 at 12:39 AM, Michal Hocko wrote:
>>> On Sat 04-02-17 00:30:05, Hoeun Ryu wrote:
>>>> Using virtually mapped stack, kernel stacks are
On Sun, Feb 5, 2017 at 7:18 PM, Michal Hocko wrote:
> On Sat 04-02-17 11:01:32, Hoeun Ryu wrote:
>> On Sat, Feb 4, 2017 at 2:52 AM, Andy Lutomirski wrote:
>> > On Fri, Feb 3, 2017 at 8:42 AM, Hoeun Ryu wrote:
>> >> On Sat, Feb 4, 2017 at 12:39 AM, Michal Hocko wr
rbtree by accident. This flag can also be used by other vmalloc APIs to
specify that the area will never go away.
This makes remove_vm_area() more robust against other kinds of errors
(e.g. programming errors).
Signed-off-by: Hoeun Ryu
---
v2:
- update changelog
- add description to VM_STATIC
rbtree by accident. This flag can also be used by other vmalloc APIs to
specify that the area will never go away.
This makes remove_vm_area() more robust against other kinds of errors
(e.g. programming errors).
Signed-off-by: Hoeun Ryu
---
v2:
- update changelog
- add description to VM_STATIC
> On Apr 18, 2017, at 3:59 PM, Michal Hocko wrote:
>
>> On Tue 18-04-17 14:48:39, Hoeun Ryu wrote:
>> vm_area_add_early/vm_area_register_early() are used to reserve vmalloc area
>> during boot process and those virtually mapped areas are never unmapped.
>> So `OR`
rbtree by accident.
Signed-off-by: Hoeun Ryu
---
include/linux/vmalloc.h | 1 +
mm/vmalloc.c| 9 ++---
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 46991ad..3df53fc 100644
--- a/include/linux/vmalloc.h
+++ b
TASK_SIZE_64 / 4 + kaslr_offset().
It passes LKDTM's rare write test.
[1] : http://www.openwall.com/lists/kernel-hardening/2017/02/27/5
[2] : https://lkml.org/lkml/2017/3/29/704
Signed-off-by: Hoeun Ryu
---
arch/arm64/Kconfig | 2 +
arch/arm64/include/asm/pgtable.h | 4 ++
arch
> On Mar 31, 2017, at 4:38 AM, Kees Cook wrote:
>
>> On Thu, Mar 30, 2017 at 7:39 AM, Hoeun Ryu wrote:
>> This patch might be a part of Kees Cook's rare_write infrastructure series
>> for [1] for arm64 architecture.
>>
>> This implementation is based
> On Apr 12, 2017, at 3:02 PM, Christoph Hellwig wrote:
>
>> On Wed, Apr 12, 2017 at 02:01:59PM +0900, Hoeun Ryu wrote:
>> vm_area_add_early/vm_area_register_early() are used to reserve vmalloc area
>> during boot process and those virtually mapped areas are never unmapp
> On Apr 13, 2017, at 1:17 PM, Anshuman Khandual
> wrote:
>
>> On 04/12/2017 10:31 AM, Hoeun Ryu wrote:
>> vm_area_add_early/vm_area_register_early() are used to reserve vmalloc area
>> during boot process and those virtually mapped areas are never unmapped.
>&
.
With 64BIT_ATOMIC_ACCESS true, some kernel code that accesses 64bit
variables can be optimized by omitting the seqlock or a mimic of it.
Also with 64BIT_ATOMIC_ALIGNED_ACCESS true, the 64bit atomic access is
guaranteed only when the address is 64bit aligned.
Signed-off-by: Hoeun Ryu
---
a
I think I can make more examples (mostly removing seqlock
to access the 64bit variables on the machines) if this approach is
accepted.
Hoeun Ryu (3):
arch: add 64BIT_ATOMIC_ACCESS to support 64bit atomic access on 32bit
machines
arm: enable 64BIT_ATOMIC(_ALIGNED)_ACCESS on LPAE enabled machines
sche
__attribute__((aligned(8))) in the way of #ifdef.
Signed-off-by: Hoeun Ryu
---
arch/Kconfig | 20
1 file changed, 20 insertions(+)
diff --git a/arch/Kconfig b/arch/Kconfig
index 21d0089..1def331 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -115,6 +115,26 @@ config UPROBES
'min_vruntime_copy' variable is used for synchronization or not.
And align 'min_vruntime' by 8 if 64BIT_ATOMIC_ALIGNED_ACCESS is true,
because a 64BIT_ATOMIC_ALIGNED_ACCESS enabled system can access the variable
atomically only when it is aligned.
Signed-off-by: Hoeun Ryu
---
kernel/sc
configuration of
PHYS_OFFSET > PAGE_OFFSET, which can happen because it depends on the
reserved area for the crash kernel. Reading TTBCR and using the value to OR
other bit fields might be risky because TTBCR doesn't have a reset value.
Suggested-by: Robin Murphy
Signed-off-by: Hoeun Ryu
---
* v1:
Hello, Russell.
Would you please review this patch?
Thank you.
> On Jun 8, 2017, at 11:16 AM, Hoeun Ryu wrote:
>
> omap_uart_phys, omap_uart_virt and omap_uart_lsr reside in .data section
> and it's right implementation. But because of this, we cannot enable
> CONFIG_DEBUG_
Hello, Russell and Robin.
Would you please review this patch?
Thank you.
> On Jun 7, 2017, at 11:39 AM, Hoeun Ryu wrote:
>
> Reading TTBCR in early boot stage might return the value of the previous
> kernel's configuration, especially in case of kexec. For example, if
>
configuration of
PHYS_OFFSET > PAGE_OFFSET, which can happen because it depends on the
reserved area for the crash kernel. Reading TTBCR and using the value to OR
other bit fields might be risky because TTBCR doesn't have a reset value.
Acked-by: Russell King
Suggested-by: Robin Murphy
Signed-off-by: Hoeun Ryu
INCLUDE is included in
the other kernel parts like arch/arm/kernel/*
Signed-off-by: Hoeun Ryu
---
* mailed to relevant recipients, no response yet from them
- TO=Tony Lindgren
- CC=linux-o...@vger.kernel.org
* identical to previous patch
arch/arm/include/debug/omap2plus.S | 11 +
ction when
it's included in the decompressor.
Signed-off-by: Hoeun Ryu
---
* mailed to relevant recipients, no response yet from them.
- add TO=Tony Lindgren
- add CC=linux-o...@vger.kernel.org
* identical to previous patch
arch/arm/Kconfig.debug | 3 +--
1 file changed, 1 insertion(+),
On Tue, 2017-06-13 at 22:27 -0700, Tony Lindgren wrote:
> Hi,
>
> * Hoeun Ryu [170612 18:18]:
> >
> > --- a/arch/arm/include/debug/omap2plus.S
> > +++ b/arch/arm/include/debug/omap2plus.S
> > @@ -58,11 +58,22 @@
> >
> > #d
INCLUDE is included in
the other kernel parts like arch/arm/kernel/*
Signed-off-by: Hoeun Ryu
---
arch/arm/include/debug/omap2plus.S | 11 +++
1 file changed, 11 insertions(+)
diff --git a/arch/arm/include/debug/omap2plus.S
b/arch/arm/include/debug/omap2plus.S
index 6d867ae..6ce6ef9 100644
---
ction when
it's included in the decompressor.
Signed-off-by: Hoeun Ryu
---
arch/arm/Kconfig.debug | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
index ba2cb63..52eb0bf 100644
--- a/arch/arm/Kconfig.debug
+++ b/arch/arm/Kcon
e doesn't have a reset value for TTBCR.T1SZ.
Signed-off-by: Hoeun Ryu
---
arch/arm/mm/proc-v7-3level.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
index 5e5720e..9ac2bec 100644
--- a/arch/arm/mm/proc-v7-3level.S
+++ b/
On Mon, Jun 5, 2017 at 6:34 PM, Russell King - ARM Linux
wrote:
> On Mon, Jun 05, 2017 at 06:22:20PM +0900, Hoeun Ryu wrote:
>> diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
>> index 5e5720e..9ac2bec 100644
>> --- a/arch/arm/mm/proc-v7-3level.S
>
e doesn't have a reset value for TTBCR.T1SZ.
Signed-off-by: Hoeun Ryu
---
arch/arm/mm/proc-v7-3level.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
index 5e5720e..81404b8 100644
--- a/arch/arm/mm/proc-v7-3level.S
+++ b/
>> On Jun 5, 2017, at 7:30 PM, Robin Murphy wrote:
>>
>> On 05/06/17 11:06, Hoeun Ryu wrote:
>> Clearing TTBCR.T1SZ explicitly when kernel runs on a configuration of
>> PHYS_OFFSET > PAGE_OFFSET.
>> Reading TTBCR in early boot stage might return
Please see the commit
e7273ff4 : (ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI)
Signed-off-by: Hoeun Ryu
---
arch/arm/kernel/machine_kexec.c | 37 ++---
1 file changed, 26 insertions(+), 11 deletions(-)
diff --git a/arch/arm/kernel/machine_ke
Hello, Russell King.
The following patch has not been merged yet.
Do you have a plan to accept and merge this patch?
Thank you.
On Mon, 2017-06-12 at 10:47 +0900, Hoeun Ryu wrote:
> Reading TTBCR in early boot stage might return the value of the previous
> kernel's configuration, es
t implement this
function like that because of the lack of IPI slots. Please see the commit
e7273ff4 : (ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI)
Signed-off-by: Hoeun Ryu
---
v2:
- calling crash_smp_send_stop() in machine_crash_shutdown() for the case
when crash_kexec_post
machine_crash_shutdown()
tries to save crash information for nonpanic CPUs only when
crash_kexec_post_notifiers kernel option is disabled.
Signed-off-by: Hoeun Ryu
---
arch/arm64/kernel/machine_kexec.c | 19 ++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel
On 4 Aug 2017, at 7:04 PM, Robin Murphy wrote:
>> On 04/08/17 07:07, Hoeun Ryu wrote:
>> Hello, Russell King.
>>
>> The following patch has not merged yet.
>> Do you have a plan to accept and merge this patch ?
>
> This should probably go through the ARM tree, so p
> On 4 Aug 2017, at 7:38 PM, James Morse wrote:
>
> Hi Hoeun,
>
>> On 04/08/17 08:02, Hoeun Ryu wrote:
>> Commit 0ee5941 : (x86/panic: replace smp_send_stop() with kdump friendly
>> version in panic path) introduced crash_smp_send_stop() which is a weak
>&
> On 4 Aug 2017, at 8:43 PM, AKASHI Takahiro wrote:
>
>> On Fri, Aug 04, 2017 at 11:38:16AM +0100, James Morse wrote:
>> Hi Hoeun,
>>
>>> On 04/08/17 08:02, Hoeun Ryu wrote:
>>> Commit 0ee5941 : (x86/panic: replace smp_send_stop() with kdump frie
crash_smp_send_stop() <= save crash dump for nonpanic cores
* crash_kexec_post_notifiers : true
panic()
  crash_smp_send_stop() <= save crash dump for nonpanic cores
  __crash_kexec()
    machine_crash_shutdown()
      crash_smp_send_stop() <= just return.
Signed-off-by: Hoeun Ryu
t implement this
function like that because of the lack of IPI slots. Please see the commit
e7273ff4 : (ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI)
Signed-off-by: Hoeun Ryu
---
v3:
- remove 'WARN_ON(num_online_cpus() > 1)' in machine_crash_shutdown().
it
Hello, Russell King.
Do you have a plan to include this patch in your tree?
Thank you.
On Mon, 2017-06-12 at 10:47 +0900, Hoeun Ryu wrote:
> Reading TTBCR in early boot stage might return the value of the
> previous
> kernel's configuration, especially in case of kexec. F