On Mon, Apr 27, 2020 at 03:40:50PM -0700, Andrew Morton wrote:
> > https://www.spinics.net/lists/kernel/msg3473847.html
> > https://www.spinics.net/lists/kernel/msg3473840.html
> > https://www.spinics.net/lists/kernel/msg3473843.html
>
> OK, but that doesn't necessitate the above monstrosity? How
On 28/04/2020 at 09:09, Christoph Hellwig wrote:
On Mon, Apr 27, 2020 at 03:40:50PM -0700, Andrew Morton wrote:
https://www.spinics.net/lists/kernel/msg3473847.html
https://www.spinics.net/lists/kernel/msg3473840.html
https://www.spinics.net/lists/kernel/msg3473843.html
OK, but that doesn
On Tue, Apr 28, 2020 at 09:45:46AM +0200, Christophe Leroy wrote:
>> I guess that might be a worthwhile middle ground. Still not a fan of
>> all these ifdefs..
>>
>
> Can't we move the small X32 specific part out of
> __copy_siginfo_to_user32(), in an arch specific helper that voids for other
>
A powerpc system with multiple possible nodes and with CONFIG_NUMA
enabled has always had a node 0, even if node 0 has no CPUs or memory
attached to it. As per PAPR, the node affinity of a CPU is only
available once it is present / online. For all CPUs that are possible
but not present, cpu_to
The node id queried from the static device tree may not
be correct. For example, it may always show 0 on a shared processor.
Hence prefer the node id queried from vphn and fall back on the device
tree based node id if the vphn query fails.
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux...@kvack.org
Cc: linux-
Changelog v1 -> v2:
- Rebased to v5.7-rc3
- Updated the changelog.
A Linux kernel configured with CONFIG_NUMA on a system with multiple
possible nodes marks node 0 as online at boot. However, in practice
there are systems which have node 0 as memoryless and cpuless.
This can cause
1. numa_balancing
Currently a Linux kernel with CONFIG_NUMA on a system with multiple
possible nodes marks node 0 as online at boot. However, in practice
there are systems which have node 0 as memoryless and cpuless.
This can cause numa_balancing to be enabled on systems with only one node
with memory and CPUs. The
On Tue, 2020-04-28 at 11:57 +1000, Jordan Niethe wrote:
> A future revision of the ISA will introduce prefixed instructions. A
> prefixed instruction is composed of a 4-byte prefix followed by a
> 4-byte suffix.
>
> All prefixes have the major opcode 1. A prefix will never be a valid
> word instru
On 21-04-20, 10:29, Mian Yousaf Kaukab wrote:
> The driver has to be manually loaded if it is built as a module. It
> is neither exporting MODULE_DEVICE_TABLE nor MODULE_ALIAS. Moreover,
> no platform-device is created (and thus no uevent is sent) for the
> clockgen nodes it depends on.
>
> Conver
Cédric Le Goater writes:
> PowerNV and pSeries machines can run using the XIVE or XICS interrupt
> mode. Report this information in /proc/cpuinfo :
>
> timebase: 51200
> platform: PowerNV
> model : 9006-22C
> machine : PowerNV 9006-22C
> firmware: OPAL
Provide an option to use the ELFv2 ABI for big-endian builds. This works on
GCC and clang (since 2014). It is less well tested and supported by the
GNU toolchain, but it can give some useful advantages of the ELFv2 ABI
for BE (e.g., less stack usage). Some distros even build BE ELFv2
userspace.
Review
On Tue, Apr 28, 2020 at 10:19:08AM +0800, Yicong Yang wrote:
> On 2020/4/28 2:13, Bjorn Helgaas wrote:
> >
> > I'm starting to think we're approaching this backwards. I searched
> > for PCIBIOS_FUNC_NOT_SUPPORTED, PCIBIOS_BAD_VENDOR_ID, and the other
> > error values. Almost every use is a *retur
Currently the spu coredump code triggers an RCU warning:
=
WARNING: suspicious RCU usage
5.7.0-rc3-01755-g7cd49f0b7ec7 #1 Not tainted
-
include/linux/fdtable.h:95 suspicious rcu_dereference_check() usage!
other info that might he
On 4/28/20 1:03 PM, Michael Ellerman wrote:
> Cédric Le Goater writes:
>> PowerNV and pSeries machines can run using the XIVE or XICS interrupt
>> mode. Report this information in /proc/cpuinfo :
>>
>> timebase: 51200
>> platform: PowerNV
>> model : 9006-22C
>> ma
Currently, we may perform a copy_to_user (through
simple_read_from_buffer()) while holding a context's register_lock,
while accessing the context save area.
This change uses a temporary buffer for the context save area data,
which we then pass to simple_read_from_buffer.
Signed-off-by: Jeremy Ke
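A minimal sketch of the pattern described above, assuming a spufs read
handler for the GPR save area; the helpers and fields (spu_acquire_saved(),
ctx->csa.lscsa->gprs, register_lock) exist in spufs, but the actual hunks
in the patch may differ:

static ssize_t spufs_regs_read(struct file *file, char __user *buffer,
                               size_t size, loff_t *pos)
{
        struct spu_context *ctx = file->private_data;
        struct spu_reg128 regs[128];    /* temporary copy of the save area */
        int ret;

        ret = spu_acquire_saved(ctx);
        if (ret)
                return ret;

        spin_lock(&ctx->csa.register_lock);
        memcpy(regs, ctx->csa.lscsa->gprs, sizeof(regs));
        spin_unlock(&ctx->csa.register_lock);
        spu_release_saved(ctx);

        /* copy_to_user() now happens without register_lock held */
        return simple_read_from_buffer(buffer, size, pos, regs, sizeof(regs));
}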
Aneesh increased the size of struct pt_regs by 16 bytes and started
seeing this WARN_ON:
smp: Bringing up secondary CPUs ...
[ cut here ]
WARNING: CPU: 0 PID: 0 at arch/powerpc/kernel/process.c:455
giveup_all+0xb4/0x110
Modules linked in:
CPU: 0 PID: 0 Comm: swap
There's no need to cast in task_pt_regs() as tsk->thread.regs should
already be a struct pt_regs. If someone's using task_pt_regs() on
something that's not a task but happens to have a thread.regs then
we'll deal with them later.
Signed-off-by: Michael Ellerman
---
arch/powerpc/include/asm/proce
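Illustratively, the change described above amounts to dropping a redundant
cast; this before/after is a sketch, not the exact diff:

/* before: thread.regs is already a struct pt_regs *, so the cast is redundant */
#define task_pt_regs(tsk)       ((struct pt_regs *)(tsk)->thread.regs)

/* after */
#define task_pt_regs(tsk)       ((tsk)->thread.regs)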
> Hence, serialize hvc_open and check if tty->private_data is NULL before
> proceeding ahead.
What do you think about adding the tag “Fixes” for the data
synchronisation adjustments?
…
> +++ b/drivers/tty/hvc/hvc_console.c
…
@@ -384,6 +394,8 @@ static int hvc_open(struct tty_struct *tt
The VDSO datapage and the text pages are always located immediately
next to each other, so the offset can be hardcoded without an indirection
through __kernel_datapage_offset.
In order to ease things, move the data page in front like other
arches; that way there is no need to know the size of the library
t
In the same way as already done on PPC32, drop the __get_datapage()
function and use the get_datapage inline macro instead.
See commit ec0895f08f99 ("powerpc/vdso32: inline __get_datapage()")
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/vdso64/cacheflush.S | 9
arch/powerpc/kerne
cpu_relax() needs to be in asm/vdso/processor.h to be used by
the generic C VDSO library.
Move it there.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/processor.h | 10 ++
arch/powerpc/include/asm/vdso/processor.h | 23 +++
2 files changed, 25 inse
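A minimal sketch of what the new header might contain, assuming the powerpc
definitions are moved over verbatim (HMT_low()/HMT_medium() on 64-bit, a
plain barrier on 32-bit); the actual patch may differ:

/* arch/powerpc/include/asm/vdso/processor.h */
#ifndef _ASM_POWERPC_VDSO_PROCESSOR_H
#define _ASM_POWERPC_VDSO_PROCESSOR_H

#ifndef __ASSEMBLY__

#ifdef CONFIG_PPC64
#define cpu_relax()     do { HMT_low(); HMT_medium(); barrier(); } while (0)
#else
#define cpu_relax()     barrier()
#endif

#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_VDSO_PROCESSOR_H */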
The \tmp param is not used anymore; remove it.
Signed-off-by: Christophe Leroy
---
v7: New patch, split out of the preceding patch
---
arch/powerpc/include/asm/vdso_datapage.h | 2 +-
arch/powerpc/kernel/vdso32/cacheflush.S | 2 +-
arch/powerpc/kernel/vdso32/datapage.S | 4 ++--
arch/power
This is the seventh version of a series to switch the powerpc VDSO to
the generic C implementation.
Main changes since v6 are:
- Added gettime64 on PPC32
This series applies on today's powerpc/merge branch.
See the last patches for details on changes and performance.
Christophe Leroy (8):
powerpc/vds
Prepare for switching the VDSO to the generic C implementation in the
following patch. Here, we:
- Modify __get_datapage() to take an offset
- Prepare the helpers to call the C VDSO functions
- Prepare the required callbacks for the C VDSO functions
- Prepare the clocksource.h files to define VDSO_ARCH_CLOCKMO
Provides __kernel_clock_gettime64() on vdso32. This is the
64-bit version of __kernel_clock_gettime(), which is
y2038 compliant.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/vdso32/gettimeofday.S | 9 +
arch/powerpc/kernel/vdso32/vdso32.lds.S| 1 +
arch/powerpc/kernel/vds
For VDSO32 on PPC64, we create a fake 32-bit config, on the same
principle as the MIPS architecture, in order to get the correct parts of
the different asm header files.
With the C VDSO, the performance is slightly lower, but it is worth
it as it will ease maintenance and evolution, and also brings c
When adding gettime64() to a 32-bit architecture (namely powerpc/32),
it was noticed that GCC no longer inlines
__cvdso_clock_gettime_common() because it is called twice
(once by __cvdso_clock_gettime() and once by
__cvdso_clock_gettime32()).
This has the effect of seriously degrading the p
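A hedged sketch of the kind of fix this implies, assuming the cure is an
__always_inline annotation so GCC keeps inlining the helper into both
callers; the signature is approximated from lib/vdso/gettimeofday.c and
the body is elided:

static __always_inline int
__cvdso_clock_gettime_common(clockid_t clock, struct __kernel_timespec *ts)
{
        /* ... clock selection and time conversion ... */
        return 0;
}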
arch/powerpc/kernel/vmlinux.lds.S has

#ifdef CONFIG_RELOCATABLE
...
	.rela.dyn : AT(ADDR(.rela.dyn) - LOAD_OFFSET)
	{
		__rela_dyn_start = .;
		*(.rela*)
	}
#endif
...
	DISCARDS
	/DISCARD/ : {
		*(*.EMB.apuinfo)
With the command-line option, -mx86-used-note=yes, the x86 assembler
in binutils 2.32 and above generates a program property note in a note
section, .note.gnu.property, to encode used x86 ISAs and features. But
the kernel linker script only contains a single NOTE segment:
PHDRS {
text PT_LOAD FLAGS(
On Tue, Apr 28, 2020 at 09:48:11PM +1000, Michael Ellerman wrote:
>
> This comes from fcheck_files() via fcheck().
>
> It's pretty clearly documented that fcheck() must be wrapped with
> rcu_read_lock(), so fix it.
But for this to actually be useful you'd need the rcu read lock until
you are do
On Tue, Apr 28, 2020 at 08:02:07PM +0800, Jeremy Kerr wrote:
> Currently, we may perform a copy_to_user (through
> simple_read_from_buffer()) while holding a context's register_lock,
> while accessing the context save area.
>
> This change uses a temporary buffer for the context save area data,
>
Introduce a macro PATCH_INSN_OR_GOTO() to simplify instruction patching,
and to make the error messages more uniform and useful:
- print an error message that includes the original return value
- print the function name and line number, so that the offending
location is clear
- goto a label whic
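A minimal sketch of such a macro, matching the behaviour listed above; the
exact implementation in the series may differ:

#define PATCH_INSN_OR_GOTO(addr, instr, label)				\
do {									\
	int rc = patch_instruction((unsigned int *)(addr), (instr));	\
	if (rc) {							\
		pr_err("%s:%d: patching failed, rc=%d\n",		\
		       __func__, __LINE__, rc);				\
		goto label;						\
	}								\
} while (0)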
Changes in v2:
- Change macro to use 'goto' instead of 'return'
- Rename macro to indicate use of 'goto'
- Convert more patch_instruction() uses in optprobes to use the new
macro.
- Drop 1st patch, which added error checking in do_patch_instruction()
since that is being covered in a separate
patch_instruction() can fail in some scenarios. Add appropriate error
checking so that such failures are caught and logged, and a suitable
error code is returned.
Fixes: d07df82c43be8 ("powerpc/kprobes: Move kprobes over to
patch_instruction()")
Fixes: f3eca95638931 ("powerpc/kprobes/optprobes: Use
Hi,
On 04/28/2020 01:16 PM, Christophe Leroy wrote:
Provides __kernel_clock_gettime64() on vdso32. This is the
64 bits version of __kernel_clock_gettime() which is
y2038 compliant.
Signed-off-by: Christophe Leroy
Why does snowpatch still report an upstream failure? This is fixed in
the latest po
On 4/27/20 12:33 PM, Juliet Kim wrote:
The maximum entries for H_SEND_SUB_CRQ_INDIRECT has increased on
some platforms from 16 to 128. If Live Partition Mobility is used
to migrate a running OS image from a newer source platform to an
older target platform, then H_SEND_SUB_CRQ_INDIRECT will fail
On Tue, Apr 28, 2020 at 2:05 PM Jeremy Kerr wrote:
>
> Currently, we may perform a copy_to_user (through
> simple_read_from_buffer()) while holding a context's register_lock,
> while accessing the context save area.
>
> This change uses a temporary buffer for the context save area data,
> which w
On Tue, Apr 28, 2020 at 3:16 PM Christophe Leroy
wrote:
>
> Provides __kernel_clock_gettime64() on vdso32. This is the
> 64 bits version of __kernel_clock_gettime() which is
> y2038 compliant.
>
> Signed-off-by: Christophe Leroy
Looks good to me
Reviewed-by: Arnd Bergmann
There was a bug on A
Balamuruhan S wrote:
Avoid redefining macros to encode ppc instructions; instead, reuse them from
ppc-opcode.h. Makefile changes are necessary to compile memcmp_64.S with
__ASSEMBLY__ defined from selftests.
Signed-off-by: Balamuruhan S
---
.../selftests/powerpc/stringloops/Makefile| 34 ++
Balamuruhan S wrote:
ppc-opcode.h has base instruction encodings wrapped with stringify_in_c()
so that the raw encodings can be used from both C and assembly. But there are
redundant macros for base instruction encodings in bpf, the instruction
emulation test infrastructure and the powerpc selftests.
Currently PPC_INST_* macro
Balamuruhan S wrote:
Move macro definitions of powerpc instructions from bpf_jit.h to ppc-opcode.h
and adapt the users of the macros accordingly. `PPC_MR()` is defined twice in
bpf_jit.h; remove the duplicate.
Signed-off-by: Balamuruhan S
---
arch/powerpc/include/asm/ppc-opcode.h | 139 +++
FYI, these little hunks reduce the difference to my version, maybe
you can fold them in?
diff --git a/arch/powerpc/platforms/cell/spufs/file.c
b/arch/powerpc/platforms/cell/spufs/file.c
index c62d77ddaf7d3..1861436a6091d 100644
--- a/arch/powerpc/platforms/cell/spufs/file.c
+++ b/arch/powerpc/pla
On Mon, 27 Apr 2020 23:16:52 +0200
Mauro Carvalho Chehab wrote:
> This is the second part of a series I wrote sometime ago where I manually
> convert lots of files to be properly parsed by Sphinx as ReST files.
>
> As it touches on lot of stuff, this series is based on today's linux-next,
> at
I think I found a way to improve the x32 handling:
This is a simplification over Christoph's "[PATCH 2/7] signal: factor
copy_siginfo_to_external32 from copy_siginfo_to_user32", reducing the
x32 specifics in the common code to a single #ifdef/#endif check, in
order to keep it more readable for eve
The architecture-independent routine hugetlb_default_setup sets up
the default huge page size. It has no way to verify whether the passed
value is valid, so it accepts it and attempts to validate it at a later
time. This requires undocumented cooperation between the arch-specific
and arch-independent co
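A minimal sketch of the interface this series moves to: each architecture
provides arch_hugetlb_valid_size() so the size can be validated at parse
time, with a weak generic fallback; illustrative, not the exact hunks:

/* generic fallback in mm/hugetlb.c; architectures override this to accept
 * only the sizes their MMU actually supports */
bool __init __weak arch_hugetlb_valid_size(unsigned long size)
{
        return size == HPAGE_SIZE;
}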
v4 -
Fixed huge page order definitions for arm64 (Qian Cai)
Removed hugepages_supported() checks in command line processing as
powerpc does not set hugepages_supported until later in boot (Sandipan)
Added Acks, Reviews and Tested (Will, Gerald, Anders, Sandipan)
v3 -
Used weak att
With all hugetlb page processing done in a single file, clean up the code.
- Make code match desired semantics
- Update documentation with semantics
- Make all warning and error messages start with 'HugeTLB:'.
- Consistently name command line parsing routines.
- Warn if !hugepages_supported() and co
The routine hugetlb_add_hstate prints a warning if the hstate already
exists. This was originally done as part of kernel command line
parsing. If 'hugepagesz=' was specified more than once, the warning
pr_warn("hugepagesz= specified twice, ignoring\n");
would be printed.
Some architectur
Now that architectures provide arch_hugetlb_valid_size(), parsing
of "hugepagesz=" can be done in architecture-independent code.
Create a single routine to handle hugepagesz= parsing and remove
all arch-specific routines. We can also remove the interface
hugetlb_bad_size() as this is no longer use
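A minimal sketch of what the single arch-independent parser could look like,
assuming it leans on arch_hugetlb_valid_size(); names follow the series
description, but the details may differ from the actual patch:

static int __init hugepagesz_setup(char *s)
{
        unsigned long size;

        size = (unsigned long)memparse(s, NULL);

        if (!arch_hugetlb_valid_size(size)) {
                pr_err("HugeTLB: unsupported hugepagesz=%s\n", s);
                return 0;
        }

        hugetlb_add_hstate(ilog2(size) - PAGE_SHIFT);
        return 1;
}
__setup("hugepagesz=", hugepagesz_setup);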
On 4/28/20 10:35 AM, Thomas Falcon wrote:
> On 4/27/20 12:33 PM, Juliet Kim wrote:
>> The maximum entries for H_SEND_SUB_CRQ_INDIRECT has increased on
>> some platforms from 16 to 128. If Live Partition Mobility is used
>> to migrate a running OS image from a newer source platform to an
>> older
Hi!
On Tue, Apr 28, 2020 at 09:25:17PM +1000, Nicholas Piggin wrote:
> +config BUILD_BIG_ENDIAN_ELF_V2
> + bool "Build big-endian kernel using ELFv2 ABI (EXPERIMENTAL)"
> + depends on PPC64 && CPU_BIG_ENDIAN && EXPERT
> + default n
> + select BUILD_ELF_V2
> + help
> + Thi
On Tue, 28 Apr 2020 15:08:36 +0530 Srikar Dronamraju
wrote:
> Currently Linux kernel with CONFIG_NUMA on a system with multiple
> possible nodes, marks node 0 as online at boot. However in practice,
> there are systems which have node 0 as memoryless and cpuless.
>
> This can cause numa_balanc
Excerpts from Segher Boessenkool's message of April 29, 2020 9:40 am:
> Hi!
>
> On Tue, Apr 28, 2020 at 09:25:17PM +1000, Nicholas Piggin wrote:
>> +config BUILD_BIG_ENDIAN_ELF_V2
>> +bool "Build big-endian kernel using ELFv2 ABI (EXPERIMENTAL)"
>> +depends on PPC64 && CPU_BIG_ENDIAN && EX
On Tue, Apr 28, 2020 at 03:09:51PM -0400, Jonathan Corbet wrote:
> So I'm happy to merge this set, but there is one thing that worries me a
> bit...
>
> > fs/coda/Kconfig |2 +-
>
> I'd feel a bit better if I could get an ack or two from filesystem folks
> befor
Provide an option to build big-endian kernels using the ELF V2 ABI. This works
on GCC and clang (since about 2014). It is not officially supported by the
GNU toolchain, but it can give big-endian kernels some useful advantages of
the V2 ABI (e.g., less stack usage).
Reviewed-by: Segher Boessen
Hi Christoph,
> FYI, these little hunks reduce the difference to my version, maybe
> you can fold them in?
Sure, no problem.
How do you want to coordinate these? I can submit mine through mpe, but
that may make it tricky to synchronise with your changes. Or, you can
include this change in your s
> >
> > By marking N_ONLINE as NODE_MASK_NONE, let's stop assuming that Node 0 is
> > always online.
> >
> > ...
> >
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -116,8 +116,10 @@ EXPORT_SYMBOL(latent_entropy);
> > */
> > nodemask_t node_states[NR_NODE_STATES] __read_mostly = {
>
There seems to be a minor typo which breaks compilation when
CONFIG_MPROFILE_KERNEL is not enabled. See the fix below.
---
arch/powerpc/kernel/trace/ftrace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
Hi Jordan,
I needed the below fix for building with CONFIG_STRICT_KERNEL_RWX enabled.
Hopefully it's correct, I have not yet had a chance to test it beyond building
it.
- Alistair
---
arch/powerpc/lib/code-patching.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch
When compiled with CONFIG_STRICT_KERNEL_RWX, the kernel must create
temporary mappings when patching itself. These mappings temporarily
override the strict RWX text protections to permit a write. Currently,
powerpc allocates a per-CPU VM area for patching. Patching occurs as
follows:
1. Ma
Currently, code patching a STRICT_KERNEL_RWX kernel exposes the temporary
mappings to other CPUs. These mappings should be kept local to the CPU
doing the patching. Use the pre-initialized temporary mm and patching
address for this purpose. Also add a check after patching to ensure the
patch succeeded.
U
When live patching with STRICT_KERNEL_RWX, the CPU doing the patching
must use a temporary mapping which allows for writing to kernel text.
During the entire window of time when this temporary mapping is in use,
another CPU could write to the same mapping and maliciously alter kernel
text. Implemen
When code patching a STRICT_KERNEL_RWX kernel the page containing the
address to be patched is temporarily mapped with permissive memory
protections. Currently, a per-cpu vmalloc patch area is used for this
purpose. While the patch area is per-cpu, the temporary page mapping is
inserted into the ke
When live patching a STRICT_RWX kernel, a mapping is installed at a
"patching address" with temporary write permissions. Provide a
LKDTM-only accessor function for this address in preparation for a LKDTM
test which attempts to "hijack" this mapping by writing to it from
another CPU.
Signed-off-by:
x86 supports the notion of a temporary mm which restricts access to
temporary PTEs to a single CPU. A temporary mm is useful for situations
where a CPU needs to perform sensitive operations (such as patching a
STRICT_KERNEL_RWX kernel) requiring temporary mappings without exposing
said mappings to
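A minimal sketch of the temporary-mm idea, loosely following the x86
text_poke() pattern; the struct and helper names here are illustrative and
not necessarily the API this series adds for powerpc:

struct temp_mm {
        struct mm_struct *temp;         /* private mm used only for patching */
        struct mm_struct *prev;         /* mm to switch back to */
};

static void start_using_temp_mm(struct temp_mm *t)
{
        lockdep_assert_irqs_disabled(); /* the mapping must stay CPU-local */
        t->prev = current->active_mm;
        switch_mm_irqs_off(t->prev, t->temp, current);
}

static void stop_using_temp_mm(struct temp_mm *t)
{
        lockdep_assert_irqs_disabled();
        switch_mm_irqs_off(t->temp, t->prev, current);
}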
On 2020/4/26 20:59, Thomas Huth wrote:
On 23/04/2020 13.00, Christian Borntraeger wrote:
On 23.04.20 12:58, Tianjia Zhang wrote:
On 2020/4/23 18:39, Cornelia Huck wrote:
On Thu, 23 Apr 2020 11:01:43 +0800
Tianjia Zhang wrote:
On 2020/4/23 0:04, Cornelia Huck wrote:
On Wed, 22 Apr 20
Excerpts from Adhemerval Zanella's message of April 27, 2020 11:09 pm:
>
>
> On 26/04/2020 00:41, Nicholas Piggin wrote:
>> Excerpts from Rich Felker's message of April 26, 2020 9:11 am:
>>> On Sun, Apr 26, 2020 at 08:58:19AM +1000, Nicholas Piggin wrote:
Excerpts from Christophe Leroy's mes
On Tue, Apr 28, 2020 at 3:36 PM Christophe Leroy
wrote:
>
>
>
> > On 28/04/2020 at 07:30, Jordan Niethe wrote:
> > On Tue, Apr 28, 2020 at 3:20 PM Christophe Leroy
> > wrote:
> >>
> >>
> >>
> >> On 28/04/2020 at 03:57, Jordan Niethe wrote:
> >>> The instructions for xmon's breakpoint are stored
On Tue, Apr 28, 2020 at 8:07 PM Balamuruhan S wrote:
>
> On Tue, 2020-04-28 at 11:57 +1000, Jordan Niethe wrote:
> > A future revision of the ISA will introduce prefixed instructions. A
> > prefixed instruction is composed of a 4-byte prefix followed by a
> > 4-byte suffix.
> >
> > All prefixes ha
On Wed, Apr 29, 2020 at 11:59 AM Alistair Popple wrote:
>
> There seems to be a minor typo which breaks compilation when
> CONFIG_MPROFILE_KERNEL is not enabled. See the fix below.
>
> ---
> arch/powerpc/kernel/trace/ftrace.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git
On Wed, Apr 29, 2020 at 12:02 PM Alistair Popple wrote:
>
> Hi Jordan,
>
> I needed the below fix for building with CONFIG_STRICT_KERNEL_RWX enabled.
> Hopefully it's correct, I have not yet had a chance to test it beyond building
> it.
Thanks, I'll get that working.
>
> - Alistair
>
> ---
> arch
Excerpts from Christian Zigotzky's message of April 29, 2020 2:53 pm:
> Hi All,
>
> The issue still exists in the RC3. (kernel config attached)
>
> Please help me to fix this issue.
Huh, looks like maybe early_init_mmu() got uninlined because the
compiler decided it was unlikely.
Does this fix
On 29/04/2020 at 04:05, Christopher M. Riedl wrote:
x86 supports the notion of a temporary mm which restricts access to
temporary PTEs to a single CPU. A temporary mm is useful for situations
where a CPU needs to perform sensitive operations (such as patching a
STRICT_KERNEL_RWX kernel) requ
On 29/04/2020 at 04:05, Christopher M. Riedl wrote:
x86 supports the notion of a temporary mm which restricts access to
temporary PTEs to a single CPU. A temporary mm is useful for situations
where a CPU needs to perform sensitive operations (such as patching a
STRICT_KERNEL_RWX kernel) requ
On 29/04/2020 at 04:05, Christopher M. Riedl wrote:
Currently, code patching a STRICT_KERNEL_RWX kernel exposes the temporary
mappings to other CPUs. These mappings should be kept local to the CPU
doing the patching. Use the pre-initialized temporary mm and patching
address for this purpose. Also a
x86/perf_regs.h is included by util/intel-pt.c, which will get compiled
when building perf on powerpc. Since x86/perf_regs.h has
`PERF_EXTENDED_REG_MASK` defined, defining `PERF_EXTENDED_REG_MASK` for
powerpc to add support for perf extended regs will result in perf build
error on powerpc.
Currentl
Patch set to add support for the perf extended register capability in
powerpc. The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to
indicate a PMU which supports extended registers. The generic code
defines the mask of extended registers as 0 for unsupported architectures.
Patch 2/2 defines th
The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to indicate a
PMU which supports extended registers. The generic code defines the mask
of extended registers as 0 for unsupported architectures.
Add support for extended registers in the POWER9 architecture. For POWER9,
the extended registers a
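A hedged sketch of the generic fallback mentioned above, i.e. the extended
register mask defaulting to 0 when an architecture does not define it; the
exact header and macro name may differ:

/* generic perf_regs header: architectures that support extended regs
 * provide their own non-zero mask */
#ifndef PERF_REG_EXTENDED_MASK
#define PERF_REG_EXTENDED_MASK  0
#endif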
On Wed, Apr 29, 2020 at 09:36:30AM +0800, Jeremy Kerr wrote:
> Hi Christoph,
>
> > FYI, these little hunks reduce the difference to my version, maybe
> > you can fold them in?
>
> Sure, no problem.
>
> How do you want to coordinate these? I can submit mine through mpe, but
> that may make it tri
On Wed, Apr 29, 2020 at 08:05:53AM +0200, Christoph Hellwig wrote:
> On Wed, Apr 29, 2020 at 09:36:30AM +0800, Jeremy Kerr wrote:
> > Hi Christoph,
> >
> > > FYI, these little hunks reduce the difference to my version, maybe
> > > you can fold them in?
> >
> > Sure, no problem.
> >
> > How do yo
And another one that should go on top of this one to address Al's other
complaint:
---
>From 1b7ced3de0b3a4addec61f61ac5278c3ff141657 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Wed, 22 Apr 2020 09:05:30 +0200
Subject: powerpc/spufs: stop using access_ok
Just use the proper non __-pref
On 28/04/2020 at 21:56, Arnd Bergmann wrote:
I think I found a way to improve the x32 handling:
This is a simplification over Christoph's "[PATCH 2/7] signal: factor
copy_siginfo_to_external32 from copy_siginfo_to_user32", reducing the
x32 specifics in the common code to a single #ifdef/#en
The same complicated sequence for juggling EE, RI, soft mask, and
irq tracing is repeated 3 times; tidy these up into one function.
This differs quite a bit between sub-architectures, so this makes
the ppc32 port cleaner as well.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/syscall_64
Here's a bunch of fixes I collected, and some that Aneesh needs for
his kuap on hash series.
Nicholas Piggin (6):
powerpc/64/kuap: move kuap checks out of MSR[RI]=0 regions of exit
code
missing isync
powerpc/64/kuap: interrupt exit kuap restore add missing isync,
conditionally restor
Any kind of WARN causes a program check that will crash with an
unrecoverable exception if it occurs when RI is clear.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/syscall_64.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/syscall_6
---
arch/powerpc/include/asm/book3s/64/kup-radix.h | 11 ++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h
b/arch/powerpc/include/asm/book3s/64/kup-radix.h
index 3bcef989a35d..8dc5f292b806 100644
--- a/arch/powerpc/include/asm
This fixes a missing isync before the mtspr(AMR), which ensures previous
memory accesses execute before the mtspr, so they can't slip past the
AMR check and access user memory if we are returning to a context where
kuap is allowed.
The AMR update is made conditional, and only done if the AMR must change,
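A minimal sketch of the shape of the change described above, assuming the
helper in asm/book3s/64/kup-radix.h; the real patch differs in detail:

static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
{
        if (mmu_has_feature(MMU_FTR_RADIX_KUAP) && unlikely(regs->kuap != amr)) {
                /*
                 * isync ensures prior memory accesses execute before the
                 * mtspr, so they cannot slip past the AMR check and touch
                 * user memory when returning to a kuap-allowed context.
                 */
                isync();
                mtspr(SPRN_AMR, regs->kuap);
        }
}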
The system reset interrupt handler locks AMR and exits with
EXCEPTION_RESTORE_REGS without restoring AMR. Similarly to the soft-NMI
handler, it needs to restore.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/exceptions-64s.S | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/kernel/except
Interrupts that use fast_interrupt_return actually do lock AMR, but they
have been ones which tend to come from userspace (or kernel bugs) in
radix mode. With kuap on hash, segment interrupts are often taken in the
kernel, which quickly breaks due to the missing restore.
Signed-off-by: Nicholas Piggin
Similar to the C code change, make the AMR restore conditional on
whether the register has changed.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/book3s/64/kup-radix.h | 10 +++---
arch/powerpc/kernel/entry_64.S | 8
arch/powerpc/kernel/exceptions-64s.
On Wed, Apr 29, 2020 at 08:17:22AM +0200, Christophe Leroy wrote:
>> +#ifndef CONFIG_X86_X32_ABI
>
> Can it be declared __weak instead of enclosing it in an #ifndef ?
I really hate the __weak ifdefs. But my plan was to move to a
CONFIG_ARCH_COPY_SIGINFO_TO_USER32 and have x86 select it.
Hi Christoph,
> And another one that should go on top of this one to address Al's other
> compaint:
Yeah, I was pondering that one. The access_ok() is kinda redundant, but
it does avoid forcing a SPU context save on those errors.
However, it's not like we really need to optimise for the case of
On 29 April 2020 at 07:13 am, Nicholas Piggin wrote:
Excerpts from Christian Zigotzky's message of April 29, 2020 2:53 pm:
Hi All,
The issue still exists in the RC3. (kernel config attached)
Please help me to fix this issue.
Huh, looks like maybe early_init_mmu() got uninlined because the
com
On Tue, Apr 28, 2020 at 09:56:26PM +0200, Arnd Bergmann wrote:
> I think I found a way to improve the x32 handling:
>
> This is a simplification over Christoph's "[PATCH 2/7] signal: factor
> copy_siginfo_to_external32 from copy_siginfo_to_user32", reducing the
> x32 specifics in the common code t
Hello Srikar,
On Tue, Apr 28, 2020 at 03:08:35PM +0530, Srikar Dronamraju wrote:
> Node id queried from the static device tree may not
> be correct. For example: it may always show 0 on a shared processor.
> Hence prefer the node id queried from vphn and fallback on the device tree
> based node id
Well the last series was a disaster, I'll try again sending the
patches with proper subject and changelogs written.
Nicholas Piggin (6):
powerpc/64/kuap: move kuap checks out of MSR[RI]=0 regions of exit
code
powerpc/64s/kuap: kuap_restore missing isync
powerpc/64/kuap: interrupt exit co
Any kind of WARN causes a program check that will crash with an
unrecoverable exception if it occurs when RI is clear.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/syscall_64.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/syscall_6