Since 01 May 2020, our email addresses have changed to @csgroup.eu
Update MAINTAINERS accordingly.
Signed-off-by: Christophe Leroy
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 2926327e4976..e8714328cc90 100644
--- a/MAINTAINE
On Tue, May 05, 2020 at 03:28:50PM -0500, Eric W. Biederman wrote:
> We probably can. After introducing a kernel_compat_siginfo that is
> the size that userspace actually would need.
>
> It isn't something I want to mess with until this code gets merged, as I
> think the set_fs cleanups are more
Wolfram Sang writes:
>> > My 'pengutronix' address is defunct for years. Merge the entries and use
>> > the proper contact address.
>>
>> Is there any point adding the new address? It's just likely to bit-rot
>> one day too.
>
> At least, this one is a group address, not an individual one, so les
On Sun, May 03, 2020 at 06:09:11PM -0700, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> To support kmap_atomic_prot(), all architectures need to support
> protections passed to their kmap_atomic_high() function. Pass
> protections into kmap_atomic_high() and change the name to
> kmap_atomic_h
On Sun, May 03, 2020 at 06:09:09PM -0700, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> We want to support kmap_atomic_prot() on all architectures and it makes
> sense to define kmap_atomic() to use the default kmap_prot.
>
> So we ensure all arch's have a globally available kmap_prot either
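A minimal sketch of the relationship described above, assuming kmap_atomic_prot() and a default kmap_prot are already in place (not the exact merged definition):

static inline void *kmap_atomic(struct page *page)
{
	/* kmap_atomic() is just the default-protection case */
	return kmap_atomic_prot(page, kmap_prot);
}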
There are multiple similar definitions for arch_clear_hugepage_flags() on
various platforms. Let's just add a generic fallback definition for
platforms that do not override it. This helps reduce code duplication.
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Tony Luck
Cc: Fenghua Yu
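The fallback pattern being proposed looks roughly like the following in a generic header; any architecture that provides its own version (plus the matching #define) bypasses it. A sketch, not the exact hunk:

#ifndef arch_clear_hugepage_flags
static inline void arch_clear_hugepage_flags(struct page *page)
{
}
#define arch_clear_hugepage_flags arch_clear_hugepage_flags
#endif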
There are multiple similar definitions for is_hugepage_only_range() on
various platforms. Let's just add a generic fallback definition for
platforms that do not override it. This helps reduce code duplication.
Cc: Russell King
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Tony Luck
Cc: Fenghua Yu
Cc:
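The same override pattern applies here; a hedged sketch of what the generic fallback could look like, returning false when the architecture has no hugepage-only ranges:

#ifndef is_hugepage_only_range
static inline int is_hugepage_only_range(struct mm_struct *mm,
					 unsigned long addr,
					 unsigned long len)
{
	return 0;
}
#define is_hugepage_only_range is_hugepage_only_range
#endif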
This series adds the following new generic fallbacks. Before that it drops
__HAVE_ARCH_HUGE_PTEP_GET from arm64 platform.
1. is_hugepage_only_range()
2. arch_clear_hugepage_flags()
This has been boot tested on arm64 and x86 platforms but build tested on
some more platforms including the changed o
On Sun, May 03, 2020 at 06:09:08PM -0700, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> Every single architecture (including !CONFIG_HIGHMEM) calls...
>
> pagefault_enable();
> preempt_enable();
>
> ... before returning from __kunmap_atomic(). Lift this code into the
> kunmap_at
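The lift described above amounts to moving the two calls into the generic wrapper, roughly like this (illustrative, not the exact macro that was merged):

#define kunmap_atomic(addr)			\
do {						\
	__kunmap_atomic(addr);			\
	pagefault_enable();			\
	preempt_enable();			\
} while (0)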
Looks good,
Reviewed-by: Christoph Hellwig
On Sun, May 03, 2020 at 06:09:06PM -0700, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> During this kmap() conversion series we must maintain bisect-ability.
> To do this, kmap_atomic_prot() in x86, powerpc, and microblaze need to
> remain functional.
>
> Create a temporary inline version of
fix coccinelle warning, use ARRAY_SIZE
arch/powerpc/kernel/sysfs.c:853:34-35: WARNING: Use ARRAY_SIZE
arch/powerpc/kernel/sysfs.c:860:33-34: WARNING: Use ARRAY_SIZE
arch/powerpc/kernel/sysfs.c:868:28-29: WARNING: Use ARRAY_SIZE
arch/powerpc/kernel/sysfs.c:947:34-35: WARNING: Use ARRAY_SIZE
arch/po
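The shape of the change the warning asks for, shown as a small standalone example (the actual sysfs.c loops differ):

#include <stdio.h>

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

int main(void)
{
	int ids[] = { 1, 2, 3, 5, 8 };
	size_t i;

	/* instead of the open-coded sizeof(ids) / sizeof(int) that coccinelle flags */
	for (i = 0; i < ARRAY_SIZE(ids); i++)
		printf("%d\n", ids[i]);
	return 0;
}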
On 4/29/20 11:34 AM, Anju T Sudhakar wrote:
The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to indicate a
PMU which supports extended registers. The generic code defines the mask
of extended registers as 0 for unsupported architectures.
Add support for extended registers in POWER9 a
Excerpts from Christophe Leroy's message of April 7, 2020 3:37 pm:
>
>
> Le 07/04/2020 à 07:16, Nicholas Piggin a écrit :
>> machine_check_early is taken as an NMI, so nmi_enter is used there.
>> machine_check_exception is no longer taken as an NMI (it's invoked
>> via irq_work in the case a mach
This adds emulation support for the following prefixed Fixed-Point
Arithmetic instructions:
* Prefixed Add Immediate (paddi)
Reviewed-by: Balamuruhan S
Signed-off-by: Jordan Niethe
---
v3: Since we moved the prefixed loads/stores into the load/store switch
statement it no longer makes sense to
This adds emulation support for the following prefixed integer
load/stores:
* Prefixed Load Byte and Zero (plbz)
* Prefixed Load Halfword and Zero (plhz)
* Prefixed Load Halfword Algebraic (plha)
* Prefixed Load Word and Zero (plwz)
* Prefixed Load Word Algebraic (plwa)
* Prefixed Load
If a prefixed instruction results in an alignment exception, the
SRR1_PREFIXED bit is set. The handler attempts to emulate the
responsible instruction and then increment the NIP past it. Use
SRR1_PREFIXED to determine by how much the NIP should be incremented.
Prefixed instructions are not permitt
Do not allow inserting breakpoints on the suffix of a prefix instruction
in kprobes.
Signed-off-by: Jordan Niethe
---
v8: Add this back from v3
---
arch/powerpc/kernel/kprobes.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel
Do not allow placing xmon breakpoints on the suffix of a prefix
instruction.
Signed-off-by: Jordan Niethe
---
v8: Add this back from v3
---
arch/powerpc/xmon/xmon.c | 29 +++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/
Expand the feature-fixups self-tests to include tests for prefixed
instructions.
Signed-off-by: Jordan Niethe
---
v6: New to series
v8: Use OP_PREFIX
---
arch/powerpc/lib/feature-fixups-test.S | 69
arch/powerpc/lib/feature-fixups.c | 73 ++
Expand the code-patching self-tests to include tests for patching
prefixed instructions.
Signed-off-by: Jordan Niethe
---
v6: New to series
v8: Use OP_PREFIX
---
arch/powerpc/lib/Makefile | 2 +-
arch/powerpc/lib/code-patching.c | 21 +
arch/powerpc/lib/tes
For powerpc64, redefine the ppc_inst type so both word and prefixed
instructions can be represented. On powerpc32 the type will remain the
same. Update places which had assumed instructions to be 4 bytes long.
Reviewed-by: Alistair Popple
Signed-off-by: Jordan Niethe
---
v4: New to series
v5:
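A rough sketch of the idea with illustrative field names (the merged definition and its accessors may differ):

struct ppc_inst {
	u32 val;		/* word instruction, or the prefix word */
#ifdef CONFIG_PPC64
	u32 suffix;		/* only meaningful for prefixed instructions */
#endif
};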
Add the BOUNDARY SRR1 bit definition for when the cause of an alignment
exception is a prefixed instruction that crosses a 64-byte boundary.
Add the PREFIXED SRR1 bit definition for exceptions caused by prefixed
instructions.
Bit 35 of SRR1 is called SRR1_ISI_N_OR_G. This name comes from it being
From: Alistair Popple
Prefix instructions have their own FSCR bit which needs to be enabled via
a CPU feature. The kernel will save the FSCR for problem state but it
needs to be enabled initially.
If prefixed instructions are made unavailable by the [H]FSCR, attempting
to use them will cause a faci
test_translate_branch() uses two pointers to instructions within a
buffer, p and q, to test patch_branch(). The pointer arithmetic done on
them assumes a size of 4. This will not work if the instruction length
changes. Instead do the arithmetic relative to the void * to the buffer.
Reviewed-by: Al
When a new breakpoint is created, the second instruction of that
breakpoint is patched with a trap instruction. This assumes the length
of the instruction is always the same. In preparation for prefixed
instructions, remove this assumption. Insert the trap instruction at the
same time the first ins
Currently in xmon, mread() is used for reading instructions. In
preparation for prefixed instructions, create and use a new function,
mread_instr(), specifically for reading instructions.
Reviewed-by: Alistair Popple
Signed-off-by: Jordan Niethe
---
v5: New to series, separated from "Add prefixed
Currently all instructions have the same length, but in preparation for
prefixed instructions introduce a function for returning instruction
length.
Reviewed-by: Alistair Popple
Signed-off-by: Jordan Niethe
---
v6: - feature-fixups.c: do_final_fixups(): use here
- ppc_inst_len(): change retu
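The helper reduces to choosing between the two possible lengths; a hedged sketch that keys off the prefix major opcode (1) in the first word:

static inline unsigned int ppc_inst_len(u32 first_word)
{
	/* prefixes carry major opcode 1; everything else is 4 bytes */
	return ((first_word >> 26) == 1) ? 8 : 4;
}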
Define specific __get_user_instr() and __get_user_instr_inatomic()
macros for reading instructions from user space.
Reviewed-by: Alistair Popple
Signed-off-by: Jordan Niethe
---
arch/powerpc/include/asm/uaccess.h | 5 +
arch/powerpc/kernel/align.c | 2 +-
arch/powerpc/kernel/hw_bre
Instead of using memcpy() and flush_icache_range() use
patch_instruction() which not only accomplishes both of these steps but
will also make it easier to add support for prefixed instructions.
Reviewed-by: Alistair Popple
Signed-off-by: Jordan Niethe
---
v6: New to series.
---
arch/powerpc/ker
Introduce a probe_kernel_read_inst() function to use in cases where
probe_kernel_read() is used for getting an instruction. This will be
more useful for prefixed instructions.
Reviewed-by: Alistair Popple
Signed-off-by: Jordan Niethe
---
v6: - This was previously just in ftrace.c
---
arch/power
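A sketch of the wrapper for the simple word-instruction case (the prefixed case needs a second read; names and error handling are illustrative):

int probe_kernel_read_inst(struct ppc_inst *inst, struct ppc_inst *src)
{
	u32 val;
	int err;

	err = probe_kernel_read(&val, src, sizeof(val));
	if (!err)
		*inst = ppc_inst(val);
	return err;
}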
Introduce a probe_user_read_inst() function to use in cases where
probe_user_read() is used for getting an instruction. This will be more
useful for prefixed instructions.
Reviewed-by: Alistair Popple
Signed-off-by: Jordan Niethe
---
v6: - New to series
---
arch/powerpc/include/asm/inst.h | 3
Prefixed instructions will mean there are instructions of different
length. As a result dereferencing a pointer to an instruction will not
necessarily give the desired result. Introduce a function for reading
instructions from memory into the instruction data type.
Reviewed-by: Alistair Popple
Si
Currently unsigned ints are used to represent instructions on powerpc.
This has worked well as instructions have always been 4 byte words.
However, a future ISA version will introduce some changes to
instructions that mean this scheme will no longer work as well. This
change is Prefixed Instruction
In preparation for an instruction data type that can not be directly
used with the '==' operator use functions for checking equality.
Reviewed-by: Balamuruhan S
Signed-off-by: Jordan Niethe
---
v5: Remove ppc_inst_null()
v7: Fix compilation issue in expected_nop_sequence() when no
CONFIG_MPR
Use a function for byte swapping instructions in preparation of a more
complicated instruction type.
Reviewed-by: Balamuruhan S
Signed-off-by: Jordan Niethe
---
arch/powerpc/include/asm/inst.h | 5 +
arch/powerpc/kernel/align.c | 2 +-
2 files changed, 6 insertions(+), 1 deletion(-)
di
In preparation for using a data type for instructions that can not be
directly used with the '>>' operator use a function for getting the op
code of an instruction.
Reviewed-by: Alistair Popple
Signed-off-by: Jordan Niethe
---
v4: New to series
v6: - Rename ppc_inst_primary() to ppc_inst_primary
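Essentially a named accessor for the top six bits; a sketch, assuming it operates on the first word of the instruction:

static inline unsigned int ppc_inst_primary_opcode(u32 word)
{
	return word >> 26;	/* the major opcode lives in the top 6 bits */
}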
In preparation for introducing a more complicated instruction type to
accommodate prefixed instructions use an accessor for getting an
instruction as a u32.
Signed-off-by: Jordan Niethe
---
v4: New to series
v5: Remove references to 'word' instructions
v6: - test_emulate_step.c: execute_compute_i
In preparation for instructions having a more complex data type start
using a macro, ppc_inst(), for making an instruction out of a u32. A
macro is used so that instructions can be used as initializer elements.
Currently this does nothing, but it will allow for creating a data type
that can repres
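Why a macro rather than an inline function: initializer elements of static tables must be constant expressions, so a do-nothing macro keeps such initializers legal now while leaving room for a richer type later. A sketch of the current no-op form:

#define ppc_inst(x)	(x)

/* still usable in a static initializer */
static const unsigned int nop_table[] = {
	ppc_inst(0x60000000),	/* nop (ori r0,r0,0) */
};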
create_branch(), create_cond_branch() and translate_branch() return the
instruction that they create, or return 0 to signal an error. Separate
these concerns in preparation for an instruction type that is not just
an unsigned int. Fill the created instruction into a pointer passed as
the first param
A modulo operation is used for calculating the current offset from a
breakpoint within the breakpoint table. As instruction lengths are
always a power of 2, this can be replaced with a bitwise 'and'. The
current check for word alignment can be replaced with checking that the
lower 2 bits are not se
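The identity being relied on, checked by a tiny standalone program (the values are arbitrary):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long off = 0x1236;
	unsigned long len = 4;	/* instruction length, a power of two */

	/* for power-of-two len: off % len == off & (len - 1) */
	assert((off % len) == (off & (len - 1)));

	/* word-alignment check: the low 2 bits must be clear */
	printf("aligned: %s\n", (off & 3) ? "no" : "yes");
	return 0;
}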
The instructions for xmon's breakpoints are stored in bpt_table[], which is in
the data section. This is problematic as the data section may be marked
as no execute. Move bpt_table[] to the text section.
Signed-off-by: Jordan Niethe
---
v6: - New to series. Was part of the previous patch.
- Make B
To execute an instruction out of line after a breakpoint, the NIP is set
to the address of struct bpt::instr. Here a copy of the instruction that
was replaced with a breakpoint is kept, along with a trap so normal flow
can be resumed after XOLing. The struct bpt's are located within the
data sectio
For modifying instructions in xmon, patch_instruction() can serve the
same role that store_inst() is performing with the advantage of not
being specific to xmon. In some places patch_instruction() is already
being used, followed by store_inst(). In these cases just remove the
store_inst(). Otherwis
A future revision of the ISA will introduce prefixed instructions. A
prefixed instruction is composed of a 4-byte prefix followed by a
4-byte suffix.
All prefixes have the major opcode 1. A prefix will never be a valid
word instruction. A suffix may be an existing word instruction or a
new instruc
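A standalone illustration of the prefix test implied above (the major opcode sits in the top six bits of the first word, and prefixes use opcode 1); the opcodes shown are examples, not an exhaustive decode:

#include <stdbool.h>
#include <stdio.h>

#define OP_PREFIX 1

static bool is_prefix_word(unsigned int word)
{
	return (word >> 26) == OP_PREFIX;
}

int main(void)
{
	printf("%d\n", is_prefix_word(0x06000000));	/* 1: a prefix      */
	printf("%d\n", is_prefix_word(0x38210010));	/* 0: addi r1,r1,16 */
	return 0;
}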
On Wed, 2020-04-29 at 10:00:48 UTC, Xiongfeng Wang wrote:
> Move the static keyword to the front of declaration of 'vuart_bus_priv',
> and resolve the following compiler warning that can be seen when
> building with warnings enabled (W=1):
>
> drivers/ps3/ps3-vuart.c:867:1: warning: 'static' i
On Wed, 2020-04-22 at 09:26:12 UTC, "Naveen N. Rao" wrote:
> Currently, it is possible to have CONFIG_FUNCTION_TRACER disabled, but
> CONFIG_MPROFILE_KERNEL enabled. Though all existing users of
> MPROFILE_KERNEL are doing the right thing, it is weird to have
> MPROFILE_KERNEL enabled when the func
On Mon, 2020-04-20 at 08:56:09 UTC, Hari Bathini wrote:
> At times, memory ranges have to be looked up during early boot, when
> kernel couldn't be initialized for dynamic memory allocation. In fact,
> reserved-ranges look up is needed during FADump memory reservation.
> Without accounting for rese
On Tue, 2020-04-07 at 08:47:39 UTC, "Gautham R. Shenoy" wrote:
> From: "Gautham R. Shenoy"
>
> Currently prior to entering an idle state on a Linux Guest, the
> pseries cpuidle driver implements the idle_loop_prolog() and
> idle_loop_epilog() functions which ensure that idle_purr is correctly
> com
Hi
On Fri, May 1, 2020 at 6:23 PM Mark Brown wrote:
>
> On Fri, May 01, 2020 at 04:12:05PM +0800, Shengjiu Wang wrote:
> > The difference for esai on imx8qm is that DMA device is EDMA.
> >
> > EDMA requires the period size to be multiple of maxburst. Otherwise
> > the remaining bytes are not tran
Segher Boessenkool writes:
> On Tue, May 05, 2020 at 05:40:21PM +0200, Christophe Leroy wrote:
>> >>+#define __put_user_asm_goto(x, addr, label, op) \
>> >>+ asm volatile goto( \
>> >>+ "1: " op "%U1%X1 %0,%1 # put_user\n"
Hi Nicholas,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on linus/master v5.7-rc4 next-20200505]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '-
Excerpts from Segher Boessenkool's message of May 6, 2020 8:11 am:
> Hi!
>
> On Thu, Apr 30, 2020 at 02:02:02PM +1000, Nicholas Piggin wrote:
>> Add support for the scv instruction on POWER9 and later CPUs.
>
> Looks good to me in general :-)
Thanks for taking a look.
>> For now this implements
Segher Boessenkool writes:
> Hi!
>
> On Wed, May 06, 2020 at 12:27:58AM +1000, Michael Ellerman wrote:
>> Christophe Leroy writes:
>> > unsafe_put_user() is designed to take benefit of 'asm goto'.
>> >
>> > Instead of using the standard __put_user() approach and branch
>> > based on the returned
Hi,
Le 05/05/2020 à 16:27, Michael Ellerman a écrit :
Christophe Leroy writes:
unsafe_put_user() is designed to take benefit of 'asm goto'.
Instead of using the standard __put_user() approach and branch
based on the returned error, use 'asm goto' and make the
exception code branch directly to
I am still slowly wrapping my head around XIVE and its interaction with KVM
but from what I can see this looks good and is needed so we can enable
StoreEOI support in future so:
Reviewed-by: Alistair Popple
On Thursday, 20 February 2020 7:15:06 PM AEST Cédric Le Goater wrote:
> When an interr
Hi,
On 2019-12-02 07:57, Christophe Leroy wrote:
> clock_getres returns hrtimer_res for all clocks but coarse ones
> for which it returns KTIME_LOW_RES.
>
> return EINVAL for unknown clocks.
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/kernel/asm-offsets.c | 3 +++
> arch/
Hi!
On Thu, Apr 30, 2020 at 02:02:02PM +1000, Nicholas Piggin wrote:
> Add support for the scv instruction on POWER9 and later CPUs.
Looks good to me in general :-)
> For now this implements the zeroth scv vector 'scv 0', as identical
> to 'sc' system calls, with the exception that lr is not pre
Hi!
On Wed, Apr 29, 2020 at 12:39:22PM +1000, Nicholas Piggin wrote:
> Excerpts from Adhemerval Zanella's message of April 27, 2020 11:09 pm:
> >> Right, I'm just talking about those comments -- it seems like the kernel
> >> vdso should contain an .opd section with function descriptors in it for
On Tue, May 05, 2020 at 10:42:58PM +0200, Christoph Hellwig wrote:
> On Tue, May 05, 2020 at 09:34:46PM +0100, Al Viro wrote:
> > Looks good. Want me to put it into vfs.git? #work.set_fs-exec, perhaps?
>
> Sounds good.
Applied, pushed and added into #for-next
On Tue, May 05, 2020 at 09:34:46PM +0100, Al Viro wrote:
> Looks good. Want me to put it into vfs.git? #work.set_fs-exec, perhaps?
Sounds good.
On Tue, May 05, 2020 at 12:12:49PM +0200, Christoph Hellwig wrote:
> Hi all,
>
> this series gets rid of playing with the address limit in the exec and
> coredump code. Most of this was fairly trivial, the biggest changes are
> those to the spufs coredump code.
>
> Changes since v5:
> - fix uac
Linus Torvalds writes:
> On Tue, May 5, 2020 at 3:13 AM Christoph Hellwig wrote:
>>
>> this series gets rid of playing with the address limit in the exec and
>> coredump code. Most of this was fairly trivial, the biggest changes are
>> those to the spufs coredump code.
>
> Ack, nice, and looks
On Tue, 5 May 2020 08:21:34 +0530 Anshuman Khandual
wrote:
> >>> static inline void arch_clear_hugepage_flags(struct page *page)
> >>> {
> >>>
> >>> }
> >>> #define arch_clear_hugepage_flags arch_clear_hugepage_flags
> >>>
> >>> It's a small difference - mainly to avoid adding two variables t
On 5/5/20 6:18 AM, Guenter Roeck wrote:
> On 5/4/20 8:39 AM, Mike Rapoport wrote:
>> On Sun, May 03, 2020 at 11:43:00AM -0700, Guenter Roeck wrote:
>>> On Sun, May 03, 2020 at 10:41:38AM -0700, Guenter Roeck wrote:
Hi,
On Wed, Apr 29, 2020 at 03:11:23PM +0300, Mike Rapoport wrote:
>>
Adding Stefan Raspl, who has done a lot of kvm_stat work in the past.
On 05.05.20 19:21, Paolo Bonzini wrote:
> On 05/05/20 19:07, David Rientjes wrote:
>>> I am totally in favor of having a binary format, but it should be
>>> introduced as a separate series on top of this one---and preferably by
On 05/05/20 19:07, David Rientjes wrote:
>> I am totally in favor of having a binary format, but it should be
>> introduced as a separate series on top of this one---and preferably by
>> someone who has already put some thought into the problem (which
>> Emanuele and I have not, beyond ensuring tha
On 05/05/20 18:53, Jim Mattson wrote:
>>> Since this is becoming a generic API (good!!), maybe we can discuss
>>> possible ways to optimize gathering of stats in mass?
>> Sure, the idea of a binary format was considered from the beginning in
>> [1], and it can be done either together with the curre
On Tue, May 5, 2020 at 3:13 AM Christoph Hellwig wrote:
>
> this series gets rid of playing with the address limit in the exec and
> coredump code. Most of this was fairly trivial, the biggest changes are
> those to the spufs coredump code.
Ack, nice, and looks good.
The only part I dislike is
> > My 'pengutronix' address is defunct for years. Merge the entries and use
> > the proper contact address.
>
> Is there any point adding the new address? It's just likely to bit-rot
> one day too.
At least, this one is a group address, not an individual one, so less
likely.
> I figure the git
On Tue, May 05, 2020 at 05:40:21PM +0200, Christophe Leroy wrote:
> >>+#define __put_user_asm_goto(x, addr, label, op)\
> >>+ asm volatile goto( \
> >>+ "1: " op "%U1%X1 %0,%1 # put_user\n" \
> >>+ EX_TABLE(1b
Hi!
On Wed, May 06, 2020 at 12:27:58AM +1000, Michael Ellerman wrote:
> Christophe Leroy writes:
> > unsafe_put_user() is designed to take benefit of 'asm goto'.
> >
> > Instead of using the standard __put_user() approach and branch
> > based on the returned error, use 'asm goto' and make the
> >
On Tue, 28 Apr 2020 13:01:28 -0600
Jonathan Corbet wrote:
> So I'm happy to merge this set, but there is one thing that worries me a
> bit...
>
> > fs/cachefiles/Kconfig |4 +-
> > fs/coda/Kconfig |2 +-
> > fs/configfs/inode.c
Christophe Leroy writes:
> unsafe_put_user() is designed to take benefit of 'asm goto'.
>
> Instead of using the standard __put_user() approach and branch
> based on the returned error, use 'asm goto' and make the
> exception code branch directly to the error label. There is
> no code anymore in t
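The control-flow benefit being discussed shows up even in a minimal standalone example; the x86 'jmp' below merely stands in for the fixup branch a real powerpc __put_user_asm_goto() exception entry would take:

#include <stdio.h>

static int put_user_like(void)
{
	/* 'asm goto' may branch straight to a C label, so the caller
	 * never has to test a returned error code */
	asm goto("jmp %l[efault]" : : : : efault);
	return 0;
efault:
	return -14;	/* -EFAULT */
}

int main(void)
{
	printf("%d\n", put_user_like());
	return 0;
}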
On Tue, May 05, 2020 at 06:18:11AM -0700, Guenter Roeck wrote:
> On 5/4/20 8:39 AM, Mike Rapoport wrote:
> > On Sun, May 03, 2020 at 11:43:00AM -0700, Guenter Roeck wrote:
> >> On Sun, May 03, 2020 at 10:41:38AM -0700, Guenter Roeck wrote:
> >>> Hi,
> >>>
> >>> On Wed, Apr 29, 2020 at 03:11:23PM +0
On 5/4/20 8:39 AM, Mike Rapoport wrote:
> On Sun, May 03, 2020 at 11:43:00AM -0700, Guenter Roeck wrote:
>> On Sun, May 03, 2020 at 10:41:38AM -0700, Guenter Roeck wrote:
>>> Hi,
>>>
>>> On Wed, Apr 29, 2020 at 03:11:23PM +0300, Mike Rapoport wrote:
From: Mike Rapoport
Some architec
Hi Anju,
Minor nits...
/*
diff --git a/arch/powerpc/include/uapi/asm/perf_regs.h
b/arch/powerpc/include/uapi/asm/perf_regs.h
index f599064dd8dc..604b831378fe 100644
--- a/arch/powerpc/include/uapi/asm/perf_regs.h
+++ b/arch/powerpc/include/uapi/asm/perf_regs.h
@@ -48,6 +48,17 @@ enum perf_e
There is no logic in elf_fdpic_core_dump itself or in the various arch
helpers called from it which use uaccess routines on kernel pointers
except for the file writes that are nicely encapsulated by using
__kernel_write in dump_emit.
Signed-off-by: Christoph Hellwig
---
fs/binfmt_elf_fdpic.c |
There is no logic in elf_core_dump itself or in the various arch helpers
called from it which use uaccess routines on kernel pointers except for
the file writes that are nicely encapsulated by using __kernel_write in
dump_emit.
Signed-off-by: Christoph Hellwig
---
fs/binfmt_elf.c | 16 +
From: "Eric W. Biederman"
The code in binfmt_elf.c is different from the rest of the code that
processes siginfo, as it sends siginfo from a kernel buffer to a file
rather than from kernel memory to userspace buffers. To remove its
use of set_fs the code needs some different siginfo helpers.
Ad
Factor out a copy_siginfo_to_external32 helper from
copy_siginfo_to_user32 that fills out the compat_siginfo, but does so
on a kernel space data structure. With that we can let architectures
override copy_siginfo_to_user32 with their own implementations using
copy_siginfo_to_external32. That allo
Replace the coredump ->read method with a ->dump method that must call
dump_emit itself. That way we avoid a buffer allocation and messing with
set_fs() to call into code that is intended to deal with user buffers.
For the ->get case we can now use a small on-stack buffer and avoid
memory allocatio
Just use the proper non __-prefixed get/put_user variants where that is
not done yet.
Signed-off-by: Christoph Hellwig
Signed-off-by: Jeremy Kerr
Signed-off-by: Christoph Hellwig
---
arch/powerpc/platforms/cell/spufs/file.c | 42 +---
1 file changed, 8 insertions(+), 34 del
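For context, the general kernel convention behind this change: get_user()/put_user() perform the access_ok() check themselves, while the __-prefixed forms assume the caller already did. A hedged sketch (uptr is a hypothetical __user pointer):

	u32 val;

	/* checked variant: includes access_ok() on uptr */
	if (get_user(val, uptr))
		return -EFAULT;

	/* __get_user(val, uptr) would skip that check */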
From: Jeremy Kerr
Currently, we may perform a copy_to_user (through
simple_read_from_buffer()) while holding a context's register_lock,
while accessing the context save area.
This change uses a temporary buffer for the context save area data,
which we then pass to simple_read_from_buffer.
Inclu
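The pattern being described, in sketch form (ctx and the field are hypothetical; buf, len and pos are the usual file-read arguments, and the real change covers the whole context save area):

	u32 data;

	spin_lock(&ctx->register_lock);
	data = ctx->saved_reg;		/* copy while the lock is held */
	spin_unlock(&ctx->register_lock);

	/* the user copy happens only after the lock is dropped */
	return simple_read_from_buffer(buf, len, pos, &data, sizeof(data));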
Hi all,
this series gets rid of playing with the address limit in the exec and
coredump code. Most of this was fairly trivial, the biggest changes are
those to the spufs coredump code.
Changes since v5:
- fix uaccess under spinlock in spufs (Jeremy)
- remove use of access_ok in spufs
Changes
Hi Prakhar,
On Mon, May 04, 2020 at 01:38:27PM -0700, Prakhar Srivastava wrote:
> IMA during kexec(kexec file load) verifies the kernel signature and measures
> the signature of the kernel. The signature in the logs can be used to verify
> the
> authenticity of the kernel. The logs do not get c
On 5/4/20 11:37 PM, David Rientjes wrote:
On Mon, 4 May 2020, Emanuele Giuseppe Esposito wrote:
In this patch series I introduce statsfs, a synthetic ram-based virtual
filesystem that takes care of gathering and displaying statistics for the
Linux kernel subsystems.
This is exciting, we
Hi Tianjia,
On 2020-04-27 05:35, Tianjia Zhang wrote:
In the current kvm version, 'kvm_run' has been included in the
'kvm_vcpu'
structure. For historical reasons, many kvm-related function parameters
retain the 'kvm_run' and 'kvm_vcpu' parameters at the same time. This
patch does a unified clea
On Tue, May 5, 2020 at 5:15 PM Alistair Popple wrote:
>
> Hmm, I was hoping to add a tested by but I'm seeing the following failure in
> Mambo:
>
> [1.475459] feature-fixups: test failed at line 730
>
> Based on the name of the test it looks like you probably made a copy/paste
> error in ftr_f
On Tue, May 5, 2020 at 5:08 PM Michael Ellerman wrote:
>
> Jordan Niethe writes:
> > A modulo operation is used for calculating the current offset from a
> > breakpoint within the breakpoint table. As instruction lengths are
> > always a power of 2, this can be replaced with a bitwise 'and'. The
On Tue, May 05, 2020 at 05:20:54PM +1000, Michael Ellerman wrote:
> Christoph Hellwig writes:
> > powerpc maintainers,
>
> There's only one of me.
>
> > are you going to pick this up for the next -rc1? I'm waiting for it to
> > hit upstream before resending the coredump series.
>
> I thought yo
Christoph Hellwig writes:
> powerpc maintainers,
There's only one of me.
> are you going to pick this up for the next -rc1? I'm waiting for it to
> hit upstream before resending the coredump series.
I thought you were going to take it in your series.
Otherwise you'll be waiting 4 or more week
MADV_DONTNEED holds mmap_sem in read mode, which implies a parallel
page fault is possible and the kernel can end up with a level 1 PTE
entry (a THP entry) converted to a level 0 PTE entry without flushing
the THP TLB entry.
Most architectures including POWER have issues with kernel instantiating
We will use this in a later patch to do a TLB flush when clearing pmd entries.
Cc: kir...@shutemov.name
Cc: a...@linux-foundation.org
Signed-off-by: Aneesh Kumar K.V
---
arch/s390/include/asm/pgtable.h | 4 ++--
include/asm-generic/pgtable.h | 4 ++--
mm/huge_memory.c| 4 ++--
3 fi
Now that all the lockless page table walks are careful w.r.t. the PTE
address returned, we can revert
commit: 13bd817bb884 ("powerpc/thp: Serialize pmd clear against a linux page
table walk.")
We also drop the equivalent IPI from other pte update routines. We still keep
IPI in hash pmdp collaps
This adds a _PAGE_PTE check and makes sure we validate the pte value returned via
find_kvm_host_pte.
NOTE: this also considers _PAGE_INVALID to the software valid bit.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/kvm_book3s_64.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_hv_rm_mmu.c | 32 ++---
1 file changed, 11 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 83e987fecf97..3b168c69d503 100644
--- a/arch
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c
b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 70c4025406d8..271f1c3d8443 100644
--- a/arch/powerpc/kvm/bo
We now depend on kvm->mmu_lock
Cc: Alexey Kardashevskiy
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_64_vio_hv.c | 38 +++--
1 file changed, 9 insertions(+), 29 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c
b/arch/powerpc/kvm/book3s_64_vi
Current code just holds the rmap lock to ensure a parallel page table update is
prevented. That is not sufficient. The kernel should also check whether
a mmu_notifier callback was running in parallel.
Cc: Alexey Kardashevskiy
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_64_vio_hv.c | 30