On 15/10/2021 04:23, Nicholas Piggin wrote:
Excerpts from Laurent Vivier's message of October 13, 2021 7:30 pm:
On 13/10/2021 01:18, Michael Ellerman wrote:
Laurent Vivier writes:
Commit 112665286d08 moved guest_exit() into the interrupt-protected
area to avoid a wrong-context warning (or worse),
Excerpts from Fabiano Rosas's message of October 16, 2021 11:45 pm:
> Nicholas Piggin writes:
>
>> This reduces the number of mtmsrd required to enable facility bits when
>> saving/restoring registers, by having the KVM code set all bits up front
>> rather than using individual facility functions
Excerpts from Fabiano Rosas's message of October 16, 2021 10:38 pm:
> Nicholas Piggin writes:
>
>> Provide a config option that controls the workaround added by commit
>> 63279eeb7f93 ("KVM: PPC: Book3S HV: Always save guest pmu for guest
>> capable of nesting"). The option defaults to y for now,
Excerpts from Christophe Leroy's message of October 19, 2021 6:05 pm:
>
>
> On 15/10/2021 at 17:46, Nicholas Piggin wrote:
>> Introduce a new option CONFIG_PPC_64S_HASH_MMU which allows the 64s hash
>> MMU code to be compiled out if radix is selected and the minimum
>> supported CPU type is POW
Excerpts from Christophe Leroy's message of October 19, 2021 3:09 am:
>
>
> On 15/10/2021 at 17:46, Nicholas Piggin wrote:
>> slb.c is hash-specific SLB management, but do_bad_slb_fault deals with
>> segment interrupts that occur with radix MMU as well.
>> ---
>> arch/powerpc/include/asm/inte
On Mon, Oct 18, 2021 at 10:07 PM wrote:
>
> From: Meng Li
>
> In the original code, the two functions spin_lock() and local_irq_save()
> are used to protect the critical section. But when the kernel debug config
> is enabled, the inconsistent lock state below is detected.
>
> WARNING
On Mon, Oct 18, 2021 at 9:46 PM wrote:
>
> From: Meng Li
>
> When debug kernel configs are enabled, there is a call trace as below:
>
> BUG: using smp_processor_id() in preemptible [] code: swapper/0/1
> caller is debug_smp_processor_id+0x20/0x30
> CPU: 6 PID: 1 Comm: swapper/0 Not tainted 5.
On 10/19/21 2:36 PM, Nathan Lynch wrote:
> Hi Tyrel, thanks for the detailed review.
>
> Tyrel Datwyler writes:
>> On 10/18/21 9:34 AM, Nathan Lynch wrote:
>>> On VMs with NX encryption, compression, and/or RNG offload, these
>>> capabilities are described by nodes in the ibm,platform-facilities
On Mon, Oct 04, 2021 at 11:29:27PM +0530, Naveen Naidu wrote:
> The (PCIe r5.0, sec 7.6.4.3, Table 7-101) and (PCIe r5.0, sec 7.8.4.6,
> Table 7-104)
s/7.6.4.3/7.8.4.3/
Cite it like this:
Per PCIe r5.0, sec 7.8.4.3 and sec 7.8.4.6, the default values ...
> states that the default values for
Laurent Dufour writes:
> On 19/10/2021 at 00:37, Tyrel Datwyler wrote:
>> On 10/18/21 9:34 AM, Nathan Lynch wrote:
>> The reality is that '/ibm,platform-facilities' and 'cache' nodes are the only
>> LPM scoped device tree nodes that allow node delete/add. So, as a one-off
>> workaround to deal w
Hi Tyrel, thanks for the detailed review.
Tyrel Datwyler writes:
> On 10/18/21 9:34 AM, Nathan Lynch wrote:
>> On VMs with NX encryption, compression, and/or RNG offload, these
>> capabilities are described by nodes in the ibm,platform-facilities device
>> tree hierarchy:
>>
>> $ tree -d /sys/
On Tue, Oct 19, 2021 at 06:19:25AM -0600, Tim Gardner wrote:
> Coverity complains of unsigned compare against 0. There are 2 cases in
> this function:
>
> 1821  itp = (irq_holdoff * 1000) / p->desc->qman_256_cycles_per_ns;
>
> CID 121131 (#1 of 1): Unsigned compared against 0 (NO_EFFECT)
>
On Mon, Oct 18, 2021 at 05:19:46PM +0530, Athira Rajeev wrote:
> Patch set adds PMU registers namely Sampled Instruction Address Register
> (SIAR) and Sampled Data Address Register (SDAR) as part of extended regs
> in PowerPC. These registers provide the instruction/data address and
> adding th
On Fri, Oct 15, 2021 at 05:13:48PM -0700, Dan Williams wrote:
> On Fri, Oct 15, 2021 at 4:53 PM Luis Chamberlain wrote:
> >
> > If nd_integrity_init() fails we'd get del_gendisk() called,
> > but that's not correct as we should only call that if we're
> > done with device_add_disk(). Fix this by p
On Tue, 19 Oct 2021 08:41:23 +0200
Petr Mladek wrote:
> Feel free to postpone this change. I do not want to complicate
> upstreaming the fix for stable. I am sorry if I already
> complicated it.
>
No problem. It's not that complicated of a merge fix. I'm sure Linus can
handle it ;-)
-- Steve
Hi Scott,
On 23/10/2015 at 05:54, Scott Wood wrote:
Use an AS=1 trampoline TLB entry to allow all normal TLB1 entries to
be loaded at once. This avoids the need to keep the translation that
code is executing from in the same TLB entry in the final TLB
configuration as during early boot, which
Coverity complains of unsigned compare against 0. There are 2 cases in
this function:
1821  itp = (irq_holdoff * 1000) / p->desc->qman_256_cycles_per_ns;
CID 121131 (#1 of 1): Unsigned compared against 0 (NO_EFFECT)
unsigned_compare: This less-than-zero comparison of an unsigned value is ne
On Tue, Oct 19, 2021 at 12:20:01PM +0200, Christophe Leroy wrote:
> Add LANG=C at the beginning of the wrapper script in order to get the
> output expected by the script:
This doesn't help if any LC_* are set. Use LC_ALL=C to override all user
choices.
Segher
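Segher's point follows from POSIX locale precedence: LC_ALL overrides every LC_* category, which in turn overrides LANG, so LANG=C alone cannot beat a user-set LC_* variable. A quick check (assumes a system with the `locale` utility; the de_DE/fr_FR values are arbitrary and need not be installed):

```shell
# LANG only supplies a default; a user-set LC_TIME would win over LANG=C.
# LC_ALL=C overrides both, which is why the wrapper script should use it.
LC_ALL=C LANG=de_DE.UTF-8 LC_TIME=fr_FR.UTF-8 locale | grep '^LC_TIME'
```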
On Mon 2021-10-18 22:02:03, Steven Rostedt wrote:
> On Mon, 18 Oct 2021 12:19:20 +0200
> Petr Mladek wrote:
>
> > > -
> > > bit = trace_get_context_bit() + start;
> > > if (unlikely(val & (1 << bit))) {
> > > /*
> > >* It could be that preempt_count has not been updated
While trying to build a simple Image for the ACADIA platform, I got the
following error:
WRAP    arch/powerpc/boot/simpleImage.acadia
INFO: Uncompressed kernel (size 0x6ae7d0) overlaps the address of the
wrapper(0x40)
INFO: Fixing the link_address of wrapper to (0x70
>
> Subject: [PATCH][linux-next] soc: fsl: dpio: Unsigned compared against 0 in
>
> Coverity complains of unsigned compare against 0. There are 2 cases in
> this function:
>
> 1821  itp = (irq_holdoff * 1000) / p->desc->qman_256_cycles_per_ns;
>
> CID 121131 (#1 of 1): Unsigned compared a
On 19/10/2021 at 00:37, Tyrel Datwyler wrote:
On 10/18/21 9:34 AM, Nathan Lynch wrote:
On VMs with NX encryption, compression, and/or RNG offload, these
capabilities are described by nodes in the ibm,platform-facilities device
tree hierarchy:
$ tree -d /sys/firmware/devicetree/base/ibm,pla
On 15/10/2021 at 17:46, Nicholas Piggin wrote:
Introduce a new option CONFIG_PPC_64S_HASH_MMU which allows the 64s hash
MMU code to be compiled out if radix is selected and the minimum
supported CPU type is POWER9 or higher, and KVM is not selected.
This saves 128kB kernel image size (90kB
On booke/40x we don't have segments like book3s/32.
On booke/40x we don't have access protection groups like 8xx.
Use the PID register to provide user access protection.
Kernel address space can be accessed with any PID.
User address space has to be accessed with the PID of the user.
User PID is a
This adds KUAP support to 85xx in 32 bits mode.
This is done by reading the content of SPRN_MAS1 and checking
the TID at the time user pgtable is loaded.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_fsl_booke.S | 12
arch/powerpc/platforms/Kconfig.cputype | 1 +
2
We have many functionalities common to 40x and BOOKE, which leads to
many places with #if defined(CONFIG_BOOKE) || defined(CONFIG_40x).
We are going to add a few more with KUAP for booke/40x, so create
a new symbol which is defined when either BOOKE or 40x is defined.
Signed-off-by: Christophe Lero
Make the following functions generic to all platforms.
- bad_kuap_fault()
- kuap_assert_locked()
- kuap_save_and_lock() (PPC32 only)
- kuap_kernel_restore()
- kuap_get_and_assert_locked()
And for all platforms except book3s/64
- allow_user_access()
- prevent_user_access()
- prevent_user_access_ret
This adds KUAP support to book3e/64.
This is done by reading the content of SPRN_MAS1 and checking
the TID at the time user pgtable is loaded.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/nohash/tlb_low_64e.S | 40 ++
arch/powerpc/platforms/Kconfig.cputype | 1 +
This adds KUAP support to 44x. This is done by checking
the content of SPRN_PID at the time it is read and written
into SPRN_MMUCR.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_44x.S | 16
arch/powerpc/platforms/Kconfig.cputype | 1 +
2 files changed, 17
In order to reuse it on booke/4xx, move the KUAP
setup routine out of 8xx.c.
Make it usable on SMP by removing the __init tag,
as it is called for each CPU.
And use __prevent_user_access() instead of hard
coding initial lock.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/nohash/8xx.c| 21
On 44x, KUEP is implemented by clearing the SX bit during TLB miss
for user pages. The impact is minimal and worth neither
boot-time nor build-time selection.
Activate it at all times.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-44x.h | 1 -
arch/powerpc/kernel/he
This adds KUAP support to 40x. This is done by checking
the content of SPRN_PID at the time user pgtable is loaded.
40x doesn't have KUEP, but KUAP implies KUEP because when the
PID doesn't match the page's PID, the page can be neither read
nor executed.
So KUEP is now automatically selected when KUAP
Also call kuap_lock() and kuap_save_and_lock() from
interrupt functions with CONFIG_PPC64.
For book3s/64 we keep them empty as it is done in assembly.
Also do the locked assert when switching task unless it is
book3s/64.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/64/ku
All platforms now have KUAP and KUEP so remove CONFIG_PPC_HAVE_KUAP
and CONFIG_PPC_HAVE_KUEP.
Signed-off-by: Christophe Leroy
---
arch/powerpc/platforms/Kconfig.cputype | 21 -
1 file changed, 21 deletions(-)
diff --git a/arch/powerpc/platforms/Kconfig.cputype
b/arch/powerp
PPC_KUAP_DEBUG is supported by all platforms doing PPC_KUAP;
it doesn't depend on Radix on book3s/64.
This will avoid adding one more dependency when implementing
KUAP on book3e/64.
Signed-off-by: Christophe Leroy
---
v2: New
---
arch/powerpc/platforms/Kconfig.cputype | 2 +-
1 file changed, 1
Today, every platform checks that KUAP is not deactivated
before doing the real job.
Move the verification out of platform specific functions.
Signed-off-by: Christophe Leroy
---
v2: Added missing check in bad_kuap_fault()
---
arch/powerpc/include/asm/book3s/32/kup.h | 34 +++-
This reverts commit 1791ebd131c46539b024c0f2ebf12b6c88a265b9.
setup_kup() was inlined to manage conflict between PPC32 marking
setup_{kuap/kuep}() __init and PPC64 not marking them __init.
But in fact PPC32 has removed the __init mark for all but 8xx
in order to properly handle SMP.
In order to
__kuap_assert_locked() is redundant with
__kuap_get_and_assert_locked().
Move the verification of CONFIG_PPC_KUAP_DEBUG in kuap_assert_locked()
and make it call __kuap_get_and_assert_locked() directly.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/kup.h | 5 -
a
On the 8xx, there is absolutely no runtime impact with KUEP. Protection
against execution of user code in kernel mode is set up at boot time
by configuring the groups which contain all user pages as having swapped
protection rights, that is, EX for user and NA for supervisor.
Configure KUEP at st
When interrupt and syscall entries were converted to C, KUEP locking
and unlocking was also converted. It improved performance by unrolling
the loop, and allowed easily implementing boot time deactivation of
KUEP.
However, null_syscall selftest shows that KUEP is still heavy
(361 cycles with KUEP
Calling 'mfsr' to get the content of segment registers is heavy;
in addition it requires clearing of the 'reserved' bits.
In order to avoid this operation, save it in the mm context and in
the thread struct.
The saved sr0 is the one used by the kernel, which means that on
locking entry it can be used as is.
On book3e,
- When using 64 bits PTE: User pages don't have the SX bit defined
so KUEP is always active.
- When using 32 bits PTE: Implement KUEP by clearing SX bit during
TLB miss for user pages. The impact is minimal and worth neither
boot time nor build time selection.
Activate it at all time.
Disabling KUEP at boot time makes things unnecessarily complex.
Still allow disabling KUEP at build time, but when it's built-in
it is always there.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/kup.h | 3 +--
arch/powerpc/mm/book3s32/kuep.c | 10 ++
2
Add kuap_lock() and call it when entering interrupts from user.
It is called kuap_lock() as it is similar to kuap_save_and_lock()
without the save.
However, book3s/32 already has a kuap_lock(). Rename it
kuap_lock_addr().
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/k
Deactivating KUEP at boot time is irrelevant for PPC32 and BOOK3E/64.
Remove it.
This allows refactoring setup_kuep() via a __weak function
that only PPC64s will override for now.
Signed-off-by: Christophe Leroy
---
Documentation/admin-guide/kernel-parameters.txt | 2 +-
arch/powerpc/include/asm