On Tue, May 09, 2023 at 09:39:13PM -0700, Hugh Dickins wrote:
> Two: pte_offset_map() will need to do an rcu_read_lock(), with the
> corresponding rcu_read_unlock() in pte_unmap(). But most architectures
> never supported CONFIG_HIGHPTE, so some don't always call pte_unmap()
> after pte_offset_map
To keep balance in future, remember to pte_unmap() after a successful
pte_offset_map(). And (might as well) pretend that get_pte_for_vaddr()
really needed a map there, to read the pteval before "unmapping".
Signed-off-by: Hugh Dickins
---
arch/xtensa/mm/tlb.c | 5 -
1 file changed, 4 insert
sme_populate_pgd() is an __init function for sme_encrypt_kernel():
it should use pte_offset_kernel() instead of pte_offset_map(), to avoid
the question of whether a pte_unmap() will be needed to balance.
Signed-off-by: Hugh Dickins
---
arch/x86/mm/mem_encrypt_identity.c | 2 +-
1 file changed, 1
In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.
Signed-off-by: Hugh Dickins
---
arch/x86/kernel/ldt.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/ldt.c b/arch
iounit_alloc() and sbus_iommu_alloc() are working from pmd_off_k(),
so should use pte_offset_kernel() instead of pte_offset_map(), to avoid
the question of whether a pte_unmap() will be needed to balance.
Signed-off-by: Hugh Dickins
---
arch/sparc/mm/io-unit.c | 2 +-
arch/sparc/mm/iommu.c | 2
In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.
Signed-off-by: Hugh Dickins
---
arch/sparc/kernel/signal32.c | 2 ++
arch/sparc/mm/fault_64.c | 3 +++
arch/sparc/mm/tlb.c | 2 ++
3 files chan
pte_alloc_map() expects to be followed by pte_unmap(), but hugetlb omits
that: to keep balance in future, use the recently added pte_alloc_huge()
instead; with pte_offset_huge() a better name for pte_offset_kernel().
Signed-off-by: Hugh Dickins
---
arch/sparc/mm/hugetlbpage.c | 4 ++--
1 file ch
pte_alloc_map() expects to be followed by pte_unmap(), but hugetlb omits
that: to keep balance in future, use the recently added pte_alloc_huge()
instead; with pte_offset_huge() a better name for pte_offset_kernel().
Signed-off-by: Hugh Dickins
---
arch/sh/mm/hugetlbpage.c | 4 ++--
1 file chang
pte_alloc_map_lock() expects to be followed by pte_unmap_unlock(): to
keep balance in future, pass ptep as well as ptl to gmap_pte_op_end(),
and use pte_unmap_unlock() instead of direct spin_unlock() (even though
ptep ends up unused inside the macro).
Signed-off-by: Hugh Dickins
---
arch/s390/mm
In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.
Signed-off-by: Hugh Dickins
---
arch/s390/kernel/uv.c | 2 ++
arch/s390/mm/gmap.c | 2 ++
arch/s390/mm/pgtable.c | 12 +---
3 files changed, 1
pte_alloc_map() expects to be followed by pte_unmap(), but hugetlb omits
that: to keep balance in future, use the recently added pte_alloc_huge()
instead; with pte_offset_huge() a better name for pte_offset_kernel().
Signed-off-by: Hugh Dickins
---
arch/riscv/mm/hugetlbpage.c | 4 ++--
1 file ch
pte_alloc_map() expects to be followed by pte_unmap(), but hugetlb omits
that: to keep balance in future, use the recently added pte_alloc_huge()
instead. huge_pte_offset() is using __find_linux_pte(), which is using
pte_offset_kernel() - don't rename that to _huge, it's more complicated.
Signed-
In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.
Balance successful pte_offset_map() with pte_unmap() where omitted.
Signed-off-by: Hugh Dickins
---
arch/powerpc/mm/book3s64/hash_tlb.c | 4
arch/p
Make kvmppc_unmap_free_pmd() use pte_offset_kernel(), like everywhere else
in book3s_64_mmu_radix.c, instead of pte_offset_map(), which will come
to need a pte_unmap() to balance it.
But note that this is a more complex case than most: see those -EAGAINs
in kvmppc_create_pte(), which is coping with kvm
pte_alloc_map() expects to be followed by pte_unmap(), but hugetlb omits
that: to keep balance in future, use the recently added pte_alloc_huge()
instead; with pte_offset_huge() a better name for pte_offset_kernel().
Signed-off-by: Hugh Dickins
---
arch/parisc/mm/hugetlbpage.c | 4 ++--
1 file c
unmap_uncached_pte() is working from pgd_offset_k(vaddr), so it should
use pte_offset_kernel() instead of pte_offset_map(), to avoid the
question of whether a pte_unmap() will be needed to balance.
Signed-off-by: Hugh Dickins
---
arch/parisc/kernel/pci-dma.c | 2 +-
1 file changed, 1 insertion(+
To keep balance in future, remember to pte_unmap() after a successful
get_ptep(). And (we might as well) pretend that flush_cache_pages()
really needed a map there, to read the pfn before "unmapping".
Signed-off-by: Hugh Dickins
---
arch/parisc/kernel/cache.c | 26 +-
1
Don't make update_mmu_cache() a wrapper around __update_tlb(): call it
directly, and use the ptep (or pmdp) provided by the caller, instead of
re-calling pte_offset_map() - which would raise a question of whether a
pte_unmap() is needed to balance it.
Check whether the "ptep" provided by the calle
In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.
Signed-off-by: Hugh Dickins
---
arch/microblaze/kernel/signal.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/microblaze/ker
In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.
Restructure cf_tlb_miss() with a pte_unmap() (previously omitted)
at label out, followed by one local_irq_restore() for all.
Signed-off-by: Hugh Dickins
---
pte_alloc_map() expects to be followed by pte_unmap(), but hugetlb omits
that: to keep balance in future, use the recently added pte_alloc_huge()
instead; with pte_offset_huge() a better name for pte_offset_kernel().
Signed-off-by: Hugh Dickins
---
arch/ia64/mm/hugetlbpage.c | 4 ++--
1 file cha
pte_alloc_map() expects to be followed by pte_unmap(), but hugetlb omits
that: to keep balance in future, use the recently added pte_alloc_huge()
instead; with pte_offset_huge() a better name for pte_offset_kernel().
Signed-off-by: Hugh Dickins
---
arch/arm64/mm/hugetlbpage.c | 11 ++-
1
In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.
Signed-off-by: Hugh Dickins
---
arch/arm64/mm/fault.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
i
In rare transient cases, not yet made possible, pte_offset_map() and
pte_offset_map_lock() may not find a page table: handle appropriately.
Signed-off-by: Hugh Dickins
---
arch/arm/lib/uaccess_with_memcpy.c | 3 +++
arch/arm/mm/fault-armv.c | 5 -
arch/arm/mm/fault.c
Here is a series of patches to various architectures, based on v6.4-rc1:
preparing for changes expected to follow in mm, affecting pte_offset_map()
and pte_offset_map_lock().
In a week or two, I intend to post a separate series, of equivalent
preparations in mm. These two series are "independent"
The kopald thread handles opal events as they appear, but by polling a
static bit-vector in last_outstanding_events. Annotate these data races
accordingly. We are not at risk of missing events, but use of READ_ONCE,
WRITE_ONCE will assist readers in seeing that kopald only consumes the
events it is
KCSAN revealed that while irq_data entries are written to either from
behind a mutex, or otherwise atomically, accesses to irq_data->hwirq can
occur asynchronously, without volatile annotation. Mark these accesses
with READ_ONCE to avoid unfortunate compiler reorderings and remove
KCSAN warnings.
Checks to see if the [H]SRR registers have been clobbered by (soft)
NMI interrupts imply the possibility for a data race on the
[h]srr_valid entries in the PACA. Annotate accesses to these fields with
READ_ONCE, removing the need for the barrier.
The diagnostic can use plain-access reads and write
Mark writes to hypervisor ipi state so that KCSAN recognises the
asynchronous issue of kvmppc_{set,clear}_host_ipi as intended, with
atomic writes. Mark asynchronous polls of this variable in
kvm_ppc_read_one_intr().
Signed-off-by: Rohan McLure
---
v2: Add read-side annotations to both polli
Prior to this patch, data races are detectable by KCSAN of the following
forms:
[1] Asynchronous calls to mmiowb_set_pending() from an interrupt context
or otherwise outside of a critical section
[2] Interrupted critical sections, where the interrupt will itself
acquire a lock
In case [1]
IPI message flags are observed and consequently consumed in the
smp_ipi_demux_relaxed function, which handles these message sources
until it observes that no more are arriving. Mark the checked loop guard
with READ_ONCE, to signal to KCSAN that the read is known to be volatile,
and that non-determinism is
The idle_state entry in the PACA on PowerNV features a bit which is
atomically tested and set through ldarx/stdcx. to be used as a spinlock.
This lock then guards access to other bit fields of idle_state. KCSAN
cannot differentiate between any of these bitfield accesses as they all
are implemented
The power_save callback can be overwritten by another core at boot time.
Specifically, null values will be replaced exactly once with the callback
suitable for the particular platform (PowerNV / pseries lpars), making
this value a good candidate for __ro_after_init.
Even with this the case, KCSAN
The powerpc implementation of qspinlocks will both poll and spin on the
bitlock guarding a qnode. Mark these accesses with READ_ONCE to convey
to KCSAN that polling is intentional here.
Signed-off-by: Rohan McLure
Reviewed-by: Nicholas Piggin
---
arch/powerpc/lib/qspinlock.c | 4 ++--
1 file cha
The opal-async.c unit contains code for polling event sources, which
implies intentional data races. Ensure that the compiler will atomically
access such variables by means of {READ,WRITE}_ONCE calls, which in turn
inform KCSAN that polling behaviour is intended.
Signed-off-by: Rohan McLure
---
Annotate the release barrier and memory clobber (in effect, producing a
compiler barrier) in the publish_tail_cpu call. These barriers have the
effect of ensuring that qnode attributes are all written to prior to
publishing the node to the waitqueue.
Even while the initial write to the 'locked' attri
v1 of this patch series available here:
Link:
https://lore.kernel.org/linuxppc-dev/20230508020120.218494-1-rmcl...@linux.ibm.com/
The KCSAN sanitiser notifies programmers of instances where unmarked
accesses to shared state have led to a data race, or when the compiler
has liberty to reorder an u
Hi Tejun,
On Tue, 9 May 2023 05:33:02 -1000 Tejun Heo wrote:
>
> On Tue, May 09, 2023 at 05:09:43PM +1000, Michael Ellerman wrote:
> > Stephen Rothwell writes:
> > > Hi all,
> > >
> > > Today's qemu test boot (powerpc pseries_le_defconfig) produced this
> > > warning:
> > >
> > > [2.048588
> On 9 May 2023, at 12:26 pm, Nicholas Piggin wrote:
>
> On Mon May 8, 2023 at 12:01 PM AEST, Rohan McLure wrote:
>> The idle_state entry in the PACA on PowerNV features a bit which is
>> atomically tested and set through ldarx/stdcx. to be used as a spinlock.
>> This lock then guards access to o
As of now, in tce_freemulti_pSeriesLP(), there is no limit on how many TCEs
are passed to the H_STUFF_TCE hcall. PAPR requires this to be limited to
512 TCEs.
Signed-off-by: Gaurav Batra
Reviewed-by: Brian King
---
arch/powerpc/platforms/pseries/iommu.c | 12 ++--
1 file changed, 10 ins
On Mon, Mar 6, 2023 at 11:01 AM David Matlack wrote:
>
> This series refactors the KVM stats macros to reduce duplication and
> adds the support for choosing custom names for stats.
Hi Paolo,
I just wanted to double-check if this series is on your radar
(probably for 6.5)?
On 5/9/23 21:07, Andreas Schwab wrote:
That does not work with UEFI booting:
Loading Linux 6.4.0-rc1-1.g668187d-default ...
Loading initial ramdisk ...
Unhandled exception: Instruction access fault
EPC: 80016d56 RA: 8020334e TVAL: 007f80016d56
EPC: 002d1d56 RA: 00
On Mon, May 08, 2023 at 09:45:59PM +, Frank Li wrote:
> > > > Subject: [EXT] Re: [PATCH v2 1/1] PCI: layerscape: Add the endpoint
> > linkup
> > > > notifier support
> >
> > All these quoted headers are redundant clutter since we've already
> > seen them when Manivannan sent his comments. It
That does not work with UEFI booting:
Loading Linux 6.4.0-rc1-1.g668187d-default ...
Loading initial ramdisk ...
Unhandled exception: Instruction access fault
EPC: 80016d56 RA: 8020334e TVAL: 007f80016d56
EPC: 002d1d56 RA: 004be34e reloc adjusted
Unhandled excep
On Tue, Apr 04, 2023 at 11:11:01AM -0500, Bjorn Helgaas wrote:
> On Thu, Mar 30, 2023 at 07:24:27PM +0300, Andy Shevchenko wrote:
> > Provide two new helper macros to iterate over PCI device resources and
> > convert users.
> Applied 2-7 to pci/resource for v6.4, thanks, I really like this!
This
Nicholas Piggin wrote:
-mprofile-kernel is an optimised calling convention for mcount that
Linux has only implemented with the ELFv2 ABI, so it was disabled for
big endian kernels. However it does work with ELFv2 big endian, so let's
allow that if the compiler supports it.
Cc: Naveen N. Rao
Su
On Tue, May 09, 2023 at 05:09:43PM +1000, Michael Ellerman wrote:
> Stephen Rothwell writes:
> > Hi all,
> >
> > Today's qemu test boot (powerpc pseries_le_defconfig) produced this
> > warning:
> >
> > [2.048588][T1] ipr: IBM Power RAID SCSI Device Driver version:
> > 2.6.4 (March 14, 201
On 5/9/23 09:00, Vinod Koul wrote:
> On 08-05-23, 11:31, Sean Anderson wrote:
>> On 5/8/23 05:15, Vinod Koul wrote:
>
>> >> +int lynx_clks_init(struct device *dev, struct regmap *regmap,
>> >> +struct clk *plls[2], struct clk *ex_dlys[2], bool compat);
>> >
>> > so you have an exporte
On 08-05-23, 11:31, Sean Anderson wrote:
> On 5/8/23 05:15, Vinod Koul wrote:
> >> +int lynx_clks_init(struct device *dev, struct regmap *regmap,
> >> + struct clk *plls[2], struct clk *ex_dlys[2], bool compat);
> >
> > so you have an exported symbol for clk driver init in phy driver
On Tue, May 9, 2023, at 09:05, Tiezhu Yang wrote:
> Now we specify the minimal version of GCC as 5.1 and Clang/LLVM as 11.0.0
> in Documentation/process/changes.rst, __CHAR_BIT__ and __SIZEOF_LONG__ are
> usable, just define __BITS_PER_LONG as (__CHAR_BIT__ * __SIZEOF_LONG__) in
> asm-generic uapi
https://bugzilla.kernel.org/show_bug.cgi?id=217350
Coiby (coiby...@gmail.com) changed:
What    | Removed | Added
CC      |         | c...@kaod.org,
When JUMP_LABEL=n, the tracepoint refcount test in the pre-call stores
the refcount value to the stack, so the same value can be used for the
post-call (presumably to avoid racing with the value concurrently
changing).
On little-endian (ELFv2) that might have just worked by luck, because
32(r1) is
With JUMP_LABEL=n, hcall_tracepoint_refcount's address is being tested
instead of its value. This results in the tracing slowpath always being
taken unnecessarily.
Fixes: 9a10ccb29c0a2 ("powerpc/pseries: move hcall_tracepoint_refcount out of
.toc")
Signed-off-by: Nicholas Piggin
---
arch/powerp
On Fri May 5, 2023 at 6:49 PM AEST, Christophe Leroy wrote:
>
>
> Le 05/05/2023 à 09:18, Nicholas Piggin a écrit :
> > User code must still support ELFv1, e.g., see is_elf2_task().
> >
> > This one should wait a while until ELFv2 fallout settles, so
> > just posting it out of interest.
>
> Can't E
Hello,
On Thu, Apr 13, 2023 at 08:16:42AM +0200, Uwe Kleine-König wrote:
> While mpc5200b.dtsi contains a device that this driver can bind to, the
> only purpose of a bound device is to be used by the four exported functions
> mpc52xx_lpbfifo_submit(), mpc52xx_lpbfifo_abort(), mpc52xx_lpbfifo_poll
Li Yang writes:
> On Mon, May 8, 2023 at 8:44 AM Uwe Kleine-König
> wrote:
>>
>> Hello Leo,
>>
>> On Thu, Apr 13, 2023 at 08:00:04AM +0200, Uwe Kleine-König wrote:
>> > On Wed, Apr 12, 2023 at 09:30:05PM +, Leo Li wrote:
>> > > > On Fri, Mar 10, 2023 at 11:41:22PM +0100, Uwe Kleine-König wrot
On Tuesday 09 May 2023 17:14:48 Michael Ellerman wrote:
> Randy Dunlap writes:
> > Hi--
> >
> > Just a heads up. This patch can cause build errors.
> > I sent a patch for these on 2023-APR-28:
> >
> > https://lore.kernel.org/linuxppc-dev/20230429043519.19807-1-rdun...@infradead.org/
> >
> > Mic
Randy Dunlap writes:
> Hi--
>
> Just a heads up. This patch can cause build errors.
> I sent a patch for these on 2023-APR-28:
>
> https://lore.kernel.org/linuxppc-dev/20230429043519.19807-1-rdun...@infradead.org/
>
> Michael, I think this is your area if I'm not mistaken.
Yes. The fix is in m
Stephen Rothwell writes:
> Hi all,
>
> Today's qemu test boot (powerpc pseries_le_defconfig) produced this
> warning:
>
> [2.048588][T1] ipr: IBM Power RAID SCSI Device Driver version: 2.6.4
> (March 14, 2017)
> [2.051560][T1] [ cut here ]
> [2.052297][
Now that we specify the minimum version of GCC as 5.1 and Clang/LLVM as
11.0.0 in Documentation/process/changes.rst, __CHAR_BIT__ and
__SIZEOF_LONG__ are usable: just define __BITS_PER_LONG as
(__CHAR_BIT__ * __SIZEOF_LONG__) in the asm-generic uapi bitsperlong.h.
This is simpler and works everywhere.
Remove all the
Hi Arnd,
On Mon, Mar 6, 2023 at 11:05 AM Alexandre Ghiti wrote:
>
> This all came up in the context of increasing COMMAND_LINE_SIZE in the
> RISC-V port. In theory that's a UABI break, as COMMAND_LINE_SIZE is the
> maximum length of /proc/cmdline and userspace could staticly rely on
> that to be