On Mon, Mar 14, 2022 at 07:07:44PM -0400, Boris Ostrovsky wrote:
>> +swiotlb_init_remap(true, x86_swiotlb_flags, xen_swiotlb_fixup);
>
>
> I think we need to have SWIOTLB_ANY set in x86_swiotlb_flags here.
Yes.
> Notice that we don't do remap() after final update to nslabs. We should.
Indeed
On Mon, Mar 14, 2022 at 06:39:21PM -0400, Boris Ostrovsky wrote:
> This is IO_TLB_MIN_SLABS, isn't it? (Xen code didn't say so but that's what
> it meant to say I believe)
Yes, that makes much more sense. I've switched the patch to use
IO_TLB_MIN_SLABS and drop the 2MB comment in both places.
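For illustration, a minimal sketch of the fix agreed above, assuming the
x86_swiotlb_flags variable and xen_swiotlb_fixup callback from the patch
under review (a sketch, not the final code):

	/* Xen can remap the buffer anywhere, so don't restrict it to
	 * low memory: OR in SWIOTLB_ANY. */
	swiotlb_init_remap(true, x86_swiotlb_flags | SWIOTLB_ANY,
			   xen_swiotlb_fixup);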
Currently the aer_irq() handler returns IRQ_NONE when neither the
PCI_ERR_ROOT_UNCOR_RCV nor the PCI_ERR_ROOT_COR_RCV bit is set. But this
assumption is incorrect.
Consider a scenario where aer_irq() is triggered for a correctable
error, and while we process the error and before we clear the error
status
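A minimal sketch of the handler pattern being questioned; read_root_status()
is a hypothetical stand-in for the register read in the real
drivers/pci/pcie/aer.c handler:

	#include <linux/interrupt.h>	/* irqreturn_t, IRQ_NONE, IRQ_HANDLED */
	#include <uapi/linux/pci_regs.h>	/* PCI_ERR_ROOT_*_RCV */

	static irqreturn_t aer_irq_sketch(int irq, void *context)
	{
		u32 status = read_root_status(context);	/* hypothetical helper */

		/* If the error was already handled and its status cleared,
		 * neither bit is set and a valid interrupt is misreported
		 * as spurious. */
		if (!(status & (PCI_ERR_ROOT_UNCOR_RCV | PCI_ERR_ROOT_COR_RCV)))
			return IRQ_NONE;

		/* ... queue the error for processing, clear the status ... */
		return IRQ_HANDLED;
	}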
Hi all,
Today's linux-next merge of the tip tree got a conflict in:
arch/powerpc/include/asm/livepatch.h
between commit:
a4520b252765 ("powerpc/ftrace: Add support for livepatch to PPC32")
from the powerpc tree and commit:
a557abfd1a16 ("x86/livepatch: Validate __fentry__ location")
from the tip tree.
On Mon, 14 Mar 2022, Christoph Hellwig wrote:
> Reuse the generic swiotlb initialization for xen-swiotlb. For ARM/ARM64
> this works trivially, while for x86 xen_swiotlb_fixup needs to be passed
> as the remap argument to swiotlb_init_remap/swiotlb_init_late.
>
> Signed-off-by: Christoph Hellwig
On 3/10/22 16:57, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> The number of pkeys supported on x86 and powerpc is much smaller than a
> u16 value can hold. It is desirable to standardize on the type for
> pkeys. powerpc currently supports the most pkeys at 32. u8 is plenty
> large for th
On 3/14/22 3:31 AM, Christoph Hellwig wrote:
@@ -314,6 +293,7 @@ void __init swiotlb_init(bool addressing_limit, unsigned int flags)
int swiotlb_init_late(size_t size, gfp_t gfp_mask,
int (*remap)(void *tlb, unsigned long nslabs))
{
+ struct io_tlb_mem *mem = &io_tlb_
On 3/14/22 3:31 AM, Christoph Hellwig wrote:
-
static void __init pci_xen_swiotlb_init(void)
{
if (!xen_initial_domain() && !x86_swiotlb_enable)
return;
x86_swiotlb_enable = true;
- xen_swiotlb = true;
- xen_swiotlb_init_early();
+ swiotlb_i
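The truncated line is presumably the same swiotlb_init_remap() call quoted
earlier in this thread:

	swiotlb_init_remap(true, x86_swiotlb_flags, xen_swiotlb_fixup);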
On Sat, Mar 12, 2022 at 6:30 PM Christophe Leroy wrote:
>
> Hi Jordan
>
> On 10/11/2021 at 01:37, Jordan Niethe wrote:
> > From: "Christopher M. Riedl"
> >
> > Rework code-patching with STRICT_KERNEL_RWX to prepare for a later patch
> > which uses a temporary mm for patching under the Book3s64
On Wed, Feb 23, 2022 at 1:34 AM Christophe Leroy wrote:
>
>
>
> On 02/06/2020 at 07:27, Jordan Niethe wrote:
> > Currently prefixed instructions are treated as two-word instructions by
> > show_user_instructions(); treat them as a single instruction. '<' and
> > '>' are placed around the instruc
The kernel changes needed to add crash hotplug support for the kexec_load
system call are similar to those for kexec_file_load (which has already
been implemented in earlier patches). Since the kexec segment array is
prepared by the kexec tool in userspace, the kernel is not aware of which
index the FDT segme
Two major changes are made to enable the crash CPU hotplug handler.
Firstly, the kexec_load path is updated to prepare the kimage for hotplug
changes, and secondly, the crash hotplug handler itself is implemented.
On the kexec load path, the memsz allocation of the crash FDT segment is
updated to ensure that it has
The option CRASH_HOTPLUG enables in-kernel updates to kexec segments on
hotplug events.
All the updates needed on the capture kernel load path for both the
kexec_load and kexec_file_load system calls will be kept under this config.
Signed-off-by: Sourabh Jain
---
arch/powerpc/Kconfig | 11 +
Two new members, fdt_index and fdt_index_valid, are added to the kimage
struct to track the FDT kexec segment. These new members will help the
crash hotplug handler easily access the FDT segment in the kexec segment
array. Otherwise, we would have to loop through all kexec segments
to fin
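A sketch of the addition being described; the real struct kimage in
include/linux/kexec.h has many more members, and the #ifdef guard and
placement are assumptions:

	struct kimage {
		/* ... existing members ... */
	#ifdef CONFIG_CRASH_HOTPLUG
		int fdt_index;		/* slot of the FDT segment in the
					   kexec segment array */
		bool fdt_index_valid;	/* true once fdt_index is recorded */
	#endif
	};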
This patch series implements on PowerPC the crash hotplug handler introduced
by the patch series at https://lkml.org/lkml/2022/2/9/1406.
The Problem:
After hotplug/DLPAR events, the capture kernel holds stale information about
the system. Dump collection with a stale capture kernel might end up
Make the update_cpus_node function non-static and export it for use
in other kexec components.
The update_cpus_node definition is moved to core_64.c so that it
can be used with both kexec_load and kexec_file_load system calls.
Signed-off-by: Sourabh Jain
---
arch/powerpc/include/asm/kexec.h
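A sketch of the resulting interface, assuming update_cpus_node() takes the
FDT blob as its only argument:

	/* arch/powerpc/include/asm/kexec.h (sketch) */
	int update_cpus_node(void *fdt);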
On 3/14/22 3:31 AM, Christoph Hellwig wrote:
-void __init swiotlb_init(bool addressing_limit, unsigned int flags)
+void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
+ int (*remap)(void *tlb, unsigned long nslabs))
{
- size_t bytes = PAGE_ALIGN(defau
On 08/03/2022, 14:50:43, Nicholas Piggin wrote:
> Use the same calling and rets return convention with the raw rtas
> call rather than requiring callers to load and byteswap return
> values out of rtas_args.
>
> Signed-off-by: Nicholas Piggin
Despite a minor comment below
Reviewed-by: Laurent Du
On 08/03/2022, 14:50:42, Nicholas Piggin wrote:
> PAPR specifies that RTAS may be called with MSR[RI] enabled if the
> calling context is recoverable, and RTAS will manage RI as necessary.
> Call the rtas entry point with RI enabled, and add a check to ensure
> the caller has RI enabled.
>
> Signe
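Illustratively, the caller-side check described could look like the line
below; mfmsr() and MSR_RI are existing powerpc kernel symbols, but the
exact form and placement in rtas.c are assumptions:

	/* PAPR: RTAS may only be entered with RI set if the calling
	 * context is recoverable. */
	WARN_ON_ONCE(!(mfmsr() & MSR_RI));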
On 08/03/2022, 14:50:41, Nicholas Piggin wrote:
> This moves MSR save/restore and some real-mode juggling out of asm and
> into C code, simplifying things.
>
> Signed-off-by: Nicholas Piggin
> ---
> arch/powerpc/kernel/rtas.c | 15 ---
> arch/powerpc/kernel/rtas_entry.S | 32 ++
On 08/03/2022, 14:50:40, Nicholas Piggin wrote:
> On 64-bit, PACA is saved in a SPRG so it does not need to be saved on
> stack. We also don't need to mask off the top bits for real mode
> addresses because the architecture does this for us.
>
> Signed-off-by: Nicholas Piggin
Reviewed-by: Lauren
On 08/03/2022, 14:50:39, Nicholas Piggin wrote:
> Rather than adjust the current MSR value to find the rtas entry
> MSR on 64-bit, load the explicit value we want as 32-bit does.
>
> This prevents some facilities (e.g., VEC and VSX) from being left
> enabled which doesn't seem to cause a problem b
On 08/03/2022, 14:50:38, Nicholas Piggin wrote:
> mtmsrd L=1 can clear MSR[RI] without the previous MSR value; it does
> not require sync; it can be moved later to before SRRs are live.
>
> Signed-off-by: Nicholas Piggin
Reviewed-by: Laurent Dufour
> ---
> arch/powerpc/kernel/rtas_entry.S | 6
On 08/03/2022, 14:50:37, Nicholas Piggin wrote:
> Disable MSR[EE] in C code rather than asm.
>
> Signed-off-by: Nicholas Piggin
FWIW,
Reviewed-by: Laurent Dufour
> ---
> arch/powerpc/kernel/rtas.c | 4
> arch/powerpc/kernel/rtas_entry.S | 17 +
> 2 files changed, 5
On 3/13/22 07:59, Randy Dunlap wrote:
__setup() handlers should return 1 to obsolete_checksetup() in
init/main.c to indicate that the boot option has been handled.
A return of 0 causes the boot option/value to be listed as an Unknown
kernel parameter and added to init's (limited) argument or envi
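A minimal sketch of the convention Randy describes; the handler and option
names are made up:

	#include <linux/init.h>		/* __setup(), __init */

	static int __init example_setup(char *str)
	{
		/* ... parse str ... */
		return 1;	/* 1 = handled; 0 would list the option as an
				   Unknown kernel parameter and pass it on to
				   init's argument/environment strings */
	}
	__setup("example_opt=", example_setup);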
System.map shows that vmlinux contains several instances of
__static_call_return0():
c0004fc0 t __static_call_return0
c0011518 t __static_call_return0
c00d8160 t __static_call_return0
arch_static_call_transform() uses the middle one to check whether we are
setting a call t
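A sketch of why the duplicate copies matter, assuming the check is an
address comparison of the kind described (the real
arch_static_call_transform() logic differs in detail):

	static bool targets_ret0(void *func)
	{
		/* Only matches the one copy this comparison was linked
		 * against; calls set up against the other two copies
		 * listed above are missed. */
		return func == (void *)__static_call_return0;
	}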
Only DEFINE_STATIC_CALL uses the __DEFINE_STATIC_CALL macro now when
CONFIG_HAVE_STATIC_CALL is selected.
Only keep __DEFINE_STATIC_CALL() for the generic fallback, and
also use it to implement DEFINE_STATIC_CALL_NULL() in that case.
Signed-off-by: Christophe Leroy
---
include/linux/static_call.h |
When a static call is updated with __static_call_return0() as target,
arch_static_call_transform() sets it to use an optimised set of
instructions which are meant to lie in the same cacheline.
But when initialising a static call with DEFINE_STATIC_CALL_RET0(),
we get a branch to the real __static_c
<asm/dma-mapping.h> gets pulled in by all drivers using the DMA API.
Remove x86 internal variables and unnecessary includes from it.
Signed-off-by: Christoph Hellwig
---
arch/x86/include/asm/dma-mapping.h | 11 ---
arch/x86/include/asm/iommu.h | 2 ++
2 files changed, 2 insertions(+), 11 deletions(-
No users left.
Signed-off-by: Christoph Hellwig
---
include/linux/swiotlb.h | 2 -
kernel/dma/swiotlb.c | 85 +++--
2 files changed, 30 insertions(+), 57 deletions(-)
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 7b50c82f84ce9..7ed3
Reuse the generic swiotlb initialization for xen-swiotlb. For ARM/ARM64
this works trivially, while for x86 xen_swiotlb_fixup needs to be passed
as the remap argument to swiotlb_init_remap/swiotlb_init_late.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c | 21 +++---
arch
To share more code between swiotlb and xen-swiotlb, offer a
swiotlb_init_remap interface and add a remap callback to
swiotlb_init_late that will allow Xen to remap the buffer
without duplicating much of the logic.
Signed-off-by: Christoph Hellwig
---
arch/x86/pci/sta2x11-fixup.c | 2
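The resulting interface shape, with both signatures as quoted in the hunks
elsewhere in this digest:

	void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
				       int (*remap)(void *tlb, unsigned long nslabs));
	int swiotlb_init_late(size_t size, gfp_t gfp_mask,
			      int (*remap)(void *tlb, unsigned long nslabs));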
Let the caller choose a zone to allocate from. This will be used
later on by the xen-swiotlb initialization on arm.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
arch/x86/pci/sta2x11-fixup.c | 2 +-
include/linux/swiotlb.h | 2 +-
kernel/dma/swiotlb.c | 7 ++--
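A hedged usage sketch: the zone choice arrives as the gfp_mask argument of
swiotlb_init_late(); the GFP_DMA choice and the NULL remap callback are
illustrative:

	/* Place the late-allocated bounce buffer in ZONE_DMA. */
	int rc = swiotlb_init_late(swiotlb_size_or_default(), GFP_DMA, NULL);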
Power SVM wants to allocate a swiotlb buffer that is not restricted to
low memory for the trusted hypervisor scheme. Consolidate the support
for this into the swiotlb_init interface by adding a new flag.
Signed-off-by: Christoph Hellwig
---
arch/powerpc/include/asm/svm.h | 4
arch/p
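A sketch of what the consolidated powerpc call might look like, assuming
the existing is_secure_guest() helper and the flag names discussed above:

	/* Secure VMs must bounce, and the buffer need not be in low memory. */
	swiotlb_init(ppc_swiotlb_enable,
		     is_secure_guest() ? SWIOTLB_ANY | SWIOTLB_FORCE : 0);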
Pass a bool indicating whether swiotlb needs to be enabled based on the
addressing needs, and replace the verbose argument with a set of
flags, including one to force enable bounce buffering.
Note that this patch removes the possibility of forcing xen-swiotlb
use with swiotlb=force on the command line on x86
Move enabling SWIOTLB_FORCE for guest memory encryption into common code.
Signed-off-by: Christoph Hellwig
---
arch/x86/kernel/cpu/mshyperv.c | 8
arch/x86/kernel/pci-dma.c | 8
arch/x86/mm/mem_encrypt_amd.c | 3 ---
3 files changed, 8 insertions(+), 11 deletions(-)
diff
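A sketch of the consolidation, assuming the common test for guest memory
encryption is the existing cc_platform_has() helper:

	/* In common x86 code rather than the AMD- and Hyper-V-specific
	 * files: */
	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
		swiotlb_force = SWIOTLB_FORCE;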
The IOMMU table tries to separate the different IOMMUs into different
backends, but actually requires various cross calls.
Rewrite the code to do the generic swiotlb/swiotlb-xen setup directly
in pci-dma.c and then just call into the IOMMU drivers.
Signed-off-by: Christoph Hellwig
---
arch/ia64
Use the generic swiotlb initialization helper instead of open coding it.
Signed-off-by: Christoph Hellwig
Acked-by: Thomas Bogendoerfer
---
arch/mips/cavium-octeon/dma-octeon.c | 15 ++-
arch/mips/pci/pci-octeon.c | 2 +-
2 files changed, 3 insertions(+), 14 deletions(-)
From: Stefano Stabellini
It used to be that Linux enabled swiotlb-xen when running a dom0 on ARM.
Since f5079a9a2a31 "xen/arm: introduce XENFEAT_direct_mapped and
XENFEAT_not_direct_mapped", Linux detects whether to enable or disable
swiotlb-xen based on the new feature flags: XENFEAT_direct_mapp
swiotlb_late_init_with_default_size is an overly verbose name that
doesn't even capture what the function is doing, given that the size is
not just a default but the actual requested size.
Rename it to swiotlb_init_late.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
arch/x8
Remove the bogus Xen override that was usually larger than the actual
size and just calculate the value on demand. Note that
swiotlb_max_segment still doesn't make sense as an interface and should
eventually be removed.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
driver
If force bouncing is enabled we can't release the buffers.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
kernel/dma/swiotlb.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 908eac2527cb1..af9d257501a64 100644
--- a/
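A sketch of the guard implied by the three added lines; the force_bounce
field name and the swiotlb_exit() placement are assumptions:

	/* swiotlb_exit() sketch: keep the buffer if bouncing is forced. */
	if (io_tlb_default_mem.force_bounce)
		return;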
Use the more specific is_swiotlb_active check instead of checking the
global swiotlb_force variable.
Signed-off-by: Christoph Hellwig
Reviewed-by: Anshuman Khandual
---
kernel/dma/direct.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/dma/direct.h b/kernel/dma/direc
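An illustration of the substitution described; the dev argument and the
surrounding dma-direct call are assumed from context:

	/* kernel/dma/direct.h sketch: per-device check instead of the
	 * global swiotlb_force variable. */
	if (is_swiotlb_active(dev))
		return swiotlb_map(dev, phys, size, dir, attrs);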
Hi all,
this series tries to clean up the swiotlb initialization, including
that of swiotlb-xen. To get there it also removes the x86 iommu table
infrastructure that massively obfuscates the initialization path.
Git tree:
git://git.infradead.org/users/hch/misc.git swiotlb-init-cleanup