On Fri, Sep 25 2020 at 17:49, Peter Zijlstra wrote:
> Here it looks like this:
>
> [1.830276] BUG: kernel NULL pointer dereference, address:
> [1.838043] #PF: supervisor instruction fetch in kernel mode
> [1.844357] #PF: error_code(0x0010) - not-present page
> [1.85
On 9/25/20 1:30 AM, Jarkko Sakkinen wrote:
> On Thu, Sep 24, 2020 at 10:58:28AM -0400, Ross Philipson wrote:
>> The Trenchboot project's focus on boot security has led to enabling
>> the Linux kernel to be directly invocable by the x86 Dynamic Launch
>> instruction(s) for establishing a Dynami
When Mechanical Retention Lock (MRL) is present, Linux doesn't process
those change events.
The following changes are needed when MRL is present (a hedged sketch follows the list).
1. Subscribe to MRL change events in SlotControl.
2. When MRL is closed,
- If there is no ATTN button, then POWER on the slot.
- If there
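For illustration only, here is a minimal sketch of those two steps using the standard PCIe register definitions; the example_* helpers are made up and do not reflect the actual pciehp patch.

        #include <linux/pci_regs.h>
        #include <linux/types.h>

        /* 1. Subscribe to MRL sensor change events when an MRL is present. */
        static void example_enable_mrl_events(u32 slot_cap, u16 *slot_ctrl)
        {
                if (slot_cap & PCI_EXP_SLTCAP_MRLSP)
                        *slot_ctrl |= PCI_EXP_SLTCTL_MRLSCE;
        }

        /* 2. On an MRL change event, power the slot on only if the latch is now
         *    closed (SLTSTA_MRLSS == 0) and there is no attention button.
         */
        static bool example_should_power_on(u32 slot_cap, u16 slot_status)
        {
                bool mrl_closed = !(slot_status & PCI_EXP_SLTSTA_MRLSS);
                bool has_attn = slot_cap & PCI_EXP_SLTCAP_ABP;

                return mrl_closed && !has_attn;
        }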
On Tue, Sep 22, 2020 at 2:28 AM Marek Szyprowski
wrote:
>
> Hi Alex,
>
> On 22.09.2020 01:15, Alex Goins wrote:
> > Tested-by: Alex Goins
> >
> > This change fixes a regression with drm_prime_sg_to_page_addr_arrays() and
> > AMDGPU in v5.9.
>
> Thanks for testing!
>
> > Commit 39913934 similarly
On Fri, Sep 25, 2020 at 2:56 AM John Garry wrote:
>
> This series contains a patch to solve the longterm IOVA issue which
> leizhen originally tried to address at [0].
>
> I also included the small optimisation from Cong Wang, which never seems
> to have been accepted [1]. There was some debate
On Fri, Sep 25, 2020 at 10:56:43AM -0400, Ross Philipson wrote:
> On 9/24/20 1:38 PM, Arvind Sankar wrote:
> > On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
> >
> >> diff --git a/arch/x86/boot/compressed/head_64.S
> >> b/arch/x86/boot/compressed/head_64.S
> >> index 97d37f0..420
Presently, the default domain of an iommu group is allocated during boot time
and it cannot be changed later. So, the device would typically be either in
identity (pass-through) mode or in DMA mode for as long as the
system is up and running. There is no way to change the default do
From: Sai Praneeth Prakhya
The default domain type of an iommu group can be changed by writing to the
"/sys/kernel/iommu_groups//type" file. Hence, document its usage
and, more importantly, spell out its limitations.
Cc: Christoph Hellwig
Cc: Joerg Roedel
Cc: Ashok Raj
Cc: Will Deacon
Cc: Lu Baol
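As a purely illustrative sketch of the interface described above, a small user-space program could request a new type like this; the group number (7) and the accepted string ("DMA") are assumptions, and the documented limitations (such as unbinding the group's drivers first) still apply.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
                /* Hypothetical group number; drivers in the group must be unbound first. */
                const char *path = "/sys/kernel/iommu_groups/7/type";
                const char *type = "DMA";
                int fd = open(path, O_WRONLY);

                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                if (write(fd, type, strlen(type)) != (ssize_t)strlen(type))
                        perror("write");
                close(fd);
                return 0;
        }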
From: Sai Praneeth Prakhya
"/sys/kernel/iommu_groups//type" file could be read to find out the
default domain type of an iommu group. The default domain of an iommu group
doesn't change after booting and hence could be read directly. But,
after addding support to dynamically change iommu group de
From: Sai Praneeth Prakhya
Presently, the default domain of an iommu group is allocated during boot
time and it cannot be changed later. So, the device would typically be
either in identity (also known as pass-through) mode or in DMA mode
for as long as the machine is up and runni
Hi Christoph,
On Tue, Sep 15, 2020 at 05:51:05PM +0200, Christoph Hellwig wrote:
> From: Sergey Senozhatsky
>
> The patch partially reverts some of the UAPI bits of the buffer
> cache management hints. Namely, the queue consistency (memory
> coherency) user-space hint because, as it turned out,
Hi Christoph,
On Tue, Sep 15, 2020 at 05:51:21PM +0200, Christoph Hellwig wrote:
> Implement the alloc_noncoherent method to provide memory that is neither
> coherent nor contiguous.
>
> Signed-off-by: Christoph Hellwig
> ---
> drivers/iommu/dma-iommu.c | 41 +++-
Hi Joerg,
thanks!
On Fri, Sep 25, 2020 at 09:34:23AM +0200, Joerg Roedel wrote:
> Hi Ashok,
>
> On Thu, Sep 24, 2020 at 10:21:48AM -0700, Raj, Ashok wrote:
> > Just trying to follow up on this series.
> >
> > Sai has moved out of Intel, hence I'm trying to follow up on his behalf.
> >
> > Let me
IOMMU user API header was introduced to support nested DMA translation and
related fault handling. The current UAPI data structures consist of three
areas that cover the interactions between host kernel and guest:
- fault handling
- cache invalidation
- bind guest page tables, i.e. guest PASID
IOMMU UAPI is newly introduced to support communications between guest
virtual IOMMU and host IOMMU. There have been lots of discussions on how
it should work with the VFIO UAPI and userspace in general.
This document is intended to clarify the UAPI design and usage. The
mechanics of how future extensi
The IOMMU generic layer already performs sanity checks on UAPI data for
version match and argsz range based on generic information.
This patch adjusts the following data checking responsibilities:
- removes the redundant version check from VT-d driver
- removes the check for vendor specific data size
- ad
User APIs such as iommu_sva_unbind_gpasid() may also be used by the
kernel. Since we introduced user pointers to the UAPI functions,
in-kernel callers cannot share the same APIs. In-kernel callers are also
trusted, so there is no need to validate the data.
We plan to have two flavors of the same API f
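A rough sketch of that split, for illustration only: the example_* names, the signatures, and the way the trusted path calls into the driver are assumptions based on my reading of the 5.9-era headers, not the series' final interface.

        #include <linux/iommu.h>
        #include <linux/uaccess.h>
        #include <uapi/linux/iommu.h>

        /* In-kernel flavour: the caller is trusted, no copy and no validation. */
        static int example_kernel_unbind_gpasid(struct iommu_domain *domain,
                                                struct device *dev,
                                                struct iommu_gpasid_bind_data *data)
        {
                return domain->ops->sva_unbind_gpasid(dev, data->hpasid);
        }

        /* UAPI flavour: takes a user pointer, copies and validates it first. */
        static int example_uapi_unbind_gpasid(struct iommu_domain *domain,
                                              struct device *dev,
                                              void __user *uinfo)
        {
                struct iommu_gpasid_bind_data data;

                if (copy_from_user(&data, uinfo, sizeof(data)))
                        return -EFAULT;
                /* ... argsz/flags validation of the untrusted data goes here ... */
                return example_kernel_unbind_gpasid(domain, dev, &data);
        }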
The IOMMU UAPI data size is filled in by user space and must be validated
by the kernel. To ensure backward compatibility, user data can only be
extended by either re-purposing padding bytes or extending the variable-sized
union at the end. No size change is allowed before the union. Therefore,
the minim
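As a minimal sketch of the check that scheme implies (the structure and macro names are made up, not the real UAPI definitions): the kernel only requires the user-reported size to cover the fixed part before the union.

        #include <linux/errno.h>
        #include <linux/stddef.h>
        #include <linux/types.h>

        /* Hypothetical UAPI structure laid out per the rules above. */
        struct example_uapi_data {
                __u32   argsz;          /* filled in by user space */
                __u32   flags;
                union {                 /* only this union may grow */
                        __u64   opt_a;
                        __u64   opt_b;
                };
        };

        /* Minimum size: everything up to (but not including) the union. */
        #define EXAMPLE_UAPI_DATA_MINSZ offsetofend(struct example_uapi_data, flags)

        static int example_check_argsz(const struct example_uapi_data *data)
        {
                if (data->argsz < EXAMPLE_UAPI_DATA_MINSZ)
                        return -EINVAL;
                return 0;
        }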
IOMMU user APIs are responsible for processing user data. This patch
changes the interface such that user pointers can be passed into IOMMU
code directly. Separate kernel APIs without user pointers are introduced
for in-kernel users of the UAPI functionality.
IOMMU UAPI data has a user filled args
As IOMMU UAPI gets extended, user data size may increase. To support
backward compatibility, this patch introduces a size field to each UAPI
data structure. It is *always* the responsibility of the user to fill in
the correct size. Padding fields are adjusted to ensure 8-byte alignment.
Specific
> #define DMA_ATTR_PRIVILEGED (1UL << 9)
> +/*
> + * DMA_ATTR_LOW_ADDRESS: used to indicate that the buffer should be allocated
> + * at the lowest possible DMA address, usually just at the beginning of the
> + * DMA/IOVA address space ('first-fit' allocation algorithm).
> + */
> +#define
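For illustration, assuming such an attribute lands as described in the quoted hunk, a driver allocating a low firmware buffer might look like this (the wrapper name is made up):

        #include <linux/dma-mapping.h>

        /* Allocate a buffer as low as possible in the device's DMA/IOVA space. */
        static void *example_alloc_low_buffer(struct device *dev, size_t size,
                                              dma_addr_t *dma_addr)
        {
                return dma_alloc_attrs(dev, size, dma_addr, GFP_KERNEL,
                                       DMA_ATTR_LOW_ADDRESS);
        }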
On Fri, Sep 25, 2020 at 12:15:37PM +0100, Robin Murphy wrote:
> On 2020-09-15 16:51, Christoph Hellwig wrote:
> [...]
>> +These APIs allow allocating pages in the kernel direct mapping that are
>> +guaranteed to be DMA addressable. This means that unlike
>> dma_alloc_coherent,
>> +virt_to_page c
On 25/09/2020 15:34, John Garry wrote:
Indeed, I think that the mainline code has a bug:
If the initial allocation for the loaded/prev magazines fails (gives NULL)
in init_iova_rcaches(), then in __iova_rcache_insert():
if (!iova_magazine_full(cpu_rcache->loaded)) {
can_insert = true;
If
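To make the failure mode concrete, the helper involved looks roughly like this (paraphrased from my reading of drivers/iommu/iova.c, not copied verbatim):

        static bool iova_magazine_full(struct iova_magazine *mag)
        {
                /* A NULL magazine reports "not full" ... */
                return (mag && mag->size == IOVA_MAG_SIZE);
        }

        /*
         * ... so if init_iova_rcaches() left cpu_rcache->loaded NULL,
         * can_insert still ends up true and the subsequent
         * iova_magazine_push(cpu_rcache->loaded, ...) dereferences NULL.
         */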
Hi Jean-Philippe,
On Fri, 25 Sep 2020 11:46:36 +0200, Jean-Philippe Brucker
wrote:
> On Thu, Sep 24, 2020 at 12:24:19PM -0700, Jacob Pan wrote:
> > IOMMU user APIs are responsible for processing user data. This patch
> > changes the interface such that user pointers can be passed into IOMMU
> >
swapper/0 Tainted: G I
>5.9.0-rc6-next-20200925 #2
> [8.503987][T0] Hardware name: HPE ProLiant DL560 Gen10/ProLiant DL560
> Gen10, BIOS U34 11/13/2019
> [8.513238][T0] RIP: 0010:0x0
> [8.516562][T0] Code: Bad RIP v
Here it looks like t
On Fri, Sep 25, 2020 at 10:12:43AM +0200, Jean-Philippe Brucker wrote:
> On Thu, Sep 24, 2020 at 10:22:03AM -0500, Bjorn Helgaas wrote:
> > On Fri, Aug 21, 2020 at 03:15:39PM +0200, Jean-Philippe Brucker wrote:
> > > + /* Perform the init sequence before we can read the config */
> > > + ret = vio
supervisor instruction fetch in kernel mode
[8.480669][T0] #PF: error_code(0x0010) - not-present page
[8.486518][T0] PGD 0 P4D 0
[8.489757][T0] Oops: 0010 [#1] SMP KASAN PTI
[8.494476][T0] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G I
5.9.0-rc6-next-20200925 #2
[
On 9/24/20 10:08 PM, Randy Dunlap wrote:
> On 9/24/20 7:58 AM, Ross Philipson wrote:
>> Initial bits to bring in Secure Launch functionality. Add Kconfig
>> options for compiling in/out the Secure Launch code.
>>
>> Signed-off-by: Ross Philipson
>
> Hi,
> from Documentation/process/coding-style.r
On 9/24/20 1:38 PM, Arvind Sankar wrote:
> On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
>> The Secure Launch (SL) stub provides the entry point for Intel TXT (and
>> later AMD SKINIT) to vector to during the late launch. The symbol
>> sl_stub_entry is that entry point and its off
On Wed, 2020-08-26 at 13:17 +0200, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> The arch_.*_msi_irq[s] fallbacks are compiled in whether an architecture
> requires them or not. Architectures which are fully utilizing hierarchical
> irq domains should never call into that code.
>
> It's not
On 25/09/2020 12:53, Robin Murphy wrote:
---
drivers/iommu/iova.c | 25 ++++++++++++++++---------
1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 45a251da5453..05e0b462e0d9 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/io
Hi,
> Many power platforms are OF based, thus without ACPI or DT support.
pseries has lots of stuff below /proc/device-tree. Dunno whether that
is the same kind of device tree we have on arm ...
take care,
Gerd
Exynos4-IS driver relied on the way the ARM DMA-IOMMU glue code worked -
mainly it relied on the fact that the allocator used a first-fit algorithm
and the first allocated buffer was at DMA/IOVA address 0x0. This is not
true for the generic IOMMU-DMA glue code that will be used for ARM
architecture
Change the parameters passed to iommu_dma_alloc_iova(): the dma_limit can
be easily extracted from the parameters of the passed struct device, so
replace it with a flags parameter, which can later hold more information
about the way the IOVA allocator should do its job. While touching the
parameter
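A rough sketch of the shape of that change, for illustration; the flag name and the final prototype are assumptions, and the body only hints at how the limit would be derived from the passed device:

        #include <linux/bits.h>
        #include <linux/dma-mapping.h>
        #include <linux/kernel.h>

        #define EXAMPLE_IOVA_ALLOC_COHERENT     BIT(0)  /* made-up flag */

        static dma_addr_t example_iommu_dma_alloc_iova(struct iommu_domain *domain,
                                                       size_t size, struct device *dev,
                                                       unsigned int flags)
        {
                u64 dma_limit = dma_get_mask(dev);

                if (flags & EXAMPLE_IOVA_ALLOC_COHERENT)
                        dma_limit = min_not_zero(dma_limit, dev->coherent_dma_mask);

                /* ... continue with alloc_iova_fast() against dma_limit as before ... */
                return 0;
        }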
Implement support for the DMA_ATTR_LOW_ADDRESS DMA attribute. If it has
been set, call alloc_iova_first_fit() instead of alloc_iova_fast() to
allocate the new IOVA from the beginning of the address space.
Signed-off-by: Marek Szyprowski
---
drivers/iommu/dma-iommu.c | 50
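Conceptually the dispatch looks like the sketch below; the wrapper name is made up, alloc_iova_first_fit() is the helper added earlier in this series, and DMA_ATTR_LOW_ADDRESS comes from the same series rather than mainline:

        #include <linux/dma-mapping.h>
        #include <linux/iova.h>

        static unsigned long example_get_iova(struct iova_domain *iovad,
                                              unsigned long size,
                                              unsigned long limit_pfn,
                                              unsigned long attrs)
        {
                if (attrs & DMA_ATTR_LOW_ADDRESS)
                        return alloc_iova_first_fit(iovad, size, limit_pfn);

                return alloc_iova_fast(iovad, size, limit_pfn, true);
        }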
This driver always operates on the DMA/IOVA addresses, so calling them
physical addresses is misleading, although when no IOMMU is used they
equal each other. Fix this by renaming all such entries to 'addr' and
adjusting comments.
Signed-off-by: Marek Szyprowski
---
.../media/platform/exynos4-i
S5P-MFC driver relied on the way the ARM DMA-IOMMU glue code worked -
mainly it relied on the fact that the allocator used a first-fit algorithm
and the first allocated buffer was at DMA/IOVA address 0x0. This is not
true for the generic IOMMU-DMA glue code that will be used for ARM
architecture soo
Zero is a valid DMA and IOVA address on many architectures, so adjust the
IOVA management code to properly handle it. A new value IOVA_BAD_ADDR
(~0UL) is introduced as a generic value for the error case. Adjust all
callers of the alloc_iova_fast() function for the new return value.
Signed-off-by:
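An illustrative caller adjusted for that convention might look as follows; the wrapper is made up, while IOVA_BAD_ADDR is the constant named in the changelog:

        #include <linux/dma-mapping.h>
        #include <linux/iova.h>

        static dma_addr_t example_alloc(struct iova_domain *iovad,
                                        unsigned long size, unsigned long limit_pfn)
        {
                unsigned long pfn = alloc_iova_fast(iovad, size, limit_pfn, true);

                /* 0 is now a valid IOVA; failure is signalled by IOVA_BAD_ADDR (~0UL). */
                if (pfn == IOVA_BAD_ADDR)
                        return DMA_MAPPING_ERROR;

                return (dma_addr_t)pfn << iova_shift(iovad);
        }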
Some devices require allocating a special buffer (usually for the
firmware) just at the beginning of the address space to ensure that all
further allocations can be expressed as a positive offset from that
special buffer. When IOMMU is used for managing the DMA address space,
such requirement can
Add support for the 'first-fit' allocation algorithm. It will be used for
the special case of implementing DMA_ATTR_LOW_ADDRESS, so this path
doesn't use IOVA cache.
Signed-off-by: Marek Szyprowski
---
drivers/iommu/iova.c | 78
include/linux/iova.h |
n the idea proposed
by Robin Murphy in [2] after the discussion of the workaround implemented
directly in the mentioned drivers [3].
Here is a git branch with this patchset and [1] patches applied on top of
linux next-20200925:
https://github.com/mszyprow/linux/tree/v5.9-next-20200925-arm-dma-iommu-l
25.09.2020 15:39, Robin Murphy writes:
...
>> Yes, my understanding that this is what Robin suggested here:
>>
>> https://lore.kernel.org/linux-iommu/cb12808b-7316-19db-7413-b7f852a6f...@arm.com/
>>
>
> Just to clarify, what I was talking about there is largely orthogonal to
> the issue here. That
On Fri, Sep 25, 2020 at 01:26:29PM +0200, Jean-Philippe Brucker wrote:
> On Fri, Sep 25, 2020 at 06:22:57AM -0400, Michael S. Tsirkin wrote:
> > On Fri, Sep 25, 2020 at 10:48:06AM +0200, Jean-Philippe Brucker wrote:
> > > On Fri, Aug 21, 2020 at 03:15:34PM +0200, Jean-Philippe Brucker wrote:
> > >
25.09.2020 15:39, Robin Murphy writes:
...
>> IIRC, in the past Robin Murphy was suggesting to read out hardware state
>> early during kernel boot in order to find what regions are in use by
>> hardware.
>
> I doubt I suggested that in general, because I've always firmly believed
> it to be a terri
On Fri, Sep 25, 2020 at 11:45:05AM +, Suravee Suthikulpanit wrote:
> When using 128-bit interrupt-remapping table entry (IRTE) (a.k.a GA mode),
> current driver disables interrupt remapping when it updates the IRTE
> so that the upper and lower 64-bit values can be updated safely.
>
> However,
On 2020-09-24 17:23, Dmitry Osipenko wrote:
24.09.2020 17:01, Thierry Reding writes:
On Thu, Sep 24, 2020 at 04:23:59PM +0300, Dmitry Osipenko wrote:
04.09.2020 15:59, Thierry Reding writes:
From: Thierry Reding
Reserved memory regions can be marked as "active" if hardware is
expected to acces
On 2020-09-25 10:51, John Garry wrote:
Leizhen reported some time ago that IOVA performance may degrade over time
[0], but unfortunately his solution to fix this problem was not given
attention.
To summarize, the issue is that as time goes by, the CPU rcache and depot
rcache continue to grow. As
When using 128-bit interrupt-remapping table entry (IRTE) (a.k.a GA mode),
current driver disables interrupt remapping when it updates the IRTE
so that the upper and lower 64-bit values can be updated safely.
However, this creates a small window, where the interrupt could
arrive and result in IO_P
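One way to close that window, sketched below under the assumption that the IRTE halves sit in a 16-byte-aligned pair, is to update both 64-bit words atomically with cmpxchg_double() instead of disabling remapping around the update; whether this matches the actual patch is not shown in the excerpt, and the struct/function names are illustrative.

        #include <linux/atomic.h>
        #include <linux/types.h>

        struct example_irte_128 {
                u64 lo;
                u64 hi;
        } __aligned(16);        /* cmpxchg_double() needs the pair 16-byte aligned */

        static bool example_irte_update(struct example_irte_128 *entry,
                                        struct example_irte_128 old,
                                        struct example_irte_128 new)
        {
                /* Succeeds only if both halves still hold the old values. */
                return cmpxchg_double(&entry->lo, &entry->hi,
                                      old.lo, old.hi, new.lo, new.hi);
        }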
On Fri, Sep 25, 2020 at 06:22:57AM -0400, Michael S. Tsirkin wrote:
> On Fri, Sep 25, 2020 at 10:48:06AM +0200, Jean-Philippe Brucker wrote:
> > On Fri, Aug 21, 2020 at 03:15:34PM +0200, Jean-Philippe Brucker wrote:
> > > Add a topology description to the virtio-iommu driver and enable x86
> > > pl
On 2020-09-15 16:51, Christoph Hellwig wrote:
[...]
+These APIs allow allocating pages in the kernel direct mapping that are
+guaranteed to be DMA addressable. This means that unlike dma_alloc_coherent,
+virt_to_page can be called on the resulting address, and the resulting
Nit: if we explici
On Fri, Sep 25, 2020 at 10:48:06AM +0200, Jean-Philippe Brucker wrote:
> On Fri, Aug 21, 2020 at 03:15:34PM +0200, Jean-Philippe Brucker wrote:
> > Add a topology description to the virtio-iommu driver and enable x86
> > platforms.
> >
> > Since [v2] we have made some progress on adding ACPI suppo
On Thu, Sep 24, 2020 at 02:50:46PM +0200, Joerg Roedel wrote:
> On Thu, Sep 24, 2020 at 08:41:21AM -0400, Michael S. Tsirkin wrote:
> > But this has nothing to do with Linux. There is also no guarantee that
> > the two committees will decide to use exactly the same format. Once one
> > of them set
Robin,
On 9/24/20 7:25 PM, Robin Murphy wrote:
+struct io_pgtable_ops *amd_iommu_setup_io_pgtable_ops(struct iommu_dev_data *dev_data,
+						       struct protection_domain *domain)
+{
+	domain->iop.pgtbl_cfg = (struct io_pgtable_cfg) {
+		.pgsize_bitmap	= AMD_IOMMU
From: Cong Wang
Both find_iova() and __free_iova() take the iova_rbtree_lock;
there is no reason to take and release it twice inside
free_iova().
Fold them into one critical section by calling the unlock
versions instead.
Signed-off-by: Cong Wang
Reviewed-by: Robin Murphy
Tested-by: Xiang Chen
S
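The fold is roughly the shape sketched below, following my reading of drivers/iommu/iova.c (the private_* helpers being the existing lock-free variants); treat it as an illustration of the idea rather than the exact patch.

        void free_iova(struct iova_domain *iovad, unsigned long pfn)
        {
                unsigned long flags;
                struct iova *iova;

                spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
                iova = private_find_iova(iovad, pfn);
                if (iova)
                        private_free_iova(iovad, iova);
                spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
        }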
Leizhen reported some time ago that IOVA performance may degrade over time
[0], but unfortunately his solution to fix this problem was not given
attention.
To summarize, the issue is that as time goes by, the CPU rcache and depot
rcache continue to grow. As such, IOVA RB tree access time also cont
This series contains a patch to solve the longterm IOVA issue which
leizhen originally tried to address at [0].
I also included the small optimisation from Cong Wang, which never seems
to have been accepted [1]. There was some debate about the other patches
in that series, but this one is quite st
On Thu, Sep 24, 2020 at 12:24:19PM -0700, Jacob Pan wrote:
> IOMMU user APIs are responsible for processing user data. This patch
> changes the interface such that user pointers can be passed into IOMMU
> code directly. Separate kernel APIs without user pointers are introduced
> for in-kernel users
On Fri, Aug 21, 2020 at 03:15:34PM +0200, Jean-Philippe Brucker wrote:
> Add a topology description to the virtio-iommu driver and enable x86
> platforms.
>
> Since [v2] we have made some progress on adding ACPI support for
> virtio-iommu, which is the preferred boot method on x86. It will be a
>
On Thu, Sep 24, 2020 at 10:22:03AM -0500, Bjorn Helgaas wrote:
> On Fri, Aug 21, 2020 at 03:15:39PM +0200, Jean-Philippe Brucker wrote:
> > Platforms with neither device-tree nor ACPI can provide a topology
> > description embedded into the virtio config space. Parse it.
> >
> > Use PCI FIXUP to probe
Hi Ashok,
On Thu, Sep 24, 2020 at 10:21:48AM -0700, Raj, Ashok wrote:
> Just trying to follow up on this series.
>
> Sai has moved out of Intel, hence I'm trying to follow up on his behalf.
>
> Let me know if you have queued this for the next release.
Not yet, but I think this is mostly ready. Ca