On Tue, 8 Jan 2019 11:26:30 +0100
Eric Auger wrote:
> This patch adds a new 64kB region aiming to report nested mode
> translation faults.
>
> The region contains a header with the size of the queue,
> the producer and consumer indices and then the actual
> fault queue data. The producer is upd
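For illustration only, such a region layout could be described by a header of roughly this shape; the field names here are hypothetical and need not match the patch's actual uapi:

/* Hypothetical header of the 64kB fault region (illustrative only). */
struct fault_region_header {
	__u32	nb_entries;	/* size of the fault queue, in entries */
	__u32	entry_size;	/* size of one fault record, in bytes */
	__u32	prod;		/* producer index, updated by the kernel */
	__u32	cons;		/* consumer index, updated by userspace */
	/* fault records follow, indexed by prod/cons modulo nb_entries */
};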
On Fri, 11 Jan 2019 16:02:44 -0700
Alex Williamson wrote:
> On Tue, 8 Jan 2019 11:26:18 +0100
> Eric Auger wrote:
>
> > This patch adds the VFIO_IOMMU_BIND_MSI ioctl which aims at
> > passing the guest MSI binding to the host.
> >
> > Signed-off-by: Eric Auger
> >
> > ---
> >
> > v2 -> v3:
On Tue, 8 Jan 2019 11:26:18 +0100
Eric Auger wrote:
> This patch adds the VFIO_IOMMU_BIND_MSI ioctl which aims at
> passing the guest MSI binding to the host.
>
> Signed-off-by: Eric Auger
>
> ---
>
> v2 -> v3:
> - adapt to new proto of bind_guest_msi
> - directly use vfio_iommu_for_each_dev
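As a rough illustration of how userspace might drive such an ioctl, a hedged sketch follows; the structure layout, field names and the way the doorbell binding is expressed are assumptions, not the series' actual uapi:

#include <sys/ioctl.h>
#include <linux/types.h>

/* Hypothetical uapi, illustrative only; the real definition lives in the
 * series' updated <linux/vfio.h>. */
struct vfio_iommu_type1_bind_msi {
	__u32	argsz;
	__u32	flags;
	__u64	iova;	/* guest IOVA of the MSI doorbell (gIOVA) */
	__u64	gpa;	/* guest physical address it translates to */
	__u64	size;
};

static int bind_guest_msi(int container_fd, __u64 giova, __u64 gpa, __u64 size)
{
	struct vfio_iommu_type1_bind_msi bind = {
		.argsz	= sizeof(bind),
		.iova	= giova,
		.gpa	= gpa,
		.size	= size,
	};

	/* VFIO_IOMMU_BIND_MSI is introduced by the patch itself. */
	return ioctl(container_fd, VFIO_IOMMU_BIND_MSI, &bind);
}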
On Tue, 8 Jan 2019 11:26:16 +0100
Eric Auger wrote:
> From: "Liu, Yi L"
>
> This patch adds VFIO_IOMMU_SET_PASID_TABLE ioctl which aims at
> passing the virtual iommu guest configuration to the VFIO driver
> down to the iommu subsystem.
>
> Signed-off-by: Jacob Pan
> Signed-off-by: Liu, Yi
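Schematically, the kernel side of such an ioctl copies the argument from userspace and hands the guest PASID table configuration down to the IOMMU layer. A hedged sketch with hypothetical names, not the series' actual code:

/* Hypothetical uapi shape and kernel-side flow, illustrative only. */
struct vfio_iommu_type1_set_pasid_table {
	__u32	argsz;
	__u32	flags;
	__u64	base_ptr;	/* GPA of the guest PASID/context table */
	__u8	format;		/* vendor-specific table format */
	__u8	pasid_bits;	/* width of the PASID space */
};

static int vfio_set_pasid_table(struct vfio_iommu *iommu, unsigned long arg)
{
	struct vfio_iommu_type1_set_pasid_table cfg;

	if (copy_from_user(&cfg, (void __user *)arg, sizeof(cfg)))
		return -EFAULT;
	if (cfg.argsz < sizeof(cfg))
		return -EINVAL;

	/*
	 * From here the configuration is propagated to the IOMMU driver,
	 * e.g. by walking the devices attached to the container; the
	 * exact call is defined by the series.
	 */
	return 0;
}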
On Tue, 8 Jan 2019 11:26:15 +0100
Eric Auger wrote:
> On ARM, MSIs are translated by the SMMU. An IOVA is allocated
> for each MSI doorbell. If both the host and the guest are exposed
> with an SMMU, we end up with two different IOVAs, one allocated by
> each: the guest allocates an IOVA (gIOVA) to map onto t
On Tue, 8 Jan 2019 11:26:14 +0100
Eric Auger wrote:
> From: "Liu, Yi L"
>
> In any virtualization use case, when the first translation stage
> is "owned" by the guest OS, the host IOMMU driver has no knowledge
> of caching structure updates unless the guest invalidation activities
> are trappe
On Fri, 11 Jan 2019 19:17:31 +0100
Christoph Hellwig wrote:
> vb2_dc_get_userptr pokes into arm direct mapping details to get the
> resemblance of a dma address for a physical address that is
> not backed by a page struct. Not only is this not portable to other
> architectures with dma
On Tue, 8 Jan 2019 11:26:13 +0100
Eric Auger wrote:
> From: Jacob Pan
>
> In virtualization use case, when a guest is assigned
> a PCI host device, protected by a virtual IOMMU on a guest,
> the physical IOMMU must be programmed to be consistent with
> the guest mappings. If the physical IOMMU
Use WARN_ON_ONCE to print a stack trace and return a proper error
code instead.
Signed-off-by: Christoph Hellwig
---
include/linux/dma-mapping.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index d3087829a6df..91a
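The diff itself is truncated above; schematically, the change described amounts to a hunk of this shape inside dma_map_resource() (a sketch, not the exact upstream hunk):

	/* Passing RAM to dma_map_resource() is a driver bug: warn once,
	 * print a stack trace and fail the mapping instead of BUG()ing.
	 */
	if (WARN_ON_ONCE(pfn_valid(PHYS_PFN(phys_addr))))
		return DMA_MAPPING_ERROR;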
vb2_dc_get_userptr pokes into arm direct mapping details to get the
resemblance of a dma address for a physical address that is
not backed by a page struct. Not only is this not portable to other
architectures with dma direct mapping offsets, but also not to uses
of IOMMUs of any kind. Swi
Hi all,
this series fixes a rather gross layering violation in videobuf2, which
pokes into arm DMA mapping internals to get a DMA address for memory that
does not have a page structure, and to do so fixes up the dma_map_resource
implementation to be practically useful.
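For context, dma_map_resource() is the interface the series moves videobuf2 onto; a minimal, hedged usage sketch (device, address and error convention are placeholders):

#include <linux/dma-mapping.h>

/* Map a physical region that is not backed by struct page (e.g. MMIO or
 * reserved memory) for DMA, instead of poking into arch internals. */
static dma_addr_t map_phys_for_dma(struct device *dev, phys_addr_t phys,
				   size_t size)
{
	dma_addr_t dma = dma_map_resource(dev, phys, size,
					  DMA_BIDIRECTIONAL, 0);

	if (dma_mapping_error(dev, dma))
		return 0;	/* 0 means failure in this sketch only */
	return dma;
}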
On 08/01/2019 10:26, Eric Auger wrote:
> From: Jacob Pan
>
> In virtualization use case, when a guest is assigned
> a PCI host device, protected by a virtual IOMMU on a guest,
> the physical IOMMU must be programmed to be consistent with
> the guest mappings. If the physical IOMMU supports two
>
Linus,
any chance you could take this before -rc2? That should avoid a lot
of churn going forward. Any fine tuning of the memset-removal
coccinelle scripts can be queued up with normal updates.
On Tue, Jan 08, 2019 at 08:06:58AM -0500, Christoph Hellwig wrote:
> Hi Linus and world,
>
> We've
Just returning the physical address when no map_resource method is
present is highly dangerous as it doesn't take any offset in the
direct mapping into account and does the completely wrong thing for
IOMMUs. Instead provide a proper implementation in the direct mapping
code, and also wire it up f
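A hedged sketch of what such a direct-mapping implementation could look like; the helpers used here (phys_to_dma, dma_capable) are existing kernel interfaces, but the exact body of the patch may differ:

dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	/* apply the device's direct-mapping offset rather than returning
	 * the raw physical address */
	dma_addr_t dma_addr = phys_to_dma(dev, paddr);

	/* fail cleanly if the address is not reachable by the device */
	if (!dma_capable(dev, dma_addr, size))
		return DMA_MAPPING_ERROR;

	return dma_addr;
}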
On 08/01/2019 10:26, Eric Auger wrote:
> When a stage 1 related fault event is read from the event queue,
> let's propagate it to potential external fault listeners, ie. users
> who registered a fault handler.
>
> Signed-off-by: Eric Auger
> ---
> drivers/iommu/arm-smmu-v3.c | 124 ++
On 08/01/2019 10:26, Eric Auger wrote:
> Implement IOMMU_INV_TYPE_TLB invalidations. When
> nr_pages is null we interpret this as a context
> invalidation.
>
> Signed-off-by: Eric Auger
>
> ---
>
> The user API needs to be refined to discriminate context
> invalidations from NH_VA invalidations
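Schematically, the dispatch described above boils down to the following; the invalidation helpers are hypothetical placeholders, not the driver's actual commands:

/* Illustrative dispatch only. */
static void smmu_handle_tlb_inval(struct arm_smmu_domain *smmu_domain,
				  u64 iova, u64 nr_pages, u64 granule_sz)
{
	if (!nr_pages) {
		/* no range given: invalidate the whole context (ASID) */
		invalidate_context(smmu_domain);
	} else {
		/* otherwise invalidate the given IOVA range by granule */
		invalidate_va_range(smmu_domain, iova,
				    nr_pages * granule_sz, granule_sz);
	}
}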
Hi Eric,
On 08/01/2019 10:26, Eric Auger wrote:
> To allow nested stage support, we need to store both
> stage 1 and stage 2 configurations (and remove the former
> union).
>
> arm_smmu_write_strtab_ent() is modified to write both stage
> fields in the STE.
>
> We add a nested_bypass field to th
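In other words, inside struct arm_smmu_domain the former union of the two stage configurations becomes two independent members, so a nested domain can carry both at once. A hedged sketch of the shape (surrounding fields omitted):

	/* before: union { struct arm_smmu_s1_cfg s1_cfg;
	 *                 struct arm_smmu_s2_cfg s2_cfg; };
	 * after (sketch): both kept side by side
	 */
	struct arm_smmu_s1_cfg	s1_cfg;		/* guest-owned stage 1 */
	struct arm_smmu_s2_cfg	s2_cfg;		/* host-owned stage 2 */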
Convert to use vm_insert_range() to map range of kernel
memory to user vma.
Signed-off-by: Souptick Joarder
---
drivers/iommu/dma-iommu.c | 12 +---
1 file changed, 1 insertion(+), 11 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d1b0475..802de67
Previously, drivers had their own way of mapping a range of
kernel pages/memory into a user vma, and this was done by
invoking vm_insert_page() within a loop.
As this pattern is common across different drivers, it can
be generalized by creating new functions and using them across
the drivers.
vm_insert_ran
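For illustration, the before/after pattern the cover letter describes; the signature of the new helper is assumed here, not quoted from the series:

/* Before: each driver open-codes the loop. */
static int map_pages_loop(struct vm_area_struct *vma, struct page **pages,
			  unsigned long count)
{
	unsigned long i, uaddr = vma->vm_start;
	int ret;

	for (i = 0; i < count; i++) {
		ret = vm_insert_page(vma, uaddr, pages[i]);
		if (ret)
			return ret;
		uaddr += PAGE_SIZE;
	}
	return 0;
}

/* After (assumed signature): a single call replaces the loop.
 *	err = vm_insert_range(vma, pages, count);
 */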
On Tue, 1 Jan 2019 12:51:04 +0800, Yong Wu wrote:
> After adding a device_link between the consumer and the smi-larbs,
> if the consumer calls its own pm_runtime_get(_sync), the
> pm_runtime_get(_sync) of the smi-larb and smi-common will be called
> automatically. Thus, the consumer doesn't need the prop
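Concretely, the behaviour relies on runtime-PM device links; a minimal hedged sketch of the supplier/consumer wiring (pointers are placeholders):

#include <linux/device.h>
#include <linux/pm_runtime.h>

/* Tie the consumer's runtime PM to its smi-larb supplier. */
static int link_to_smi_larb(struct device *consumer, struct device *larb)
{
	struct device_link *link;

	link = device_link_add(consumer, larb, DL_FLAG_PM_RUNTIME);
	if (!link)
		return -EINVAL;

	/* pm_runtime_get_sync(consumer) now also runtime-resumes the larb
	 * (and, through the larb's own link, smi-common). */
	return 0;
}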
On 11/01/2019 12:28, Joerg Roedel wrote:
> Hi Jean-Philippe,
>
> On Thu, Dec 13, 2018 at 12:50:29PM +0000, Jean-Philippe Brucker wrote:
>> We already do deferred flush: UNMAP requests are added to the queue by
>> iommu_unmap(), and then flushed out by iotlb_sync(). So we switch to the
>> host only
Hi Jean-Philippe,
On Thu, Dec 13, 2018 at 12:50:29PM +0000, Jean-Philippe Brucker wrote:
> We already do deferred flush: UNMAP requests are added to the queue by
> iommu_unmap(), and then flushed out by iotlb_sync(). So we switch to the
> host only on iotlb_sync(), or when the request queue is ful
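Schematically, the deferred-flush pattern under discussion looks like this on the driver side; the queueing helpers are illustrative, not virtio-iommu's actual code:

/* Illustrative only: queue unmaps, exit to the host once on sync. */
static size_t demo_unmap(struct iommu_domain *domain, unsigned long iova,
			 size_t size)
{
	/* add an UNMAP request to the request queue; no host exit yet */
	queue_unmap_request(domain, iova, size);
	return size;
}

static void demo_iotlb_sync(struct iommu_domain *domain)
{
	/* kick the queue and wait: one host exit covers many unmaps */
	flush_request_queue(domain);
}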
On Fri, Jan 11, 2019 at 01:04:57PM +0800, Lu Baolu wrote:
> From: Jacob Pan
>
> VT-d Rev3.0 has made a few changes to the page request interface,
>
> 1. widened PRQ descriptor from 128 bits to 256 bits;
> 2. removed streaming response type;
> 3. introduced private data that requires page respons
On Wed, Jan 02, 2019 at 11:16:57PM +0200, Sakari Ailus wrote:
> Drivers such as the Intel IPU3 ImgU driver use the IOVA library to manage
> the device's own virtual address space while not implementing the IOMMU
> API. Currently the IOVA library is only compiled if the IOMMU support is
> enabled, r
On Mon, Jan 07, 2019 at 05:04:50PM +, Robin Murphy wrote:
> Whilst iommu_probe_device() does check for non-NULL ops as the previous
> code did, it does not do so in the same order relative to the other
> checks, and as a result means that -EPROBE_DEFER returned by of_xlate()
> (plus any real er
On Sun, Dec 30, 2018 at 04:53:15PM +0100, Julia Lawall wrote:
> Delete tab aligning a statement with the right hand side of a
> preceding assignment rather than the left hand side.
>
> Found with the help of Coccinelle.
>
> Signed-off-by: Julia Lawall
Applied, thanks.
Hi,
this looks a bit confusing to me because I can see no checking whether
the device actually supports scalable mode. More below:
On Thu, Jan 10, 2019 at 11:00:21AM +0800, Lu Baolu wrote:
> +static int intel_iommu_enable_auxd(struct device *dev)
> +{
> + struct device_domain_info *info;
> +
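The quoted function is truncated above; the concern is that nothing in it verifies the capability first. A hedged sketch of the kind of check being asked for (the exact helper or capability bit may differ):

	/* refuse to enable the auxiliary-domain feature if the IOMMU
	 * does not actually support scalable mode */
	if (!sm_supported(iommu))
		return -ENODEV;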
On 10/01/2019 18:45, Jacob Pan wrote:
> On Tue, 8 Jan 2019 11:26:26 +0100
> Eric Auger wrote:
>
>> From: Jacob Pan
>>
>> Device faults detected by IOMMU can be reported outside IOMMU
>> subsystem for further processing. This patch intends to provide
>> a generic device fault data such that devi
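For illustration, such generic fault data could be carried in a record of roughly this shape; the field names are hypothetical and need not match the series' final uapi:

/* Hypothetical fault record, illustrative only. */
struct device_fault_record {
	__u32	type;		/* unrecoverable fault or page request */
	__u32	reason;		/* e.g. translation, permission, PASID */
	__u64	addr;		/* faulting address */
	__u32	pasid;		/* PASID, when present */
	__u32	flags;		/* validity bits for the fields above */
};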
Hi Geert,
On Thu, Dec 20, 2018 at 04:42:17PM +0100, Geert Uytterhoeven wrote:
> > - return ops->add_device(dev);
> > + if (ops)
>
> Is this sufficient? The old code checked for ops->add_device != NULL,
> too.
Robin brought up that all iommu-ops implementations support the
add_devic
On Wed, Jan 02, 2019 at 01:51:45PM +0800, Nicolas Boichat wrote:
> Does anyone have any further comment on this series? If not, which
> maintainer is going to pick this up? I assume Andrew Morton?
Probably, yes. I don't like to carry the mm-changes in iommu-tree, so
this should go through mm.
Reg
On Thu, Dec 20, 2018 at 03:47:28PM +, Tom Murphy wrote:
> Ah shoot, it looks like I forgot to change flush_tlb_all -> flush_iotlb_all
>
> Should I submit another patch?
Yes, please.
On Fri, Jan 11, 2019 at 11:29:31AM +0800, Jason Wang wrote:
> Just wonder if my understanding is correct IOMMU_PLATFORM must be set for
> all virtio devices under AMD-SEV guests?
Yes, that is correct. Emulated DMA can only happen on the SWIOTLB
aperture, because that memory is not encrypted. The g
On Thu, Jan 10, 2019 at 12:02:05PM -0500, Konrad Rzeszutek Wilk wrote:
> Why not use swiotlb_nr_tbl? That is what drivers/gpu/drm uses to figure out
> if they need to limit the size of pages.
That function just exports the overall size of the swiotlb aperture, no?
What I need here is the maximum size f
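For reference, the limit being asked about is the largest single swiotlb mapping (one segment), not the total aperture; a hedged sketch of how a helper could compute it from the constants in <linux/swiotlb.h>:

#include <linux/swiotlb.h>

/* Sketch: a single swiotlb mapping is bounded by one segment, i.e.
 * IO_TLB_SEGSIZE slots of (1 << IO_TLB_SHIFT) bytes each. */
static size_t demo_swiotlb_max_mapping_size(void)
{
	return (size_t)IO_TLB_SEGSIZE << IO_TLB_SHIFT;
}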