> From: Jacob Pan
> Sent: Wednesday, January 29, 2020 2:02 PM
> Subject: [PATCH V9 06/10] iommu/vt-d: Add svm/sva invalidate function
>
> When Shared Virtual Address (SVA) is enabled for a guest OS via vIOMMU, we
> need to provide invalidation support at IOMMU API and driver level. This patch
> a
On 2020/2/21 12:06 AM, Halil Pasic wrote:
Currently if one intends to run a memory protection enabled VM with
virtio devices and linux as the guest OS, one needs to specify the
VIRTIO_F_IOMMU_PLATFORM flag for each virtio device to make the guest
linux use the DMA API, which in turn handles the m
On 2020/2/21 10:59 AM, David Gibson wrote:
On Thu, Feb 20, 2020 at 05:13:09PM +0100, Christoph Hellwig wrote:
On Thu, Feb 20, 2020 at 05:06:06PM +0100, Halil Pasic wrote:
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 867c7ebd3f10..fafc8f924955 100644
--- a/drive
On Thu, Feb 20, 2020 at 05:17:48PM -0800, Ram Pai wrote:
> On Thu, Feb 20, 2020 at 03:55:14PM -0500, Michael S. Tsirkin wrote:
> > On Thu, Feb 20, 2020 at 05:06:06PM +0100, Halil Pasic wrote:
> > > Currently the advanced guest memory protection technologies (AMD SEV,
> > > powerpc secure guest tech
On Thu, Feb 20, 2020 at 05:31:35PM +0100, Christoph Hellwig wrote:
> On Thu, Feb 20, 2020 at 05:23:20PM +0100, Christian Borntraeger wrote:
> > From a user's perspective it makes absolutely perfect sense to use the
> > bounce buffers when they are NEEDED.
> > Forcing the user to specify iommu_plat
On Thu, Feb 20, 2020 at 05:13:09PM +0100, Christoph Hellwig wrote:
> On Thu, Feb 20, 2020 at 05:06:06PM +0100, Halil Pasic wrote:
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 867c7ebd3f10..fafc8f924955 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b
On Thu, Feb 20, 2020 at 03:55:14PM -0500, Michael S. Tsirkin wrote:
> On Thu, Feb 20, 2020 at 05:06:06PM +0100, Halil Pasic wrote:
> > Currently the advanced guest memory protection technologies (AMD SEV,
> > powerpc secure guest technology and s390 Protected VMs) abuse the
> > VIRTIO_F_IOMMU_PLATF
Hi,
On 2020/1/15 18:28, Zhenzhong Duan wrote:
When the base address in the RHSA structure doesn't match the base address
in each DRHD structure, the base address in the last DRHD is printed out.
This doesn't make sense when there are multiple DRHD units; fix it
by printing the buggy RHSA's base address.
Signe
On Thu, 20 Feb 2020, Robin Murphy wrote:
> > > > > Add CONFIG_IOMMU_LIMIT_IOVA_ALIGNMENT to limit the alignment of
> > > > > IOVAs to some desired PAGE_SIZE order, specified by
> > > > > CONFIG_IOMMU_IOVA_ALIGNMENT. This helps reduce the impact of
> > > > > fragmentation caused by the current IOVA
On Thu, Feb 20, 2020 at 05:06:04PM +0100, Halil Pasic wrote:
> For vhost-net the feature VIRTIO_F_IOMMU_PLATFORM has the following side
> effect: the vhost code assumes that the addresses on the virtio descriptor
> ring are not guest physical addresses but IOVAs, and insists on doing a
> translation
On Thu, Feb 20, 2020 at 05:06:04PM +0100, Halil Pasic wrote:
> * This usage is not congruent with standardised semantics of
> VIRTIO_F_IOMMU_PLATFORM. Guest memory protection is an orthogonal reason
> for using the DMA API in virtio (orthogonal with respect to what is
> expressed by VIRTIO_F_IOMMU_PLAT
On Thu, Feb 20, 2020 at 05:06:06PM +0100, Halil Pasic wrote:
> Currently the advanced guest memory protection technologies (AMD SEV,
> powerpc secure guest technology and s390 Protected VMs) abuse the
> VIRTIO_F_IOMMU_PLATFORM flag to make virtio core use the DMA API, which
> is in turn necessary,
On Thu, Feb 20, 2020 at 05:06:04PM +0100, Halil Pasic wrote:
> Currently if one intends to run a memory protection enabled VM with
> virtio devices and linux as the guest OS, one needs to specify the
> VIRTIO_F_IOMMU_PLATFORM flag for each virtio device to make the guest
> linux use the DMA API, wh
intel_iommu_iova_to_phys() has a bug when it translates an IOVA for a huge
page onto its corresponding physical address. This commit fixes the bug by
accommodating the level of the page entry for the IOVA and adding the IOVA's
lower address bits to the physical address.
Signed-off-by: Yonghyun Hwang
---
Changes
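A minimal sketch of the kind of fix described above, assuming the usual VT-d helpers in intel-iommu.c (pfn_to_dma_pte(), dma_pte_present(), dma_pte_addr(), level_to_offset_bits()) behave as their names suggest; this is an illustration of the idea, not necessarily the exact patch:

/*
 * Account for the level at which the IOVA is mapped (4K, 2M or 1G page)
 * and add the IOVA's offset within that page to the page's base address.
 */
static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
					    dma_addr_t iova)
{
	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
	struct dma_pte *pte;
	int level = 0;
	u64 phys = 0;

	pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level);
	if (pte && dma_pte_present(pte))
		/* Keep the low IOVA bits covered by this page-table level. */
		phys = dma_pte_addr(pte) +
		       (iova & (BIT_MASK(level_to_offset_bits(level) +
					 VTD_PAGE_SHIFT) - 1));

	return phys;
}

With this, a lookup inside a 2 MB mapping returns the 2 MB page's base plus the low 21 bits of the IOVA, instead of just the base.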
Hi Yonghyun,
On Thu, Feb 20, 2020 at 11:44:31AM -0800, Yonghyun Hwang wrote:
> intel_iommu_iova_to_phys() has a bug when it translates an IOVA for a huge
> page onto its corresponding physical address. This commit fixes the bug by
> accommodating the level of the page entry for the IOVA and adding the IOVA's
On 02/20 08:45 am, Will Deacon wrote:
> On Wed, Feb 19, 2020 at 12:06:28PM -0800, isa...@codeaurora.org wrote:
> > On 2020-02-19 03:15, Will Deacon wrote:
> > > On Tue, Feb 18, 2020 at 05:57:18PM -0800, isa...@codeaurora.org wrote:
> > > > Does this mean that the driver should be managing the IOVA
On 20/02/2020 6:38 am, isa...@codeaurora.org wrote:
On 2020-02-17 08:03, Robin Murphy wrote:
On 14/02/2020 11:06 pm, Isaac J. Manjarres wrote:
From: Liam Mark
Using the best-fit algorithm, instead of the first-fit
algorithm, may reduce fragmentation when allocating
IOVAs.
What kind of patho
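The preview cuts off, but the trade-off under discussion can be shown with a toy model (plain C, not the kernel's rbtree-based IOVA allocator; struct free_range and both helpers are made up for illustration): best-fit picks the smallest free range that still satisfies the request, which tends to keep large contiguous ranges intact at the cost of a longer search.

#include <stddef.h>

struct free_range {
	unsigned long start;
	unsigned long size;
};

/* First-fit: take the first free range that is large enough. */
static long first_fit(const struct free_range *ranges, size_t nr,
		      unsigned long want)
{
	for (size_t i = 0; i < nr; i++)
		if (ranges[i].size >= want)
			return i;
	return -1;
}

/* Best-fit: take the smallest free range that is still large enough. */
static long best_fit(const struct free_range *ranges, size_t nr,
		     unsigned long want)
{
	long best = -1;

	for (size_t i = 0; i < nr; i++) {
		if (ranges[i].size < want)
			continue;
		if (best < 0 || ranges[i].size < ranges[best].size)
			best = i;
	}
	return best;
}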
The main DRM device is actually a virtual device so it doesn't have the
iommus property, which is instead on the DMA masters, in this case the
mixers.
Add a call to of_dma_configure() with the mixer's DT node but on the DRM
virtual device, to configure it in the same way as the mixers.
Signed-off-b
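A hedged sketch of what the commit message describes, assuming the driver has the mixer's device_node at hand; the function and variable names below are illustrative, not the actual patch:

#include <linux/of_device.h>

/*
 * Run of_dma_configure() against the mixer's DT node but on the DRM
 * device itself, so the virtual device inherits the mixer's IOMMU and
 * dma-ranges configuration.
 */
static int sunxi_drm_configure_dma(struct device *drm_dev,
				   struct device_node *mixer_node)
{
	/* force_dma = true: configure even without a dma-ranges property */
	return of_dma_configure(drm_dev, mixer_node, true);
}

With that in place, dma_alloc_*() calls issued through the DRM device are constrained the same way as the mixer's own DMA.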
The Allwinner H6 has introduced an IOMMU. Let's add a device tree binding
for it.
Reviewed-by: Rob Herring
Signed-off-by: Maxime Ripard
---
Documentation/devicetree/bindings/iommu/allwinner,sun50i-h6-iommu.yaml | 61
+
1 file changed,
Hi,
Here's a series adding support for the IOMMU introduced in the Allwinner
H6. The driver from Allwinner hints at more SoCs using it in the future
(with more masters), so we can bet on that IOMMU becoming pretty much
standard in new SoCs from Allwinner.
One thing I wasn't really sure about was
Now that we have a driver for the IOMMU, let's start using it.
Signed-off-by: Maxime Ripard
---
arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi | 10 ++
1 file changed, 10 insertions(+)
diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
b/arch/arm64/boot/dts/allwinner/sun50i-h6.dts
The Allwinner H6 has introduced an IOMMU for a few DMA controllers, mostly
video related: the display engine, the video decoders / encoders, the
camera capture controller, etc.
The design is pretty simple compared to other IOMMUs found in SoCs: there's
a single instance, controlling all the master
On Mon, Feb 17, 2020 at 1:17 PM Robin Murphy wrote:
>
> On 13/02/2020 9:49 pm, Rob Herring wrote:
> > On Thu, Jan 30, 2020 at 11:34 AM Robin Murphy wrote:
> >>
> >> On 30/01/2020 3:06 pm, Auger Eric wrote:
> >>> Hi Rob,
> >>> On 1/17/20 10:16 PM, Rob Herring wrote:
> Arm SMMUv3.2 adds suppor
On 19/02/2020 11:22 pm, Liam Mark wrote:
On Wed, 19 Feb 2020, Will Deacon wrote:
On Mon, Feb 17, 2020 at 04:46:14PM +, Robin Murphy wrote:
On 14/02/2020 8:30 pm, Liam Mark wrote:
When the IOVA framework applies IOVA alignment it aligns all
IOVAs to the smallest PAGE_SIZE order which is g
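A self-contained illustration of the alignment policy as described above (assumed from the text, not the exact iova.c code), plus the kind of cap the proposed config option would introduce; both parameters are in pages and purely illustrative:

static unsigned long iova_alignment_pages(unsigned long nr_pages,
					  unsigned long max_align_pages)
{
	unsigned long align = 1;

	/* Current behaviour: smallest power-of-two order covering the request. */
	while (align < nr_pages)
		align <<= 1;

	/* Proposed behaviour: never align beyond the configured limit. */
	return align < max_align_pages ? align : max_align_pages;
}

For example, a 65-page request is aligned to 128 pages today; with a 16-page cap it would only be aligned to 16 pages.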
On Thu Feb 20 20, Jerry Snitselaar wrote:
On Thu Feb 20 20, Lu Baolu wrote:
Hi Jerry,
On 2020/2/20 7:55, Jerry Snitselaar wrote:
Is it possible for a device to end up with dev->archdata.iommu == NULL
on iommu_need_mapping in the following instance:
1. iommu_group has dma domain for default
2.
On 2/18/20 11:13 AM, Rob Herring wrote:
Cc: "Rafael J. Wysocki"
Cc: Viresh Kumar
Cc: linux...@vger.kernel.org
Signed-off-by: Rob Herring
---
Acked-by: Mark Langsdorf
On 2/18/20 11:13 AM, Rob Herring wrote:
Cc: Jens Axboe
Cc: linux-...@vger.kernel.org
Signed-off-by: Rob Herring
---
Acked-by: Mark Langsdorf
On 20/02/2020 5:01 pm, Christoph Hellwig wrote:
We currently only support remapping memory as uncached through vmap
or a magic uncached segment provided by some architectures. But there
is a simpler and much better way available on some architectures where
we can just remap the memory in place.
Hi all,
this series provides support for remapping pages uncached in-place in
the generic dma-direct code, and moves openrisc over from its own
in-place remapping scheme. The arm64 folks also had interest in such
a scheme to avoid problems with speculating into cache aliases.
Also all architect
Switch openrisc to use the dma-direct allocator and just provide the
hooks for setting memory uncached or cached.
Signed-off-by: Christoph Hellwig
---
arch/openrisc/Kconfig | 1 +
arch/openrisc/kernel/dma.c | 51 +-
2 files changed, 7 insertions(+), 45 d
We currently only support remapping memory as uncached through vmap
or a magic uncached segment provided by some architectures. But there
is a simpler and much better way available on some architectures where
we can just remap the memory in place. The advantages are:
1) no aliasing is possible,
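A rough sketch of the shape such a scheme takes, with the hook names, error convention and call site assumed from the cover text rather than taken from the series itself:

/*
 * Assumed per-architecture hooks: return an uncached view of an
 * already-allocated buffer, either by rewriting its page-table
 * attributes in place or by returning a fixed uncached alias, and undo
 * that again on free.
 */
void *arch_dma_set_uncached(void *addr, size_t size);
void arch_dma_clear_uncached(void *addr, size_t size);

/* Illustrative call site in a generic allocator (not the real dma-direct code). */
void *dma_uncached_alloc_sketch(size_t size, gfp_t gfp)
{
	void *page = alloc_pages_exact(size, gfp);	/* normal, cached memory */
	void *uncached;

	if (!page)
		return NULL;

	uncached = arch_dma_set_uncached(page, size);	/* remap in place */
	if (IS_ERR(uncached)) {
		free_pages_exact(page, size);
		return NULL;
	}
	return uncached;
}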
On 20.02.20 17:31, Christoph Hellwig wrote:
> On Thu, Feb 20, 2020 at 05:23:20PM +0100, Christian Borntraeger wrote:
>> From a user's perspective it makes absolutely perfect sense to use the
>> bounce buffers when they are NEEDED.
>> Forcing the user to specify iommu_platform just because you n
On Thu, Feb 20, 2020 at 05:23:20PM +0100, Christian Borntraeger wrote:
> From a user's perspective it makes absolutely perfect sense to use the
> bounce buffers when they are NEEDED.
> Forcing the user to specify iommu_platform just because you need bounce
> buffers
> really feels wrong. And obvi
On Thu Feb 20 20, Lu Baolu wrote:
Hi Jerry,
On 2020/2/20 7:55, Jerry Snitselaar wrote:
Is it possible for a device to end up with dev->archdata.iommu == NULL
on iommu_need_mapping in the following instance:
1. iommu_group has dma domain for default
2. device gets private identity domain in int
On 20.02.20 17:11, Christoph Hellwig wrote:
> On Thu, Feb 20, 2020 at 05:06:05PM +0100, Halil Pasic wrote:
>> Currently force_dma_unencrypted() is only used by the direct
>> implementation of the DMA API, and thus resides in dma-direct.h. But
>> there is nothing dma-direct specific about it: if
On Thu, Feb 20, 2020 at 05:06:06PM +0100, Halil Pasic wrote:
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 867c7ebd3f10..fafc8f924955 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -243,6 +243,9 @@ static bool vring_use_dma
On Thu, Feb 20, 2020 at 05:06:05PM +0100, Halil Pasic wrote:
> Currently force_dma_unencrypted() is only used by the direct
> implementation of the DMA API, and thus resides in dma-direct.h. But
> there is nothing dma-direct specific about it: if one were -- for
> whatever reason -- to implement cus
Currently if one intends to run a memory protection enabled VM with
virtio devices and linux as the guest OS, one needs to specify the
VIRTIO_F_IOMMU_PLATFORM flag for each virtio device to make the guest
linux use the DMA API, which in turn handles the memory
encryption/protection stuff if the gue
Currently force_dma_unencrypted() is only used by the direct
implementation of the DMA API, and thus resides in dma-direct.h. But
there is nothing dma-direct specific about it: if one were -- for
whatever reason -- to implement custom DMA ops that have to in the
encrypted/protected scenarios dma-dir
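For context, the declaration being talked about typically looks something like this (the Kconfig symbol is real; which header it lives in is exactly what the patch wants to change):

/*
 * Architectures with memory encryption/protection select
 * CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED and provide the real
 * implementation; everyone else gets the trivial stub.
 */
#ifdef CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED
bool force_dma_unencrypted(struct device *dev);
#else
static inline bool force_dma_unencrypted(struct device *dev)
{
	return false;
}
#endif /* CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED */

Moving it out of dma-direct.h into a generic header lets custom DMA-ops implementations, not just dma-direct, check it as well.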
Currently the advanced guest memory protection technologies (AMD SEV,
powerpc secure guest technology and s390 Protected VMs) abuse the
VIRTIO_F_IOMMU_PLATFORM flag to make virtio core use the DMA API, which
is in turn necessary to make IO work with guest memory protection.
But VIRTIO_F_IOMMU_PLA
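The preview is truncated, but together with the virtio_ring.c hunk quoted elsewhere in this digest (around vring_use_dma_api()), the change appears to have roughly this shape; treat it as a hedged sketch, with helper names as recalled from the 5.5-era virtio code, not as the exact patch:

static bool vring_use_dma_api(struct virtio_device *vdev)
{
	/* VIRTIO_F_IOMMU_PLATFORM negotiated: always go through the DMA API. */
	if (!virtio_has_iommu_quirk(vdev))
		return true;

	/* Protected/encrypted guest memory also needs the DMA API (bouncing). */
	if (force_dma_unencrypted(&vdev->dev))
		return true;

	/* Otherwise, we are left to guess. */
	if (xen_domain())
		return true;

	return false;
}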
Hi,
On 2020/2/20 18:06, Daniel Drake wrote:
On Wed, Feb 19, 2020 at 11:40 AM Lu Baolu wrote:
With respect, this is problematic. The parent and all subdevices share
a single translation entry. The DMA mask should be consistent.
Otherwise, for example, subdevice A has 64-bit DMA capability an
> On Wed, Feb 19, 2020 at 11:40 AM Lu Baolu wrote:
> > With respect, this is problematic. The parent and all subdevices share
> > a single translation entry. The DMA mask should be consistent.
> >
> > Otherwise, for example, subdevice A has 64-bit DMA capability and uses
> > an identity domain f
On Wed, Feb 19, 2020 at 12:06:28PM -0800, isa...@codeaurora.org wrote:
> On 2020-02-19 03:15, Will Deacon wrote:
> > On Tue, Feb 18, 2020 at 05:57:18PM -0800, isa...@codeaurora.org wrote:
> > > Does this mean that the driver should be managing the IOVA space and
> > > mappings for this device using
Hi,
On Wed, Feb 19, 2020 at 06:19:56AM -0800, Steve Morris wrote:
> I'm running Arch Linux x86_64
>
> My system consistently reboots when I power on my FA-101 when running
> kernels 5.5.1-4. Downgrading to 5.4.15 allows everything to work
> properly.
>
> Here's the output of:
> journalctl | grep