On 06/24/21 at 11:47am, Robin Murphy wrote:
> On 2021-06-24 10:29, Baoquan He wrote:
> > On 06/24/21 at 08:40am, Christoph Hellwig wrote:
> > > So reduce the amount allocated. But the pool is needed for proper
> > > operation on systems with memory encryption. And please add the right
> > > maint
On Mon, Aug 2, 2021 at 10:54 PM Will Deacon wrote:
>
> On Fri, Jul 09, 2021 at 12:35:01PM +0900, David Stevens wrote:
> > From: David Stevens
> >
> > If SKIP_CPU_SYNC isn't already set, then iommu_dma_unmap_(page|sg) has
> > already called iommu_dma_sync_(single|sg)_for_cpu, so there is no need
>
On Mon, Aug 2, 2021 at 10:30 PM Will Deacon wrote:
>
> On Fri, Jul 09, 2021 at 12:34:59PM +0900, David Stevens wrote:
> > From: David Stevens
> >
> > The is_swiotlb_buffer function takes the physical address of the swiotlb
> > buffer, not the physical address of the original buffer. The sglist
>
> From: Eric Auger
> Sent: Wednesday, August 4, 2021 11:59 PM
>
[...]
> > 1.2. Attach Device to I/O address space
> > +++
> >
> > Device attach/bind is initiated through passthrough framework uAPI.
> >
> > Device attaching is allowed only after a device is succ
> From: Jason Gunthorpe
> Sent: Wednesday, August 4, 2021 10:05 PM
>
> On Mon, Aug 02, 2021 at 02:49:44AM +, Tian, Kevin wrote:
>
> > Can you elaborate? IMO the user only cares about the label (device cookie
> > plus optional vPASID) which is generated by itself when doing the attaching
> >
On 8/4/21 11:44 AM, Tianyu Lan wrote:
> +static int default_set_memory_enc(unsigned long addr, int numpages, bool enc);
> +DEFINE_STATIC_CALL(x86_set_memory_enc, default_set_memory_enc);
> +
> #define CPA_FLUSHTLB 1
> #define CPA_ARRAY 2
> #define CPA_PAGES_ARRAY 4
> @@ -1981,6 +1985,11 @@ in
On Wednesday, 4 August 2021 at 19:15:36 CEST, Robin Murphy wrote:
> The core code bakes its own cookies now.
>
> CC: Heiko Stuebner
> Signed-off-by: Robin Murphy
On a Rockchip rk3288 (arm32), rk3399 (arm64) and px30 (arm64)
with the graphics pipeline using the iommu
Tested-by: Heiko Stuebner
On Wednesday, 4 August 2021 at 19:15:29 CEST, Robin Murphy wrote:
> Now that everyone has converged on iommu-dma for IOMMU_DOMAIN_DMA
> support, we can abandon the notion of drivers being responsible for the
> cookie type, and consolidate all the management into the core code.
>
> CC: Marek Szyprow
From: Tianyu Lan
In an Isolation VM, all memory shared with the host needs to be marked
visible to the host via a hvcall. vmbus_establish_gpadl() has already
done this for the storvsc rx/tx ring buffer. The page buffer used by
vmbus_sendpacket_mpb_desc() still needs to be handled. Use the DMA API
to map/unmap this memory during
From: Tianyu Lan
In an Isolation VM, all memory shared with the host needs to be marked
visible to the host via a hvcall. vmbus_establish_gpadl() has already
done this for the netvsc rx/tx ring buffer. The page buffer used by
vmbus_sendpacket_pagebuffer() still needs to be handled. Use the DMA
API to map/unmap this memory during
From: Tianyu Lan
Hyper-V Isolation VMs require bounce buffer support to copy data
from/to encrypted memory, so enable swiotlb force mode to use the
swiotlb bounce buffer for DMA transactions.
In an Isolation VM with AMD SEV, the bounce buffer needs to be
accessed via an extra address space which is above
From: Tianyu Lan
In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed
via an extra address space which is above shared_gpa_boundary (e.g. the
39-bit address line) reported by the Hyper-V ISOLATION_CONFIG CPUID
leaf. The physical address used for access will be the original
physical address plus shared_gpa_boundary.
From: Tianyu Lan
In a Hyper-V Isolation VM with AMD SEV, the swiotlb bounce buffer needs
to be mapped into the address space above vTOM, so introduce
dma_map_decrypted()/dma_unmap_encrypted() to map/unmap bounce buffer
memory. The platform can populate the map/unmap callbacks in the DMA
memory decrypted ops.
From: Tianyu Lan
The VMbus ring buffers are shared with the host and need to be accessed
via the extra address space of an Isolation VM with SNP support. This
patch maps the ring buffer address in the extra address space via
ioremap(). The HV host visibility hvcall smears data in the ring
buffer, so reset t
From: Tianyu Lan
The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
with the host in an Isolation VM, so it's necessary to use a hvcall to
make them visible to the host. In an Isolation VM with AMD SEV-SNP, the
access address should be in the extra space above the shared gpa
boundary. So
From: Tianyu Lan
Hyper-V provides a GHCB hvcall to handle the VMBus HVCALL_SIGNAL_EVENT
and HVCALL_POST_MESSAGE messages in an SNP Isolation VM. Add such
support.
Signed-off-by: Tianyu Lan
---
arch/x86/hyperv/ivm.c | 43 +
arch/x86/include/asm/mshyperv.h | 1 +
dri
From: Tianyu Lan
Hyper-V provides a GHCB protocol to write the Synthetic Interrupt
Controller MSR registers in an Isolation VM with AMD SEV-SNP; these
registers are emulated by the hypervisor directly. Hyper-V requires
writing the SINTx MSR registers twice. The first write goes via the
GHCB page to communicate with h
From: Tianyu Lan
Mark the vmbus ring buffer visible with set_memory_decrypted() when
establishing the gpadl handle.
Signed-off-by: Tianyu Lan
---
drivers/hv/channel.c | 44 --
include/linux/hyperv.h | 11 +++
2 files changed, 53 insertions(+), 2 deletions
From: Tianyu Lan
Add the new hvcall guest address host visibility support to mark memory
visible to the host. Override the x86_set_memory_enc static call with a
hv hook that marks memory visible to the host via
set_memory_decrypted().
Signed-off-by: Tianyu Lan
---
Changes since v1:
* Use new static call x86_s
From: Tianyu Lan
Hyper-V and other platforms (e.g. Intel and AMD) want to override
__set_memory_enc_dec(). Add an x86_set_memory_enc static call here so
that platforms can hook in their implementation.
Signed-off-by: Tianyu Lan
---
arch/x86/include/asm/set_memory.h | 4
arch/x86/mm/pat/set_memory
From: Tianyu Lan
Hyper-V exposes the shared memory boundary via the
HYPERV_CPUID_ISOLATION_CONFIG cpuid leaf and stores it in the
shared_gpa_boundary field of the ms_hyperv struct. This prepares for
sharing memory with the host for SNP guests.
Signed-off-by: Tianyu Lan
---
arch/x86/kernel/cpu/mshyperv.c | 2 ++
include/asm-
From: Tianyu Lan
Hyper-V exposes a GHCB page via the SEV-ES GHCB MSR for SNP guests to
communicate with the hypervisor. Map the GHCB page for all CPUs to
read/write MSR registers and submit hvcall requests via the GHCB.
Signed-off-by: Tianyu Lan
---
arch/x86/hyperv/hv_init.c | 69
From: Tianyu Lan
Hyper-V provides two kinds of Isolation VMs: VBS (Virtualization-Based
Security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
adds support for these Isolation VMs in Linux. The memory of these VMs
is encrypted and the host can't access guest memory directly
Allocating and enabling a flush queue is in fact something we can
reasonably do while a DMA domain is active, without having to rebuild it
from scratch. Thus we can allow a strict -> non-strict transition from
sysfs without requiring the device's driver to be unbound, which is of
particular interest to
Factor out flush queue setup from the initial domain init so that we
can potentially trigger it from sysfs later on in a domain's lifetime.
Reviewed-by: Lu Baolu
Reviewed-by: John Garry
Signed-off-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 30 --
include/linux
To parallel the sysfs behaviour, merge the new build-time option
for DMA domain strictness into the default domain type choice.
Suggested-by: Joerg Roedel
Reviewed-by: Lu Baolu
Reviewed-by: Jean-Philippe Brucker
Signed-off-by: Robin Murphy
---
v3: Remember to update parameter documentation a
When passthrough is enabled, the default strictness policy becomes
irrelevant, since any subsequent runtime override to a DMA domain type
now embodies an explicit choice of strictness as well. Save on noise by
only logging the default policy when it is meaningfully in effect.
Reviewed-by: John Gar
The sysfs interface for default domain types exists primarily so users
can choose the performance/security tradeoff relevant to their own
workload. As such, the choice between the policies for DMA domains fits
perfectly as an additional point on that scale - downgrading a
particular device from a s
Eliminate the iommu_get_dma_strict() indirection and pipe the
information through the domain type from the beginning. Besides
the flow simplification this also has several nice side-effects:
- Automatically implies strict mode for untrusted devices by
virtue of their IOMMU_DOMAIN_DMA override.
In preparation for the strict vs. non-strict decision for DMA domains
to be expressed in the domain type, make sure we expose our flush queue
awareness by accepting the new domain type, and test the specific
feature flag where we want to identify DMA domains in general. The DMA
ops reset/setup can
In preparation for the strict vs. non-strict decision for DMA domains to
be expressed in the domain type, make sure we expose our flush queue
awareness by accepting the new domain type.
Signed-off-by: Robin Murphy
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 1 +
drivers/iommu/arm/arm-smmu/
The DMA ops reset/setup can simply be unconditional, since
iommu-dma already knows only to touch DMA domains.
Signed-off-by: Robin Murphy
---
drivers/iommu/amd/iommu.c | 9 ++---
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu
Promote the difference between strict and non-strict DMA domains from an
internal detail to a distinct domain feature and type, to pave the road
for exposing it through the sysfs default domain interface.
Reviewed-by: Lu Baolu
Reviewed-by: Jean-Philippe Brucker
Signed-off-by: Robin Murphy
---
IO_PGTABLE_QUIRK_NON_STRICT was never a very comfortable fit, since it's
not a quirk of the pagetable format itself. Now that we have a more
appropriate way to convey non-strict unmaps, though, this last of the
non-quirk quirks can also go, and with the flush queue code also now
enforcing its own o
Since iommu_iotlb_gather exists to help drivers optimise flushing for a
given unmap request, it is also the logical place to indicate whether
the unmap is strict or not, and thus help them further optimise for
whether to expect a sync or a flush_all subsequently. As part of that,
it also seems fair
iommu_dma_init_domain() is now only called from iommu_setup_dma_ops(),
which has already assumed dev to be non-NULL.
Reviewed-by: John Garry
Reviewed-by: Lu Baolu
Signed-off-by: Robin Murphy
---
drivers/iommu/dma-iommu.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/d
IOVA cookies are now got and put by core code, so we no longer need to
export these to modular drivers. The export for getting MSI cookies
stays, since VFIO can still be a module, but it was already relying on
someone else putting them, so that aspect is unaffected.
Reviewed-by: Lu Baolu
Reviewed
The core code bakes its own cookies now.
Reviewed-by: Jean-Philippe Brucker
Signed-off-by: Robin Murphy
---
drivers/iommu/virtio-iommu.c | 8
1 file changed, 8 deletions(-)
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 6abdcab7273b..80930ce04a16 100644
The core code bakes its own cookies now.
CC: Maxime Ripard
Signed-off-by: Robin Murphy
---
v3: Also remove unneeded include
---
drivers/iommu/sun50i-iommu.c | 13 +
1 file changed, 1 insertion(+), 12 deletions(-)
diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-io
The core code bakes its own cookies now.
CC: Chunyan Zhang
Signed-off-by: Robin Murphy
---
v3: Also remove unneeded include
---
drivers/iommu/sprd-iommu.c | 7 ---
1 file changed, 7 deletions(-)
diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
index 73dfd9946312..27ac
The core code bakes its own cookies now.
CC: Heiko Stuebner
Signed-off-by: Robin Murphy
---
v3: Also remove unneeded include
---
drivers/iommu/rockchip-iommu.c | 12 +---
1 file changed, 1 insertion(+), 11 deletions(-)
diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockc
The core code bakes its own cookies now.
CC: Yong Wu
Signed-off-by: Robin Murphy
---
v3: Also remove unneeded includes
---
drivers/iommu/mtk_iommu.c| 7 ---
drivers/iommu/mtk_iommu_v1.c | 1 -
2 files changed, 8 deletions(-)
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_
The core code bakes its own cookies now.
CC: Yoshihiro Shimoda
CC: Geert Uytterhoeven
Signed-off-by: Robin Murphy
---
v3: Also remove unneeded include
---
drivers/iommu/ipmmu-vmsa.c | 28
1 file changed, 4 insertions(+), 24 deletions(-)
diff --git a/drivers/iomm
The core code bakes its own cookies now.
CC: Marek Szyprowski
Signed-off-by: Robin Murphy
---
v3: Also remove unneeded include
---
drivers/iommu/exynos-iommu.c | 19 ---
1 file changed, 4 insertions(+), 15 deletions(-)
diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu
The core code bakes its own cookies now.
Reviewed-by: Lu Baolu
Signed-off-by: Robin Murphy
---
drivers/iommu/intel/iommu.c | 8
1 file changed, 8 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index c12cc955389a..7e168634c433 100644
--- a/drivers/i
The core code bakes its own cookies now.
Signed-off-by: Robin Murphy
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 7 ---
drivers/iommu/arm/arm-smmu/arm-smmu.c | 15 ---
drivers/iommu/arm/arm-smmu/qcom_iommu.c | 9 -
3 files changed, 4 insertions(+), 27 de
Now that everyone has converged on iommu-dma for IOMMU_DOMAIN_DMA
support, we can abandon the notion of drivers being responsible for the
cookie type, and consolidate all the management into the core code.
CC: Marek Szyprowski
CC: Yoshihiro Shimoda
CC: Geert Uytterhoeven
CC: Yong Wu
CC: Heiko
The core code bakes its own cookies now.
Signed-off-by: Robin Murphy
---
v3: Also remove unneeded include
---
drivers/iommu/amd/iommu.c | 13 -
1 file changed, 13 deletions(-)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 52fe2326042a..92f7cbe3d14a 10064
v1:
https://lore.kernel.org/linux-iommu/cover.1626888444.git.robin.mur...@arm.com/
v2:
https://lore.kernel.org/linux-iommu/cover.1627468308.git.robin.mur...@arm.com/
Hi all,
Round 3, and the patch count has crept up yet again. But the overall
diffstat is even more negative, so that's good, righ
Hi Kevin,
A few comments/questions below.
On 7/9/21 9:48 AM, Tian, Kevin wrote:
> /dev/iommu provides an unified interface for managing I/O page tables for
> devices assigned to userspace. Device passthrough frameworks (VFIO, vDPA,
> etc.) are expected to use this interface instead of creating th
On 2021-08-04 06:02, Yongji Xie wrote:
On Tue, Aug 3, 2021 at 6:54 PM Robin Murphy wrote:
On 2021-08-03 09:54, Yongji Xie wrote:
On Tue, Aug 3, 2021 at 3:41 PM Jason Wang wrote:
On 2021/7/29 3:34 PM, Xie Yongji wrote:
Export alloc_iova_fast() and free_iova_fast() so that
some modules can use
On Tue, Aug 03, 2021 at 11:58:54AM +1000, David Gibson wrote:
> > I'd rather deduce the endpoint from a collection of devices than the
> > other way around...
>
> Which I think is confusing, and in any case doesn't cover the case of
> one "device" with multiple endpoints.
Well they are both confu
On Mon, Aug 02, 2021 at 02:49:44AM +, Tian, Kevin wrote:
> Can you elaborate? IMO the user only cares about the label (device cookie
> plus optional vPASID) which is generated by itself when doing the attaching
> call, and expects this virtual label being used in various spots
> (invalidatio
On Wed, Aug 4, 2021 at 4:54 PM Jason Wang wrote:
>
>
> On 2021/8/4 4:50 PM, Yongji Xie wrote:
> > On Wed, Aug 4, 2021 at 4:32 PM Jason Wang wrote:
> >>
> >> On 2021/8/3 5:38 PM, Yongji Xie wrote:
> >>> On Tue, Aug 3, 2021 at 4:09 PM Jason Wang wrote:
> On 2021/7/29 3:34 PM, Xie Yongji wrote:
> > The d
On 2021/8/4 4:50 PM, Yongji Xie wrote:
On Wed, Aug 4, 2021 at 4:32 PM Jason Wang wrote:
On 2021/8/3 5:38 PM, Yongji Xie wrote:
On Tue, Aug 3, 2021 at 4:09 PM Jason Wang wrote:
On 2021/7/29 3:34 PM, Xie Yongji wrote:
The device reset may fail in the virtio-vdpa case now, so add checks to
its return value and
On Wed, Aug 4, 2021 at 4:32 PM Jason Wang wrote:
>
>
> On 2021/8/3 5:38 PM, Yongji Xie wrote:
> > On Tue, Aug 3, 2021 at 4:09 PM Jason Wang wrote:
> >>
> >> On 2021/7/29 3:34 PM, Xie Yongji wrote:
> >>> The device reset may fail in the virtio-vdpa case now, so add checks to
> >>> its return value and fail the
On 2021/8/3 5:50 PM, Yongji Xie wrote:
On Tue, Aug 3, 2021 at 4:10 PM Jason Wang wrote:
On 2021/7/29 3:34 PM, Xie Yongji wrote:
Re-read the device status to ensure it's set to zero during
resetting. Otherwise, fail the vhost_vdpa_set_status() after timeout.
Signed-off-by: Xie Yongji
---
drivers/v
On 2021/8/3 5:38 PM, Yongji Xie wrote:
On Tue, Aug 3, 2021 at 4:09 PM Jason Wang wrote:
On 2021/7/29 3:34 PM, Xie Yongji wrote:
The device reset may fail in the virtio-vdpa case now, so add checks to
its return value and fail the register_virtio_device().
So the reset() would be called by the driver dur
On 2021/8/3 5:31 PM, Yongji Xie wrote:
On Tue, Aug 3, 2021 at 3:58 PM Jason Wang wrote:
On 2021/7/29 3:34 PM, Xie Yongji wrote:
Re-read the device status to ensure it's set to zero during
resetting. Otherwise, fail the vdpa_reset() after timeout.
Signed-off-by: Xie Yongji
---
include/linux/vdpa.h
On 2021/8/3 5:01 PM, Yongji Xie wrote:
On Tue, Aug 3, 2021 at 3:46 PM Jason Wang wrote:
On 2021/7/29 3:34 PM, Xie Yongji wrote:
Export receive_fd() so that some modules can use it to pass file
descriptors between processes without missing any security stuff.
Signed-off-by: Xie Yongji
---
fs/file.
On 04-08-2021 03:26, Wei Liu wrote:
>>> struct iommu_domain domain;
>>> @@ -774,6 +784,41 @@ static struct iommu_device
>>> *hv_iommu_probe_device(struct device *dev)
>>> if (!dev_is_pci(dev))
>>> return ERR_PTR(-ENODEV);
>>>
>>> + /*
>>> +* Skip the PCI device specifie