Hi Jean,
On 2022/4/28 22:47, Jean-Philippe Brucker wrote:
Hi Baolu,
On Thu, Apr 21, 2022 at 01:21:19PM +0800, Lu Baolu wrote:
+/*
+ * Get the attached domain for asynchronous usage, for example the I/O
+ * page fault handling framework. The caller gets a reference counter
+ * of the domain auto
On 2022/4/28 22:53, Jean-Philippe Brucker wrote:
On Thu, Apr 21, 2022 at 01:21:12PM +0800, Lu Baolu wrote:
Attaching an IOMMU domain to a PASID of a device is a generic operation
for modern IOMMU drivers which support PASID-granular DMA address
translation. Currently visible usage scenarios incl
On 2022/4/28 22:57, Jean-Philippe Brucker wrote:
On Thu, Apr 21, 2022 at 01:21:20PM +0800, Lu Baolu wrote:
static void iopf_handle_group(struct work_struct *work)
{
struct iopf_group *group;
@@ -134,12 +78,23 @@ static void iopf_handle_group(struct work_struct *work)
group =
Hi Robin,
On 2022/4/28 21:18, Robin Murphy wrote:
Move the bus setup to iommu_device_register(). This should allow
bus_iommu_probe() to be correctly replayed for multiple IOMMU instances,
and leaves bus_set_iommu() as a glorified no-op to be cleaned up next.
I re-fetched the latest patches on
On 2022/4/28 16:39, Jean-Philippe Brucker wrote:
On Tue, Apr 26, 2022 at 04:31:57PM -0700, Dave Hansen wrote:
On 4/26/22 09:48, Jean-Philippe Brucker wrote:
On Tue, Apr 26, 2022 at 08:27:00AM -0700, Dave Hansen wrote:
On 4/25/22 09:40, Jean-Philippe Brucker wrote:
The problem is that we'd hav
Hi Joao,
Thanks for doing this.
On 2022/4/29 05:09, Joao Martins wrote:
Add to iommu domain operations a set of callbacks to
perform dirty tracking, particularly to start and stop
tracking and finally to test and clear the dirty data.
Drivers are expected to dynamically change their hw protection
On 2022/4/29 05:09, Joao Martins wrote:
+int iopt_set_dirty_tracking(struct io_pagetable *iopt,
+ struct iommu_domain *domain, bool enable)
+{
+ struct iommu_domain *dom;
+ unsigned long index;
+ int ret = -EOPNOTSUPP;
+
+ down_write(&iopt->iova_r
On 2022/4/29 05:09, Joao Martins wrote:
Add an IO pagetable API iopt_read_and_clear_dirty_data() that
performs the reading of dirty IOPTEs for a given IOVA range and
then copying back to userspace from each area-internal bitmap.
Underneath it uses the IOMMU equivalent API which will read the
dir
On 2022/4/29 05:09, Joao Martins wrote:
Today, the dirty state is lost and the page wouldn't be migrated to the
destination, potentially leading the guest into error.
Add an unmap API that reads the dirty bit and sets it in the
user passed bitmap. This unmap iommu API tackles a potentially
racy updat
On 2022/4/29 05:09, Joao Martins wrote:
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5089,6 +5089,113 @@ static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
}
}
+static int intel_iommu_set_dirty_tracking(struct iommu_domain *domain,
+
On 2022/4/30 06:19, Fenghua Yu wrote:
Hi, Jean and Baolu,
On Fri, Apr 29, 2022 at 03:34:36PM +0100, Jean-Philippe Brucker wrote:
On Fri, Apr 29, 2022 at 06:51:17AM -0700, Fenghua Yu wrote:
Hi, Baolu,
On Fri, Apr 29, 2022 at 03:53:57PM +0800, Baolu Lu wrote:
On 2022/4/28 16:39, Jean-Philippe
Hi Jason,
On 2022/5/2 21:05, Jason Gunthorpe wrote:
On Sun, May 01, 2022 at 07:24:31PM +0800, Lu Baolu wrote:
The SNP bit is only valid for second-level PTEs. Setting this bit in the
first-level PTEs has no functional impact because the Intel IOMMU always
ignores the same bit in first-level PTE
On 2022/5/2 21:17, Jason Gunthorpe wrote:
On Sun, May 01, 2022 at 07:24:32PM +0800, Lu Baolu wrote:
+static bool domain_support_force_snooping(struct dmar_domain *domain)
+{
+ struct device_domain_info *info;
+ unsigned long flags;
+ bool support = true;
+
+ spin_lock_irq
On 2022/5/3 05:31, Jacob Pan wrote:
Hi BaoLu,
Hi Jacob,
On Sun, 1 May 2022 19:24:32 +0800, Lu Baolu wrote:
As domain->force_snooping only impacts the devices attached with the
domain, there's no need to check against all IOMMU units. At the same
time, for a brand new domain (hasn't been a
On 2022/5/3 05:36, Jacob Pan wrote:
Hi BaoLu,
On Sun, 1 May 2022 19:24:33 +0800, Lu Baolu wrote:
The IOMMU force snooping capability is not required to be consistent
among all the IOMMUs anymore. Remove force snooping capability check
in the IOMMU hot-add path and domain_update_iommu_snooping
On 2022/5/2 21:19, Jason Gunthorpe wrote:
On Sun, May 01, 2022 at 07:24:34PM +0800, Lu Baolu wrote:
As enforce_cache_coherency has been introduced into the iommu_domain_ops,
the kernel component which owns the iommu domain is able to opt-in its
requirement for force snooping support. The iommu d
Hi Jason,
On 2022/5/4 08:11, Jason Gunthorpe wrote:
+static int __iommu_group_attach_domain(struct iommu_group *group,
+ struct iommu_domain *new_domain)
{
int ret;
+ if (group->domain == new_domain)
+ return 0;
+
/*
-
Hi Jason,
On 2022/5/4 08:11, Jason Gunthorpe wrote:
Once the group enters 'owned' mode it can never be assigned back to the
default_domain or to a NULL domain. It must always be actively assigned to
a current domain. If the caller hasn't provided a domain then the core
must provide an explicit D
On 2022/5/4 21:31, Jason Gunthorpe wrote:
On Wed, May 04, 2022 at 03:25:50PM +0800, Baolu Lu wrote:
Hi Jason,
On 2022/5/2 21:05, Jason Gunthorpe wrote:
On Sun, May 01, 2022 at 07:24:31PM +0800, Lu Baolu wrote:
The SNP bit is only valid for second-level PTEs. Setting this bit in the
first
On 2022/5/4 22:38, Jason Gunthorpe wrote:
@@ -3180,7 +3181,9 @@ int iommu_group_claim_dma_owner(struct iommu_group *group, void *owner)
ret = -EPERM;
goto unlock_out;
} else {
- if (group->domain && group->domain != group->default_domain) {
On 2022/5/4 02:02, Jean-Philippe Brucker wrote:
On Mon, May 02, 2022 at 09:48:32AM +0800, Lu Baolu wrote:
Use this field to save the pasid/ssid bits that a device is able to
support with its IOMMU hardware. It is a generic attribute of a device
and lifting it into the per-device dev_iommu struct
On 2022/5/4 02:07, Jean-Philippe Brucker wrote:
On Mon, May 02, 2022 at 09:48:33AM +0800, Lu Baolu wrote:
Attaching an IOMMU domain to a PASID of a device is a generic operation
for modern IOMMU drivers which support PASID-granular DMA address
translation. Currently visible usage scenarios inclu
On 2022/5/4 02:09, Jean-Philippe Brucker wrote:
On Mon, May 02, 2022 at 09:48:34AM +0800, Lu Baolu wrote:
Use below data structures for SVA implementation in the IOMMU core:
- struct iommu_sva_ioas
Represent the I/O address space shared with an application CPU address
space. This structur
On 2022/5/4 02:12, Jean-Philippe Brucker wrote:
On Mon, May 02, 2022 at 09:48:37AM +0800, Lu Baolu wrote:
Add support for SVA domain allocation and provide an SVA-specific
iommu_domain_ops.
Signed-off-by: Lu Baolu
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 14 +++
.../iommu/arm
On 2022/5/4 02:20, Jean-Philippe Brucker wrote:
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 7cae631c1baa..33449523afbe 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3174,3 +3174,24 @@ void iommu_detach_device_pasid(struct iommu_domain *domain,
iomm
On 2022/5/5 16:43, Tian, Kevin wrote:
From: Lu Baolu
Sent: Thursday, May 5, 2022 9:07 AM
As domain->force_snooping only impacts the devices attached with the
domain, there's no need to check against all IOMMU units. At the same
time, for a brand new domain (hasn't been attached to any device),
On 2022/5/5 16:46, Tian, Kevin wrote:
From: Lu Baolu
Sent: Thursday, May 5, 2022 9:07 AM
As enforce_cache_coherency has been introduced into the
iommu_domain_ops,
the kernel component which owns the iommu domain is able to opt-in its
requirement for force snooping support. The iommu driver has
On 2022/5/3 15:49, Jean-Philippe Brucker wrote:
On Sat, Apr 30, 2022 at 03:33:17PM +0800, Baolu Lu wrote:
Jean, another quick question about the iommu_sva_bind_device()
/**
* iommu_sva_bind_device() - Bind a process address space to a device
* @dev: the device
* @mm: the mm to bind
On 2022/5/5 21:38, Jean-Philippe Brucker wrote:
Hi Baolu,
On Thu, May 05, 2022 at 04:31:38PM +0800, Baolu Lu wrote:
On 2022/5/4 02:20, Jean-Philippe Brucker wrote:
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 7cae631c1baa..33449523afbe 100644
--- a/drivers/iommu/iommu.c
On 2022/5/6 03:46, Steve Wahl wrote:
Increase DMAR_UNITS_SUPPORTED to support 64 sockets with 10 DMAR units
each, for a total of 640.
If the available hardware exceeds DMAR_UNITS_SUPPORTED (previously set
to MAX_IO_APICS, or 128), it causes these messages: "DMAR: Failed to
allocate seq_id", "DMA
On 2022/5/6 14:10, Tian, Kevin wrote:
From: Lu Baolu
Sent: Friday, May 6, 2022 1:27 PM
+
+/*
+ * Set the page snoop control for a pasid entry which has been set up.
+ */
+void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
+ struct device
On 2022/5/6 03:27, Jason Gunthorpe wrote:
On Thu, May 05, 2022 at 07:56:59PM +0100, Robin Murphy wrote:
Ack to that, there are certainly further improvements to consider once we've
got known-working code into a released kernel, but let's not get ahead of
ourselves just now.
Yes please
(I'm
Hi Jean,
On 2022/5/5 14:42, Baolu Lu wrote:
On 2022/5/4 02:09, Jean-Philippe Brucker wrote:
On Mon, May 02, 2022 at 09:48:34AM +0800, Lu Baolu wrote:
Use below data structures for SVA implementation in the IOMMU core:
- struct iommu_sva_ioas
Represent the I/O address space shared with an
On 2022/5/7 16:32, Baolu Lu wrote:
Hi Jean,
On 2022/5/5 14:42, Baolu Lu wrote:
On 2022/5/4 02:09, Jean-Philippe Brucker wrote:
On Mon, May 02, 2022 at 09:48:34AM +0800, Lu Baolu wrote:
Use below data structures for SVA implementation in the IOMMU core:
- struct iommu_sva_ioas
Represent
On 2022/5/10 08:51, Tian, Kevin wrote:
From: Lu Baolu
Sent: Sunday, May 8, 2022 8:35 PM
As domain->force_snooping only impacts the devices attached with the
domain, there's no need to check against all IOMMU units. On the other
hand, force_snooping could be set on a domain no matter whether it
On 2022/5/10 22:34, Jason Gunthorpe wrote:
On Tue, May 10, 2022 at 02:17:28PM +0800, Lu Baolu wrote:
int iommu_device_register(struct iommu_device *iommu,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 627a3ed5ee8f..afc63fce6107 1
On 2022/5/10 22:02, Jason Gunthorpe wrote:
On Tue, May 10, 2022 at 02:17:29PM +0800, Lu Baolu wrote:
This adds a pair of common domain ops for this purpose and adds helpers
to attach/detach a domain to/from a {device, PASID}.
I wonder if this should not have a detach op - after discussing wit
On 2022/5/10 23:23, Jason Gunthorpe wrote:
On Tue, May 10, 2022 at 02:17:34PM +0800, Lu Baolu wrote:
+/**
+ * iommu_sva_bind_device() - Bind a process address space to a device
+ * @dev: the device
+ * @mm: the mm to bind, caller must hold a reference to mm_users
+ * @drvdata: opaque data point
On 2022/5/12 01:25, Jacob Pan wrote:
Hi Jason,
On Wed, 11 May 2022 14:00:25 -0300, Jason Gunthorpe wrote:
On Wed, May 11, 2022 at 10:02:16AM -0700, Jacob Pan wrote:
If not global, perhaps we could have a list of pasids (e.g. xarray)
attached to the device_domain_info. The TLB flush logic wou
On 2022/5/11 22:53, Jason Gunthorpe wrote:
Assuming we leave room for multi-device groups this logic should just
be
group = iommu_group_get(dev);
if (!group)
return -ENODEV;
mutex_lock(&group->mutex);
domain = xa_load(&group->pasid_array, mm->pasi
On 2022/5/12 13:01, Tian, Kevin wrote:
From: Baolu Lu
Sent: Thursday, May 12, 2022 11:03 AM
On 2022/5/11 22:53, Jason Gunthorpe wrote:
Also, given the current arrangement it might make sense to have a
struct iommu_domain_sva given that no driver is wrappering this in
something else.
Fair
On 2022/5/12 13:44, Tian, Kevin wrote:
From: Baolu Lu
Sent: Thursday, May 12, 2022 1:17 PM
On 2022/5/12 13:01, Tian, Kevin wrote:
From: Baolu Lu
Sent: Thursday, May 12, 2022 11:03 AM
On 2022/5/11 22:53, Jason Gunthorpe wrote:
Also, given the current arrangement it might make sense to have
Hi Jason,
On 2022/5/12 01:00, Jason Gunthorpe wrote:
Consolidate pasid programming into dev_set_pasid() then called by both
intel_svm_attach_dev_pasid() and intel_iommu_attach_dev_pasid(), right?
I was only suggesting that really dev_attach_pasid() op is misnamed,
it should be called set_dev_pa
On 2022/5/12 19:48, Jason Gunthorpe wrote:
On Thu, May 12, 2022 at 01:17:08PM +0800, Baolu Lu wrote:
On 2022/5/12 13:01, Tian, Kevin wrote:
From: Baolu Lu
Sent: Thursday, May 12, 2022 11:03 AM
On 2022/5/11 22:53, Jason Gunthorpe wrote:
Also, given the current arrangement it might make sense
On 2022/5/12 19:51, Jason Gunthorpe wrote:
On Thu, May 12, 2022 at 11:02:39AM +0800, Baolu Lu wrote:
+ mutex_lock(&group->mutex);
+ domain = xa_load(&group->pasid_array, pasid);
+ if (domain && domain->type != type)
+ domain = NULL;
+
On 2022/5/12 20:03, Jason Gunthorpe wrote:
On Thu, May 12, 2022 at 07:59:41PM +0800, Baolu Lu wrote:
On 2022/5/12 19:48, Jason Gunthorpe wrote:
On Thu, May 12, 2022 at 01:17:08PM +0800, Baolu Lu wrote:
On 2022/5/12 13:01, Tian, Kevin wrote:
From: Baolu Lu
Sent: Thursday, May 12, 2022 11:03
On 2022/5/13 07:12, Steve Wahl wrote:
On Thu, May 12, 2022 at 10:13:09AM -0500, Steve Wahl wrote:
To support up to 64 sockets with 10 DMAR units each (640), make the
value of DMAR_UNITS_SUPPORTED adjustable by a config variable,
CONFIG_DMAR_UNITS_SUPPORTED, and make its default 1024 when MAXSMP
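A Kconfig entry along the lines described might be sketched as follows; the prompt text, dependency, and lower default here are assumptions based on the description above, not the merged patch:

```kconfig
config DMAR_UNITS_SUPPORTED
	int "Number of DMA remapping units supported"
	depends on DMAR_TABLE
	default 1024 if MAXSMP
	default 64
```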
On 2022/5/13 08:32, Nicolin Chen wrote:
Local boot test and VFIO sanity test show that info->iommu can be
used in device_to_iommu() as a fast path. So this patch adds it.
Signed-off-by: Nicolin Chen
---
drivers/iommu/intel/iommu.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/driv
On 2022/5/12 19:51, Jason Gunthorpe wrote:
On Thu, May 12, 2022 at 08:00:59AM +0100, Jean-Philippe Brucker wrote:
It is not "missing", it is just renamed to blocking_domain->ops->set_dev_pasid()
The implementation of that function would be identical to
detach_dev_pasid.
attach(dev, pasid,
Hi Robin,
On 2022/5/16 19:22, Robin Murphy wrote:
On 2022-05-16 02:57, Lu Baolu wrote:
Each IOMMU driver must provide a blocking domain ops. If the hardware
supports detaching domain from device, setting blocking domain equals
detaching the existing domain from the device. Otherwise, an UNMANAG
Hi Jason,
On 2022/5/17 02:06, Jason Gunthorpe wrote:
+static __init int tboot_force_iommu(void)
+{
+ if (!tboot_enabled())
+ return 0;
+
+ if (no_iommu || dmar_disabled)
+ pr_warn("Forcing Intel-IOMMU to enabled\n");
Unrelated, but when we are in the spec
Hi Jason,
On 2022/5/16 21:57, Jason Gunthorpe wrote:
On Mon, May 16, 2022 at 12:22:08PM +0100, Robin Murphy wrote:
On 2022-05-16 02:57, Lu Baolu wrote:
Each IOMMU driver must provide a blocking domain ops. If the hardware
supports detaching domain from device, setting blocking domain equals
de
On 2022/5/17 21:13, Jason Gunthorpe wrote:
On Tue, May 17, 2022 at 01:43:03PM +0100, Robin Murphy wrote:
FWIW from my point of view I'm happy with having a .detach_dev_pasid op
meaning implicitly-blocked access for now.
If this is the path then lets not call it attach/detach
please. 'set_dev_
On 2022/5/17 19:13, Jason Gunthorpe wrote:
On Tue, May 17, 2022 at 10:05:43AM +0800, Baolu Lu wrote:
Hi Jason,
On 2022/5/17 02:06, Jason Gunthorpe wrote:
+static __init int tboot_force_iommu(void)
+{
+ if (!tboot_enabled())
+ return 0;
+
+ if (no_iommu
On 2022/5/19 02:21, Jacob Pan wrote:
IOMMU group maintains a PASID array which stores the associated IOMMU
domains. This patch introduces a helper function to do domain-to-PASID
lookup. It will be used by TLB flush and device-PASID attach verification.
Do you really need this?
The IOMMU drive
On 2022/5/19 02:21, Jacob Pan wrote:
DMA requests tagged with PASID can target individual IOMMU domains.
Introduce a domain-wide PASID for DMA API, it will be used on the same
mapping as legacy DMA without PASID. Let it be IOVA or PA in case of
identity domain.
Signed-off-by: Jacob Pan
---
in
Hi Jean,
On 2022/5/19 18:37, Jean-Philippe Brucker wrote:
On Thu, May 19, 2022 at 03:20:38PM +0800, Lu Baolu wrote:
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 88817a3376ef..6e2cd082c670 100644
--- a/drivers/iommu/arm/arm-smmu-v3
On 2022/5/20 00:33, Jean-Philippe Brucker wrote:
diff --git a/drivers/iommu/iommu-sva-lib.h b/drivers/iommu/iommu-sva-lib.h
index 8909ea1094e3..1be21e6b93ec 100644
--- a/drivers/iommu/iommu-sva-lib.h
+++ b/drivers/iommu/iommu-sva-lib.h
@@ -7,6 +7,7 @@
#include
#include
+#include
i
On 2022/5/20 00:39, Jean-Philippe Brucker wrote:
+struct iommu_sva *iommu_sva_bind_device(struct device *dev, struct mm_struct *mm)
+{
+ struct iommu_sva_domain *sva_domain;
+ struct iommu_domain *domain;
+ ioasid_t max_pasid = 0;
+ int ret = -EINVAL;
+
+ /* Allocat
On 2022/5/20 16:45, Joerg Roedel wrote:
On Mon, May 16, 2022 at 09:57:56AM +0800, Lu Baolu wrote:
const struct iommu_domain_ops *default_domain_ops;
+ const struct iommu_domain_ops *blocking_domain_ops;
I don't understand why extra domain-ops are needed for this. I think it
would
On 2022/5/20 19:28, Jean-Philippe Brucker wrote:
On Fri, May 20, 2022 at 02:38:12PM +0800, Baolu Lu wrote:
On 2022/5/20 00:39, Jean-Philippe Brucker wrote:
+struct iommu_sva *iommu_sva_bind_device(struct device *dev, struct mm_struct *mm)
+{
+ struct iommu_sva_domain *sva_domain
On 2022/5/19 15:20, Lu Baolu wrote:
The iommu_sva_domain represents a hardware pagetable that the IOMMU
hardware could use for SVA translation. This adds some infrastructure
to support SVA domain in the iommu common layer. It includes:
- Add a new struct iommu_sva_domain and new IOMMU_DOMAIN_SVA
Hi Joerg,
On 2022/5/21 08:21, Yian Chen wrote:
Notifier calling chain uses priority to determine the execution
order of the notifiers or listeners registered to the chain.
PCI bus device hot add utilizes the notification mechanism.
The current code sets low priority (INT_MIN) to Intel
dmar_pci_
Hi Kevin,
Thank you for reviewing my patches.
On 2022/5/24 17:24, Tian, Kevin wrote:
From: Lu Baolu
Sent: Thursday, May 19, 2022 3:21 PM
Use this field to keep the number of supported PASIDs that an IOMMU
hardware is able to support. This is a generic attribute of an IOMMU
and lifting it into
On 2022/5/25 10:03, Baolu Lu wrote:
diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index 4de960834a1b..1c3cf267934d 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1126,6 +1126,10 @@ static int alloc_iommu(struct dmar_drhd_unit
*drhd
On 2022/5/24 17:44, Tian, Kevin wrote:
From: Baolu Lu
Sent: Monday, May 23, 2022 3:13 PM
@@ -254,6 +259,7 @@ struct iommu_ops {
int (*def_domain_type)(struct device *dev);
const struct iommu_domain_ops *default_domain_ops;
+ const struct iommu_domain_ops *sva_domain_ops
On 2022/5/25 08:44, Tian, Kevin wrote:
From: Jason Gunthorpe
Sent: Tuesday, May 24, 2022 9:39 PM
On Tue, May 24, 2022 at 09:39:52AM +, Tian, Kevin wrote:
From: Lu Baolu
Sent: Thursday, May 19, 2022 3:21 PM
The iommu_sva_domain represents a hardware pagetable that the
IOMMU
hardware cou
On 2022/5/24 17:39, Tian, Kevin wrote:
From: Lu Baolu
Sent: Thursday, May 19, 2022 3:21 PM
The iommu_sva_domain represents a hardware pagetable that the IOMMU
hardware could use for SVA translation. This adds some infrastructure
to support SVA domain in the iommu common layer. It includes:
- A
Hi Jason,
On 2022/5/24 21:44, Jason Gunthorpe wrote:
diff --git a/drivers/iommu/iommu-sva-lib.c b/drivers/iommu/iommu-sva-lib.c
index 106506143896..210c376f6043 100644
+++ b/drivers/iommu/iommu-sva-lib.c
@@ -69,3 +69,51 @@ struct mm_struct *iommu_sva_find(ioasid_t pasid)
return ioasid_fi
On 2022/5/24 21:44, Jason Gunthorpe wrote:
+{
+ struct iommu_sva_domain *sva_domain;
+ struct iommu_domain *domain;
+
+ if (!bus->iommu_ops || !bus->iommu_ops->sva_domain_ops)
+ return ERR_PTR(-ENODEV);
+
+ sva_domain = kzalloc(sizeof(*sva_domain), GFP_KERNEL
Hi Robin,
On 2022/5/24 22:36, Robin Murphy wrote:
On 2022-05-19 08:20, Lu Baolu wrote:
[...]
diff --git a/drivers/iommu/iommu-sva-lib.c
b/drivers/iommu/iommu-sva-lib.c
index 106506143896..210c376f6043 100644
--- a/drivers/iommu/iommu-sva-lib.c
+++ b/drivers/iommu/iommu-sva-lib.c
@@ -69,3 +69,5
On 2022/5/25 09:31, Nobuhiro Iwamatsu wrote:
+static const struct iommu_ops visconti_atu_ops = {
+ .domain_alloc = visconti_atu_domain_alloc,
+ .probe_device = visconti_atu_probe_device,
+ .release_device = visconti_atu_release_device,
+ .device_group = generic_device_grou
On 2022/5/25 19:06, Jean-Philippe Brucker wrote:
On Wed, May 25, 2022 at 11:07:49AM +0100, Robin Murphy wrote:
Did you mean @handler and @handler_token staffs below?
struct iommu_domain {
unsigned type;
const struct iommu_domain_ops *ops;
unsigned long pgsize_bitma
On 2022/5/25 23:25, Jason Gunthorpe wrote:
On Wed, May 25, 2022 at 01:19:08PM +0800, Baolu Lu wrote:
Hi Jason,
On 2022/5/24 21:44, Jason Gunthorpe wrote:
diff --git a/drivers/iommu/iommu-sva-lib.c b/drivers/iommu/iommu-sva-lib.c
index 106506143896..210c376f6043 100644
+++ b/drivers/iommu
On 2022/5/27 22:59, Jason Gunthorpe wrote:
On Fri, May 27, 2022 at 02:30:08PM +0800, Lu Baolu wrote:
Retrieve the attached domain for a device through the generic interface
exposed by the iommu core. This also makes device_domain_lock static.
Signed-off-by: Lu Baolu
drivers/iommu/intel/iommu
On 2022/5/27 23:01, Jason Gunthorpe wrote:
On Fri, May 27, 2022 at 02:30:10PM +0800, Lu Baolu wrote:
The disable_dmar_iommu() is called when IOMMU initialization fails or
the IOMMU is hot-removed from the system. In both cases, there is no
need to clear the IOMMU translation data structures for d
On 2022/5/30 20:14, Jason Gunthorpe wrote:
On Sun, May 29, 2022 at 01:14:46PM +0800, Baolu Lu wrote:
From 1e87b5df40c6ce9414cdd03988c3b52bfb17af5f Mon Sep 17 00:00:00 2001
From: Lu Baolu
Date: Sun, 29 May 2022 10:18:56 +0800
Subject: [PATCH 1/1] iommu/vt-d: debugfs: Remove device_domain_lock
Hi Suravee,
On 2022/5/31 19:34, Suravee Suthikulpanit wrote:
On 4/29/22 4:09 AM, Joao Martins wrote:
.
+static int amd_iommu_set_dirty_tracking(struct iommu_domain *domain,
+ bool enable)
+{
+ struct protection_domain *pdomain = to_pdomain(domain);
+ struct iommu_d
On 2022/5/31 18:12, Tian, Kevin wrote:
+++ b/include/linux/iommu.h
@@ -105,6 +105,8 @@ struct iommu_domain {
enum iommu_page_response_code (*iopf_handler)(struct iommu_fault *fault,
void *data);
void *fault_data;
+ ioas
On 2022/5/31 21:10, Jason Gunthorpe wrote:
On Tue, May 31, 2022 at 11:02:06AM +0800, Baolu Lu wrote:
For case 2, it is a bit weird. I tried to add a rwsem lock to make the
iommu_unmap() and dumping tables in debugfs exclusive. This does not
work because debugfs may depend on the DMA of the
Hi Robin,
Thank you for the comments.
On 2022/5/31 21:52, Robin Murphy wrote:
On 2022-05-31 04:02, Baolu Lu wrote:
On 2022/5/30 20:14, Jason Gunthorpe wrote:
On Sun, May 29, 2022 at 01:14:46PM +0800, Baolu Lu wrote:
[--snip--]
diff --git a/drivers/iommu/intel/debugfs.c
b/drivers/iommu
On 2022/5/31 23:59, Jason Gunthorpe wrote:
On Tue, May 31, 2022 at 02:52:28PM +0100, Robin Murphy wrote:
+ break;
+ pgtable_walk_level(m, phys_to_virt(phys_addr),
Also, obligatory reminder that pfn_valid() only means that pfn_to_page()
gets you a valid struct page. W
On 2022/6/1 02:51, Jason Gunthorpe wrote:
Oh, I've spent the last couple of weeks hacking up horrible things
manipulating entries in init_mm, and never realised that that was actually
the special case. Oh well, live and learn.
The init_mm is sort of different, it doesn't have zap in quite the
sa
On 2022/6/1 09:43, Tian, Kevin wrote:
From: Jacob Pan
Sent: Wednesday, June 1, 2022 1:30 AM
In both cases the pasid is stored in the attach data instead of the
domain.
So during IOTLB flush for the domain, do we loop through the attach data?
Yes and it's required.
What does the attach data
Hi Kevin,
Thank you for the comments.
On 2022/6/1 17:09, Tian, Kevin wrote:
From: Lu Baolu
Sent: Friday, May 27, 2022 2:30 PM
The iommu->lock is used to protect the per-IOMMU domain ID resource.
Move the spinlock acquisition/release into the helpers where domain
IDs are allocated and freed. Th
On 2022/6/1 17:18, Tian, Kevin wrote:
From: Lu Baolu
Sent: Friday, May 27, 2022 2:30 PM
The iommu->lock is used to protect the per-IOMMU pasid directory table
and pasid table. Move the spinlock acquisition/release into the helpers
to make the code self-contained.
Signed-off-by: Lu Baolu
Rev
On 2022/6/1 17:28, Tian, Kevin wrote:
From: Lu Baolu
Sent: Friday, May 27, 2022 2:30 PM
When the IOMMU domain is about to be freed, it should not be set on any
device. Instead of silently dealing with some bug cases, it's better to
trigger a warning to report and fix any potential bugs at the f
On 2022/6/2 14:29, Tian, Kevin wrote:
From: Baolu Lu
Sent: Wednesday, June 1, 2022 7:02 PM
On 2022/6/1 17:28, Tian, Kevin wrote:
From: Lu Baolu
Sent: Friday, May 27, 2022 2:30 PM
When the IOMMU domain is about to be freed, it should not be set on
any
device. Instead of silently dealing
On 2022/6/6 14:19, Nicolin Chen wrote:
+/**
+ * iommu_attach_group - Attach an IOMMU group to an IOMMU domain
+ * @domain: IOMMU domain to attach
+ * @dev: IOMMU group that will be attached
Nit: @group: ...
+ *
+ * Returns 0 on success and error code on failure
+ *
+ * Specifically, -EMEDIUMT
On 2022/6/6 14:19, Nicolin Chen wrote:
Worth mentioning: the exact match for enforce_cache_coherency is removed
with this series, since there's very little value in doing that, as KVM
won't be able to take advantage of it -- this just wastes domain memory.
Instead, we rely on Intel IOMMU driver t
On 2022/6/7 19:58, Jason Gunthorpe wrote:
On Tue, Jun 07, 2022 at 03:44:43PM +0800, Baolu Lu wrote:
On 2022/6/6 14:19, Nicolin Chen wrote:
Worth mentioning: the exact match for enforce_cache_coherency is removed
with this series, since there's very little value in doing that, as KVM
won
On 2022/6/9 20:49, Jason Gunthorpe wrote:
+void iommu_free_pgtbl_pages(struct iommu_domain *domain,
+ struct list_head *pages)
+{
+ struct page *page, *next;
+
+ if (!domain->concurrent_traversal) {
+ put_pages_list(pages);
+ retur
On 2022/6/9 21:32, Jason Gunthorpe wrote:
On Thu, Jun 09, 2022 at 02:19:06PM +0100, Robin Murphy wrote:
Is there a significant benefit to keeping both paths, or could we get away
with just always using RCU? Realistically, pagetable pages aren't likely to
be freed all that frequently, except per
On 2022/6/10 01:06, Raj, Ashok wrote:
On Thu, Jun 09, 2022 at 03:08:10PM +0800, Lu Baolu wrote:
The IOMMU page tables are updated using iommu_map/unmap() interfaces.
Currently, there is no mandatory requirement for drivers to use locks
to ensure concurrent updates to page tables, because it's as
On 2022/6/10 01:25, Raj, Ashok wrote:
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 4f29139bbfc3..e065cbe3c857 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -479,7 +479,6 @@ enum {
#define VTD_FLAG_IRQ_REMAP_PRE_ENABLED (1 <<
On 2022/6/10 03:01, Raj, Ashok wrote:
On Tue, Jun 07, 2022 at 09:49:33AM +0800, Lu Baolu wrote:
Use this field to save the number of PASIDs that a device is able to
consume. It is a generic attribute of a device and lifting it into the
per-device dev_iommu struct could help to avoid the boilerpl
On 2022/6/10 04:25, Raj, Ashok wrote:
Hi Baolu
Hi Ashok,
some minor nits.
Thanks for your comments.
On Tue, Jun 07, 2022 at 09:49:35AM +0800, Lu Baolu wrote:
The sva iommu_domain represents a hardware pagetable that the IOMMU
hardware could use for SVA translation. This adds some infra
On 2022/6/10 17:01, Tian, Kevin wrote:
From: Baolu Lu
Sent: Friday, June 10, 2022 2:47 PM
On 2022/6/10 03:01, Raj, Ashok wrote:
On Tue, Jun 07, 2022 at 09:49:33AM +0800, Lu Baolu wrote:
@@ -218,6 +219,30 @@ static void dev_iommu_free(struct device *dev)
kfree(param);
}
+static
On 2022/6/14 04:38, Jerry Snitselaar wrote:
On Thu, May 12, 2022 at 10:13:09AM -0500, Steve Wahl wrote:
To support up to 64 sockets with 10 DMAR units each (640), make the
value of DMAR_UNITS_SUPPORTED adjustable by a config variable,
CONFIG_DMAR_UNITS_SUPPORTED, and make its default 1024 when
On 2022/6/14 04:57, Jerry Snitselaar wrote:
On Thu, May 12, 2022 at 10:13:09AM -0500, Steve Wahl wrote:
To support up to 64 sockets with 10 DMAR units each (640), make the
value of DMAR_UNITS_SUPPORTED adjustable by a config variable,
CONFIG_DMAR_UNITS_SUPPORTED, and make it's default 1024 when