On 2022/4/29 05:09, Joao Martins wrote:
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5089,6 +5089,113 @@ static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
}
}
+static int intel_iommu_set_dirty_tracking(struct iommu_domain *domain,
+
On 2022/4/29 05:09, Joao Martins wrote:
Today, the dirty state is lost and the page wouldn't be migrated to
the destination, potentially leading the guest into error.
Add an unmap API that reads the dirty bit and sets it in the
user-passed bitmap. This unmap IOMMU API tackles a potentially
racy updat
On 2022/4/29 05:09, Joao Martins wrote:
Add an IO pagetable API iopt_read_and_clear_dirty_data() that
performs the reading of dirty IOPTEs for a given IOVA range and
then copying back to userspace from each area-internal bitmap.
Underneath it uses the IOMMU equivalent API which will read the
dir
On 2022/4/29 05:09, Joao Martins wrote:
+int iopt_set_dirty_tracking(struct io_pagetable *iopt,
+ struct iommu_domain *domain, bool enable)
+{
+ struct iommu_domain *dom;
+ unsigned long index;
+ int ret = -EOPNOTSUPP;
+
+ down_write(&iopt->iova_r
On Fri, 29 Apr 2022, Boris Ostrovsky wrote:
> On 4/28/22 6:49 PM, Stefano Stabellini wrote:
> > On Thu, 28 Apr 2022, Boris Ostrovsky wrote:
> > > On 4/28/22 5:49 PM, Stefano Stabellini wrote:
> > > > On Thu, 28 Apr 2022, Christoph Hellwig wrote:
> > > > > On Tue, Apr 26, 2022 at 04:07:45PM -0700, S
On 4/28/22 6:49 PM, Stefano Stabellini wrote:
On Thu, 28 Apr 2022, Boris Ostrovsky wrote:
On 4/28/22 5:49 PM, Stefano Stabellini wrote:
On Thu, 28 Apr 2022, Christoph Hellwig wrote:
On Tue, Apr 26, 2022 at 04:07:45PM -0700, Stefano Stabellini wrote:
Reported-by: Rahul Singh
Signed-off-by:
Hi, Jean and Baolu,
On Fri, Apr 29, 2022 at 03:34:36PM +0100, Jean-Philippe Brucker wrote:
> On Fri, Apr 29, 2022 at 06:51:17AM -0700, Fenghua Yu wrote:
> > Hi, Baolu,
> >
> > On Fri, Apr 29, 2022 at 03:53:57PM +0800, Baolu Lu wrote:
> > > On 2022/4/28 16:39, Jean-Philippe Brucker wrote:
> > > >
The deferred probe timer that's used for this currently starts at
late_initcall and runs for driver_deferred_probe_timeout seconds. The
assumption is that all available drivers will have been loaded and
registered before the timer expires. This means the
driver_deferred_probe_timeout has to be pretty
On 2022-04-29 17:40, Joao Martins wrote:
On 4/29/22 17:11, Jason Gunthorpe wrote:
On Fri, Apr 29, 2022 at 03:45:23PM +0100, Joao Martins wrote:
On 4/29/22 13:23, Jason Gunthorpe wrote:
On Fri, Apr 29, 2022 at 01:06:06PM +0100, Joao Martins wrote:
TBH I'd be inclined to just enable DBM uncond
On 29/04/2022 9:50 am, Robin Murphy wrote:
On 2022-04-29 07:57, Baolu Lu wrote:
Hi Robin,
On 2022/4/28 21:18, Robin Murphy wrote:
Move the bus setup to iommu_device_register(). This should allow
bus_iommu_probe() to be correctly replayed for multiple IOMMU instances,
and leaves bus_set_iommu()
Hi Alex,
Here is the PR for Joerg's shared topic branch for VFIO. It was merged
to iommu here:
https://lore.kernel.org/all/ympffa1iiqygb...@8bytes.org/
The cover letter for making the merge commit is here:
https://lore.kernel.org/all/20220418005000.897664-1-baolu...@linux.intel.com/
It is base
On Fri, Apr 29, 2022 at 05:40:56PM +0100, Joao Martins wrote:
> > A common use model might be to just destroy the iommu_domain without
> > doing stop so prefering the clearing io page table at stop might be a
> > better overall design.
>
> If we want to ensure that the IOPTE dirty state is immuta
On 4/29/22 17:11, Jason Gunthorpe wrote:
> On Fri, Apr 29, 2022 at 03:45:23PM +0100, Joao Martins wrote:
>> On 4/29/22 13:23, Jason Gunthorpe wrote:
>>> On Fri, Apr 29, 2022 at 01:06:06PM +0100, Joao Martins wrote:
>>>
> TBH I'd be inclined to just enable DBM unconditionally in
> arm_smmu_
On Fri, Apr 29, 2022 at 03:45:23PM +0100, Joao Martins wrote:
> On 4/29/22 13:23, Jason Gunthorpe wrote:
> > On Fri, Apr 29, 2022 at 01:06:06PM +0100, Joao Martins wrote:
> >
> >>> TBH I'd be inclined to just enable DBM unconditionally in
> >>> arm_smmu_domain_finalise() if the SMMU supports it.
On Thu, Apr 07, 2022 at 08:58:36PM +0800, Yicong Yang via iommu wrote:
> HiSilicon PCIe tune and trace device (PTT) is a PCIe Root Complex integrated
> Endpoint (RCiEP) device, providing the capability to dynamically monitor and
> tune the PCIe traffic, and trace the TLP headers.
>
> Add the driver
On 4/29/22 14:40, Baolu Lu wrote:
> Hi Joao,
>
> Thanks for doing this.
>
> On 2022/4/29 05:09, Joao Martins wrote:
>> Add to iommu domain operations a set of callbacks to
>> perform dirty tracking, particularly to start and stop
>> tracking and finally to test and clear the dirty data.
>>
>> Driv
On 4/29/22 13:38, Jason Gunthorpe wrote:
> On Fri, Apr 29, 2022 at 11:27:58AM +0100, Joao Martins wrote:
3) Unmapping an IOVA range while returning its dirty bit prior to
unmap. This case is specific to the non-nested vIOMMU case where an
erroneous guest (or device) DMAing to an address
On 4/29/22 15:36, Jason Gunthorpe wrote:
> On Fri, Apr 29, 2022 at 03:27:00PM +0100, Joao Martins wrote:
>
>>> We've made a qemu patch to allow qemu to be happy if dirty tracking is
>>> not supported in the vfio container for migration, which is part of
>>> the v2 enablement series. That seems lik
Joerg,
On 4/28/2022 3:40 PM, Joerg Roedel wrote:
> On Mon, Apr 25, 2022 at 05:03:48PM +0530, Vasant Hegde wrote:
>> +/* Largest PCI device id we expect translation requests for */
>> +u16 last_bdf;
>
> How does the IVRS table look like on these systems? Do they still
> enumerate the whol
On 4/29/22 13:23, Jason Gunthorpe wrote:
> On Fri, Apr 29, 2022 at 01:06:06PM +0100, Joao Martins wrote:
>
>>> TBH I'd be inclined to just enable DBM unconditionally in
>>> arm_smmu_domain_finalise() if the SMMU supports it. Trying to toggle it
>>> dynamically (especially on a live domain) seems
Joerg,
On 4/28/2022 3:24 PM, Joerg Roedel wrote:
> Hi Vasant,
>
> On Mon, Apr 25, 2022 at 05:03:40PM +0530, Vasant Hegde wrote:
>> +/*
>> + * This structure contains information about one PCI segment in the system.
>> + */
>> +struct amd_iommu_pci_seg {
>> +struct list_head list;
>
> The pur
From: Yunfei Wang
Add the quirk IO_PGTABLE_QUIRK_ARM_MTK_TTBR_EXT support, so that the
level 2 page table can be allocated in normal memory.
Signed-off-by: Ning Li
Signed-off-by: Yunfei Wang
Cc: # 5.10.*
---
drivers/iommu/mtk_iommu.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff
From: Yunfei Wang
The call to kmem_cache_alloc for level 2 page table allocation may
run in atomic context, and it sometimes fails when the DMA32 zone runs
out of memory.
Since MediaTek IOMMU hardware supports at most 35-bit PA in the page
table, add a quirk to allow the PA of level 2 pgtable suppor
On 4/28/2022 3:45 PM, Joerg Roedel wrote:
> On Mon, Apr 25, 2022 at 05:04:05PM +0530, Vasant Hegde wrote:
>> From: Suravee Suthikulpanit
>>
>> Replace global amd_iommu_dev_table with per PCI segment device table.
>> Also remove "dev_table_size".
>>
>> Co-developed-by: Vasant Hegde
>> Signed-of
Joerg,
On 4/28/2022 3:49 PM, Joerg Roedel wrote:
> On Mon, Apr 25, 2022 at 05:04:15PM +0530, Vasant Hegde wrote:
>> +seg_id = (iommu_fault->sbdf >> 16) & 0x;
>> +devid = iommu_fault->sbdf & 0x;
>
> This deserves some macros for readability.
Sure. Will add macros in next version
On 4/29/22 13:14, Jason Gunthorpe wrote:
> On Thu, Apr 28, 2022 at 10:09:19PM +0100, Joao Martins wrote:
>
>> +static void iommu_unmap_read_dirty_nofail(struct iommu_domain *domain,
>> + unsigned long iova, size_t size,
>> +
On Fri, Apr 29, 2022 at 03:27:00PM +0100, Joao Martins wrote:
> > We've made a qemu patch to allow qemu to be happy if dirty tracking is
> > not supported in the vfio container for migration, which is part of
> > the v2 enablement series. That seems like the better direction.
> >
> So in my audit
Joerg,
On 4/28/2022 3:52 PM, Joerg Roedel wrote:
> Hi Vasant, Hi Suravee,
>
> On Mon, Apr 25, 2022 at 05:03:38PM +0530, Vasant Hegde wrote:
>> Newer AMD systems can support multiple PCI segments, where each segment
>> contains one or more IOMMU instances. However, an IOMMU instance can only
>> su
On 4/29/22 13:26, Robin Murphy wrote:
> On 2022-04-29 12:54, Joao Martins wrote:
>> On 4/29/22 12:11, Robin Murphy wrote:
>>> On 2022-04-28 22:09, Joao Martins wrote:
From: Kunkun Jiang
This detects BBML feature and if SMMU supports it, transfer BBMLx
quirk to io-pgtable.
On Fri, Apr 29, 2022 at 03:26:41PM +0100, Joao Martins wrote:
> I had this in the iommufd_dirty_iter logic given that the iommu iteration
> logic is in the parent structure that stores iommu_dirty_data.
>
> My thinking with this patch was just to have what the IOMMU driver needs.
I would put the
Joerg,
On 4/28/2022 3:25 PM, Joerg Roedel wrote:
> On Mon, Apr 25, 2022 at 05:03:39PM +0530, Vasant Hegde wrote:
>
> Subject: iommu/amd: Update struct iommu_dev_data defination
> ^^ Typo
>
Thanks for the review. Will fix it in v3.
-Vasa
On Fri, Apr 29, 2022 at 06:51:17AM -0700, Fenghua Yu wrote:
> Hi, Baolu,
>
> On Fri, Apr 29, 2022 at 03:53:57PM +0800, Baolu Lu wrote:
> > On 2022/4/28 16:39, Jean-Philippe Brucker wrote:
> > > > The address space is what the OOM killer is after. That gets refcounted
> > > > with mmget()/mmput()/
On 4/29/22 13:09, Jason Gunthorpe wrote:
> On Fri, Apr 29, 2022 at 11:54:16AM +0100, Joao Martins wrote:
>> On 4/29/22 09:12, Tian, Kevin wrote:
From: Joao Martins
Sent: Friday, April 29, 2022 5:09 AM
>>> [...]
+
+static int iommu_read_and_clear_dirty(struct iommu_domain *domai
On 4/29/22 12:56, Jason Gunthorpe wrote:
> On Fri, Apr 29, 2022 at 08:07:14AM +, Tian, Kevin wrote:
>>> From: Joao Martins
>>> Sent: Friday, April 29, 2022 5:09 AM
>>>
>>> +static int __set_dirty_tracking_range_locked(struct iommu_domain
>>> *domain,
>>
>> suppose anything using iommu_domain a
On 4/29/22 13:19, Jason Gunthorpe wrote:
> On Thu, Apr 28, 2022 at 10:09:21PM +0100, Joao Martins wrote:
>> Add the correspondent APIs for performing VFIO dirty tracking,
>> particularly VFIO_IOMMU_DIRTY_PAGES ioctl subcmds:
>> * VFIO_IOMMU_DIRTY_PAGES_FLAG_START: Start dirty tracking and allocates
On 4/29/22 13:08, Jason Gunthorpe wrote:
> On Thu, Apr 28, 2022 at 10:09:15PM +0100, Joao Martins wrote:
>> +
>> +unsigned int iommu_dirty_bitmap_record(struct iommu_dirty_bitmap *dirty,
>> + unsigned long iova, unsigned long length)
>> +{
>
> Lets put iommu_dirty
On 4/29/2022 10:21 PM, Tianyu Lan wrote:
From: Tianyu Lan
Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO
From: Tianyu Lan
Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead to
s
Hi, Baolu,
On Fri, Apr 29, 2022 at 03:53:57PM +0800, Baolu Lu wrote:
> On 2022/4/28 16:39, Jean-Philippe Brucker wrote:
> > > The address space is what the OOM killer is after. That gets refcounted
> > > with mmget()/mmput()/mm->mm_users. The OOM killer is satiated by the
> > > page freeing done
Hi Joao,
Thanks for doing this.
On 2022/4/29 05:09, Joao Martins wrote:
Add to iommu domain operations a set of callbacks to
perform dirty tracking, particularly to start and stop
tracking and finally to test and clear the dirty data.
Drivers are expected to dynamically change their hw protection
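A rough sketch of what such a callback set might look like; the struct name, the signatures, and the stub driver below are assumptions made for illustration, not the API actually proposed in the series:

```c
#include <stdbool.h>
#include <stddef.h>

struct iommu_domain { bool tracking; };
struct iommu_dirty_bitmap { unsigned long ndirty; };

/* Hypothetical vtable mirroring the callbacks described in the patch:
 * start/stop tracking plus read-and-clear of dirty IOPTEs. */
struct iommu_dirty_ops {
	int (*set_dirty_tracking)(struct iommu_domain *d, bool enable);
	int (*read_and_clear_dirty)(struct iommu_domain *d,
				    unsigned long iova, size_t size,
				    struct iommu_dirty_bitmap *dirty);
};

/* Stub driver implementation, for illustration only. */
static int demo_set_dirty_tracking(struct iommu_domain *d, bool enable)
{
	d->tracking = enable;   /* a real driver reprograms the hw here */
	return 0;
}

static int demo_read_and_clear_dirty(struct iommu_domain *d,
				     unsigned long iova, size_t size,
				     struct iommu_dirty_bitmap *dirty)
{
	(void)iova; (void)size;
	if (!d->tracking)
		return -1;        /* stand-in for -EOPNOTSUPP */
	dirty->ndirty = 0;        /* stub: report no pages dirtied */
	return 0;
}

static const struct iommu_dirty_ops demo_ops = {
	.set_dirty_tracking = demo_set_dirty_tracking,
	.read_and_clear_dirty = demo_read_and_clear_dirty,
};
```

The shape matters more than the names: tracking is toggled on a live domain, and reads are destructive (read-and-clear) so userspace sees each dirtied page exactly once per pass.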
On Fri, Apr 29, 2022 at 04:00:14PM +1000, David Gibson wrote:
> > But I don't have a use case in mind? The simplified things I know
> > about want to attach their devices then allocate valid IOVA, they
> > don't really have a notion about what IOVA regions they are willing to
> > accept, or necessa
On Fri, Apr 29, 2022 at 04:22:56PM +1000, David Gibson wrote:
> On Fri, Apr 29, 2022 at 01:21:30AM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Thursday, April 28, 2022 11:11 PM
> > >
> > >
> > > > 3) "dynamic DMA windows" (DDW). The IBM IOMMU hardware allows for
> > > 2 IO
On Fri, Apr 29, 2022 at 04:20:36PM +1000, David Gibson wrote:
> > I think PPC and S390 are solving the same problem here. I think S390
> > is going to go to a SW nested model where it has an iommu_domain
> > controlled by iommufd that is populated with the pinned pages, eg
> > stored in an xarray.
On 2022-04-29 13:10, Joao Martins wrote:
On 4/29/22 12:35, Robin Murphy wrote:
On 2022-04-28 22:09, Joao Martins wrote:
From: Kunkun Jiang
As nested mode is not upstreamed now, we just aim to support dirty
log tracking for stage1 with io-pgtable mapping (meaning SVA mapping
is not supported). If HTT
On Fri, Apr 29, 2022 at 11:27:58AM +0100, Joao Martins wrote:
> >> 3) Unmapping an IOVA range while returning its dirty bit prior to
> >> unmap. This case is specific to the non-nested vIOMMU case where an
> >> erroneous guest (or device) DMAing to an address being unmapped at the
> >> same time.
> >
On 2022-04-29 12:54, Joao Martins wrote:
On 4/29/22 12:11, Robin Murphy wrote:
On 2022-04-28 22:09, Joao Martins wrote:
From: Kunkun Jiang
This detects BBML feature and if SMMU supports it, transfer BBMLx
quirk to io-pgtable.
BBML1 requires still marking PTE nT prior to performing a
translat
On Fri, Apr 29, 2022 at 01:06:06PM +0100, Joao Martins wrote:
> > TBH I'd be inclined to just enable DBM unconditionally in
> > arm_smmu_domain_finalise() if the SMMU supports it. Trying to toggle it
> > dynamically (especially on a live domain) seems more trouble that it's
> > worth.
>
> Hmmm
On Thu, Apr 28, 2022 at 10:09:21PM +0100, Joao Martins wrote:
> Add the correspondent APIs for performing VFIO dirty tracking,
> particularly VFIO_IOMMU_DIRTY_PAGES ioctl subcmds:
> * VFIO_IOMMU_DIRTY_PAGES_FLAG_START: Start dirty tracking and allocates
>the area
On Thu, Apr 28, 2022 at 10:09:19PM +0100, Joao Martins wrote:
> +static void iommu_unmap_read_dirty_nofail(struct iommu_domain *domain,
> + unsigned long iova, size_t size,
> + struct iommufd_dirty_data *bitmap,
> +
On 4/29/22 12:35, Robin Murphy wrote:
> On 2022-04-28 22:09, Joao Martins wrote:
>> From: Kunkun Jiang
>>
>> As nested mode is not upstreamed now, we just aim to support dirty
>> log tracking for stage1 with io-pgtable mapping (meaning SVA mapping
>> is not supported). If HTTU is supported, we enable HA/
On Fri, Apr 29, 2022 at 11:54:16AM +0100, Joao Martins wrote:
> On 4/29/22 09:12, Tian, Kevin wrote:
> >> From: Joao Martins
> >> Sent: Friday, April 29, 2022 5:09 AM
> > [...]
> >> +
> >> +static int iommu_read_and_clear_dirty(struct iommu_domain *domain,
> >> +str
On Thu, Apr 28, 2022 at 10:09:15PM +0100, Joao Martins wrote:
> +
> +unsigned int iommu_dirty_bitmap_record(struct iommu_dirty_bitmap *dirty,
> +unsigned long iova, unsigned long length)
> +{
Lets put iommu_dirty_bitmap in its own patch, the VFIO driver side
wil
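As a toy model of what recording a dirtied IOVA range into a bitmap involves (assumed semantics: one bit per page, with `base_iova` covered by bit 0; the names are hypothetical, not the signature quoted above):

```c
#include <stdint.h>

#define PAGE_SHIFT 12

struct dirty_bitmap {
	unsigned long base_iova;   /* IOVA covered by bit 0 */
	uint64_t *bits;
};

/* Set one bit per page in [iova, iova + length); returns the number of
 * pages marked dirty. */
static unsigned int dirty_bitmap_record(struct dirty_bitmap *dirty,
					unsigned long iova,
					unsigned long length)
{
	unsigned long start = (iova - dirty->base_iova) >> PAGE_SHIFT;
	unsigned long npages =
		(length + (1ul << PAGE_SHIFT) - 1) >> PAGE_SHIFT;

	for (unsigned long i = start; i < start + npages; i++)
		dirty->bits[i / 64] |= 1ull << (i % 64);
	return (unsigned int)npages;
}
```

The length is rounded up to whole pages, matching the usual convention that any byte dirtied within a page marks the whole page.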
On 4/29/22 12:19, Robin Murphy wrote:
> On 2022-04-29 12:05, Joao Martins wrote:
>> On 4/29/22 09:28, Tian, Kevin wrote:
From: Joao Martins
Sent: Friday, April 29, 2022 5:09 AM
Similar to .read_and_clear_dirty() use the page table
walker helper functions and set DBM|RDONLY
On Fri, Apr 29, 2022 at 08:07:14AM +, Tian, Kevin wrote:
> > From: Joao Martins
> > Sent: Friday, April 29, 2022 5:09 AM
> >
> > +static int __set_dirty_tracking_range_locked(struct iommu_domain
> > *domain,
>
> suppose anything using iommu_domain as the first argument should
> be put in the
On 4/29/22 12:11, Robin Murphy wrote:
> On 2022-04-28 22:09, Joao Martins wrote:
>> From: Kunkun Jiang
>>
>> This detects BBML feature and if SMMU supports it, transfer BBMLx
>> quirk to io-pgtable.
>>
>> BBML1 requires still marking PTE nT prior to performing a
>> translation table update, while
On 2022-04-28 22:09, Joao Martins wrote:
Mostly reuses unmap existing code with the extra addition of
marshalling into a bitmap of a page size. To tackle the race,
switch away from a plain store to a cmpxchg() and check whether
IOVA was dirtied or not once it succeeds.
Signed-off-by: Joao Martin
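The cmpxchg() idea above can be illustrated with a userspace sketch. The bit position and helper name are assumptions, but the pattern is the point: retry the compare-and-swap so a hardware dirty-bit update that races with the software clear is never silently lost the way a plain store would lose it.

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_DIRTY (1ull << 55)   /* hypothetical dirty-bit position */

/* Returns true if the IOPTE was dirty, clearing the bit atomically.
 * If hardware sets the bit between our load and the exchange, the
 * compare fails, `old` is refreshed, and we retry. */
static bool pte_read_and_clear_dirty(uint64_t *ptep)
{
	uint64_t old = __atomic_load_n(ptep, __ATOMIC_RELAXED);

	do {
		if (!(old & PTE_DIRTY))
			return false;   /* clean: nothing to clear */
	} while (!__atomic_compare_exchange_n(ptep, &old,
					      old & ~PTE_DIRTY,
					      false, __ATOMIC_RELAXED,
					      __ATOMIC_RELAXED));
	return true;
}
```

A plain `*ptep &= ~PTE_DIRTY` would be a read-modify-write that could overwrite a dirty bit the IOMMU set in the window; the compare-exchange makes that window detectable.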
On 2022-04-28 22:09, Joao Martins wrote:
From: Kunkun Jiang
As nested mode is not upstreamed now, we just aim to support dirty
log tracking for stage1 with io-pgtable mapping (meaning SVA mapping
is not supported). If HTTU is supported, we enable HA/HD bits in the SMMU
CD and transfer ARM_HD quirk to
On 4/29/22 10:03, Tian, Kevin wrote:
>> From: Joao Martins
>> Sent: Friday, April 29, 2022 5:10 AM
>>
>> IOMMU advertises Access/Dirty bits if the extended capability
>> DMAR register reports it (ECAP, mnemonic ECAP.SSADS). The first
>> stage table, though, has no bit for advertising, unless refe
On 2022-04-29 12:05, Joao Martins wrote:
On 4/29/22 09:28, Tian, Kevin wrote:
From: Joao Martins
Sent: Friday, April 29, 2022 5:09 AM
Similar to .read_and_clear_dirty() use the page table
walker helper functions and set DBM|RDONLY bit, thus
switching the IOPTE to writeable-clean.
this should
On 2022-04-28 22:09, Joao Martins wrote:
From: Kunkun Jiang
This detects BBML feature and if SMMU supports it, transfer BBMLx
quirk to io-pgtable.
BBML1 requires still marking PTE nT prior to performing a
translation table update, while BBML2 requires neither break-before-make
nor PTE nT bit b
On 4/29/22 09:28, Tian, Kevin wrote:
>> From: Joao Martins
>> Sent: Friday, April 29, 2022 5:09 AM
>>
>> Similar to .read_and_clear_dirty() use the page table
>> walker helper functions and set DBM|RDONLY bit, thus
>> switching the IOPTE to writeable-clean.
>
> this should not be one-off if the o
On 4/29/22 09:12, Tian, Kevin wrote:
>> From: Joao Martins
>> Sent: Friday, April 29, 2022 5:09 AM
> [...]
>> +
>> +static int iommu_read_and_clear_dirty(struct iommu_domain *domain,
>> + struct iommufd_dirty_data *bitmap)
>
> In a glance this function and all pre
On 4/29/22 09:07, Tian, Kevin wrote:
>> From: Joao Martins
>> Sent: Friday, April 29, 2022 5:09 AM
>>
>> +static int __set_dirty_tracking_range_locked(struct iommu_domain
>> *domain,
>
> suppose anything using iommu_domain as the first argument should
> be put in the iommu layer. Here it's more r
On 4/29/22 08:54, Tian, Kevin wrote:
>> From: Joao Martins
>> Sent: Friday, April 29, 2022 5:09 AM
>>
>> Add to iommu domain operations a set of callbacks to
>> perform dirty tracking, particularly to start and stop
>> tracking and finally to test and clear the dirty data.
>
> to be consistent wit
On 4/29/22 06:45, Tian, Kevin wrote:
>> From: Joao Martins
>> Sent: Friday, April 29, 2022 5:09 AM
>>
>> Presented herewith is a series that extends IOMMUFD to have IOMMU
>> hardware support for dirty bit in the IOPTEs.
>>
>> Today, AMD Milan (which been out for a year now) supports it while ARM
>
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:10 AM
>
> IOMMU advertises Access/Dirty bits if the extended capability
> DMAR register reports it (ECAP, mnemonic ECAP.SSADS). The first
> stage table, though, has no bit for advertising, unless referenced via
first-stage is compatible to C
On 2022-04-29 07:57, Baolu Lu wrote:
Hi Robin,
On 2022/4/28 21:18, Robin Murphy wrote:
Move the bus setup to iommu_device_register(). This should allow
bus_iommu_probe() to be correctly replayed for multiple IOMMU instances,
and leaves bus_set_iommu() as a glorified no-op to be cleaned up next.
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
>
> Similar to .read_and_clear_dirty() use the page table
> walker helper functions and set DBM|RDONLY bit, thus
> switching the IOPTE to writeable-clean.
this should not be one-off if the operation needs to be
applied to IOPTE. Say a m
From: Thierry Reding
Allow the NVIDIA-specific ARM SMMU implementation to bind to the SMMU
instances found on Tegra234.
Acked-by: Robin Murphy
Acked-by: Will Deacon
Signed-off-by: Thierry Reding
---
drivers/iommu/arm/arm-smmu/arm-smmu-impl.c | 3 ++-
1 file changed, 2 insertions(+), 1 deleti
From: Thierry Reding
The NVIDIA Tegra234 SoC comes with one single-instance ARM SMMU used by
isochronous memory clients and two dual-instance ARM SMMUs used by
non-isochronous memory clients.
Reviewed-by: Rob Herring
Acked-by: Will Deacon
Signed-off-by: Thierry Reding
---
Documentation/devi
From: Thierry Reding
On NVIDIA SoCs the ARM SMMU needs to interact with the memory
controller in order to map memory clients to the corresponding stream
IDs. Document how the nvidia,memory-controller property can be used to
achieve this.
Note that this is a backwards-incompatible change that is
From: Thierry Reding
Hi Joerg,
this is essentially a resend of v2 with a Acked-by:s from Robin and Will
added. These have been on the list for quite a while now, but apparently
there was a misunderstanding, so neither you nor Will picked this up.
Since Will acked these, I think it's probably be
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
[...]
> +
> +static int iommu_read_and_clear_dirty(struct iommu_domain *domain,
> + struct iommufd_dirty_data *bitmap)
In a glance this function and all previous helpers doesn't rely on any
iommufd objec
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
>
> +static int __set_dirty_tracking_range_locked(struct iommu_domain
> *domain,
suppose anything using iommu_domain as the first argument should
be put in the iommu layer. Here it's more reasonable to use iopt
as the first argument or
> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
>
> Add to iommu domain operations a set of callbacks to
> perform dirty tracking, particularly to start and stop
> tracking and finally to test and clear the dirty data.
to be consistent with other context, s/test/read/
>
> Drivers ar
On 2022/4/28 16:39, Jean-Philippe Brucker wrote:
On Tue, Apr 26, 2022 at 04:31:57PM -0700, Dave Hansen wrote:
On 4/26/22 09:48, Jean-Philippe Brucker wrote:
On Tue, Apr 26, 2022 at 08:27:00AM -0700, Dave Hansen wrote:
On 4/25/22 09:40, Jean-Philippe Brucker wrote:
The problem is that we'd hav
Hi Robin,
On 2022/4/28 21:18, Robin Murphy wrote:
Move the bus setup to iommu_device_register(). This should allow
bus_iommu_probe() to be correctly replayed for multiple IOMMU instances,
and leaves bus_set_iommu() as a glorified no-op to be cleaned up next.
I re-fetched the latest patches on
On Thu, Apr 28, 2022 at 12:10:37PM -0300, Jason Gunthorpe wrote:
> On Fri, Apr 29, 2022 at 12:53:16AM +1000, David Gibson wrote:
>
> > 2) Costly GUPs. pseries (the most common ppc machine type) always
> > expects a (v)IOMMU. That means that unlike the common x86 model of a
> > host with IOMMU, b