ernel. The kernel crashes and kunit.py shows the WARN and reports that
the test failed.
Signed-off-by: Jason Gunthorpe
---
tools/testing/kunit/kunit_kernel.py | 2 ++
1 file changed, 2 insertions(+)
I saw there was an earlier series working to make tests that deliberately
trigger WARNs not do that, so
On Tue, Oct 15, 2024 at 09:43:24AM +0100, Will Deacon wrote:
> > @@ -2890,6 +2891,7 @@ int arm_smmu_set_pasid(struct arm_smmu_master *master,
> > * already attached, no need to set old_domain.
> > */
> > .ssid = pasid,
> > + .old_domain = old,
>
> ni
On Mon, Sep 30, 2024 at 07:55:08AM +, Tian, Kevin wrote:
> > +struct vfio_device_pasid_attach_iommufd_pt {
> > + __u32 argsz;
> > + __u32 flags;
> > + __u32 pasid;
> > + __u32 pt_id;
> > +};
> > +
> > +#define VFIO_DEVICE_PASID_ATTACH_IOMMUFD_PT _IO(VFIO_TYPE, VFIO_B
> ---
> drivers/vfio/iommufd.c | 50 +
> drivers/vfio/pci/vfio_pci.c | 2 ++
> include/linux/vfio.h | 11
> 3 files changed, 63 insertions(+)
Reviewed-by: Jason Gunthorpe
Jason
On Thu, Sep 12, 2024 at 06:17:28AM -0700, Yi Liu wrote:
> This adds ioctls for userspace to attach/detach a given pasid of a
> vfio device to/from an IOAS/HWPT.
>
> Reviewed-by: Jason Gunthorpe
> Signed-off-by: Yi Liu
> ---
> drivers/vf
_ida)) > 0) {
> //anything to do with the allocated ID
> ida_free(pasid_ida, pasid);
> }
>
> Cc: Matthew Wilcox (Oracle)
> Suggested-by: Jason Gunthorpe
> Signed-off-by: Yi Liu
> ---
> include/linux/idr.h | 11
> lib/i
t we do this twice now?
Let's just keep it in the PCI core?
It looks OK otherwise.
Reviewed-by: Jason Gunthorpe
Jason
support domain replacement for pasid yet, so it
> would fail the set_dev_pasid op to keep the old config if the input @old
> is non-NULL.
>
> Suggested-by: Jason Gunthorpe
> Signed-off-by: Yi Liu
> ---
> drivers/iommu/amd/pasid.c | 3 +++
> include/linux/iommu.h | 3 ++-
> Otherwise, iommu drivers would need to track domains for pasids by themselves,
> this would duplicate code among the iommu drivers. Or iommu drivers would
> rely on group->pasid_array to get the domain, which may not always be the
> correct one.
>
> Suggested-by: Jason Gunthorpe
> Signed
criptor support")
Fixes: e1d3c0fd701d ("iommu: add ARM LPAE page table allocator")
Fixes: 745ef1092bcf ("iommu/io-pgtable: Move Apple DART support to its own file")
Signed-off-by: Jason Gunthorpe
---
drivers/iommu/io-pgtable-arm-v7s.c | 3 +--
drivers/iommu/io-pgtable-arm
I noticed some bugs here while working on iommupt. Fix them up.
Joerg, can you pick this both for your -rc branch?
Thanks,
Jason
Jason Gunthorpe (2):
iommufd: Do not allow creating areas without READ or WRITE
iommu: Do not return 0 from map_pages if it doesn't do anything
drivers/iom
leaks and worse during unmap
Since almost nothing can support this, and it is a useless thing to do,
block it early in iommufd.
Cc: sta...@kernel.org
Fixes: aad37e71d5c4 ("iommufd: IOCTLs for the io_pagetable")
Signed-off-by: Jason Gunthorpe
---
drivers/iommu/iommufd/ioas.c
On Mon, Aug 19, 2024 at 11:38:22AM -0700, Nicolin Chen wrote:
> On Mon, Aug 19, 2024 at 03:28:11PM -0300, Jason Gunthorpe wrote:
> > On Mon, Aug 19, 2024 at 11:19:39AM -0700, Nicolin Chen wrote:
> >
> > > > But nesting enablement without viommu is a lot less useful
On Mon, Aug 19, 2024 at 11:19:39AM -0700, Nicolin Chen wrote:
> > But nesting enablement without viommu is a lot less useful than I had
> > thought :(
>
> Actually, without viommu, the hwpt cache invalidate alone could
> still support non-SVA case?
That is what I thought, but doesn't the guest st
On Mon, Aug 19, 2024 at 11:10:03AM -0700, Nicolin Chen wrote:
> On Mon, Aug 19, 2024 at 02:33:32PM -0300, Jason Gunthorpe wrote:
> > On Thu, Aug 15, 2024 at 05:21:57PM -0700, Nicolin Chen wrote:
> >
> > > > Why not? The idev becomes linked to the vi
On Mon, Aug 19, 2024 at 10:49:56AM -0700, Nicolin Chen wrote:
> On Mon, Aug 19, 2024 at 02:30:56PM -0300, Jason Gunthorpe wrote:
> > On Thu, Aug 15, 2024 at 04:51:39PM -0700, Nicolin Chen wrote:
> > > On Thu, Aug 15, 2024 at 08:24:05PM -0300, Jason Gunthorpe wrote:
> > >
On Thu, Aug 15, 2024 at 05:50:06PM -0700, Nicolin Chen wrote:
> Though only the driver would know whether it would eventually access
> the vdev_id list, I'd like to keep things in the way of having a
> core-managed VIOMMU object (IOMMU_VIOMMU_TYPE_DEFAULT), so the
> viommu invalidation handler could h
On Thu, Aug 15, 2024 at 05:21:57PM -0700, Nicolin Chen wrote:
> > Why not? The idev becomes linked to the viommu when the dev id is set
>
> > Unless we are also going to enforce the idev is always attached to a
> > nested then I don't think we need to check it here.
> >
> > Things will definatel
On Thu, Aug 15, 2024 at 04:51:39PM -0700, Nicolin Chen wrote:
> On Thu, Aug 15, 2024 at 08:24:05PM -0300, Jason Gunthorpe wrote:
> > On Wed, Aug 07, 2024 at 01:10:49PM -0700, Nicolin Chen wrote:
> > > @@ -946,4 +947,40 @@ struct iommu_viommu_unset_vdev_id {
> > &
On Fri, Aug 16, 2024 at 05:43:18PM +0800, Yi Liu wrote:
> On 2024/7/18 16:27, Tian, Kevin wrote:
> > > From: Liu, Yi L
> > > Sent: Friday, June 28, 2024 5:06 PM
> > >
> > > @@ -3289,7 +3290,20 @@ static int __iommu_set_group_pasid(struct
> > > iommu_domain *domain,
> > >
> > >
On Thu, Aug 15, 2024 at 12:53:04PM -0700, Nicolin Chen wrote:
> > Maybe the iommufd_viommu_invalidate ioctl handler should hold that
> > xa_lock around the viommu->ops->cache_invalidate, and then add lock
> > assert in iommufd_viommu_find_device?
>
> xa_lock/spinlock might be too heavy. We can ha
On Thu, Aug 15, 2024 at 12:46:24PM -0700, Nicolin Chen wrote:
> On Thu, Aug 15, 2024 at 04:08:48PM -0300, Jason Gunthorpe wrote:
> > On Wed, Aug 07, 2024 at 01:10:46PM -0700, Nicolin Chen wrote:
> >
> > > +int iommufd_viommu_set_vdev_id(struct iommufd_ucmd *ucmd)
On Thu, Aug 15, 2024 at 11:20:35AM -0700, Nicolin Chen wrote:
> > I don't have an easy solution in mind though later as surely we will
> > need this when we start to create more iommu bound objects. I'm pretty
> > sure syzkaller would eventually find such a UAF using the iommufd
> > selftest frame
On Wed, Aug 07, 2024 at 01:10:56PM -0700, Nicolin Chen wrote:
> Add an arm_smmu_viommu_cache_invalidate() function for user space to issue
> cache invalidation commands via viommu.
>
> The viommu invalidation takes the same native format of a 128-bit command,
> as the hwpt invalidation. Thus, reus
On Wed, Aug 07, 2024 at 01:10:49PM -0700, Nicolin Chen wrote:
> @@ -946,4 +947,40 @@ struct iommu_viommu_unset_vdev_id {
> __aligned_u64 vdev_id;
> };
> #define IOMMU_VIOMMU_UNSET_VDEV_ID _IO(IOMMUFD_TYPE, IOMMUFD_CMD_VIOMMU_UNSET_VDEV_ID)
> +
> +/**
> + * enum iommu_viommu_invalidate_da
On Wed, Aug 07, 2024 at 01:10:46PM -0700, Nicolin Chen wrote:
> +int iommufd_viommu_set_vdev_id(struct iommufd_ucmd *ucmd)
> +{
> + struct iommu_viommu_set_vdev_id *cmd = ucmd->cmd;
> + struct iommufd_hwpt_nested *hwpt_nested;
> + struct iommufd_vdev_id *vdev_id, *curr;
> + struct
On Wed, Aug 07, 2024 at 01:10:42PM -0700, Nicolin Chen wrote:
> @@ -876,4 +877,33 @@ struct iommu_fault_alloc {
> __u32 out_fault_fd;
> };
> #define IOMMU_FAULT_QUEUE_ALLOC _IO(IOMMUFD_TYPE, IOMMUFD_CMD_FAULT_QUEUE_ALLOC)
> +
> +/**
> + * enum iommu_viommu_type - Virtual IOMMU Type
> + *
On Wed, Aug 07, 2024 at 01:10:42PM -0700, Nicolin Chen wrote:
> +int iommufd_viommu_alloc_ioctl(struct iommufd_ucmd *ucmd)
> +{
> + struct iommu_viommu_alloc *cmd = ucmd->cmd;
> + struct iommufd_hwpt_paging *hwpt_paging;
> + struct iommufd_viommu *viommu;
> + struct iommufd_device
On Fri, Aug 02, 2024 at 05:32:02PM -0700, Nicolin Chen wrote:
> Reorder include files to alphabetic order to simplify maintenance, and
> separate local headers and global headers with a blank line.
>
> No functional change intended.
>
> Signed-off-by: Nicolin Chen
> ---
> drivers/iommu/iommufd/
On Wed, Aug 14, 2024 at 10:09:22AM -0700, Nicolin Chen wrote:
> This helps us to build a device-based virq report function:
> +void iommufd_device_report_virq(struct device *dev, unsigned int data_type,
> + void *data_ptr, size_t data_len);
>
> I built a link from de
On Fri, Aug 09, 2024 at 08:00:34AM +, Tian, Kevin wrote:
> > - IOMMUFD should provide VMM a way to tell the gPA (or directly +
> > GITS_TRANSLATER?). Then kernel should do the stage-2 mapping. I
> > have talked to Jason about this a while ago, and we have a few
> > thoughts how to implem
On Fri, Aug 09, 2024 at 12:18:42PM -0700, Nicolin Chen wrote:
> > The bigger issue is that we still have the hypervisor GIC driver
> > controlling things and it will need to know to use the guest provided
> > MSI address captured during the MSI trap, not its own address. I don't
> > have an idea h
On Thu, Aug 08, 2024 at 01:38:44PM +0100, Robin Murphy wrote:
> On 06/08/2024 9:25 am, Tian, Kevin wrote:
> > > From: Nicolin Chen
> > > Sent: Saturday, August 3, 2024 8:32 AM
> > >
> > > From: Robin Murphy
> > >
> > > Currently, iommu-dma is the only place outside of IOMMUFD and drivers
> > >
On Mon, Aug 05, 2024 at 02:24:42AM +, Tian, Kevin wrote:
>
> According to [3],
>
> "
> With SNP, when pages are marked as guest-owned in the RMP table,
> they are assigned to a specific guest/ASID, as well as a specific GFN
> with in the guest. Any attempts to map it in the RMP table to
On Fri, Aug 02, 2024 at 08:26:48AM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Thursday, June 20, 2024 10:34 PM
> >
> > On Thu, Jun 20, 2024 at 04:14:23PM +0200, David Hildenbrand wrote:
> >
> > > 1) How would the device be able
On Tue, Jul 16, 2024 at 10:34:55AM -0700, Sean Christopherson wrote:
> On Tue, Jul 16, 2024, Jason Gunthorpe wrote:
> > On Tue, Jul 16, 2024 at 09:03:00AM -0700, Sean Christopherson wrote:
> >
> > > > + To support huge pages, guest_memfd will take ownership of
On Tue, Jul 16, 2024 at 09:03:00AM -0700, Sean Christopherson wrote:
> > + To support huge pages, guest_memfd will take ownership of the hugepages,
> > and
> > provide interested parties (userspace, KVM, iommu) with pages to be used.
> > + guest_memfd will track usage of (sub)pages, for both
On Thu, Jun 20, 2024 at 04:54:00PM -0700, Sean Christopherson wrote:
> Heh, and then we'd end up turning memfd into guest_memfd. As I see it, being
> able to safely map TDX/SNP/pKVM private memory is a happy side effect that is
> possible because guest_memfd isn't subordinate to the primary MMU,
On Fri, Jun 21, 2024 at 07:32:40AM +, Quentin Perret wrote:
> > No, I'm interested in what pKVM is doing that needs this to be so much
> > different than the CC case..
>
> The underlying technology for implementing CC is obviously very
> different (MMU-based for pKVM, encryption-based for the
On Thu, Jun 20, 2024 at 03:47:23PM -0700, Elliot Berman wrote:
> On Thu, Jun 20, 2024 at 11:29:56AM -0300, Jason Gunthorpe wrote:
> > On Thu, Jun 20, 2024 at 04:01:08PM +0200, David Hildenbrand wrote:
> > > Regarding huge pages: assume the huge page (e.g., 1 GiB hugetlb
On Thu, Jun 20, 2024 at 01:30:29PM -0700, Sean Christopherson wrote:
> I.e. except for blatant bugs, e.g. use-after-free, we need to be able to
> guarantee
> with 100% accuracy that there are no outstanding mappings when converting a
> page
> from shared=>private. Crossing our fingers and hoping
On Thu, Jun 20, 2024 at 08:53:07PM +0200, David Hildenbrand wrote:
> On 20.06.24 18:36, Jason Gunthorpe wrote:
> > On Thu, Jun 20, 2024 at 04:45:08PM +0200, David Hildenbrand wrote:
> >
> > > If we could disallow pinning any shared pages, that would make life a lot
> &g
> This is the step that concerns me. "Relatively short time" is, well,
> relative.
> Hmm, though I suppose if userspace managed to map a shared page into something
> that pins the page, and can't force an unpin, e.g. by stopping I/O?, then
> either
> there's a host userspace bug or a guest bu
On Thu, Jun 20, 2024 at 04:45:08PM +0200, David Hildenbrand wrote:
> If we could disallow pinning any shared pages, that would make life a lot
> easier, but I think there were reasons for why we might require it. To
> convert shared->private, simply unmap that folio (only the shared parts
> could
On Thu, Jun 20, 2024 at 04:14:23PM +0200, David Hildenbrand wrote:
> 1) How would the device be able to grab/access "private memory", if not
>via the user page tables?
The approaches I'm aware of require the secure world to own the IOMMU
and generate the IOMMU page tables. So we will not use
On Thu, Jun 20, 2024 at 04:01:08PM +0200, David Hildenbrand wrote:
> On 20.06.24 15:55, Jason Gunthorpe wrote:
> > On Thu, Jun 20, 2024 at 09:32:11AM +0100, Fuad Tabba wrote:
> > > Hi,
> > >
> > > On Thu, Jun 20, 2024 at 5:11 AM Christoph Hellwig
> >
On Thu, Jun 20, 2024 at 11:00:45AM +0200, David Hildenbrand wrote:
> > Not sure if IOMMU + private makes that much sense really, but I think
> > I might not really understand what you mean by this.
>
> A device might be able to access private memory. In the TDX world, this
> would mean that a devi
On Thu, Jun 20, 2024 at 09:32:11AM +0100, Fuad Tabba wrote:
> Hi,
>
> On Thu, Jun 20, 2024 at 5:11 AM Christoph Hellwig wrote:
> >
> > On Wed, Jun 19, 2024 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> > > If you can't agree with the guest_memfd people on ho
On Wed, Jun 19, 2024 at 01:01:14PM +0100, Fuad Tabba wrote:
> Hi Jason,
>
> On Wed, Jun 19, 2024 at 12:51 PM Jason Gunthorpe wrote:
> >
> > On Wed, Jun 19, 2024 at 10:11:35AM +0100, Fuad Tabba wrote:
> >
> > > To be honest, personally (speaking only for myse
On Wed, Jun 19, 2024 at 10:11:35AM +0100, Fuad Tabba wrote:
> To be honest, personally (speaking only for myself, not necessarily
> for Elliot and not for anyone else in the pKVM team), I still would
> prefer to use guest_memfd(). I think that having one solution for
> confidential computing that
On Tue, Jun 11, 2024 at 11:09:15AM -0700, Mina Almasry wrote:
> Just curious: in Pavel's effort, io_uring - which is not a device - is
> trying to share memory with the page_pool, which is also not a device.
> And Pavel is being asked to wrap the memory in a dmabuf. Is dmabuf
> going to be the ker
On Mon, Jun 10, 2024 at 08:20:08PM +0100, Pavel Begunkov wrote:
> On 6/10/24 16:16, David Ahern wrote:
> > > There is no reason you shouldn't be able to use your fast io_uring
> > > completion and lifecycle flow with DMABUF backed memory. Those are not
> > > widly different things and there is goo
On Mon, Jun 10, 2024 at 02:07:01AM +0100, Pavel Begunkov wrote:
> On 6/10/24 01:37, David Wei wrote:
> > On 2024-06-07 17:52, Jason Gunthorpe wrote:
> > > IMHO it seems to compose poorly if you can only use the io_uring
> > > lifecycle model with io_uring registered
On Fri, Jun 07, 2024 at 08:27:29AM -0600, David Ahern wrote:
> On 6/7/24 7:42 AM, Pavel Begunkov wrote:
> > I haven't seen any arguments against from the (net) maintainers so
> > far. Nor I see any objection against callbacks from them (considering
> > that either option adds an if).
>
> I have sa
On Tue, Jun 04, 2024 at 12:15:51PM -0400, Steven Rostedt wrote:
> On Tue, 04 Jun 2024 12:13:15 +0200
> Paolo Abeni wrote:
>
> > On Thu, 2024-05-30 at 20:16 +, Mina Almasry wrote:
> > > diff --git a/net/core/devmem.c b/net/core/devmem.c
> > > index d82f92d7cf9ce..d5fac8edf621d 100644
> > > ---
file | 2 --
> 1 file changed, 2 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
On Wed, May 08, 2024 at 04:44:32PM +0100, Pavel Begunkov wrote:
> > like a weird and indirect way to get there. Why can't io_uring just be
> > the entity that does the final free and not mess with the logic
> > allocator?
>
> Then the user has to do a syscall (e.g. via io_uring) to return pages,
On Wed, May 08, 2024 at 12:30:07PM +0100, Pavel Begunkov wrote:
> > I'm not going to pretend to know about page pool details, but dmabuf
> > is the way to get the bulk of pages into a pool within the net stack's
> > allocator and keep that bulk properly refcounted while. An object like
> > dmabuf
On Tue, May 07, 2024 at 08:35:37PM +0100, Pavel Begunkov wrote:
> On 5/7/24 18:56, Jason Gunthorpe wrote:
> > On Tue, May 07, 2024 at 06:25:52PM +0100, Pavel Begunkov wrote:
> > > On 5/7/24 17:48, Jason Gunthorpe wrote:
> > > > On Tue, May 07, 2024 at 09:42:
On Tue, May 07, 2024 at 06:25:52PM +0100, Pavel Begunkov wrote:
> On 5/7/24 17:48, Jason Gunthorpe wrote:
> > On Tue, May 07, 2024 at 09:42:05AM -0700, Mina Almasry wrote:
> >
> > > 1. Align with devmem TCP to use udmabuf for your io_uring memory. I
> > > think
On Tue, May 07, 2024 at 09:42:05AM -0700, Mina Almasry wrote:
> 1. Align with devmem TCP to use udmabuf for your io_uring memory. I
> think in the past you said it's a uapi you don't link but in the face
> of this pushback you may want to reconsider.
dmabuf does not force a uapi, you can acquire
On Tue, May 07, 2024 at 05:05:12PM +0100, Pavel Begunkov wrote:
> > even in tree if you give them enough rope, and they should not have
> > that rope when the only sensible options are page/folio based kernel
> > memory (incuding large/huge folios) and dmabuf.
>
> I believe there is at least one d
On Sun, Apr 14, 2024 at 07:39:58PM +0500, Muhammad Usama Anjum wrote:
> On 4/5/24 5:10 AM, Jason Gunthorpe wrote:
> > On Mon, Mar 25, 2024 at 02:11:41PM +0500, Muhammad Usama Anjum wrote:
> >> On 3/25/24 2:00 PM, Muhammad Usama Anjum wrote:
> >>> Add FAULT_I
On Mon, Mar 25, 2024 at 02:11:41PM +0500, Muhammad Usama Anjum wrote:
> On 3/25/24 2:00 PM, Muhammad Usama Anjum wrote:
> > Add FAULT_INJECTION_DEBUG_FS and FAILSLAB configurations which are
> > needed by iommufd_fail_nth test.
> >
> > Signed-off-by: Muhammad Usama Anjum
> > ---
> > While buildin
On Wed, Mar 27, 2024 at 06:05:46PM -0600, Shuah Khan wrote:
> ASSERT_*() is supposed to exit the test right away. If this
> isn't happening it needs to be debugged.
We know it doesn't work in setup/teardown functions; you can see in the
code that it jumps back and does the teardown again in an inf
On Wed, Mar 27, 2024 at 06:09:37PM +, Joao Martins wrote:
> On 27/03/2024 17:49, Muhammad Usama Anjum wrote:
> > On 3/27/24 7:59 PM, Joao Martins wrote:
> >> On 27/03/2024 11:49, Jason Gunthorpe wrote:
> >>> On Wed, Mar 27, 2024 at 03:14:25PM +0500, Muhammad Usa
On Wed, Mar 27, 2024 at 03:04:09PM +, Joao Martins wrote:
> On 27/03/2024 11:40, Jason Gunthorpe wrote:
> > On Wed, Mar 27, 2024 at 10:41:52AM +, Joao Martins wrote:
> >> On 25/03/2024 13:52, Jason Gunthorpe wrote:
> >>> On Mon, Mar 25, 2024 at 12:17:
On Wed, Mar 27, 2024 at 03:14:25PM +0500, Muhammad Usama Anjum wrote:
> On 3/26/24 8:03 PM, Jason Gunthorpe wrote:
> > On Tue, Mar 26, 2024 at 06:09:34PM +0500, Muhammad Usama Anjum wrote:
> >> Even after applying this config patch and following snippet (which doesn't
> &
On Wed, Mar 27, 2024 at 10:41:52AM +, Joao Martins wrote:
> On 25/03/2024 13:52, Jason Gunthorpe wrote:
> > On Mon, Mar 25, 2024 at 12:17:28PM +, Joao Martins wrote:
> >>> However, I am not smart enough to figure out why ...
> >>>
> >>> Apparent
On Tue, Mar 26, 2024 at 06:09:34PM +0500, Muhammad Usama Anjum wrote:
> Even after applying this config patch and following snippet (which doesn't
> terminate the program if mmap doesn't allocate exactly as the hint), I'm
> finding failed tests.
>
> @@ -1746,7 +1748,7 @@ FIXTURE_SETUP(iommufd_dirt
On Mon, Mar 25, 2024 at 12:17:28PM +, Joao Martins wrote:
> > However, I am not smart enough to figure out why ...
> >
> > Apparently, from the source, mmap() fails to allocate pages on the desired
> > address:
> >
> > 1746 assert((uintptr_t)self->buffer % HUGEPAGE_SIZE == 0);
> >
On Thu, Mar 21, 2024 at 07:26:41PM +0800, Yi Liu wrote:
> > yes, the correct way is to undo what have been done before the fail
> > device. However, I somehow remember that pasid capability is only
> > available when the group is singleton. So iterate all devices of the
> > devices just means one d
On Tue, Mar 19, 2024 at 03:29:39PM +0800, Yi Liu wrote:
> On 2024/3/19 00:52, Jason Gunthorpe wrote:
> > On Wed, Mar 13, 2024 at 04:11:41PM +0800, Yi Liu wrote:
> >
> > > yes. how about your opinion? @Jason. I noticed the set_dev_pasid callback
> > > and pasi
On Tue, Mar 12, 2024 at 07:35:40AM +0100, Mirsad Todorovac wrote:
> Hi,
>
> (This is verified on the second test box.)
>
> In the most recent 6.8.0 release of the torvalds tree kernel with
> selftest configs on, process ./iommufd appears to consume 99% of a CPU
> core for quite a while in an
> endle
On Wed, Mar 13, 2024 at 04:11:41PM +0800, Yi Liu wrote:
> yes. how about your opinion? @Jason. I noticed the set_dev_pasid callback
> and pasid_array update is under the group->lock, so update it should be
> fine to adjust the order to update pasid_array after set_dev_pasid returns.
Yes, it makes
On Thu, Feb 22, 2024 at 04:34:10PM +0800, Yi Liu wrote:
> > It doesn't mean that the S2 is globally shared across all the nesting
> > translations (like ARM does), and you still have to iterate over every
> > nesting DID.
> >
> > In light of that this design seems to have gone a bit off..
> >
> >
On Thu, Feb 22, 2024 at 12:49:33PM +0500, Muhammad Usama Anjum wrote:
> The config fragment doesn't follow the correct format to enable those
> config options which make the config options getting missed while
> merging with other configs.
>
> ➜ merge_config.sh -m .config tools/testing/selftests/i
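The complaint above is that the fragment's options were not written as full assignments, so merge_config.sh silently dropped them. A fragment in the expected one-assignment-per-line form would look like the following; the option names are taken from the related patch quoted elsewhere on this page, and including the FAULT_INJECTION dependency is an assumption:

```
CONFIG_FAULT_INJECTION=y
CONFIG_FAULT_INJECTION_DEBUG_FS=y
CONFIG_FAILSLAB=y
```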
On Thu, Feb 08, 2024 at 12:23:02AM -0800, Yi Liu wrote:
> If a domain is used as the parent in nested translation its mappings might
> be cached using DID of the nested domain. But the existing code ignores
> this fact to only invalidate the iotlb entries tagged by the domain's own
> DID.
> Loop t
On Thu, Jan 18, 2024 at 05:28:01PM +0800, Yi Liu wrote:
> On 2024/1/17 20:56, Jason Gunthorpe wrote:
> > On Wed, Jan 17, 2024 at 04:24:24PM +0800, Yi Liu wrote:
> > > Above indeed makes more sense if there can be concurrent
> > > attach/replace/detach
> > > on
On Wed, Jan 17, 2024 at 04:24:24PM +0800, Yi Liu wrote:
> Above indeed makes more sense if there can be concurrent attach/replace/detach
> on a single pasid. Just have one doubt should we add lock to protect the
> whole attach/replace/detach paths. In the attach/replace path[1] [2], the
> xarray en
On Tue, Jan 16, 2024 at 01:18:12AM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Tuesday, January 16, 2024 1:25 AM
> >
> > On Sun, Nov 26, 2023 at 10:34:23PM -0800, Yi Liu wrote:
> > > +/**
> > > + * iommufd_device_pasid_d
On Mon, Jan 15, 2024 at 05:44:13PM +, Shameerali Kolothum Thodi wrote:
> > If it is valid when userspace does read() then it should be valid when
> > userspace does write() too.
> >
> > It is the only way the kernel can actually match request and response
> > here.
>
> The kernel currently c
On Sun, Nov 26, 2023 at 10:34:23PM -0800, Yi Liu wrote:
> @@ -534,7 +537,17 @@ iommufd_device_do_replace(struct iommufd_device *idev,
> static struct iommufd_hw_pagetable *do_attach(struct iommufd_device *idev,
> struct iommufd_hw_pagetable *hwpt, struct attach_data *data)
> {
> -
On Sun, Nov 26, 2023 at 10:34:28PM -0800, Yi Liu wrote:
> +static int intel_nested_set_dev_pasid(struct iommu_domain *domain,
> + struct device *dev, ioasid_t pasid)
> +{
> + struct device_domain_info *info = dev_iommu_priv_get(dev);
> + struct dmar_domain
On Sun, Nov 26, 2023 at 10:34:21PM -0800, Yi Liu wrote:
> +int iommu_replace_device_pasid(struct iommu_domain *domain,
> +struct device *dev, ioasid_t pasid)
> +{
> + struct iommu_group *group = dev->iommu_group;
> + struct iommu_domain *old_domain;
> + int r
On Sun, Nov 26, 2023 at 10:39:07PM -0800, Yi Liu wrote:
> @@ -168,6 +180,42 @@ void vfio_iommufd_physical_detach_ioas(struct
> vfio_device *vdev)
> }
> EXPORT_SYMBOL_GPL(vfio_iommufd_physical_detach_ioas);
>
> +int vfio_iommufd_physical_pasid_attach_ioas(struct vfio_device *vdev,
> +
On Fri, Jan 12, 2024 at 05:46:13PM +, Shameerali Kolothum Thodi wrote:
>
>
> > -Original Message-
> > From: Lu Baolu
> > Sent: Thursday, October 26, 2023 3:49 AM
> > To: Jason Gunthorpe ; Kevin Tian ;
> > Joerg
> > Roedel ; Will Deacon
On Wed, Jan 10, 2024 at 08:10:07PM -0800, Yi Liu wrote:
> v11:
> - Drop hw_error field in vtd cache invalidation uapi. devTLB invalidation
>   error is a serious security emergency requiring the host kernel to handle.
>   No need to expose it to userspace (especially given existing VMs don't
>
On Thu, Jan 11, 2024 at 08:50:45AM -0800, Nicolin Chen wrote:
> On Wed, Jan 10, 2024 at 08:10:13PM -0800, Yi Liu wrote:
> > +#define test_cmd_hwpt_invalidate(hwpt_id, reqs, data_type, lreq, nreqs)
> > \
> > + ({
> > \
On Mon, Jan 08, 2024 at 04:07:12AM +, Tian, Kevin wrote:
> > > In concept w/o vSVA it's still possible to assign sibling vdev's to
> > > a same VM as each vdev is allocated with a unique pasid to mark vRID
> > > so can be differentiated from each other in the fault/error path.
> >
> > I though
On Thu, Jan 04, 2024 at 11:38:40PM -0800, Nicolin Chen wrote:
> On Wed, Jan 03, 2024 at 08:02:04PM -0400, Jason Gunthorpe wrote:
> > On Wed, Jan 03, 2024 at 12:18:35PM -0800, Nicolin Chen wrote:
> > > > The driver would have to create it and there would be some driver
>
On Fri, Jan 05, 2024 at 02:52:50AM +, Tian, Kevin wrote:
> > but in reality the relation could be identified in an easy way due to a SIOV
> > restriction which we discussed before - shared PASID space of PF disallows
> > assigning sibling vdev's to a same VM (otherwise no way to identify which
On Thu, Dec 14, 2023 at 07:26:39PM +0800, Yi Liu wrote:
> Per the prior discussion[1], we agreed to move the error reporting into the
> driver specific part. On Intel side, we want to report two devTLB
> invalidation errors: ICE (invalid completion error) and ITE (invalidation
> timeout error). Suc
On Wed, Jan 03, 2024 at 12:18:35PM -0800, Nicolin Chen wrote:
> > The driver would have to create it and there would be some driver
> > specific enclosing struct to go with it
> >
> > Perhaps device_ids goes in the driver specific struct, I don't know.
>
> +struct iommufd_viommu {
> + struct
On Wed, Jan 03, 2024 at 09:06:23AM -0800, Nicolin Chen wrote:
> On Wed, Jan 03, 2024 at 12:58:48PM -0400, Jason Gunthorpe wrote:
> > On Wed, Jan 03, 2024 at 08:48:46AM -0800, Nicolin Chen wrote:
> > > > You can pass the ctx to the invalidate op, it is already implied
>
On Wed, Jan 03, 2024 at 08:48:46AM -0800, Nicolin Chen wrote:
> > You can pass the ctx to the invalidate op, it is already implied
> > because the passed iommu_domain is linked to a single iommufd ctx.
>
> The device virtual id lookup API needs something similar, yet it
> likely needs a viommu poi
On Wed, Jan 03, 2024 at 10:24:42AM +0800, Yi Liu wrote:
> On 2024/1/3 07:38, Jason Gunthorpe wrote:
> > On Fri, Dec 15, 2023 at 12:01:19PM +0800, Yi Liu wrote:
> > > > I think I misread Yi's narrative: dev_id is a working approach
> > > > for VMM to conve
On Wed, Jan 03, 2024 at 11:06:19AM +0800, Baolu Lu wrote:
> On 2024/1/3 9:33, Yi Liu wrote:
> > On 2024/1/3 02:44, Jason Gunthorpe wrote:
> > > On Tue, Jan 02, 2024 at 06:38:34AM -0800, Yi Liu wrote:
> > >
> > > > +static void intel_nested_flush_cache(struc
On Fri, Dec 15, 2023 at 12:01:19PM +0800, Yi Liu wrote:
> > I think I misread Yi's narrative: dev_id is a working approach
> > for VMM to convert to a vRID, while he is asking for a better
> > alternative :)
>
> In concept, dev_id works, but in reality we have problem to get a dev_id
> for a given
On Tue, Jan 02, 2024 at 06:38:24AM -0800, Yi Liu wrote:
> Lu Baolu (4):
> iommu: Add cache_invalidate_user op
> iommu/vt-d: Allow qi_submit_sync() to return the QI faults
> iommu/vt-d: Convert stage-1 cache invalidation to return QI fault
> iommu/vt-d: Add iotlb flush for nested domain
>
>