On Mon, Apr 28, 2025 at 03:44:08PM -0700, Nicolin Chen wrote:
> On Mon, Apr 28, 2025 at 09:34:05PM +0000, Pranjal Shrivastava wrote:
> > On Fri, Apr 25, 2025 at 10:58:05PM -0700, Nicolin Chen wrote:
> > > @@ -501,6 +504,9 @@ static const struct iommufd_object_ops iommufd_object_ops[] = {
> > >  	[IOMMUFD_OBJ_IOAS] = {
> > >  		.destroy = iommufd_ioas_destroy,
> > >  	},
> > > +	[IOMMUFD_OBJ_VCMDQ] = {
> > > +		.destroy = iommufd_vcmdq_destroy,
> > > +	},
> > >  	[IOMMUFD_OBJ_VDEVICE] = {
> > >  		.destroy = iommufd_vdevice_destroy,
> > >  	},
> > 
> > When do we expect the VMM to use this ioctl? While it's spawning a new
> > VM?
> 
> When the guest OS clears the VCMDQ's base address register, or when the
> guest OS reboots or shuts down.

Ack. So, basically any write to VCMDQ_BASE is trapped?
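To make sure I've got the lifecycle right, here's a rough sketch of the
VMM-side trap handler I'm picturing. The vmm_* naming is mine, and I'm
guessing at the exact iommu_vcmdq_alloc layout (including an
out_vcmdq_id result field) from the fields visible in this patch
(flags, type, addr, length), so please treat it as pseudocode against
this series' uAPI rather than tested code:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/iommufd.h>	/* assuming this series' uAPI additions */

	/* Hypothetical handler for a trapped guest write to VCMDQ_BASE */
	static int vmm_handle_vcmdq_base_write(int iommufd, uint32_t viommu_id,
					       uint64_t base_iova, uint64_t len,
					       uint32_t *vcmdq_id)
	{
		if (!base_iova) {
			/* Guest cleared the base (or is rebooting/shutting
			 * down): destroy the object, which lands in
			 * iommufd_vcmdq_destroy() and unpins the queue pages.
			 */
			struct iommu_destroy destroy = {
				.size = sizeof(destroy),
				.id = *vcmdq_id,
			};
			return ioctl(iommufd, IOMMU_DESTROY, &destroy);
		}

		/* Guest programmed a new base: back a vCMDQ with the guest
		 * queue memory (base_iova/len are IOVAs in the vIOMMU's IOAS).
		 */
		struct iommu_vcmdq_alloc alloc = {
			.size = sizeof(alloc),
			.viommu_id = viommu_id,
			/* driver type, if I'm reading the later tegra patch right */
			.type = IOMMU_VCMDQ_TYPE_TEGRA241_CMDQV,
			.addr = base_iova,
			.length = len,
		};
		int rc = ioctl(iommufd, IOMMU_VCMDQ_ALLOC, &alloc);

		if (!rc)
			*vcmdq_id = alloc.out_vcmdq_id;
		return rc;
	}

If that matches your intent, then VCMDQ_BASE being the only trapped
register makes sense to me.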
> > IIUC, one vintf can have multiple lvcmdqs and, looking at the series,
> > it looks like the vcmdq_alloc allocates a single lvcmdq. Is the plan to
> > dedicate one lvcmdq per VM? Which means VMs can share a vintf?
> 
> VINTF is a vSMMU instance per SMMU. Each VINTF can have multiple
> LVCMDQs. Each vCMDQ is allocated by one IOMMUFD_CMD_VCMDQ_ALLOC call.
> In other words, a VM can issue multiple IOMMUFD_CMD_VCMDQ_ALLOC calls
> for each VINTF/vSMMU.

Ack. I'm just wondering why a single VM would want more than one vCMDQ
per vSMMU?

> > Or do we plan to trap the access every time the VM accesses an lvcmdq
> > base register?
> 
> Yes. That's the only place the VMM can trap. All other register
> accesses go to the HW directly without trapping.

Got it.

> > > diff --git a/drivers/iommu/iommufd/viommu.c b/drivers/iommu/iommufd/viommu.c
> > > index a65153458a26..02a111710ffe 100644
> > > --- a/drivers/iommu/iommufd/viommu.c
> > > +++ b/drivers/iommu/iommufd/viommu.c
> > > @@ -170,3 +170,97 @@ int iommufd_vdevice_alloc_ioctl(struct iommufd_ucmd *ucmd)
> > >  	iommufd_put_object(ucmd->ictx, &viommu->obj);
> > >  	return rc;
> > >  }
> > > +
> > > +void iommufd_vcmdq_destroy(struct iommufd_object *obj)
> > > +{
> > > +	struct iommufd_vcmdq *vcmdq =
> > > +		container_of(obj, struct iommufd_vcmdq, obj);
> > > +	struct iommufd_viommu *viommu = vcmdq->viommu;
> > > +
> > > +	if (viommu->ops->vcmdq_destroy)
> > > +		viommu->ops->vcmdq_destroy(vcmdq);
> > > +	iopt_unpin_pages(&viommu->hwpt->ioas->iopt, vcmdq->addr, vcmdq->length);
> > > +	refcount_dec(&viommu->obj.users);
> > > +}
> > > +
> > > +int iommufd_vcmdq_alloc_ioctl(struct iommufd_ucmd *ucmd)
> > > +{
> > > +	struct iommu_vcmdq_alloc *cmd = ucmd->cmd;
> > > +	struct iommufd_viommu *viommu;
> > > +	struct iommufd_vcmdq *vcmdq;
> > > +	struct page **pages;
> > > +	int max_npages, i;
> > > +	dma_addr_t end;
> > > +	int rc;
> > > +
> > > +	if (cmd->flags || cmd->type == IOMMU_VCMDQ_TYPE_DEFAULT)
> > > +		return -EOPNOTSUPP;
> > 
> > The cmd->type check is a little confusing here, I think we could
> > re-order the series and add this check when we have the CMDQV type.
> 
> This is the patch that introduces IOMMU_VCMDQ_TYPE_DEFAULT, so it's
> natural to check it here. The thing is that we have to introduce
> something to fill the enum iommu_vcmdq_type, so that it wouldn't be
> empty.
> 
> An unsupported DEFAULT type is what we have for vIOMMU/vEVENTQ as well.
> 
> A driver patch should define its own type along with the driver
> support, and that's what this series does. I think it's pretty clear?

Alright. Agreed.

> > Alternatively, we could keep this in place and
> [..]
> > add the driver-specific vcmdq_alloc op calls when it's added/available
> > for Tegra CMDQV while stubbing out the rest of this function
> > accordingly.
> 
> Why?
> 
> The vcmdq_alloc op is already introduced in the prior patch. It is
> cleaner to keep all the core code in one patch, and then have another
> tegra patch add the driver type and its support.

Alright.

> Thanks
> Nicolin

Thanks,
Praan
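P.S. For anyone following the enum discussion in the archives: as I
read it, this patch lands with only the placeholder entry, roughly

	enum iommu_vcmdq_type {
		IOMMU_VCMDQ_TYPE_DEFAULT = 0,
	};

and the later tegra patch then adds its own entry (something like
IOMMU_VCMDQ_TYPE_TEGRA241_CMDQV, with the exact name coming from the
driver patch, not this one) for the core code here to dispatch on.
That's why DEFAULT has to exist yet is rejected with -EOPNOTSUPP.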