On Tue, May 30, 2023 at 01:37:07PM +0800, Lu Baolu wrote:
> Hi folks,
>
> This series implements the functionality of delivering IO page faults to
> user space through the IOMMUFD framework. The use case is nested
> translation, where modern IOMMU hardware supports two-stage translation
> tables.
On Fri, Jun 16, 2023 at 12:32:32PM +0100, Jean-Philippe Brucker wrote:
> We might need to revisit supporting stop markers: request that each device
> driver declares whether their device uses stop markers on unbind() ("This
> mechanism must indicate that a Stop Marker Message will be generated."
>
On Fri, Jun 23, 2023 at 02:18:38PM +0800, Baolu Lu wrote:
> struct io_uring ring;
>
> io_uring_setup(IOPF_ENTRIES, &ring);
>
> while (1) {
> struct io_uring_prep_read read;
> struct io_uring_cqe *cqe;
>
> read.fd = iopf_fd;
>
On Sun, Jun 25, 2023 at 02:30:46PM +0800, Baolu Lu wrote:
> Agreed. We should avoid workqueue in sva iopf framework. Perhaps we
> could go ahead with below code? It will be registered to device with
> iommu_register_device_fault_handler() in IOMMU_DEV_FEAT_IOPF enabling
> path. Un-registering in t
On Wed, Jun 28, 2023 at 10:00:56AM +0800, Baolu Lu wrote:
> > If the driver created a SVA domain then the op should point to some
> > generic 'handle sva fault' function. There shouldn't be weird SVA
> > stuff in the core code.
> >
> > The weird SVA stuff is really just a generic per-device workqu
On Fri, Jul 14, 2023 at 09:05:21AM +0200, Christian Brauner wrote:
> I have no skin in the game aside from having to drop this conversion
> which I'm fine to do if there are actually users for this but really,
> that looks a lot like abusing an api that really wasn't designed for
> this.
Yeah, I
On Mon, Jul 17, 2023 at 01:08:31PM -0600, Alex Williamson wrote:
> What would that mechanism be? We've been iterating on getting the
> serialization and buffering correct, but I don't know of another means
> that combines the notification with a value, so we'd likely end up with
> an eventfd only
On Mon, Jul 17, 2023 at 04:52:03PM -0600, Alex Williamson wrote:
> On Mon, 17 Jul 2023 19:12:16 -0300
> Jason Gunthorpe wrote:
>
> > On Mon, Jul 17, 2023 at 01:08:31PM -0600, Alex Williamson wrote:
> >
> > > What would that mechanism be? We've been iterating
On Wed, Aug 02, 2023 at 01:36:12PM +0100, Jean-Philippe Brucker wrote:
> automatically get plugged into a VM without user intervention. Here I
> guess the devices we don't trust will be virtual devices implemented by
> other VMs. We don't have any method to identify them yet, so
> iommu.strict=1 a
On Tue, Sep 19, 2023 at 09:15:19AM +0100, Jean-Philippe Brucker wrote:
> On Mon, Sep 18, 2023 at 05:37:47PM +0100, Robin Murphy wrote:
> > > diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> > > index 17dcd826f5c2..3649586f0e5c 100644
> > > --- a/drivers/iommu/virtio-iommu.
On Fri, Sep 22, 2023 at 08:57:19AM +0100, Jean-Philippe Brucker wrote:
> > > They're not strictly equivalent: this check works around a temporary issue
> > > with the IOMMU core, which calls map/unmap before the domain is
> > > finalized.
> >
> > Where? The above points to iommu_create_device_dire
On Fri, Sep 22, 2023 at 02:13:18PM +0100, Robin Murphy wrote:
> On 22/09/2023 1:41 pm, Jason Gunthorpe wrote:
> > On Fri, Sep 22, 2023 at 08:57:19AM +0100, Jean-Philippe Brucker wrote:
> > > > > They're not strictly equivalent: this check works around a temporary
On Fri, Sep 22, 2023 at 07:07:40PM +0100, Robin Murphy wrote:
> virtio isn't setting ops->pgsize_bitmap for the sake of direct mappings
> either; it sets it once it's discovered any instance, since apparently it's
> assuming that all instances must support identical page sizes, and thus once
> it'
On Mon, Sep 25, 2023 at 10:48:21AM +0800, Baolu Lu wrote:
> On 9/23/23 7:33 AM, Jason Gunthorpe wrote:
> > On Fri, Sep 22, 2023 at 07:07:40PM +0100, Robin Murphy wrote:
> >
> > > virtio isn't setting ops->pgsize_bitmap for the sake of direct mappings
> > >
On Mon, Sep 25, 2023 at 02:07:50PM +0100, Robin Murphy wrote:
> On 2023-09-23 00:33, Jason Gunthorpe wrote:
> > On Fri, Sep 22, 2023 at 07:07:40PM +0100, Robin Murphy wrote:
> >
> > > virtio isn't setting ops->pgsize_bitmap for the sake of direct mappings
On Thu, Oct 26, 2023 at 10:49:24AM +0800, Lu Baolu wrote:
> Hi folks,
>
> This series implements the functionality of delivering IO page faults to
> user space through the IOMMUFD framework for nested translation. Nested
> translation is a hardware feature that supports two-stage translation
> tab
On Mon, Nov 06, 2023 at 02:12:23AM -0500, Tina Zhang wrote:
> Add basic hook up code to implement generic IO page table framework.
>
> Signed-off-by: Tina Zhang
> ---
> drivers/iommu/intel/Kconfig | 1 +
> drivers/iommu/intel/iommu.c | 94 +
> drivers/iommu/i
On Tue, Nov 07, 2023 at 08:35:10AM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Thursday, November 2, 2023 8:48 PM
> >
> > On Thu, Oct 26, 2023 at 10:49:24AM +0800, Lu Baolu wrote:
> > > Hi folks,
> > >
> > > This series implem
On Wed, Nov 08, 2023 at 06:34:58PM +, André Draszik wrote:
> For me, it's working fine so far on master, and I've also done my own backport
> to 6.1 and am currently testing both. An official back port once finalised
> could be useful, though :-)
Great, I'll post a non-RFC version next we
On Thu, Nov 09, 2023 at 12:10:59AM +, Zhang, Tina wrote:
> > If this is going to happen can we also convert vt-d to actually use the io
> > page
> > table stuff directly and shuffle the code around so it is structured like
> > the rest of
> > the io page table implementations?
> Converting
On Sun, Nov 12, 2023 at 09:44:18AM -0800, Moritz Fischer wrote:
> On Fri, Nov 03, 2023 at 01:44:55PM -0300, Jason Gunthorpe wrote:
> > This call chain is using dev->iommu->fwspec to pass around the fwspec
> > between the three parts (acpi_iommu_configure(), a
On Wed, Mar 06, 2024 at 11:15:50PM +0800, Zhangfei Gao wrote:
>
> Double checked, this does not send flags, 0 is OK,
> Only domain_alloc_user in iommufd_hwpt_paging_alloc requires flags.
>
> In my debug, I need this patch, otherwise NULL pointer errors happen
> since SVA is not set.
This is some
On Mon, Jan 22, 2024 at 03:38:57PM +0800, Lu Baolu wrote:
> @@ -215,7 +202,23 @@ static struct iopf_group *iopf_group_alloc(struct
> iommu_fault_param *iopf_param,
> group = abort_group;
> }
>
> + cookie = iopf_pasid_cookie_get(iopf_param->dev, pasid);
> + if (!cookie
On Mon, Jan 22, 2024 at 03:38:58PM +0800, Lu Baolu wrote:
> +/**
> + * enum iommu_hwpt_pgfault_flags - flags for struct iommu_hwpt_pgfault
> + * @IOMMU_PGFAULT_FLAGS_PASID_VALID: The pasid field of the fault data is
> + * valid.
> + * @IOMMU_PGFAULT_FLAGS_LAST_PAG
On Mon, Jan 22, 2024 at 03:38:59PM +0800, Lu Baolu wrote:
> --- /dev/null
> +++ b/drivers/iommu/iommufd/fault.c
> @@ -0,0 +1,255 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/* Copyright (C) 2024 Intel Corporation
> + */
> +#define pr_fmt(fmt) "iommufd: " fmt
> +
> +#include
> +#include
> +#
On Mon, Jan 22, 2024 at 03:39:00PM +0800, Lu Baolu wrote:
> @@ -411,6 +414,8 @@ enum iommu_hwpt_data_type {
> * @__reserved: Must be 0
> * @data_type: One of enum iommu_hwpt_data_type
> * @data_len: Length of the type specific data
> + * @fault_id: The ID of IOMMUFD_FAULT object. Valid only
On Thu, Mar 14, 2024 at 03:41:23PM +0800, Baolu Lu wrote:
> The whole cookie mechanism aims to address two things:
>
> - Extend the domain lifetime until all pending page faults are
> resolved.
Like you answered, I think the flush is a simpler scheme..
> - Associate information about the iommu
On Thu, Mar 14, 2024 at 09:41:45PM +0800, Baolu Lu wrote:
> On 2024/3/9 1:50, Jason Gunthorpe wrote:
> > On Mon, Jan 22, 2024 at 03:38:58PM +0800, Lu Baolu wrote:
> >
> > > +/**
> > > + * enum iommu_hwpt_pgfault_flags - fl
On Fri, Mar 15, 2024 at 09:16:43AM +0800, Baolu Lu wrote:
> On 3/9/24 3:05 AM, Jason Gunthorpe wrote:
> > On Mon, Jan 22, 2024 at 03:39:00PM +0800, Lu Baolu wrote:
> >
> > > @@ -411,6 +414,8 @@ enum iommu_hwpt_data_type {
> > >* @__reserved: Must be 0
&
On Fri, Mar 15, 2024 at 09:46:06AM +0800, Baolu Lu wrote:
> On 3/9/24 2:03 AM, Jason Gunthorpe wrote:
> > On Mon, Jan 22, 2024 at 03:38:59PM +0800, Lu Baolu wrote:
> > > --- /dev/null
> > > +++ b/drivers/iommu/iommufd/fault.c
> > > @@ -0,0 +1,255 @@
> > &
On Wed, Mar 20, 2024 at 04:18:05PM +, Shameerali Kolothum Thodi wrote:
>
> What I have noticed is that,
> -read interface works fine and I can receive struct iommu_hwpt_pgfault data.
> -But once Guest handles the page faults and returns the page response,
> the write to fault fd never reache
> attachment relationship between a domain and a device or its PASID.
> A caller-specific data field can be used by the caller to store additional
> information beyond a domain pointer, depending on its specific use case.
>
> Co-developed-by: Jason Gunthorpe
> Signed-off-by: Jason G
On Wed, Apr 03, 2024 at 09:15:12AM +0800, Lu Baolu wrote:
> + /* A bond already exists, just take a reference. */
> + handle = iommu_attach_handle_get(group, iommu_mm->pasid);
> + if (handle) {
> + mutex_unlock(&iommu_sva_lock);
> + return handle;
> }
At
On Sat, Apr 06, 2024 at 12:34:14PM +0800, Baolu Lu wrote:
> On 4/3/24 7:58 PM, Jason Gunthorpe wrote:
> > On Wed, Apr 03, 2024 at 09:15:11AM +0800, Lu Baolu wrote:
> > > Currently, when attaching a domain to a device or its PASID, domain is
> > > stored within t
On Sat, Apr 06, 2024 at 02:09:34PM +0800, Baolu Lu wrote:
> On 4/3/24 7:59 PM, Jason Gunthorpe wrote:
> > On Wed, Apr 03, 2024 at 09:15:12AM +0800, Lu Baolu wrote:
> > > + /* A bond already exists, just take a reference. */
> > > + handle = iommu_attach_handle
On Tue, Apr 09, 2024 at 09:53:26AM +0800, Baolu Lu wrote:
> On 4/8/24 10:05 PM, Jason Gunthorpe wrote:
> > > void iommufd_fault_domain_detach_dev(struct iommufd_hw_pagetable *hwpt,
> > > struct iommufd_device *idev)
> > > {
>
On Tue, Apr 09, 2024 at 10:11:28AM +0800, Baolu Lu wrote:
> On 4/8/24 10:19 PM, Jason Gunthorpe wrote:
> > On Sat, Apr 06, 2024 at 02:09:34PM +0800, Baolu Lu wrote:
> > > On 4/3/24 7:59 PM, Jason Gunthorpe wrote:
> > > > On Wed, Apr 03, 2024 at 09
On Sun, Apr 28, 2024 at 06:22:28PM +0800, Baolu Lu wrote:
> /* A bond already exists, just take a reference. */
> handle = iommu_attach_handle_get(group, iommu_mm->pasid);
> if (handle) {
> if (handle->domain->iopf_handler != iommu_sva_iopf_handler)
> {
>
On Tue, Apr 30, 2024 at 10:57:04PM +0800, Lu Baolu wrote:
> @@ -206,8 +197,11 @@ void iommu_report_device_fault(struct device *dev,
> struct iopf_fault *evt)
> if (group == &abort_group)
> goto err_abort;
>
> - group->domain = get_domain_for_iopf(dev, fault);
> - if (
On Tue, Apr 30, 2024 at 10:57:03PM +0800, Lu Baolu wrote:
> diff --git a/drivers/iommu/iommu-priv.h b/drivers/iommu/iommu-priv.h
> index da1addaa1a31..ae65e0b85d69 100644
> --- a/drivers/iommu/iommu-priv.h
> +++ b/drivers/iommu/iommu-priv.h
> @@ -30,6 +30,13 @@ void iommu_device_unregister_bus(stru
On Tue, Apr 30, 2024 at 10:57:06PM +0800, Lu Baolu wrote:
> diff --git a/drivers/iommu/iommu-priv.h b/drivers/iommu/iommu-priv.h
> index ae65e0b85d69..1a0450a83bd0 100644
> --- a/drivers/iommu/iommu-priv.h
> +++ b/drivers/iommu/iommu-priv.h
> @@ -36,6 +36,10 @@ struct iommu_attach_handle {
>
On Tue, Apr 30, 2024 at 10:57:07PM +0800, Lu Baolu wrote:
> diff --git a/drivers/iommu/iommufd/fault.c b/drivers/iommu/iommufd/fault.c
> index 13125c0feecb..6357229bf3b4 100644
> --- a/drivers/iommu/iommufd/fault.c
> +++ b/drivers/iommu/iommufd/fault.c
> @@ -15,6 +15,124 @@
> #include "../iommu-pr
On Tue, Apr 30, 2024 at 10:57:06PM +0800, Lu Baolu wrote:
> +static ssize_t iommufd_fault_fops_read(struct file *filep, char __user *buf,
> +size_t count, loff_t *ppos)
> +{
> + size_t fault_size = sizeof(struct iommu_hwpt_pgfault);
> + struct iommufd_fau
On Tue, Apr 30, 2024 at 10:57:08PM +0800, Lu Baolu wrote:
> /**
> @@ -412,6 +415,9 @@ enum iommu_hwpt_data_type {
> * @data_type: One of enum iommu_hwpt_data_type
> * @data_len: Length of the type specific data
> * @data_uptr: User pointer to the type specific data
> + * @fault_id: The ID of
On Fri, May 10, 2024 at 11:14:20AM +0800, Baolu Lu wrote:
> On 5/8/24 8:04 AM, Jason Gunthorpe wrote:
> > On Tue, Apr 30, 2024 at 10:57:04PM +0800, Lu Baolu wrote:
> > > @@ -206,8 +197,11 @@ void iommu_report_device_fault(struct device *dev,
> > > struct iopf_fault *evt
On Fri, May 10, 2024 at 11:20:01AM +0800, Baolu Lu wrote:
> On 5/8/24 8:18 AM, Jason Gunthorpe wrote:
> > On Tue, Apr 30, 2024 at 10:57:07PM +0800, Lu Baolu wrote:
> > > diff --git a/drivers/iommu/iommufd/fault.c b/drivers/iommu/iommufd/fault.c
> > > index 13125c
On Fri, May 10, 2024 at 10:30:10PM +0800, Baolu Lu wrote:
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 35ae9a6f73d3..09b4e671dcee 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -173,6 +173,8 @@ struct iommu_domain_geometry {
>
> #define __IOMMU_DO
On Mon, May 20, 2024 at 04:59:18AM +, Tian, Kevin wrote:
> > From: Baolu Lu
> > Sent: Monday, May 20, 2024 11:33 AM
> >
> > On 5/20/24 11:24 AM, Tian, Kevin wrote:
> > >> From: Baolu Lu
> > >> Sent: Sunday, May 19, 2024 10:38 PM
> > >>
> > >> On 2024/5/15 15:43, Tian, Kevin wrote:
> > F
On Mon, May 20, 2024 at 09:24:09AM +0800, Baolu Lu wrote:
> On 5/15/24 4:37 PM, Tian, Kevin wrote:
> > > +static ssize_t iommufd_fault_fops_write(struct file *filep, const char
> > > __user
> > > *buf,
> > > + size_t count, loff_t *ppos)
> > > +{
> > > + size_t resp
On Mon, May 20, 2024 at 03:39:54AM +, Tian, Kevin wrote:
> > From: Baolu Lu
> > Sent: Monday, May 20, 2024 10:19 AM
> >
> > On 5/15/24 4:50 PM, Tian, Kevin wrote:
> > >> From: Lu Baolu
> > >> Sent: Tuesday, April 30, 2024 10:57 PM
> > >>
> > >> @@ -308,6 +314,19 @@ int iommufd_hwpt_alloc(str
On Fri, Jun 07, 2024 at 09:35:23AM +, Tian, Kevin wrote:
> > From: Baolu Lu
> > Sent: Thursday, June 6, 2024 2:07 PM
> >
> > On 6/5/24 4:15 PM, Tian, Kevin wrote:
> > >> From: Lu Baolu
> > >> Sent: Monday, May 27, 2024 12:05 PM
> > >>
> > >> -list_for_each_entry(handle, &mm->iommu_mm
On Thu, Jun 06, 2024 at 01:33:29PM +0800, Baolu Lu wrote:
> > But if certain path (other than iopf) in the iommu core needs to know
> > the exact domain pointer then this change breaks it.
>
> The iommu core should not fetch the domain pointer in paths other than
> attach/detach/replace. There is
On Fri, Jun 07, 2024 at 09:38:38AM +, Tian, Kevin wrote:
> > From: Baolu Lu
> > Sent: Thursday, June 6, 2024 2:28 PM
> >
> > On 6/5/24 4:28 PM, Tian, Kevin wrote:
> > >> From: Lu Baolu
> > >> Sent: Monday, May 27, 2024 12:05 PM
> > >>
> > >> +
> > >> +/**
> > >> + * struct iommu_hwpt_page_re
On Fri, Jun 07, 2024 at 09:17:28AM +, Tian, Kevin wrote:
> > From: Lu Baolu
> > Sent: Monday, May 27, 2024 12:05 PM
> >
> > +static ssize_t iommufd_fault_fops_read(struct file *filep, char __user
> > *buf,
> > + size_t count, loff_t *ppos)
> > +{
> > + size
On Sat, Jun 08, 2024 at 05:58:34PM +0800, Baolu Lu wrote:
> > > +static int iommufd_fault_fops_release(struct inode *inode, struct file
> > > *filep)
> > > +{
> > > + struct iommufd_fault *fault = filep->private_data;
> > > +
> > > + iommufd_ctx_put(fault->ictx);
> > > + refcount_dec(&fault->obj.
On Mon, May 27, 2024 at 12:05:10PM +0800, Lu Baolu wrote:
> @@ -206,20 +182,49 @@ void iommu_report_device_fault(struct device *dev,
> struct iopf_fault *evt)
> if (group == &abort_group)
> goto err_abort;
>
> - group->domain = get_domain_for_iopf(dev, fault);
> - if
On Mon, May 27, 2024 at 12:05:11PM +0800, Lu Baolu wrote:
> Unlike the SVA case where each PASID of a device has an SVA domain
> attached to it, the I/O page faults are handled by the fault handler
> of the SVA domain. The I/O page faults for a user page table might
> be handled by the domain attac
On Wed, Jun 12, 2024 at 10:19:46AM -0300, Jason Gunthorpe wrote:
> > > I prefer not to mess the definition of user API data and the kernel
> > > driver implementation. The kernel driver may change in the future, but
> > > the user API will remain stable for a long tim
On Mon, May 27, 2024 at 12:05:12PM +0800, Lu Baolu wrote:
> +/**
> + * struct iommu_hwpt_pgfault - iommu page fault data
> + * @size: sizeof(struct iommu_hwpt_pgfault)
> + * @flags: Combination of enum iommu_hwpt_pgfault_flags
> + * @dev_id: id of the originated device
> + * @pasid: Process Address
On Mon, May 27, 2024 at 12:05:07PM +0800, Lu Baolu wrote:
> This series implements the functionality of delivering IO page faults to
> user space through the IOMMUFD framework. One feasible use case is the
> nested translation. Nested translation is a hardware feature that
> supports two-stage tran
On Thu, Jun 13, 2024 at 12:23:17PM +0800, Baolu Lu wrote:
> struct iommu_ops {
> bool (*capable)(struct device *dev, enum iommu_cap);
> @@ -600,6 +598,7 @@ struct iommu_ops {
> struct iommu_domain *blocked_domain;
> struct iommu_domain *release_domain;
> struct iom
On Sun, Jun 16, 2024 at 02:11:49PM +0800, Lu Baolu wrote:
> +int iommu_replace_group_handle(struct iommu_group *group,
> +struct iommu_domain *new_domain,
> +struct iommu_attach_handle *handle)
> +{
> + struct iommu_domain *old_domain = g
On Sun, Jun 16, 2024 at 02:11:52PM +0800, Lu Baolu wrote:
> +static int iommufd_fault_iopf_enable(struct iommufd_device *idev)
> +{
> + struct device *dev = idev->dev;
> + int ret;
> +
> + /*
> + * Once we turn on PCI/PRI support for VF, the response failure code
> + * should
> attachment relationship between a domain and a device or its PASID.
>
> Co-developed-by: Jason Gunthorpe
> Signed-off-by: Jason Gunthorpe
> Signed-off-by: Lu Baolu
> ---
> include/linux/iommu.h | 18 +++---
> drivers/dma/idxd/init.c | 2 +-
> driv
On Sun, Jun 16, 2024 at 02:11:53PM +0800, Lu Baolu wrote:
> @@ -308,13 +315,29 @@ int iommufd_hwpt_alloc(struct iommufd_ucmd *ucmd)
> goto out_put_pt;
> }
>
> + if (cmd->flags & IOMMU_HWPT_FAULT_ID_VALID) {
> + struct iommufd_fault *fault;
> +
> +
On Sun, Jun 16, 2024 at 02:11:45PM +0800, Lu Baolu wrote:
> Lu Baolu (10):
> iommu: Introduce domain attachment handle
> iommu: Remove sva handle list
> iommu: Add attach handle to struct iopf_group
> iommu: Extend domain attach group with handle support
> iommufd: Add fault and response
> device
> + *RID.
> */
> struct iommu_ops {
> bool (*capable)(struct device *dev, enum iommu_cap);
> @@ -590,6 +594,7 @@ struct iommu_ops {
> struct iommu_domain *blocked_domain;
> struct iommu_domain *release_domain;
>
> 4 files changed, 54 insertions(+), 12 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
On Sun, Jun 16, 2024 at 02:11:51PM +0800, Lu Baolu wrote:
> @@ -6,6 +6,7 @@ iommufd-y := \
> ioas.o \
> main.o \
> pages.o \
> + fault.o \
> vfio_compat.o
Keep sorted
Reviewed-by: Jason Gunthorpe
Jason
On Fri, Jul 05, 2024 at 12:49:10AM +, Tian, Kevin wrote:
> > > > > > > +enum iommu_fault_type {
> > > > > > > + IOMMU_FAULT_TYPE_HWPT_IOPF,
> > > > > > > + IOMMU_FAULT_TYPE_VIOMMU_IRQ,
> > > > > > > +};
> > > > > > >
> > > > > > >struct iommu_fault_alloc {
> > > > > > >__u3
On Wed, Jul 03, 2024 at 04:06:15PM -0700, Nicolin Chen wrote:
> I learned that this hwpt->fault is exclusively for IOPF/PRI. And
> Jason suggested me to add a different one for VIOMMU. Yet, after
> taking a closer look, I found the fault object in this series is
> seemingly quite generic at the uA
On Mon, Jul 08, 2024 at 11:36:57AM -0700, Nicolin Chen wrote:
> Maybe something like this?
>
> struct iommu_viommu_event_arm_smmuv3 {
> u64 evt[4];
> };
>
> struct iommu_viommu_event_tegra241_cmdqv {
> u64 vcmdq_err_map[2];
> };
>
> enum iommu_event_type {
> IOMMM_HWPT_EVENT_TY
On Thu, Jul 04, 2024 at 03:18:57PM +0100, Will Deacon wrote:
> On Tue, 02 Jul 2024 14:34:34 +0800, Lu Baolu wrote:
> > This series implements the functionality of delivering IO page faults to
> > user space through the IOMMUFD framework. One feasible use case is the
> > nested translation. Nested t
On Mon, Jul 01, 2024 at 01:55:12PM +0800, Baolu Lu wrote:
> On 2024/6/29 5:17, Jason Gunthorpe wrote:
> > On Sun, Jun 16, 2024 at 02:11:52PM +0800, Lu Baolu wrote:
> > > +static int iommufd_fault_iopf_enable(struct iommufd_device *idev)
> > > +{
> > > + struct
On Tue, Jul 09, 2024 at 10:33:42AM -0700, Nicolin Chen wrote:
> > We are potentially talking about 5-10 physical smmus and 2-3 FDs per
> > physical? Does that scare anyone?
>
> I think we can share the same FD by adding a viommu_id somewhere
> to indicate what the data/event belongs to. Yet, it s
On Tue, Dec 21, 2021 at 03:58:48PM -0800, David E. Box wrote:
> Depends on "driver core: auxiliary bus: Add driver data helpers" patch [1].
> Applies the helpers to all auxiliary device drivers using
> dev_(get/set)_drvdata. Drivers were found using the following search:
>
> grep -lr "struct a
On Tue, Dec 21, 2021 at 04:48:17PM -0800, David E. Box wrote:
> On Tue, 2021-12-21 at 20:09 -0400, Jason Gunthorpe wrote:
> > On Tue, Dec 21, 2021 at 03:58:48PM -0800, David E. Box wrote:
> > > Depends on "driver core: auxiliary bus: Add driver data helpers" patch
On Mon, Oct 19, 2020 at 12:42:15PM -0700, Nick Desaulniers wrote:
> On Sat, Oct 17, 2020 at 10:43 PM Greg KH wrote:
> >
> > On Sat, Oct 17, 2020 at 09:09:28AM -0700, t...@redhat.com wrote:
> > > From: Tom Rix
> > >
> > > This is a upcoming change to clean up a new warning treewide.
> > > I am won
On Sun, Nov 01, 2020 at 10:15:37PM +0200, Leon Romanovsky wrote:
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index 6c218b47b9f1..5316e51e72d4 100644
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -1,18 +1,27 @@
> // SPDX-License-Identifier: GPL-2.0 OR
On Thu, Nov 05, 2020 at 08:33:02AM +0100, gregkh wrote:
> > Were there any additional changes you wanted to see happen? I'll go
> > give the final set another once over, but David has been diligently
> > fixing up all the declared major issues so I expect to find at most
> > minor incremental fixup
On Mon, Jun 27, 2022 at 08:27:37PM +0200, Daniel Borkmann wrote:
> On 6/27/22 8:04 PM, Gustavo A. R. Silva wrote:
> > There is a regular need in the kernel to provide a way to declare
> > having a dynamically sized set of trailing elements in a structure.
> > Kernel code should always use “flexible
On Tue, Jun 28, 2022 at 04:21:29AM +0200, Gustavo A. R. Silva wrote:
> > > Though maybe we could just switch off
> > > -Wgnu-variable-sized-type-not-at-end during configuration ?
> We need to think in a different strategy.
I think we will need to switch off the warning in userspace - this is
d
On Tue, Jun 28, 2022 at 10:54:58AM -0700, Kees Cook wrote:
> which must also be assuming it's a header. So probably better to just
> drop the driver_data field? I don't see anything using it (that I can
> find) besides as a sanity-check that the field exists and is at the end
> of the struct.
T
> > - dma_buf_unmap_attachment(umem_dmabuf->attach, umem_dmabuf->sgt,
> > -DMA_BIDIRECTIONAL);
> > + dma_buf_unmap_attachment_unlocked(umem_dmabuf->attach, umem_dmabuf->sgt,
> > +
On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> On Thu, 11 Apr 2019 14:01:54 +0300
> Yuval Shaia wrote:
>
> > Data center backends use more and more RDMA or RoCE devices and more and
> > more software runs in virtualized environment.
> > There is a need for a standard to enable R
On Thu, Apr 11, 2019 at 08:34:20PM +0300, Yuval Shaia wrote:
> On Thu, Apr 11, 2019 at 05:24:08PM +0000, Jason Gunthorpe wrote:
> > On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > > On Thu, 11 Apr 2019 14:01:54 +0300
> > > Yuval Shaia wrote:
> >
On Fri, Apr 19, 2019 at 01:16:06PM +0200, Hannes Reinecke wrote:
> On 4/15/19 12:35 PM, Yuval Shaia wrote:
> > On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > > On Thu, 11 Apr 2019 14:01:54 +0300
> > > Yuval Shaia wrote:
> > >
> > > > Data center backends use more and more RDMA
On Tue, Apr 30, 2019 at 08:13:54PM +0300, Yuval Shaia wrote:
> On Mon, Apr 22, 2019 at 01:45:27PM -0300, Jason Gunthorpe wrote:
> > On Fri, Apr 19, 2019 at 01:16:06PM +0200, Hannes Reinecke wrote:
> > > On 4/15/19 12:35 PM, Yuval Shaia wrote:
> > > > On Thu, Ap
On Wed, Jul 31, 2019 at 04:46:53AM -0400, Jason Wang wrote:
> We used to use RCU to synchronize MMU notifier with worker. This leads
> calling synchronize_rcu() in invalidate_range_start(). But on a busy
> system, there would be many factors that may slow down the
> synchronize_rcu() which makes it
On Wed, Jul 31, 2019 at 04:46:50AM -0400, Jason Wang wrote:
> The vhost_set_vring_num_addr() could be called in the middle of
> invalidate_range_start() and invalidate_range_end(). If we don't reset
> invalidate_count after the un-registering of MMU notifier, the
> invalidate_cont will run out of s
On Wed, Jul 31, 2019 at 09:29:28PM +0800, Jason Wang wrote:
>
> On 2019/7/31 8:41 PM, Jason Gunthorpe wrote:
> > On Wed, Jul 31, 2019 at 04:46:50AM -0400, Jason Wang wrote:
> > > The vhost_set_vring_num_addr() could be called in the middle of
> > > invalidate_range_st
On Wed, Jul 31, 2019 at 09:28:20PM +0800, Jason Wang wrote:
>
> On 2019/7/31 8:39 PM, Jason Gunthorpe wrote:
> > On Wed, Jul 31, 2019 at 04:46:53AM -0400, Jason Wang wrote:
> > > We used to use RCU to synchronize MMU notifier with worker. This leads
> > >
On Thu, Aug 01, 2019 at 01:02:18PM +0800, Jason Wang wrote:
>
> On 2019/8/1 3:30 AM, Jason Gunthorpe wrote:
> > On Wed, Jul 31, 2019 at 09:28:20PM +0800, Jason Wang wrote:
> > > On 2019/7/31 8:39 PM, Jason Gunthorpe wrote:
> > > > On Wed, Jul 31, 2019 at 0
On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
> > This must be a proper barrier, like a spinlock, mutex, or
> > synchronize_rcu.
>
>
> I start with synchronize_rcu() but both you and Michael raise some
> concern.
I've also idly wondered if calling synchronize_rcu() under the variou
On Fri, Aug 02, 2019 at 10:27:21AM -0400, Michael S. Tsirkin wrote:
> On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
> > On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
> > > > This must be a proper barrier, like a spinlock, mutex, or
&g
On Sat, Aug 03, 2019 at 05:36:13PM -0400, Michael S. Tsirkin wrote:
> On Fri, Aug 02, 2019 at 02:24:18PM -0300, Jason Gunthorpe wrote:
> > On Fri, Aug 02, 2019 at 10:27:21AM -0400, Michael S. Tsirkin wrote:
> > > On Fri, Aug 02, 2019 at 09:46:13AM -0300, Jason Gunthorpe wrote:
&
On Sun, Aug 04, 2019 at 04:07:17AM -0400, Michael S. Tsirkin wrote:
> > > > Also, why can't this just permanently GUP the pages? In fact, where
> > > > does it put_page them anyhow? Worrying that 7f466 adds a get_user page
> > > > but does not add a put_page??
> >
> > You didn't answer this.. Why
On Mon, Aug 05, 2019 at 12:20:45PM +0800, Jason Wang wrote:
>
> On 2019/8/2 8:46 PM, Jason Gunthorpe wrote:
> > On Fri, Aug 02, 2019 at 05:40:07PM +0800, Jason Wang wrote:
> > > > This must be a proper barrier, like a spinlock, mutex, or
> > > > synchr
On Tue, Aug 06, 2019 at 09:36:58AM -0400, Michael S. Tsirkin wrote:
> On Tue, Aug 06, 2019 at 08:53:17AM -0300, Jason Gunthorpe wrote:
> > On Sun, Aug 04, 2019 at 04:07:17AM -0400, Michael S. Tsirkin wrote:
> > > > > > Also, why can't this just permanent
On Wed, Aug 07, 2019 at 03:06:15AM -0400, Jason Wang wrote:
> We used to use RCU to synchronize MMU notifier with worker. This leads
> calling synchronize_rcu() in invalidate_range_start(). But on a busy
> system, there would be many factors that may slow down the
> synchronize_rcu() which makes it