> From: Nicolin Chen <nicol...@nvidia.com>
> Sent: Saturday, January 25, 2025 8:31 AM
> +
> +/*
> + * An iommufd_veventq object represents an interface to deliver vIOMMU events
> + * to the user space. It is created/destroyed by the user space and associated
> + * with vIOMMU object(s) during the allocations.

s/object(s)/object/, given the eventq cannot be shared between vIOMMUs.

> +static inline void iommufd_vevent_handler(struct iommufd_veventq *veventq,
> +                                       struct iommufd_vevent *vevent)
> +{
> +     struct iommufd_eventq *eventq = &veventq->common;
> +
> +     /*
> +      * Remove the overflow node and add the new node at the same time. Note
> +      * it is possible that vevent == &veventq->overflow for sequence update
> +      */
> +     spin_lock(&eventq->lock);
> +     if (veventq->overflow.on_list) {
> +             list_del(&veventq->overflow.node);
> +             veventq->overflow.on_list = false;
> +     }

We can save one field 'on_list' in every entry by:

        if (list_is_last(&veventq->overflow.node, &eventq->deliver))
                list_del(&veventq->overflow.node);

> +
> +/**
> + * struct iommufd_vevent_header - Virtual Event Header for a vEVENTQ Status
> + * @flags: Combination of enum iommu_veventq_flag
> + * @sequence: The sequence index of a vEVENT in the vEVENTQ, with a range of
> + *            [0, INT_MAX] where the following index of INT_MAX is 0
> + * @__reserved: Must be 0
> + *
> + * Each iommufd_vevent_header reports a sequence index of the following vEVENT:
> + *  ---------------------------------------------------------------------------
> + * || header0 {sequence=0} | data0 | header1 {sequence=1} | data1 |...| dataN ||
> + *  ---------------------------------------------------------------------------
> + * And this sequence index is expected to be monotonic to the sequence index of
> + * the previous vEVENT. If two adjacent sequence indexes have a delta larger
> + * than 1, it indicates that an overflow occurred to the vEVENTQ and that
> + * delta - 1 vEVENTs were lost due to the overflow (e.g. two lost vEVENTs):
> + *  ---------------------------------------------------------------------------
> + * || ... | header3 {sequence=3} | data3 | header6 {sequence=6} | data6 | ... ||
> + *  ---------------------------------------------------------------------------
> + * If an overflow occurred to the tail of the vEVENTQ and there is no following
> + * vEVENT providing the next sequence index, a special overflow header would be
> + * added to the tail of the vEVENTQ, where there would be no more type-specific
> + * data following the vEVENTQ:
> + *  ---------------------------------------------------------------------------
> + * ||...| header3 {sequence=3} | data3 | header4 {flags=OVERFLOW, sequence=4} ||
> + *  ---------------------------------------------------------------------------
> + */
> +struct iommufd_vevent_header {
> +     __aligned_u64 flags;
> +     __u32 sequence;
> +     __u32 __reserved;
> +};

Is there a reason that flags must be u64? At a glance, all flags fields
(except the one in iommu_hwpt_vtd_s1) in iommufd uAPIs are u32, which
would cut the size of the header in half...

> +void iommufd_veventq_abort(struct iommufd_object *obj)
> +{
> +     struct iommufd_eventq *eventq =
> +             container_of(obj, struct iommufd_eventq, obj);
> +     struct iommufd_veventq *veventq = eventq_to_veventq(eventq);
> +     struct iommufd_viommu *viommu = veventq->viommu;
> +     struct iommufd_vevent *cur, *next;
> +
> +     lockdep_assert_held_write(&viommu->veventqs_rwsem);
> +
> +     list_for_each_entry_safe(cur, next, &eventq->deliver, node) {
> +             list_del(&cur->node);
> +             kfree(cur);

kfree() doesn't apply to the overflow node.

otherwise it looks good to me:

Reviewed-by: Kevin Tian <kevin.t...@intel.com>
