On Tue, Feb 18, 2025 at 05:13:47AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicol...@nvidia.com>
> > Sent: Saturday, January 25, 2025 8:31 AM
> > +
> > +/*
> > + * An iommufd_veventq object represents an interface to deliver vIOMMU events to
> > + * the user space. It is created/destroyed by the user space and associated with
> > + * vIOMMU object(s) during the allocations.
> 
> s/object(s)/object/, given the eventq cannot be shared between vIOMMUs.

Done. Adding an "a" too.

> > +static inline void iommufd_vevent_handler(struct iommufd_veventq *veventq,
> > +                                          struct iommufd_vevent *vevent)
> > +{
> > +   struct iommufd_eventq *eventq = &veventq->common;
> > +
> > +   /*
> > +    * Remove the overflow node and add the new node at the same time. Note
> > +    * it is possible that vevent == &veventq->overflow for sequence update
> > +    */
> > +   spin_lock(&eventq->lock);
> > +   if (veventq->overflow.on_list) {
> > +           list_del(&veventq->overflow.node);
> > +           veventq->overflow.on_list = false;
> > +   }
> 
> We can save one field 'on_list' in every entry by:
> 
>       if (list_is_last(&veventq->overflow.node, &eventq->deliver))
>               list_del(&veventq->overflow.node);

Hmm. Given that the overflow node, if it is on the list, should always be
the last one... yes!

> > +struct iommufd_vevent_header {
> > +   __aligned_u64 flags;
> > +   __u32 sequence;
> > +   __u32 __reserved;
> > +};
> 
> Is there a reason that flags must be u64? At a glance all flags fields
> (except the one in iommu_hwpt_vtd_s1) in iommufd uAPIs are u32
> which can cut the size of the header by half...

No particular reason. I just thought a 64-bit field would make the
uAPI more expandable. It's true that u32 would be cleaner; I will
make the change.

> 
> > +void iommufd_veventq_abort(struct iommufd_object *obj)
> > +{
> > +   struct iommufd_eventq *eventq =
> > +           container_of(obj, struct iommufd_eventq, obj);
> > +   struct iommufd_veventq *veventq = eventq_to_veventq(eventq);
> > +   struct iommufd_viommu *viommu = veventq->viommu;
> > +   struct iommufd_vevent *cur, *next;
> > +
> > +   lockdep_assert_held_write(&viommu->veventqs_rwsem);
> > +
> > +   list_for_each_entry_safe(cur, next, &eventq->deliver, node) {
> > +           list_del(&cur->node);
> > +           kfree(cur);
> 
> kfree() doesn't apply to the overflow node.

Oh right, that was missed. Will fix.

> otherwise it looks good to me:
> 
> Reviewed-by: Kevin Tian <kevin.t...@intel.com>

Thanks!
Nicolin
