> -----Original Message-----
> From: Jerin Jacob Kollanukkaran <jer...@marvell.com>
 [...]
> > > > A rough patch for the approach mentioned earlier. It is only for
> > discussion.
> > > > http://mails.dpdk.org/archives/dev/2019-July/138113.html
> > > >
> > > > To try this out, first revert the following then apply.
> > > > commit 89aac60e0be9 ("vfio: fix interrupts race condition")
> > >
> > > Yes. This patch has to be reverted. It changes the existing
> > > interrupt behavior and does not address the MSIX case either.
> > >
> > > I think the clean fix would be to introduce rte_intr_mask() and
> > > rte_intr_unmask(), abstracting the INTX and MSIX differences, and let
> > > the qede driver call them as needed.
> > >
> > > Thoughts?
> >
> > Hi,
> 
> Hi Hyong,
> 
> >
> > You are proposing these?
> > - Add rte_intr_mask_intx, rte_intr_unmask_intx.
> >   No APIs for masking MSI/MSI-X as vfio-pci does not support that.
> > - Modify PMD irq handlers to use rte_intr_unmask_intx as necessary.
> 
> No, introduce rte_intr_mask() and rte_intr_unmask().
> For MSIX + Linux VFIO, that API can return -ENOTSUP, as Linux VFIO + MSIX
> does not support it.
> Another platform/EAL may support it.
> 

These generic names would invite people to use the API, only to see it
fail, since it only works with INTx..
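
(For concreteness, here is a rough sketch of what the proposed pair and a
possible Linux/VFIO backend could look like. The backend below is only my
guess at the plumbing, not an actual patch: the INTx unmask would boil down
to a VFIO_DEVICE_SET_IRQS ioctl, and MSI-X would simply report -ENOTSUP.)

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <rte_common.h>
#include <rte_interrupts.h>

/* Proposed EAL prototypes (per Jerin's suggestion). */
int rte_intr_mask(const struct rte_intr_handle *intr_handle);
int rte_intr_unmask(const struct rte_intr_handle *intr_handle);

/* Possible Linux/VFIO backend for the INTx unmask. */
static int
vfio_unmask_intx(const struct rte_intr_handle *intr_handle)
{
        struct vfio_irq_set irq_set = {
                .argsz = sizeof(irq_set),
                .flags = VFIO_IRQ_SET_DATA_NONE | VFIO_IRQ_SET_ACTION_UNMASK,
                .index = VFIO_PCI_INTX_IRQ_INDEX,
                .start = 0,
                .count = 1,
        };

        if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set))
                return -errno;
        return 0;
}

/* MSI-X backend: vfio-pci exposes no mask/unmask to user space. */
static int
vfio_unmask_msix(const struct rte_intr_handle *intr_handle)
{
        RTE_SET_USED(intr_handle);
        return -ENOTSUP;
}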

> Mask and unmask operations are known to all IRQ controllers.
> So, IMO, as far as abstraction is concerned, it is a good fit.
> 
> > That might be too intrusive. And too much work for the sake of INTx..
> > Anyone really using/needing INTx these days? :-)
> 
> Yup. The mask needs to be called only for qede INTx. It looks like qede
> has separate handlers for MSIX and INTX, so the mask can go into the qede
> INTx handler.
> 
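
(If such an API goes in, the qede side could then be as small as re-arming
the line at the end of its INTx handler. The handler below is only an
illustrative sketch that relies on the proposed rte_intr_unmask(), not the
exact qede code.)

#include <rte_ethdev.h>
#include <rte_log.h>

static void
qede_intx_handler_sketch(void *param)
{
        struct rte_eth_dev *eth_dev = param;

        /* ... existing INTx status check / slowpath handling ... */

        /* Re-arm the line that vfio-pci auto-masked when it fired.
         * For MSI/MSI-X on Linux VFIO this would just return -ENOTSUP.
         */
        if (rte_intr_unmask(eth_dev->intr_handle) < 0)
                RTE_LOG(ERR, PMD, "INTx unmask failed\n");
}
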
> >
> > The following drivers call rte_intr_enable from their irq handlers. So with
> > explicit rte_intr_unmask_intx, all these would need to do "if using intx,
> > unmask"?
> >
> > atlantic, avp, axgbe, bnx2x, e1000, fm10k, ice, ixgbe, nfp, qede, sfc,
> > vmxnet3
> 
> No changes needed in these PMDs.
> 

Why is that?

These drivers potentially have the same "lost" interrupt issue
mentioned in the original Red Hat BZ (qede + MSI). I *think* this
observation led David to address them all through vfio changes, rather
than fixing qede alone.

You want to introduce the unmask API and use it only for qede in this
cycle, and ask the respective maintainers to fix their drivers in 19.11?

> > And nfp seems to rely on rte_intr_enable to re-install the irq handler,
> > in order to unmask a vector in the MSI-X table?
> >
> >         if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
> >                 /* If MSI-X auto-masking is used, clear the entry */
> >                 rte_wmb();
> >                 rte_intr_enable(&pci_dev->intr_handle);
> >
> > With David's patch and mine, this handler would have to call
> > rte_intr_disable() first and then enable, if such unmasking is really
> > necessary..
> >
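
(Under that scheme, the MSIXAUTO branch would presumably end up looking
something like the following; a sketch only, and whether the explicit
re-install is still needed at all is the real question.)

        if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
                /* If MSI-X auto-masking is used, clear the entry */
                rte_wmb();
                /* With "enable" reduced to installing the handler/eventfd,
                 * re-installing would need a disable first.
                 */
                rte_intr_disable(&pci_dev->intr_handle);
                rte_intr_enable(&pci_dev->intr_handle);
        }
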
> > As for the semantics of rte_intr_enable/disable, I am ok as is.
> > - "enable": put things in a state where NIC can send an interrupt, and
> >   PMD/app gets a callback.
> >   Whether this involves unmasking for INTx is hidden.
> > - "disable": put things in a state where NIC cannot send an interrupt.
> 
> It looks OK to me. My only thought was: since mask and unmask is a common
> IRQ controller operation, we may not need to add a lot of common code
> (introducing a state) to hide the INTx unmask. Moreover, as you said,
> there may be only a handful of devices that use INTX.
> 
> IMO, a mask and unmask API is a good fit as an EAL abstraction.
> But whether to use a separate API or to hide it inside the EAL to solve
> this problem is a good question.
> Maybe more thoughts from other guys would be good.
> 
> We will try to send a version with the mask/unmask API to see the changes
> required.
> 
> >
> > Regardless of vfio changes, we should probably remove rte_intr_enable
> > from qede_interrupt_handler (the MSI/MSI-X interrupt handler), to make
> > usage/intention clear..
> 
> Yes. This change is required anyway.
> 

That change fixes the immediate problem (the Red Hat BZ) that started all
this discussion, and allows us to kick the can down the road on the
potential issues in other PMDs :-) Not suggesting we do, but it becomes
an option..
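
(For the record, the change being talked about boils down to the MSI/MSI-X
handler no longer re-enabling the interrupt on every invocation; a
hand-written sketch, not the exact driver code.)

#include <rte_common.h>
#include <rte_ethdev.h>

static void
qede_interrupt_handler(void *param)
{
        struct rte_eth_dev *eth_dev = param;

        /* ... existing slowpath / status-block processing ... */

        /*
         * Note: no rte_intr_enable() here any more.  vfio-pci never masks
         * MSI/MSI-X, so the per-interrupt "re-enable" (which re-installs
         * the eventfd and can race with an incoming interrupt) is
         * unnecessary.
         */
        RTE_SET_USED(eth_dev);
}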

Thanks.
-Hyong
