Hello Jan,

Jan Beulich <jbeul...@suse.com> writes:

> On 21.02.2023 00:13, Volodymyr Babchuk wrote:
>> Stefano Stabellini <sstabell...@kernel.org> writes:
>>> On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
>>>> As pci devices are refcounted now and all list that store them are
>>>> protected by separate locks, we can safely drop global pcidevs_lock.
>>>>
>>>> Signed-off-by: Volodymyr Babchuk <volodymyr_babc...@epam.com>
>>>
>>> Up until this patch, this patch series introduces:
>>> - d->pdevs_lock to protect d->pdev_list
>>> - pci_seg->alldevs_lock to protect pci_seg->alldevs_list
>>> - iommu->ats_list_lock to protect iommu->ats_devices
>>> - pdev refcounting to detect a pdev in-use and when to free it
>>> - pdev->lock to protect pdev->msi_list
>>>
>>> They cover a lot of ground.  Are they collectively covering everything
>>> pcidevs_lock() was protecting?
>> 
>> Well, that is the question. Those patches are at the RFC stage because
>> I can't fully answer your question. I tried my best to introduce proper
>> locking, but apparently missed a couple of places, like
>> 
>>> deassign_device is not protected by pcidevs_lock anymore.
>>> deassign_device accesses a number of pdev fields, including quarantine,
>>> phantom_stride and fault.count.
>>>
>>> deassign_device could run at the same time as assign_device who sets
>>> quarantine and other fields.
>>>
>> 
>> I hope this is all, but the problem is that the PCI subsystem is old,
>> large and complex. For example, as I wrote earlier, there are places
>> that are protected with pcidevs_lock() but do nothing with PCI. I just
>> don't know what to do with such places. I hope that the x86 maintainers
>> will review my changes and give feedback on missed spots.
>
> At the risk of it sounding unfair, at least initially: While review may
> spot issues, you will want to keep in mind that none of the people who
> originally wrote that code are around anymore. And even if they were,
> it would be uncertain - just like for the x86 maintainers - that they
> would recall (if they were aware at some time in the first place) all
> the corner cases. Therefore I'm afraid that proving correctness and
> safety of the proposed transformations can only be done by properly
> auditing all involved code paths. Yet that's something that imo wants
> to already have been done by the time patches are submitted for review.
> Reviewers would then "merely" (hard enough perhaps) check the results
> of that audit.
>
> I might guess that this locking situation is one of the reasons why
> Andrew in particular thinks (afaik) that the IOMMU code we have would
> better be re-written almost from scratch. I assume it's clear to him
> (it certainly is to me) that this is something that could only be
> expected to happen in an ideal world: I see no-one taking on such an
> exercise. We already have too little bandwidth.

The more I dig into the IOMMU code, the more I agree with Andrew. I can't
see how the current PCI locking can be untangled in the IOMMU code. There
are just too many moving parts. I tried to play with static code analysis
tools, but I haven't found anything that can reliably analyze locking in
Xen. I even tried to write my own tool, tailored specifically to PCI
locking analysis. While it works on some synthetic tests, supporting the
actual Xen code would take too much effort.
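
To give an idea of what that tool attempts: its core is lock-order
validation, i.e. recording which lock gets taken while which other lock is
held, and flagging any acquisition that would close an ordering cycle. A toy
sketch of that check (all names and the data layout here are mine for
illustration, not the actual tool):

```c
/* Toy lock-order checker: record "lock b taken while lock a is held"
 * edges and reject any acquisition that would close a cycle in the
 * ordering graph. Illustrative only. */

#define MAX_LOCKS 8

static int lock_edge[MAX_LOCKS][MAX_LOCKS]; /* [a][b]: b taken under a */

static void record_order(int a, int b)
{
    lock_edge[a][b] = 1;
}

/* Depth-first reachability over the recorded ordering edges. */
static int reachable(int from, int to, int seen[MAX_LOCKS])
{
    if (from == to)
        return 1;
    if (seen[from])
        return 0;
    seen[from] = 1;
    for (int n = 0; n < MAX_LOCKS; n++)
        if (lock_edge[from][n] && reachable(n, to, seen))
            return 1;
    return 0;
}

/* Taking 'want' while holding 'held' is unsafe if the recorded order
 * already requires 'want' to be taken before 'held'. */
static int acquire_would_deadlock(int held, int want)
{
    int seen[MAX_LOCKS] = { 0 };

    return reachable(want, held, seen);
}
```

With the global pcidevs_lock gone, every pair like d->pdevs_lock vs.
pdev->lock needs a consistent order across all paths, which is exactly
where an automated check like this would earn its keep.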

I am not able to rework the x86 IOMMU code, so I am inclined to drop this
patch series altogether. My current plan is to take the minimal refcounting
from this series to satisfy your comments on "vpci: use pcidevs locking to
protect MMIO handlers".
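
For clarity, by "minimal refcounting" I mean the usual get/put pattern on
pdev, roughly as sketched below (pdev_get/pdev_put and the struct layout
are illustrative only, not the actual Xen code):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative pdev with an embedded refcount; not the real Xen struct. */
struct pdev {
    atomic_int refcnt;
    /* ... sbdf, msi_list, domain backpointer, ... */
};

/* Creator holds the first reference. */
static struct pdev *pdev_alloc(void)
{
    struct pdev *p = calloc(1, sizeof(*p));

    if (p)
        atomic_init(&p->refcnt, 1);
    return p;
}

static struct pdev *pdev_get(struct pdev *p)
{
    if (p)
        atomic_fetch_add(&p->refcnt, 1);
    return p;
}

/* Returns 1 when the last reference was dropped and the device freed. */
static int pdev_put(struct pdev *p)
{
    if (!p)
        return 0;
    if (atomic_fetch_sub(&p->refcnt, 1) == 1) {
        free(p);
        return 1;
    }
    return 0;
}
```

The point of "minimal" is only that lookups return with a reference taken,
so a pdev cannot disappear underneath a caller; all the finer-grained
locking questions stay out of scope.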

-- 
WBR, Volodymyr
