Hello,

Thanks for the patch! Next time could you please try to reply to the
previous questions before sending a new version:

https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg00257.html

On Wed, Jan 29, 2020 at 10:28:07AM +0100, Varad Gautam wrote:
> XEN_DOMCTL_destroydomain creates a continuation if domain_kill() returns -ERESTART.
> In that scenario, it is possible to receive multiple __pirq_guest_unbind
> calls for the same pirq from domain_kill, if the pirq has not yet been
> removed from the domain's pirq_tree, as:
>   domain_kill()
>     -> domain_relinquish_resources()
>       -> pci_release_devices()
>         -> pci_clean_dpci_irq()
>           -> pirq_guest_unbind()
>             -> __pirq_guest_unbind()
> 
> For a shared pirq (nr_guests > 1), the first call would zap the current
> domain from the pirq's guests[] list, but the action handler is never freed
> as there are other guests using this pirq. As a result, on the second call,
> __pirq_guest_unbind searches for the current domain which has been removed
> from the guests[] list, and hits a BUG_ON.
> 
> Make __pirq_guest_unbind safe to be called multiple times by letting Xen
> continue if a shared pirq has already been unbound from this guest. The
> pirq will be cleaned up from the domain's pirq_tree during the destruction
> in complete_domain_destroy anyway.

So AFAICT this is because pt_pirq_softirq_active() returns true in
pci_clean_dpci_irq() and hence the iteration is stopped and
hvm_domain_irq(d)->dpci is not set to NULL.

Would it be possible to clean the already processed IRQs from the
domain pirq_tree?

pci_clean_dpci_irq() already seems to free part of this structure, and
IMO it would be nicer if we didn't leave cleaned-up state behind on
-ERESTART.
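A minimal sketch of that alternative (hypothetical names, with a bitmask
standing in for pt_pirq_softirq_active() and an array standing in for the
radix pirq_tree): each pirq is dropped from the tree as soon as it is
cleaned, so an -ERESTART continuation never revisits an already-processed
entry:

```c
#include <assert.h>

#define NR_PIRQS 4

/* Hypothetical model of the domain's pirq bookkeeping. */
struct domain_model {
    int pirq_present[NR_PIRQS];  /* stands in for the pirq_tree */
};

/* Clean each present pirq; active_mask marks pirqs whose softirq is
 * still pending (forcing an -ERESTART, modeled here as -1). Already
 * cleaned pirqs are skipped on the continuation. */
static int clean_dpci_irqs(struct domain_model *d, int active_mask)
{
    for (int pirq = 0; pirq < NR_PIRQS; pirq++) {
        if (!d->pirq_present[pirq])
            continue;                 /* cleaned on a prior pass */
        if (active_mask & (1 << pirq))
            return -1;                /* -ERESTART: retry later */
        /* unbind would happen here ... then drop it from the tree
         * immediately, rather than leaving it behind. */
        d->pirq_present[pirq] = 0;
    }
    return 0;
}
```

On the first pass a busy pirq forces the restart, but the pirqs cleaned
before it are already gone from the tree, so the retry only handles the
remaining one.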

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel