[snip]

>> +    bool found = false;
>> +
>> +    pcidevs_lock();
>> +    list_for_each_entry ( vdev, &d->vdev_list, list )
>> +    {
>> +        if ( vdev->sbdf.sbdf == sbdf->sbdf )
>> +        {
>> +            /* Replace virtual SBDF with the physical one. */
>> +            *sbdf = vdev->pdev->sbdf;
>> +            found = true;
>> +            break;
>> +        }
>> +    }
>> +    pcidevs_unlock();
> As per the comments on the earlier patch, locking as well as placement
> may need reconsidering.
I have been thinking about the locking used here.
So, there are four places where we need to manipulate d->vdev_list:
1. XEN_DOMCTL_assign_device
2. XEN_DOMCTL_test_assign_device
3. XEN_DOMCTL_deassign_device
4. MMIO handlers
Am I missing any others?

The first three already use pcidevs_{lock|unlock}, and that seems fine
there: those paths are only taken when PCI devices are discovered by Dom0
and during guest domain creation. Since this is assumed not to happen
frequently, the global lock is acceptable for them.

What is more important is the fourth case: in order to redirect a
configuration space access from a virtual SBDF to the physical one, we
need to traverse d->vdev_list every time the guest accesses PCI
configuration space. This means that each such access takes the big,
global PCI lock...

That being said, I think we may want a dedicated per-domain lock for
d->vdev_list handling, e.g. d->vdev_lock (a rough sketch follows below).
At the same time, we may also consider that using pcidevs_{lock|unlock}
is acceptable even for guests, as it does not affect PCI memory space
accesses and only comes into play during device setup, when the
configuration space is accessed.
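
To make the per-domain lock option more concrete, here is a rough sketch.
All names (vdev_lock, vdev_translate_sbdf, struct vpci_dev) are made up
for illustration and are not part of the posted series; the idea is an
rwlock, so that the hot translation path only takes a per-domain read
lock while the (rare) DOMCTL paths take the write lock:

    /* In struct domain: */
    rwlock_t vdev_lock;                /* Protects d->vdev_list. */
    struct list_head vdev_list;

    /* Writer side, e.g. in the XEN_DOMCTL_assign_device path: */
    write_lock(&d->vdev_lock);
    list_add(&vdev->list, &d->vdev_list);
    write_unlock(&d->vdev_lock);

    /* Reader side, on every trapped config space access: */
    static bool vdev_translate_sbdf(struct domain *d, pci_sbdf_t *sbdf)
    {
        const struct vpci_dev *vdev;
        bool found = false;

        read_lock(&d->vdev_lock);
        list_for_each_entry ( vdev, &d->vdev_list, list )
        {
            if ( vdev->sbdf.sbdf == sbdf->sbdf )
            {
                /* Replace virtual SBDF with the physical one. */
                *sbdf = vdev->pdev->sbdf;
                found = true;
                break;
            }
        }
        read_unlock(&d->vdev_lock);

        return found;
    }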

I would love to hear your opinion on this.
> Jan
>
Thank you,
Oleksandr
