>>> On 19.10.17 at 13:26, <andrew.coop...@citrix.com> wrote:
> --- a/xen/drivers/passthrough/vtd/qinval.c
> +++ b/xen/drivers/passthrough/vtd/qinval.c
> @@ -147,7 +147,8 @@ static int __must_check queue_invalidate_wait(struct iommu *iommu,
>                                                u8 iflag, u8 sw, u8 fn,
>                                                bool_t flush_dev_iotlb)
>  {
> -    volatile u32 poll_slot = QINVAL_STAT_INIT;

You've lost the initializer.

> +    static DEFINE_PER_CPU(u32, poll_slot);

This should still be volatile u32.

> @@ -182,7 +183,7 @@ static int __must_check queue_invalidate_wait(struct iommu *iommu,
>          timeout = NOW() + MILLISECS(flush_dev_iotlb ?
>                                      iommu_dev_iotlb_timeout : VTD_QI_TIMEOUT);
>  
> -        while ( poll_slot != QINVAL_STAT_DONE )
> +        while ( *this_poll_slot != QINVAL_STAT_DONE )
>          {
>              if ( NOW() > timeout )
>              {

Okay, you do indeed improve the situation. But is that improvement
enough? I.e. what if the write for a first (timed out) request happens
while we're waiting for a subsequent one? Don't you need distinct
addresses for every possible slot? Or alternatively, isn't it high time
for the interrupt approach to be made to work (perhaps not by you, but
rather by the Intel folks)?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
