On 07.03.2022 17:37, Roger Pau Monne wrote:
> Map the PBA in order to access it from the MSI-X read and write
> handlers. Note that previously the handlers would pass the physical
> host address into the {read,write}{l,q} handlers, which is wrong as
> those expect a linear address.
> 
> Map the PBA using ioremap when the first access is performed. Note
> that 32bit arches might want to abstract the call to ioremap into a
> vPCI arch handler, so they can use a fixmap range to map the PBA.
> 
> Reported-by: Jan Beulich <jbeul...@suse.com>
> Signed-off-by: Roger Pau Monné <roger....@citrix.com>

Reviewed-by: Jan Beulich <jbeul...@suse.com>

> Cc: Alex Olson <this.is.a0l...@gmail.com>

I'll wait a little before committing, in the hope that Alex will
re-provide a Tested-by.

> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -182,6 +182,38 @@ static struct vpci_msix_entry *get_entry(struct vpci_msix *msix,
>      return &msix->entries[(addr - start) / PCI_MSIX_ENTRY_SIZE];
>  }
>  
> +static void __iomem *get_pba(struct vpci *vpci)
> +{
> +    struct vpci_msix *msix = vpci->msix;
> +    /*
> +     * PBA will only be unmapped when the device is deassigned, so access it
> +     * without holding the vpci lock.
> +     */
> +    void __iomem *pba = read_atomic(&msix->pba);
> +
> +    if ( likely(pba) )
> +        return pba;
> +
> +    pba = ioremap(vmsix_table_addr(vpci, VPCI_MSIX_PBA),
> +                  vmsix_table_size(vpci, VPCI_MSIX_PBA));
> +    if ( !pba )
> +        return read_atomic(&msix->pba);
> +
> +    spin_lock(&vpci->lock);
> +    if ( !msix->pba )
> +    {
> +        write_atomic(&msix->pba, pba);
> +        spin_unlock(&vpci->lock);
> +    }
> +    else
> +    {
> +        spin_unlock(&vpci->lock);
> +        iounmap(pba);
> +    }

TBH I had been hoping for just a single spin_unlock(), but you're
the maintainer of this code ...

Jan


Reply via email to