On Fri, Sep 22, 2023 at 02:13:18PM +0100, Robin Murphy wrote:
> On 22/09/2023 1:41 pm, Jason Gunthorpe wrote:
> > On Fri, Sep 22, 2023 at 08:57:19AM +0100, Jean-Philippe Brucker wrote:
> > > > > They're not strictly equivalent: this check works around a temporary
> > > > > issue with the IOMMU core, which calls map/unmap before the domain
> > > > > is finalized.
> > > > 
> > > > Where? The above points to iommu_create_device_direct_mappings() but
> > > > it doesn't because the pgsize_bitmap == 0:
> > > 
> > > __iommu_domain_alloc() sets pgsize_bitmap in this case:
> > > 
> > >          /*
> > >           * If not already set, assume all sizes by default; the driver
> > >           * may override this later
> > >           */
> > >          if (!domain->pgsize_bitmap)
> > >                  domain->pgsize_bitmap = bus->iommu_ops->pgsize_bitmap;
> > 
> > Drivers shouldn't do that.
> > 
> > The core code was fixed to retry mapping the reserved regions later, to
> > support these kinds of drivers.
> 
> This is still the "normal" code path, really; I think it's only AMD that
> started initialising the domain bitmap "early" and warranted making it
> conditional.

My main point was that iommu_create_device_direct_mappings() should
fail for unfinalized domains; setting pgsize_bitmap just to let it
succeed is not a nice hack, and it is no longer necessary.
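
To illustrate the kind of check I mean (just a sketch, not part of the
proposal below, which gets the same effect from the aperture test):

        /* Sketch only: an unfinalized domain has published neither page
         * sizes nor an aperture, so direct mappings can't be created
         * against it yet.
         */
        if (!domain->pgsize_bitmap || domain->geometry.aperture_end == 0)
                return -EINVAL;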

What do you think about something like the following to replace
iommu_create_device_direct_mappings(), which does enforce things
properly?


static int resv_cmp(void *priv, const struct list_head *llhs,
                    const struct list_head *lrhs)
{
        struct iommu_resv_region *lhs =
                list_entry(llhs, struct iommu_resv_region, list);
        struct iommu_resv_region *rhs =
                list_entry(lrhs, struct iommu_resv_region, list);

        if (lhs->start == rhs->start)
                return 0;
        if (lhs->start < rhs->start)
                return -1;
        return 1;
}

static int iommu_create_device_direct_mappings(struct iommu_domain *domain,
                                               struct device *dev)
{
        struct iommu_resv_region *entry;
        struct iommu_resv_region *tmp;
        struct list_head mappings;
        struct list_head direct;
        phys_addr_t cur = 0;
        int ret = 0;

        INIT_LIST_HEAD(&mappings);
        INIT_LIST_HEAD(&direct);

        iommu_get_resv_regions(dev, &mappings);

        list_for_each_entry_safe(entry, tmp, &mappings, list) {
                if (entry->type == IOMMU_RESV_DIRECT)
                        dev->iommu->require_direct = 1;

                /*
                 * Direct regions must lie entirely within the domain's
                 * aperture; a domain that isn't finalized yet has
                 * aperture_end == 0 and is rejected below.
                 */
                if ((domain->type & __IOMMU_DOMAIN_PAGING) &&
                    (entry->type == IOMMU_RESV_DIRECT ||
                     entry->type == IOMMU_RESV_DIRECT_RELAXABLE)) {
                        if (domain->geometry.aperture_start > entry->start ||
                            domain->geometry.aperture_end == 0 ||
                            (domain->geometry.aperture_end - 1) <
                                    (entry->start + entry->length - 1)) {
                                ret = -EINVAL;
                                goto out;
                        }
                        list_move(&entry->list, &direct);
                }
        }

        if (list_empty(&direct))
                goto out;

        /*
         * FW can have overlapping ranges, sort the list by start address
         * and map any duplicated IOVA only once.
         */
        list_sort(NULL, &direct, resv_cmp);
        list_for_each_entry(entry, &direct, list) {
                phys_addr_t start_pfn = entry->start / PAGE_SIZE;
                phys_addr_t last_pfn =
                        (entry->length - 1 + entry->start) / PAGE_SIZE;

                if (start_pfn < cur)
                        start_pfn = cur;

                if (start_pfn <= last_pfn) {
                        ret = iommu_map(domain, start_pfn * PAGE_SIZE,
                                        start_pfn * PAGE_SIZE,
                                        (last_pfn - start_pfn + 1) * PAGE_SIZE,
                                        entry->prot, GFP_KERNEL);
                        if (ret)
                                goto out;
                        cur = last_pfn + 1;
                }
        }

out:
        list_splice(&direct, &mappings);
        iommu_put_resv_regions(dev, &mappings);

        return ret;
}
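
For illustration, a rough sketch of a call site, under the assumption
that this runs from the attach path once the domain is finalized (the
function name below is a placeholder, not the actual core code;
list_sort() also needs <linux/list_sort.h>):

static int example_attach(struct iommu_domain *domain, struct device *dev)
{
        int ret;

        /* The domain is finalized here, so its aperture and page sizes
         * are valid and the direct mappings can be created safely.
         */
        ret = iommu_create_device_direct_mappings(domain, dev);
        if (ret)
                return ret;

        return iommu_attach_device(domain, dev);
}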