On 5/6/20 12:14 PM, Nataliya Korovkina wrote:
> On Wed, May 6, 2020 at 9:43 AM Boris Ostrovsky
> <boris.ostrov...@oracle.com> wrote:
>>
>> On 5/6/20 9:08 AM, Nataliya Korovkina wrote:
>>> Hello,
>>>
>>> What I found out: rpi_firmware_property_list() allocates memory from
>>> the dma_atomic_pool, which is mapped in the VMALLOC region, so
>>> virt_to_page() is not valid in this case.
>>
>> So then it seems it didn't go through xen_swiotlb_alloc_coherent(). In
>> which case it has no business calling xen_swiotlb_free_coherent().
>>
>> -boris
>>
> It does go through it.
> dma_alloc_coherent() indirectly calls xen_swiotlb_alloc_coherent(),
> then xen_alloc_coherent_pages() eventually calls arch_dma_alloc() in
> remap.c, which successfully allocates pages from the atomic pool.


Yes, I was looking at x86's implementation of xen_alloc_coherent_pages().
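
So on Arm the vaddr we get back can sit in the vmalloc range, and
recovering the struct page needs an address-type check first. Roughly
along these lines (an untested sketch; the helper and its name are
made up here for illustration, it is not in the tree):

#include <linux/mm.h>           /* is_vmalloc_addr(), virt_to_page() */
#include <linux/vmalloc.h>      /* vmalloc_to_page() */

/*
 * Illustration only: pick the right virt-to-page conversion for a
 * coherent-DMA vaddr.  Linear-map addresses can use virt_to_page();
 * an address handed out from the vmalloc-mapped atomic pool has to
 * go through vmalloc_to_page(), which walks the page tables (and
 * returns NULL if nothing is mapped there).
 */
static struct page *coherent_vaddr_to_page(void *vaddr)
{
        if (is_vmalloc_addr(vaddr))
                return vmalloc_to_page(vaddr);
        return virt_to_page(vaddr);
}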


>
> The patch Julien offered for domain_build.c moved the Dom0 banks into
> the first GB of RAM.
> So it masked the previous symptom (a crash during allocation), because
> now we avoid the path where we mark a page "XenMapped".
>
> But the symptom still remains in xen_swiotlb_free_coherent(), because
> "TestPage..." is called unconditionally, and virt_to_page() is not
> applicable to such allocations.


Perhaps we just need to make sure we are using the right virt-to-page
method. Something like


diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b6d2776..f224e69 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
        int order = get_order(size);
        phys_addr_t phys;
        u64 dma_mask = DMA_BIT_MASK(32);
+       struct page *pg;
 
        if (hwdev && hwdev->coherent_dma_mask)
                dma_mask = hwdev->coherent_dma_mask;
@@ -346,9 +347,12 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
        /* Convert the size to actually allocated. */
        size = 1UL << (order + XEN_PAGE_SHIFT);
 
+       pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) : virt_to_page(vaddr);
+       BUG_ON(!pg);
+
        if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
                     range_straddles_page_boundary(phys, size)) &&
-           TestClearPageXenRemapped(virt_to_page(vaddr)))
+           TestClearPageXenRemapped(pg))
                xen_destroy_contiguous_region(phys, order);
 
        xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);


(I have not tested this at all)
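
One more note on the sketch above: is_vmalloc_addr() only checks that
the address falls in the vmalloc range, while vmalloc_to_page() actually
walks the page tables and returns NULL if nothing is mapped there, which
is what the BUG_ON() is meant to catch. Whether BUG_ON() is the right
reaction, or this should just WARN and bail out, is debatable.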

