Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
On Mon, Jan 21, 2019 at 04:51:57AM +, Peng Fan wrote:
> On i.MX8QM, M4_1 is communicating with DomU using rpmsg with a fixed,
> predefined address as the dma mem buffer.
>
> Without this patch, the flow is:
> vring_map_one_sg -> vring_use_dma_api
>    -> dma_map_page
>    -> __swiotlb_map_page
>    -> swiotlb_map_page
>    -> __dma_map_area(phys_to_virt(dma_to_phys(dev, dev_addr)), size, dir);
> However, we are using a per-device dma area for rpmsg, and phys_to_virt
> cannot return a correct virtual address for an address in the vmalloc
> area. Then the kernel panics.

And that is the right thing to do.  You must not call dma_map_* on
memory that was allocated from dma_alloc_*.

We actually have another thread which appears to be for this same issue.
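To make that rule concrete, a minimal sketch of the wrong and right
pattern (illustrative names only, not the actual rpmsg/remoteproc code):

#include <linux/dma-mapping.h>

static int example_buffer_use(struct device *dev, size_t len)
{
	dma_addr_t dma_handle;
	void *cpu_addr;

	/* The buffer comes from a per-device coherent pool (dev->dma_mem),
	 * so cpu_addr may well be a vmalloc/remapped address. */
	cpu_addr = dma_alloc_coherent(dev, len, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* WRONG: mapping it again pushes a non-linear address through
	 * phys_to_virt()/virt_to_page() and can panic:
	 *
	 *	dma_map_single(dev, cpu_addr, len, DMA_TO_DEVICE);
	 *
	 * RIGHT: the buffer is already device-addressable; hand
	 * dma_handle to the hardware directly. */

	dma_free_coherent(dev, len, cpu_addr, dma_handle);
	return 0;
}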
Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
On Tue, Jan 22, 2019 at 11:59:31AM -0800, Stefano Stabellini wrote:
> >  	if (!virtio_has_iommu_quirk(vdev))
> >  		return true;
> >
> > @@ -260,7 +262,7 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
> >  	 * the DMA API if we're a Xen guest, which at least allows
> >  	 * all of the sensible Xen configurations to work correctly.
> >  	 */
> > -	if (xen_domain())
> > +	if (xen_domain() && !dma_dev->dma_mem)
> >  		return true;
> >
> >  	return false;
>
> I can see you spotted a real issue, but this is not the right fix. We
> just need something a bit more flexible than xen_domain(): there are
> many kinds of Xen domains on different architectures; we basically want
> to enable this (return true from vring_use_dma_api) only when the Xen
> swiotlb is meant to be used. Does the appended patch fix the issue you
> have?

The problem is generally the other way around: if dma_dev->dma_mem is
set, the device description in the device tree explicitly requires
using this memory, so we must _always_ use the DMA API.

The problem is just that the rproc driver abuses the DMA API in
horrible ways.
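For reference, the unpatched helper that the hunk above modifies looks
roughly like this (reconstructed from the quoted context, comment
abbreviated, so it may differ slightly from the exact tree):

static bool vring_use_dma_api(struct virtio_device *vdev)
{
	if (!virtio_has_iommu_quirk(vdev))
		return true;

	/* Otherwise, we are left to guess. */
	/*
	 * ... enable the DMA API if we're a Xen guest, which at least
	 * allows all of the sensible Xen configurations to work correctly.
	 */
	if (xen_domain())
		return true;

	return false;
}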
Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
On Wed, Jan 23, 2019 at 01:04:33PM -0800, Stefano Stabellini wrote:
> If vring_use_dma_api is actually supposed to return true when
> dma_dev->dma_mem is set, then both Peng's patch and the patch I wrote
> are not fixing the real issue here.
>
> I don't know enough about remoteproc to know where the problem actually
> lies though.

The problem is the following:

Devices can declare a specific memory region that they want to use when
the driver calls dma_alloc_coherent for the device.  This is done using
the shared-dma-pool DT attribute, which comes in two variants that would
be a little too much to explain here.

remoteproc makes use of that because apparently the device can only
communicate using that region.  But it then feeds memory obtained with
dma_alloc_coherent back into the virtio code.  For that it calls
vmalloc_to_page on the dma_alloc_coherent result, which is a huge no-go
for the DMA API and only worked accidentally on a few platforms;
apparently arm64 just changed a few internals that made it stop working
for remoteproc.

The right answer is to not use the DMA API to allocate memory from a
device-specific region, but to tie the driver directly into the DT
reserved memory API in a way that allows it to easily obtain a struct
device for it.

This is orthogonal to another issue: hardware virtio devices really
always need to use the DMA API, otherwise we'll bypass features such as
device-specific DMA pools, DMA offsets, cache flushing, etc.
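A minimal sketch of that alternative, i.e. taking the region straight
from the DT reserved-memory node instead of going through
dma_alloc_coherent (the "memory-region" property name and the error
handling here are assumptions, not taken from the thread):

#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>

static void *map_rproc_region(struct device *dev, phys_addr_t *phys,
			      size_t *size)
{
	struct device_node *np;
	struct resource res;
	void *va;

	np = of_parse_phandle(dev->of_node, "memory-region", 0);
	if (!np)
		return ERR_PTR(-ENODEV);

	if (of_address_to_resource(np, 0, &res)) {
		of_node_put(np);
		return ERR_PTR(-EINVAL);
	}
	of_node_put(np);

	*phys = res.start;
	*size = resource_size(&res);

	/* CPU mapping of the region; the driver hands out addresses based
	 * on res.start to the device and builds scatterlists from the
	 * physical range, rather than calling vmalloc_to_page on va. */
	va = memremap(res.start, *size, MEMREMAP_WB);
	return va ? va : ERR_PTR(-ENOMEM);
}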
Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
On Fri, Jan 25, 2019 at 09:45:26AM +, Peng Fan wrote:
> Just have a question,
>
> Since vmalloc_to_page is ok for the cma area, no need to take cma and
> per-device cma into consideration, right?

The CMA area itself is a physical memory region.  If it is a
non-highmem region you can call virt_to_page on the virtual addresses
for it.  If it is in highmem it doesn't even have a kernel virtual
address by default.

> We only need to implement a piece of code to handle the per-device
> specific region using RESERVEDMEM_OF_DECLARE, just like:
> RESERVEDMEM_OF_DECLARE(rpmsg-dma, "rpmsg-dma-pool", rmem_rpmsg_dma_setup);
> and implement the device_init callback and build a map between page and
> phys.  Then in the rpmsg driver the scatterlist could use the page
> structure, with no need for vmalloc_to_page for per-device dma.
>
> Is this the right way?

I think this should work fine.  If you have the cycles for it I'd
actually love to be able to have generic CMA DT glue for non DMA API
driver allocations, as there obviously is a need for it.  So basically
the same as above, just added to kernel/cma.c as a generic API.
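A minimal sketch of that RESERVEDMEM_OF_DECLARE approach, assuming the
"rpmsg-dma-pool" compatible from the mail; the callback bodies are
illustrative only:

#include <linux/device.h>
#include <linux/init.h>
#include <linux/of_reserved_mem.h>

static int rmem_rpmsg_dma_device_init(struct reserved_mem *rmem,
				      struct device *dev)
{
	/* The real driver would record rmem->base/rmem->size here and
	 * build its page <-> phys mapping for the scatterlist. */
	dev_info(dev, "using rpmsg DMA region %pa (size %pa)\n",
		 &rmem->base, &rmem->size);
	return 0;
}

static const struct reserved_mem_ops rmem_rpmsg_dma_ops = {
	.device_init	= rmem_rpmsg_dma_device_init,
};

static int __init rmem_rpmsg_dma_setup(struct reserved_mem *rmem)
{
	rmem->ops = &rmem_rpmsg_dma_ops;
	return 0;
}

/* The first argument must be a valid C identifier, hence the
 * underscore rather than the hyphen written in the mail. */
RESERVEDMEM_OF_DECLARE(rpmsg_dma, "rpmsg-dma-pool", rmem_rpmsg_dma_setup);

The driver would then call of_reserved_mem_device_init() in its probe
path so that the device_init callback runs for its reserved region.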