On Mon, 5 Jan 2026 09:23:45 +0100, "Michael S. Tsirkin" <[email protected]> said:
> The res and ires buffers in struct virtio_gpio_line and struct
> vgpio_irq_line respectively are used for DMA_FROM_DEVICE via
> virtqueue_add_sgs().  However, even though these buffers are tagged
> ____cacheline_aligned, adjacent struct members can still share a DMA
> cacheline with them on platforms where ARCH_DMA_MINALIGN >
> L1_CACHE_BYTES (e.g., arm64 with 128-byte DMA alignment but 64-byte
> cache lines).
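
For readers without the driver open, the layout in question looks
roughly like this (simplified from what I remember of
drivers/gpio/gpio-virtio.c; field lists are abbreviated, so treat it
as a sketch rather than the exact definitions):

	struct virtio_gpio_line {
		struct mutex lock;
		struct completion completion;
		struct virtio_gpio_request  req ____cacheline_aligned;
		struct virtio_gpio_response res ____cacheline_aligned; /* DMA_FROM_DEVICE */
		unsigned int rxlen;
	};

	struct vgpio_irq_line {
		u8   type;	/* CPU-owned flags */
		bool disabled;
		bool masked;
		bool queued;
		struct virtio_gpio_irq_request  ireq ____cacheline_aligned;
		struct virtio_gpio_irq_response ires ____cacheline_aligned; /* DMA_FROM_DEVICE */
	};

Since ____cacheline_aligned only guarantees L1_CACHE_BYTES alignment,
ires and the CPU-owned flags of irq_lines[n + 1] can land inside one
ARCH_DMA_MINALIGN-sized window.
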
>
> The existing ____cacheline_aligned annotation aligns to L1_CACHE_BYTES,
> which is not always sufficient for DMA alignment. For example, with
> L1_CACHE_BYTES = 32 and ARCH_DMA_MINALIGN = 128:
>   - irq_lines[0].ires at offset 128
>   - irq_lines[1].type at offset 192
> both in the same 128-byte DMA cacheline [128, 256).
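
The collision arithmetic is easy to sanity-check at build time. A
minimal sketch (both offsets and the 128-byte figure are taken from
the example above, not computed from the real structs):

	#include <linux/build_bug.h>

	#define EXAMPLE_DMA_MINALIGN	128	/* assumed ARCH_DMA_MINALIGN */
	#define DMA_LINE(off)		((off) / EXAMPLE_DMA_MINALIGN)

	/* irq_lines[0].ires (offset 128) and irq_lines[1].type (offset
	 * 192) both map to DMA cacheline 1, i.e. bytes [128, 256): */
	static_assert(DMA_LINE(128) == DMA_LINE(192));
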
>
> When the device writes to irq_lines[0].ires while the CPU concurrently
> modifies one of the irq_lines[1].type/disabled/masked/queued flags,
> corruption can occur on non-cache-coherent platforms.
>
> Fix by using __dma_from_device_group_begin()/end() annotations on the
> DMA buffers. Drop ____cacheline_aligned: it is no longer needed to
> isolate request from response, and keeping the annotations would
> increase the memory cost.
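
Looks good to me. For the archives, my mental model of what the new
grouping buys (sketch only; the macro names and exact semantics are
whatever this series defines, and __aligned() is just the hand-rolled
equivalent of the annotation):

	#include <linux/cache.h>	/* ARCH_DMA_MINALIGN */
	#include <linux/virtio_gpio.h>

	struct vgpio_irq_line_sketch {
		u8   type;		/* CPU-owned flags can no longer */
		bool disabled;		/* share a DMA line with ires    */
		bool masked;
		bool queued;
		struct virtio_gpio_irq_request ireq;	/* only sent to the device */
		/* the DMA_FROM_DEVICE buffer gets its own DMA cacheline: */
		struct virtio_gpio_irq_response ires __aligned(ARCH_DMA_MINALIGN);
	};

Because an ARCH_DMA_MINALIGN-aligned member raises the whole struct's
alignment, sizeof() is padded to a multiple of it as well, so
irq_lines[n + 1] also starts on a fresh DMA line.
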
>
> Acked-by: Viresh Kumar <[email protected]>
> Signed-off-by: Michael S. Tsirkin <[email protected]>
> ---

Acked-by: Bartosz Golaszewski <[email protected]>
