Quoting Arnd,

    We have to do a sync_single_for_device /somewhere/ before the
    buffer is given to the device. On a non-cache-coherent machine with
    a write-back cache, there may be dirty cache lines that get written
    back after the device DMA's data into it (e.g. from a previous
    memset from before the buffer got freed), so you absolutely need to
    flush any dirty cache lines on it first.
Since the coherency is configurable in this device, make sure we cover
all configurations by explicitly syncing the allocated buffer for the
device before refilling its descriptors.

Signed-off-by: Ilias Apalodimas <ilias.apalodi...@linaro.org>
---
Changes since V1:
- Make the code more readable

 drivers/net/ethernet/socionext/netsec.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index 5544a722543f..ada7626bf3a2 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -727,21 +727,26 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
 {
 	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
+	enum dma_data_direction dma_dir;
+	dma_addr_t dma_start;
 	struct page *page;
 
 	page = page_pool_dev_alloc_pages(dring->page_pool);
 	if (!page)
 		return NULL;
 
+	dma_start = page_pool_get_dma_addr(page);
 	/* We allocate the same buffer length for XDP and non-XDP cases.
 	 * page_pool API will map the whole page, skip what's needed for
 	 * network payloads and/or XDP
 	 */
-	*dma_handle = page_pool_get_dma_addr(page) + NETSEC_RXBUF_HEADROOM;
+	*dma_handle = dma_start + NETSEC_RXBUF_HEADROOM;
 
 	/* Make sure the incoming payload fits in the page for XDP and non-XDP
 	 * cases and reserve enough space for headroom + skb_shared_info
 	 */
 	*desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;
+	dma_dir = page_pool_get_dma_dir(dring->page_pool);
+	dma_sync_single_for_device(priv->dev, dma_start, PAGE_SIZE, dma_dir);
 
 	return page_address(page);
 }
-- 
2.20.1
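
For readers outside this driver, the ordering the patch enforces is
general to any page_pool-backed RX refill path on a device that may be
non-cache-coherent: fetch the DMA address of the freshly allocated
page, sync the whole buffer for the device, and only then publish it
through a descriptor. Below is a minimal sketch of that pattern; the
function my_refill_rx_buffer and the MY_RX_HEADROOM offset are
hypothetical, while the page_pool and DMA helpers are the same kernel
APIs used in the diff above.

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <net/page_pool.h>

#define MY_RX_HEADROOM	256	/* hypothetical; netsec uses NETSEC_RXBUF_HEADROOM */

/* Hypothetical refill helper showing the ordering the patch enforces:
 * sync the buffer for the device *before* a descriptor points at it,
 * so stale dirty cache lines cannot be written back over DMA'd data.
 */
static void *my_refill_rx_buffer(struct device *dev, struct page_pool *pool,
				 dma_addr_t *dma_handle)
{
	enum dma_data_direction dir;
	dma_addr_t dma_start;
	struct page *page;

	page = page_pool_dev_alloc_pages(pool);
	if (!page)
		return NULL;

	/* page_pool mapped the whole page when it was allocated */
	dma_start = page_pool_get_dma_addr(page);
	*dma_handle = dma_start + MY_RX_HEADROOM;

	/* Hand ownership of the full page to the device: a no-op on a
	 * cache-coherent system, a writeback/invalidate of any dirty
	 * lines on a non-coherent one. This is the step the patch adds.
	 */
	dir = page_pool_get_dma_dir(pool);
	dma_sync_single_for_device(dev, dma_start, PAGE_SIZE, dir);

	return page_address(page);
}

Syncing PAGE_SIZE rather than just the payload area mirrors the patch:
since page_pool maps the whole page, any line of it could be dirty, so
the entire mapping is handed to the device in one call.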