On 24/11/2016 at 07:01, Gregory CLEMENT wrote:
> Hi Arnd,
>
> On Thu, Nov 24 2016, Arnd Bergmann <a...@arndb.de> wrote:
>
>> On Thursday, November 24, 2016 4:37:36 PM CET Jisheng Zhang wrote:
>>> solB (a SW shadow cookie) perhaps gives better performance: in the hot
>>> path, such as mvneta_rx(), the driver accesses buf_cookie and
>>> buf_phys_addr of rx_desc, which is allocated by dma_alloc_coherent and
>>> is therefore non-cacheable if the device isn't cache-coherent. I didn't
>>> measure the performance difference, because in fact we take solA
>>> internally as well. From your experience, is the performance gain worth
>>> the more complex code?
>>
>> Yes, a read from uncached memory is fairly slow, so if you have a chance
>> to avoid that it will probably help. When adding complexity to the code,
>> it probably makes sense to take a runtime profile anyway to quantify how
>> much it gains.
>>
>> On machines that have cache-coherent DMA, accessing the descriptor
>> should be fine, as you already have to load the entire cache line
>> to read the status field.
>>
>> Looking at this snippet:
>>
>>        rx_status = rx_desc->status;
>>        rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
>>        data = (unsigned char *)rx_desc->buf_cookie;
>>        phys_addr = rx_desc->buf_phys_addr;
>>        pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
>>        bm_pool = &pp->bm_priv->bm_pools[pool_id];
>>
>>        if (!mvneta_rxq_desc_is_first_last(rx_status) ||
>>            (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
>> err_drop_frame_ret_pool:
>>                /* Return the buffer to the pool */
>>                mvneta_bm_pool_put_bp(pp->bm_priv, bm_pool,
>>                                      rx_desc->buf_phys_addr);
>> err_drop_frame:
>>
>> I think there is more room for optimization here: you read the status
>> field twice (the second time in MVNETA_RX_GET_BM_POOL_ID), and you can
>> cache buf_phys_addr along with the virtual address once you add that.
>
> I agree we can optimize this code, but it is not related to the 64-bit
> conversion. Indeed, this part runs when we use the HW buffer management;
> however, this part is currently not ready at all for 64 bits. The virtual
> address is handled directly by the hardware, but there are only 32 bits
> to store it in the cookie. So if we want to use HWBM in 64-bit mode we
> need to redesign the code (maybe by storing the virtual address in an
> array and passing the index in the cookie).
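
A minimal, self-contained sketch of the "index in the cookie" scheme Gregory
describes, assuming a driver-private table kept in ordinary cached memory; the
names (struct bm_buf, bm_cookie_from_buf, ...) and the pool size are
illustrative only, not the actual mvneta/HWBM code:

        /*
         * Simplified model of the "index in the cookie" idea: the 32-bit
         * cookie handed to the hardware buffer manager no longer holds a
         * virtual address (impossible on 64-bit), but an index into a
         * driver-private table kept in cached memory.
         */
        #include <assert.h>
        #include <stdint.h>
        #include <stdlib.h>

        #define BM_POOL_SIZE 2048               /* illustrative pool size */

        struct bm_buf {
                void *vaddr;                    /* virtual address of the buffer */
                uint64_t dma_addr;              /* matching DMA/physical address */
        };

        static struct bm_buf bm_bufs[BM_POOL_SIZE];     /* cached shadow table */

        /* On allocation: record vaddr/dma in the table, give HW only the index. */
        static uint32_t bm_cookie_from_buf(unsigned int idx, void *vaddr, uint64_t dma)
        {
                assert(idx < BM_POOL_SIZE);
                bm_bufs[idx].vaddr = vaddr;
                bm_bufs[idx].dma_addr = dma;
                return idx;                     /* fits easily in the 32-bit cookie */
        }

        /* On receive: the cookie read back from the descriptor is just the index. */
        static struct bm_buf *bm_buf_from_cookie(uint32_t cookie)
        {
                assert(cookie < BM_POOL_SIZE);
                return &bm_bufs[cookie];        /* vaddr and dma_addr from cached memory */
        }

        int main(void)
        {
                void *buf = malloc(4096);
                uint32_t cookie = bm_cookie_from_buf(7, buf, 0x12345000ULL);
                struct bm_buf *b = bm_buf_from_cookie(cookie);

                assert(b->vaddr == buf && b->dma_addr == 0x12345000ULL);
                free(buf);
                return 0;
        }

A side effect of this layout is that any extra per-buffer metadata (for
instance the cached buf_phys_addr Arnd mentions) can live in the same table
entry and be read with ordinary cached loads.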
Can't you make sure that skb->data is aligned to a value big enough that you
can still cover the <N>-bit physical address space of the adapter within a
32-bit quantity, if you drop the low bits that would be all zeroes? That way,
even though you only have 32 bits of storage/cookie, these don't have to be
the actual low 32 bits of your original address, but could be addr >> 8 for
instance?

As you indicate, using an index stored in the cookie might be a better scheme
though, since you could attach a lot more metadata to an index into a local
array (which could be in cached memory) than to just an address.
--
Florian
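
A minimal sketch of the shifted-cookie idea above, assuming buffers aligned to
256 bytes; the shift value and helper names are illustrative, and in practice
the shift would be derived from the adapter's real alignment and physical
address width:

        /*
         * Dropping the always-zero low bits of an aligned buffer address
         * lets a 32-bit cookie cover a wider physical range: with 256-byte
         * alignment (shift of 8), 32 bits of cookie span a 40-bit address.
         */
        #include <assert.h>
        #include <stdint.h>

        #define BUF_ALIGN_SHIFT 8       /* buffers aligned to 1 << 8 = 256 bytes */

        static uint32_t cookie_from_addr(uint64_t addr)
        {
                assert((addr & ((1ULL << BUF_ALIGN_SHIFT) - 1)) == 0);  /* aligned */
                assert((addr >> BUF_ALIGN_SHIFT) <= UINT32_MAX);        /* fits 40 bits */
                return (uint32_t)(addr >> BUF_ALIGN_SHIFT);
        }

        static uint64_t addr_from_cookie(uint32_t cookie)
        {
                return (uint64_t)cookie << BUF_ALIGN_SHIFT;
        }

        int main(void)
        {
                uint64_t addr = 0x0123456700ULL;        /* 33-bit, 256-byte-aligned address */

                assert(addr_from_cookie(cookie_from_addr(addr)) == addr);
                return 0;
        }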