On 3/10/2024 2:58 PM, Brandes, Shai wrote:
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yi...@amd.com>
>> Sent: Friday, March 8, 2024 7:23 PM
>> To: Brandes, Shai <shaib...@amazon.com>
>> Cc: dev@dpdk.org; sta...@dpdk.org
>> Subject: RE: [EXTERNAL] [PATCH v3 05/33] net/ena: fix fast mbuf free
>>
>> On 3/6/2024 12:24 PM, shaib...@amazon.com wrote:
>>> From: Shai Brandes <shaib...@amazon.com>
>>>
>>> When the application enables the fast mbuf free optimization, the
>>> driver releases 256 Tx mbufs in bulk upon reaching the Tx free
>>> threshold.
>>> The existing implementation uses rte_mempool_put_bulk for the bulk
>>> free, which supports only direct mbufs.
>>> If the application transmits indirect mbufs, the driver must also
>>> decrement the mbuf reference count and unlink the mbuf segment,
>>> so it should use rte_pktmbuf_free_bulk instead.
>>>
>>
>> Ack.
>>
>> I wonder if you observed any performance impact from this change, just
>> for reference in case we face a similar decision in the future.
> [Brandes, Shai] We did not see a performance impact in our testing.
> The issue was discovered by a new latency application we crafted that
> uses the bulk free option; it transmitted packets one by one, copied
> from a common buffer, and showed that packets were missing.
>
Ack.
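
For reference, below is a minimal sketch of the change under discussion:
switching the Tx cleanup bulk free from rte_mempool_put_bulk(), which is
valid only for direct mbufs with refcnt == 1 coming from a single
mempool, to rte_pktmbuf_free_bulk(), which also handles indirect mbufs.
Function and variable names here are illustrative only and do not match
the actual ENA driver sources.

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Before: returns the raw objects to the mempool. Correct only when
 * every mbuf is direct, has refcnt == 1, and belongs to one mempool.
 */
static void
tx_cleanup_put_bulk(struct rte_mbuf **bufs, unsigned int n)
{
	rte_mempool_put_bulk(bufs[0]->pool, (void **)bufs, n);
}

/* After: rte_pktmbuf_free_bulk() decrements the reference count and
 * detaches indirect mbufs before returning each segment to its own
 * mempool, so indirect/cloned mbufs are freed correctly as well.
 */
static void
tx_cleanup_free_bulk(struct rte_mbuf **bufs, unsigned int n)
{
	rte_pktmbuf_free_bulk(bufs, n);
}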