On Wed, 2021-03-10 at 03:04 +0100, Andrew Lunn wrote:
> On Tue, Mar 09, 2021 at 06:57:06PM +0100, Eric Dumazet wrote:
> > 
> > 
> > On 3/9/21 6:10 PM, Shay Agroskin wrote:
> > > The page cache holds pages we allocated in the past during napi
> > > cycle,
> > > and tracks their availability status using page ref count.
> > > 
> > > The cache can hold up to 2048 pages. Upon allocating a page, we

2048 per core? IMHO this is too much! Ideally you want twice the napi
budget.

You are trying to mitigate TCP/L4 delays/congestion, but this is
very prone to DoS attacks: if your memory allocators are under stress,
you shouldn't be hogging your own pages and worsening the situation.

> > > check whether the next entry in the cache contains an unused page,
> > > and if so fetch it. If the next page is already used by another
> > > entity, or if it belongs to a different NUMA node than the napi
> > > routine, we allocate a page in the regular way (a page from a
> > > different NUMA node is replaced by the newly allocated page).
> > > 
> > > This system can help us reduce the contention between different
> > > cores when allocating pages, since every cache is unique to a
> > > queue.
> > 
> > For reference, many drivers already use a similar strategy.
> 
> Hi Eric
> 
> So rather than yet another implementation, should we push for a
> generic implementation which any driver can use?
> 

We already have it:
https://www.kernel.org/doc/html/latest/networking/page_pool.html
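
For reference, a driver adopting page_pool typically uses it roughly like
this (a hedged sketch of one pool per RX queue, not code from any
particular driver; field values are illustrative):

```c
#include <net/page_pool.h>

/* Illustrative only: one pool per RX queue, as page_pool intends. */
static struct page_pool *rxq_create_pool(struct device *dev, int napi_nid,
					 unsigned int ring_size)
{
	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP,	/* pool handles DMA mapping */
		.order		= 0,
		.pool_size	= ring_size,		/* e.g. the RX ring size */
		.nid		= napi_nid,		/* allocate on the napi's node */
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp);	/* returns ERR_PTR() on failure */
}

/* In the RX refill path:						*/
/*	struct page *page = page_pool_dev_alloc_pages(pool);		*/
/* When the driver is done with the page, or on teardown:		*/
/*	page_pool_put_full_page(pool, page, true);  recycles if possible */
/*	page_pool_destroy(pool);					*/
```

The pool keeps a small lockless per-softirq cache plus a ptr_ring, which
covers the same recycling and NUMA-locality goals as the per-queue cache
proposed in the patch.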

Also, please check out this fresh page_pool extension, the SKB buffer
recycling RFC; it might be useful for the use cases ena is interested in:

https://patchwork.kernel.org/project/netdevbpf/patch/20210311194256.53706-4-mcr...@linux.microsoft.com/


