Jesper Dangaard Brouer <bro...@redhat.com> wrote:
> On Mon, 28 Aug 2017 16:00:32 +0200
> Florian Westphal <f...@strlen.de> wrote:
> 
> > liujian (CE) <liujia...@huawei.com> wrote:
> > > Hi
> > > 
> > > I checked our 3.10 kernel; we had backported all the percpu_counter bug
> > > fixes in lib/percpu_counter.c and include/linux/percpu_counter.h.
> > > And I checked 4.13-rc6; it also has the issue if the NIC's rx cpu count
> > > is big enough.
> > >   
> > > > > > > the issue:
> > > > > > > ip_defrag fails because frag_mem_limit reaches
> > > > > > > 4M (frags.high_thresh).
> > > > > > > At this moment, sum_frag_mem_limit is about 10K.  
> > > 
> > > So should we change the ipfrag high/low thresholds to a reasonable value?
> > > And if so, is there a standard for choosing the value?  
> > 
> > Each cpu can have frag_percpu_counter_batch bytes that the rest doesn't
> > know about, so with 64 cpus that is ~8 MByte.
> > 
> > possible solutions:
> > 1. reduce frag_percpu_counter_batch to 16k or so
> > 2. make both low and high thresh depend on NR_CPUS

I take 2) back.  It's wrong to do this; for large NR_CPUS values it
would even overflow.
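
Back-of-the-envelope (high_thresh/low_thresh are plain 32-bit ints in
struct netns_frags): scaling the default threshold by the cpu count gives

	4 MByte * NR_CPUS(512) = 2^22 * 2^9 = 2 GByte

which already wraps a signed 32-bit threshold, and NR_CPUS can be
configured far higher than 512.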

> To me it looks like we/I have been using the wrong API for comparing
> against percpu_counters.  I guess we should have used 
> __percpu_counter_compare().

Are you sure?  For liujian's use case (64 cores) it looks like we would
always fall through to percpu_counter_sum(), so we would eat the
spinlock_irqsave cost on every compare.
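
For reference, the compare logic looks roughly like this (paraphrased
from memory of lib/percpu_counter.c, so treat it as a sketch, not the
exact code):

	int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs,
				     s32 batch)
	{
		s64 count;

		count = percpu_counter_read(fbc);
		/* the rough count is only trusted outside the per-cpu
		 * error margin of batch * num_online_cpus()
		 */
		if (abs(count - rhs) > (batch * num_online_cpus()))
			return count > rhs ? 1 : -1;

		/* inside the margin: take fbc->lock and sum all cpus */
		count = percpu_counter_sum(fbc);
		if (count > rhs)
			return 1;
		if (count < rhs)
			return -1;
		return 0;
	}

With frag_percpu_counter_batch (130048) and 64 online cpus the margin is
~8 MByte, i.e. always larger than the 4 MByte high_thresh we compare
against, so practically every call would end up in the slow
percpu_counter_sum() path.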

Before we entertain this, we should consider reducing
frag_percpu_counter_batch to a smaller value.
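
For context, the batch is just the argument the frag accounting helpers
pass down; roughly (from memory of include/net/inet_frag.h, where the
call was still spelled __percpu_counter_add() before the 4.13 rename to
percpu_counter_add_batch()):

	/* per-cpu deltas are folded into fbc->count only once they
	 * exceed the batch, so each cpu can hide up to ~batch bytes
	 */
	static unsigned int frag_percpu_counter_batch = 130048;

	static inline void add_frag_mem_limit(struct netns_frags *nf, int i)
	{
		percpu_counter_add_batch(&nf->mem, i, frag_percpu_counter_batch);
	}

	static inline void sub_frag_mem_limit(struct netns_frags *nf, int i)
	{
		percpu_counter_add_batch(&nf->mem, -i, frag_percpu_counter_batch);
	}

Dropping the batch to 16k would bound the invisible drift to
16k * nr_cpus (1 MByte on the 64-core box) at the cost of more frequent
atomic updates of fbc->count.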
