On Sun, Jul 02, 2023 at 12:44:17PM +0200, Alexandr Nedvedicky wrote:
> Hello,
> 
> On Thu, Jun 29, 2023 at 01:48:27PM +1000, David Gwynne wrote:
> > On Mon, Jun 26, 2023 at 01:16:40AM +0200, Alexandr Nedvedicky wrote:
> > 
> > >     net/if_pfsync.c
> > >   the diff currently uses two slices (PFSYNC_NSLICES). is there a plan to
> > >   scale it up?  a slice can simply be viewed as a kind of task. IMO the
> > >   number of slices could be aligned with the number of cpu cores. Or is
> > >   this too simplified? I'm just trying to get some hints on how to
> > >   further tune performance.
> > 
> > that's part of a bigger discussion which involves how far we should
> > scale the number of nettqs and how parallel pf can go.
> > 
> > 2 slices demonstrates that pfsync can partition work and is safe doing
> > so. the kstats i've added on those slices show there isn't a lot of
> > contention in pfsync. yet.
> > 
> 
>     I was just wondering, because if I remember correctly hrvoje@ noticed
>     a small performance degradation (compared with current). I think his
>     test was using 4 net tasks to forward packets through the firewall.
>     now if there are just 2 tasks for pfsync, then this might be how the
>     degradation sneaked in. just a thought.

if i remember correctly, that result was from when i was using the high
bits of the toeplitz hash on pf states to pick a pfsync slice. since i
changed it to use the same bits as the hardware/stack/pf, his numbers
showed that old and new pfsync perform pretty much the same.
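
to illustrate the idea (a sketch only, not the actual if_pfsync.c
code; the function name and hash argument are made up, PFSYNC_NSLICES
is from the diff), picking a slice from the low bits of the flow hash
looks something like this:

	#include <stdint.h>

	#define PFSYNC_NSLICES		2	/* power of two */
	#define PFSYNC_SLICE_MASK	(PFSYNC_NSLICES - 1)

	/*
	 * take the low bits of the toeplitz/flow hash, ie. the same
	 * bits the hardware/stack/pf use, so a state ends up on the
	 * slice that matches where its packets are processed.
	 */
	static inline unsigned int
	pfsync_slice_idx(uint32_t flowhash)
	{
		return (flowhash & PFSYNC_SLICE_MASK);
	}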

i'm running 8 nettqs with 2 pfsync slices in production, and the
pfsync slice mutexes were contended about 1.3% of the time on average
over the last 7 days. i haven't tried scaling the number of slices up
yet to see what effect that has.
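
a contention figure like that is presumably contended acquisitions
over total acquisitions of the slice mutexes. a minimal sketch, with
made-up counter names rather than the real kstat layout:

	#include <stdint.h>

	/* percentage of mutex enters that found the lock held */
	static inline double
	mtx_contention_pct(uint64_t enters, uint64_t contended)
	{
		if (enters == 0)
			return (0.0);
		return ((double)contended * 100.0 / (double)enters);
	}

so ~1.3% means roughly 13 contended enters per 1000.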
