On Sun, May 11, 2014 at 08:32:34AM -0700, Adrian Chadd wrote:

> On 11 May 2014 01:31, Slawa Olhovchenkov <s...@zxy.spb.ru> wrote:
> > On Sat, May 10, 2014 at 12:53:37AM +0000, Adrian Chadd wrote:
> >
> >> Author: adrian
> >> Date: Sat May 10 00:53:36 2014
> >> New Revision: 265792
> >> URL: http://svnweb.freebsd.org/changeset/base/265792
> >>
> >> Log:
> >>   Add in support to optionally pin the swi threads.
> >>
> >>   Under enough load, the swi's can actually be preempted and migrated
> >>   to other currently free cores.  When doing RSS experiments, this led
> >>   to the per-CPU TCP timers no longer lining up with the RX CPU that said
> >>   flows were ending up on, leading to increased lock contention.
> >>
> >>   Since there was a little pushback on flipping them on by default,
> >>   I've left the default at "don't pin."
> >>
> >>   The other less obvious problem here is that the default swi
> >>   is also the same as the destination swi for CPU #0.  So if one
> >>   pins the swi on CPU #0, there's no default floating swi.
> >>
> >>   A nice future project would be to create a separate swi for
> >>   the "default" floating swi, as well as per-CPU swis that are
> >>   (optionally) pinned.
> >
> > Is an MFC planned?
> > I have a 10.0 box with approx. 16Gbit/s of TCP at peak.
> 
> I've no plans to MFC it at present.
> 
> By itself it shouldn't do very much. The rest of the RSS stack and
> driver changes have to go in before it'll matter.
> 
> (But if you try it on 10.0 and it changes things, by all means let me know.)

I tried it on 10.0, but I'm not sure the improvement is significant
(maybe 10%).

On this CPU (E5-2650 v2 @ 2.60GHz) hwpmc doesn't work (1. after
collecting samples, `pmcstat -R sample.out -G out.txt` doesn't decode
anything; 2. kldunload'ing hwpmc crashes the kernel), so I can't
collect detailed profiling information.
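
For reference, the change discussed above hangs the pinning off two
loader tunables and the kernel's existing intr_event_bind() interface.
Below is a minimal sketch of the shape of the change; the tunable names
match r265792, but maybe_pin_swi() is a hypothetical helper invented
here for illustration, and the actual code in sys/kern/kern_timeout.c
differs in detail:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <sys/interrupt.h>
    #include <sys/sysctl.h>

    /* Both default to 0 ("don't pin"), as noted in the commit log. */
    static int pin_default_swi = 0;
    static int pin_pcpu_swi = 0;

    SYSCTL_INT(_kern, OID_AUTO, pin_default_swi, CTLFLAG_RDTUN,
        &pin_default_swi, 0, "Pin the default (and CPU 0) swi");
    SYSCTL_INT(_kern, OID_AUTO, pin_pcpu_swi, CTLFLAG_RDTUN,
        &pin_pcpu_swi, 0, "Pin the per-CPU swis (other than CPU 0)");

    /*
     * Hypothetical helper: bind the ithread behind a swi to a CPU if
     * the relevant tunable is set.  Note the caveat from the log: the
     * CPU 0 swi doubles as the "default" swi, so pinning it leaves no
     * floating default.
     */
    static void
    maybe_pin_swi(struct intr_event *ie, int cpu)
    {
            int pin;

            pin = (cpu == 0) ? pin_default_swi : pin_pcpu_swi;
            if (pin && intr_event_bind(ie, cpu) != 0)
                    printf("softclock: unable to pin swi to CPU %d\n", cpu);
    }

With a kernel containing the change, setting kern.pin_pcpu_swi=1 (and,
keeping the CPU #0 caveat above in mind, kern.pin_default_swi=1) in
/boot/loader.conf enables the pinning at boot.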