Thanks Navdeep, I figured there had to be more going on than just allowing packets across interfaces. With forwarding automatically disabling TSO/LRO, that would entirely explain why my throughput tests drop off so significantly.
Scott Larson
Lead Systems Administrator, Wiredrive
T 310 823 8238 x1106 | M 310 904 8818

On Thu, Apr 23, 2015 at 5:14 AM, Navdeep Parhar <npar...@gmail.com> wrote:
> On Tue, Apr 21, 2015 at 12:47:45PM -0700, Scott Larson wrote:
> > We're in the process of migrating our network into the future with 40G
> > at the core, including our firewall/traffic routers with 40G interfaces.
> > An issue which this exposed and threw me for a week turns out to be
> > directly related to net.inet.ip.forwarding and I'm looking to just get
> > some insight on what exactly is occurring as a result of using it.
>
> Enabling forwarding disables LRO and TSO and that probably accounts for
> a large part of the difference in throughput that you've observed. The
> number of packets passing through the stack (and not the amount of data
> passing through) is the dominant bottleneck.
>
> fastforwarding _should_ make a difference, but only if packets actually
> take the fast-forward path. Check the counters available via netstat:
> # netstat -sp ip | grep forwarded
>
> Regards,
> Navdeep
>
> > What I am seeing is when that knob is set to 0, an identical pair of
> > what will be PF/relayd servers with direct DAC links between each other
> > using Chelsio T580s can sustain around 38Gb/s on iperf runs. However the
> > moment I set that knob to 1, that throughput collapses down into the 3
> > to 5Gb/s range. As the old gear this is replacing is all GigE I'd never
> > witnessed this. Twiddling net.inet.ip.fastforwarding has no apparent
> > effect.
> >
> > I've not found any docs going in depth on what deeper changes enabling
> > forwarding does to the network stack. Does it ultimately put a lower
> > priority on traffic where the server functioning as the packet router
> > is the final endpoint in exchange for having more resources available
> > to route traffic across interfaces as would generally be the case?
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
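For anyone hitting the same thing, the checks Navdeep describes can be run straight from a shell. A minimal sketch, assuming a FreeBSD box where cxl0 is the Chelsio T580 port (the interface name is an example, not something stated in the thread):

```shell
# Confirm the current state of the two knobs discussed above:
sysctl net.inet.ip.forwarding net.inet.ip.fastforwarding

# Check whether TSO/LRO are active on the interface; with forwarding
# enabled the stack stops using them, which matches the throughput drop:
ifconfig cxl0 | grep -Eo 'TSO4|TSO6|LRO'

# Count packets that actually took the fast-forward path, per Navdeep's
# suggestion; snapshot before and after an iperf run and compare:
netstat -sp ip | grep forwarded
```

If the "forwarded" counters don't move between iperf runs, packets aren't taking the fast-forward path, which would explain why toggling net.inet.ip.fastforwarding appears to have no effect.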