On Wed, Aug 07, 2013 at 12:57:55PM -0400, Maxim Khitrov wrote:
> On Wed, Aug 7, 2013 at 11:44 AM, Florian Obser <flor...@narrans.de> wrote:
> > On Wed, Aug 07, 2013 at 10:26:22AM -0400, Maxim Khitrov wrote:
> > [...]
> >> Increasing the MTU on both ix0 interfaces to 9000 gives me ~7.2 Gbps:
> >
> > you expect a lot of jumbo frames in front of / behind your firewall?
> > (if the answer is no, why are you testing that?)
> 
> It's a possibility. What this tells me, however, is that the
> throughput isn't the (main) problem. The per-packet processing
> overhead appears to be the limiting factor, which is why I asked about

indeed, during my tests systat showed that the system was spending 99%
of its time in interrupt handlers. The extra context switches you get
from running iperf locally are not good[tm] in this situation.
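For reference, a rough sketch of how that interrupt load can be observed on
OpenBSD (commands from memory; check the man pages on your system):

```shell
# Live per-device interrupt rates, updated every few seconds:
systat vmstat

# Cumulative interrupt counts and rates per source since boot:
vmstat -i
```

systat's CPU summary line also shows the percentage of time spent in
interrupt context, which is where the 99% figure above comes from.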

> checksum offloading.
> 
> > anyway, I was testing an Intel 82599 system in July which will become
> > a border router. All of this is forwarding rate; it took me 2 days to
> > beg, borrow and steal enough hw to actually generate the traffic.  (I
> > had 4 systems in front of and 4 systems behind the router, all doing
> > 1Gb/s)
> 
> What tools were you using to generate the traffic and to calculate
> bytes/packets per second? I assume interrupts per second came from
> systat?
> 

right, the interrupt rate came from systat; traffic was generated with
iperf and measured with bwm-ng in 30-second average mode.
iperf was running in dualtest mode and instructed to run for an hour
so that I had a chance to start all the iperfs before the first one
finished ;) no other switches (besides -c and -s of course).
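A sketch of the invocations as I understand them (iperf 2 syntax; the
address is a placeholder, and the bwm-ng flag names are from memory, so
check `bwm-ng -h` before relying on them):

```shell
# On each of the 4 systems behind the router: plain server
iperf -s

# On each of the 4 systems in front: dualtest client (-d sends
# traffic in both directions at once), running for an hour so
# all instances overlap while the others are being started.
iperf -c 192.0.2.10 -d -t 3600

# On the router itself: per-interface throughput, averaged
# over a 30-second window.
bwm-ng -T avg -A 30
```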

-- 
I'm not entirely sure you are real.
