Hopefully this makes it through; I've been trying to post comments all day,
but they don't seem to be making it here.

To Bryan: I wasn't running pf when I first noticed this problem, but I am
now, just to block ssh from the outside.  I've disabled and re-enabled pf to
see whether it affects throughput, and it doesn't, or at least not
noticeably.  As for what I've done, I've performed a number of bandwidth
tests.  I've come from the outside, traversing the gateway while downloading
from an internal host.  I've come from the outside to the gateway,
downloading from it.  I've come from the local subnet on a machine with the
exact same hardware and installation, transferring a file in each direction.
Under high load, every form of this testing suffers from poor speeds.  Even
when not under high load, I never see the speeds I should.  I've checked the
interface stats on the switch and found no errors.  I've run iperf and can
only seem to get 5-16 Mb/s.  I even bumped up sendspace and recvspace to
help with edge host-to-host transfers, but I've seen no improvement.  I'm
going to tinker with netperf some more, because I'm not sure whether I ran
into an issue with it on BSD; between two Linux boxes on the inside it
reports line speed.
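To put a number on that gap, here's a rough back-of-the-envelope check (the
1 GiB file size is just an illustrative assumption, not one of my actual
test files):

```shell
# Rough sanity check: time to move a 1 GiB file at the ~16 Mbit/s
# I'm seeing vs. the ~940 Mbit/s a gigabit link can realistically
# deliver.  The 1 GiB size is a hypothetical example.
size_bits=$((1024 * 1024 * 1024 * 8))           # 1 GiB in bits
echo "at 16 Mbit/s:  ~$((size_bits / 16000000)) s"
echo "at 940 Mbit/s: ~$((size_bits / 940000000)) s"
```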

To Max: the cables don't show any problems, and I have the problem
internally as well, not just with external hosts.  I wish it were that
simple.

To Claudio: I've gone through the 4.1 and 4.2 changes hoping to find some
clear reason for these issues, but I haven't seen anything.  The odd thing
is that drops reports a negative value and it's counting down; presumably
the signed 32-bit counter has wrapped.

net.inet.ip.ifq.drops=-1381027346

I've set maxlen=256 and it seems to have slowed the count down.
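A sketch of how I've been watching the counter over an interval; the second
reading below is a made-up stand-in for a real `sysctl
net.inet.ip.ifq.drops` sample, since the point is just the subtraction:

```shell
# Two readings of net.inet.ip.ifq.drops taken some seconds apart.
# The counter has wrapped into negative territory, but the delta
# between readings is still meaningful.  The second value here is
# a hypothetical example.
before=-1381027346
after=-1381020000
echo "drops since last reading: $((after - before))"
```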


To Stuart: dmesg hasn't shown any issues.  I've been a bit confused about
how to interpret the output of vmstat and "systat vmstat".  I was told to
look for interrupts in "systat vmstat", but I haven't seen any being thrown
under heavy load; I also don't think I fully understand how interrupts work.
As for "vmstat -i", I'm not exactly sure what would signify a problem, but I
get the following output:

Gateway1 (about 3-4 times the load of gateway2)
interrupt                       total     rate
irq0/clock                 6455328221      399
irq0/ipi                   2543041813      157
irq19/ohci0                      9166        0
irq17/pciide0                 7630229        0
irq0/bge0                 25346022947     1570
irq1/bge1                 21123330824     1308
Total                     55475363200     3437

Gateway2:
interrupt                       total     rate
irq0/clock                 6455272059      400
irq0/ipi                   1819715207      112
irq19/ohci0                     12574        0
irq17/pciide0                 6232113        0
irq0/bge0                  8118898045      503
irq1/bge1                 12291117020      761
Total                     28691247018     1777
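A quick way to see how much of the interrupt load the bge interfaces account
for is to run the `vmstat -i` output through awk; the here-document below is
gateway1's table pasted in:

```shell
# Sum the bge interrupt totals and compare against the overall
# total from `vmstat -i` (gateway1's numbers pasted in below).
awk '/bge/  { bge += $2 }
     /^Total/ { total = $2 }
     END { printf "bge share: %d%%\n", int(100 * bge / total) }' <<'EOF'
irq0/clock                 6455328221      399
irq0/ipi                   2543041813      157
irq19/ohci0                      9166        0
irq17/pciide0                 7630229        0
irq0/bge0                 25346022947     1570
irq1/bge1                 21123330824     1308
Total                     55475363200     3437
EOF
```

So on gateway1 the two NICs account for the bulk of all interrupts, which is
what you'd expect on a box whose main job is forwarding packets.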



On 9/26/07, Tom Bombadil <[EMAIL PROTECTED]> wrote:
>
> > net.inet.ip.ifq.maxlen defines how many packets can be queued in the IP
> > input queue before further packets are dropped. Packets coming from the
> > network card are first put into this queue and the actual IP packet
> > processing is done later. Gigabit cards with interrupt mitigation may
> > spit out many packets per interrupt, plus heavy use of pf can slow down
> > the packet forwarding. So it is possible that a heavy burst of packets
> > is overflowing this queue. On the other hand, you do not want to use
> > too big a number because this has negative effects on the system
> > (livelock etc.). 256 seems to be a better default than the 50, but
> > additional tweaking may allow you to process a few packets more.
>
> Thanks Claudio...
>
> In the link that Stuart posted here, Henning mentions 256 times the
> number of interfaces:
> http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666
>
> I'll try both and see.
>
> Thank you and Stuart for the hints.
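
P.S. Henning's 256-per-interface rule of thumb works out like this for
these gateways (just the two physical interfaces, bge0 and bge1):

```shell
# 256 queue slots per physical interface, per the rule of thumb
# from the thread linked above.
nifs=2                       # bge0 + bge1
echo "net.inet.ip.ifq.maxlen=$((256 * nifs))"
```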
