Hi, we have around 500-600 Mbit/s of traffic flowing through a FreeBSD
7.1-RELEASE Dell PowerEdge with 2 GigE bce(4) cards. There are currently
around 4,000 ISP users online, limited by dummynet pipes of various
speeds. According to netstat -s output, around 500-1000 packets are being
dropped every second (which amounts to roughly 7-12 Mbit/s of wasted
traffic, according to systat -ifstat):
# while :; do netstat -z -s 2>/dev/null | fgrep -w "output packets dropped"; sleep 1; done
16824 output packets dropped due to no bufs, etc.
548 output packets dropped due to no bufs, etc.
842 output packets dropped due to no bufs, etc.
709 output packets dropped due to no bufs, etc.
652 output packets dropped due to no bufs, etc.
^C
Pipes have been created like this:
ipfw pipe 1024 config bw 1024kbit/s mask dst-ip 0xffffffff queue 350KBytes
and so on; the pipes are then assigned to users by our application via
ipfw tablearg rules.
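For context, the assignment looks roughly like this (the table number,
rule number and address below are placeholders, not our real config);
the table value is the pipe number, which tablearg hands to the pipe
action:

ipfw table 1 add 192.0.2.15/32 1024
ipfw add 100 pipe tablearg ip from any to 'table(1)' out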
I've tried playing with the queue setting, from as little as 1 slot to
as much as 4096KBytes, but packets are still being dropped at more or
less the same rate.
Should I somehow calculate the proper queue value for a given pipe
bandwidth? The manpage says 50 slots is typical for Ethernet devices
(without saying whether that means 10, 100 or 1000 Mbit/s), and that's
it.
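My naive guess at such a calculation would be a bandwidth-delay product,
e.g. sizing the queue to hold about 200 ms of traffic (the 200 ms target
is just an assumption on my part, not something from the manpage):

queue_bytes = bw / 8 * delay
            = 1024000 / 8 * 0.2
            = 25600 bytes, i.e. roughly queue 25KBytes for the pipe above

ipfw pipe 1024 config bw 1024kbit/s mask dst-ip 0xffffffff queue 25KBytes

Is that the right approach?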
sysctls:
kern.ipc.nmbclusters=50000
net.inet.ip.dummynet.io_fast=1
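For what it's worth, I assume the "denied" counters from netstat -m
would show whether mbuf clusters are actually being exhausted, as
opposed to the dummynet queues simply overflowing:

# non-zero "denied" counters would point at real cluster starvation
netstat -m | fgrep -i denied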
Polling can't be enabled with bce.
Any hints? Should I provide any further info?
Thanks.