You need to tune kern.ipc.maxsockbuf. I normally use 2097152 for the ti
gigabit cards.
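
For example, a runtime command plus an /etc/sysctl.conf entry so it persists
across reboots (a sketch):

    sysctl -w kern.ipc.maxsockbuf=2097152

    # /etc/sysctl.conf
    kern.ipc.maxsockbuf=2097152

maxsockbuf caps how much buffer space a single socket may reserve, so it
should be comfortably larger than the tcp/udp sendspace/recvspace values you
have set below (524288).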
Thanks,
Matt Ayres
On Tue, 28 Aug 2001, Deepak Jain wrote:
>
>
> -----Original Message-----
> From: Deepak Jain [mailto:[EMAIL PROTECTED]]
> Sent: Monday, August 27, 2001 7:04 PM
> To: FreeBSD-Questions; freebsd-isp@FreeBSD.ORG
> Subject: Interesting Router Question
>
>
>
> We've got a customer running a FreeBSD router with 2 x 1GE interfaces [ti0
> and ti1]. At no point was bandwidth an issue.
>
> The router was under some kind of ICMP attack:
>
> For about 30 minutes:
> icmp-response bandwidth limit 96304/200 pps
> icmp-response bandwidth limit 97801/200 pps
> icmp-response bandwidth limit 97936/200 pps
> icmp-response bandwidth limit 97966/200 pps
> icmp-response bandwidth limit 98230/200 pps
> icmp-response bandwidth limit 97998/200 pps
> icmp-response bandwidth limit 98132/200 pps
> icmp-response bandwidth limit 98326/200 pps
> icmp-response bandwidth limit 98091/200 pps
> icmp-response bandwidth limit 87236/200 pps
> icmp-response bandwidth limit 85108/200 pps
> icmp-response bandwidth limit 84609/200 pps
> icmp-response bandwidth limit 86915/200 pps
> icmp-response bandwidth limit 88917/200 pps
> icmp-response bandwidth limit 88218/200 pps
> icmp-response bandwidth limit 72871/20000 pps
> icmp-response bandwidth limit 74934/20000 pps
> icmp-response bandwidth limit 74507/20000 pps
> icmp-response bandwidth limit 82928/20000 pps
> icmp-response bandwidth limit 75657/20000 pps
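>
> (For reference: the N/200 pps figure is the kernel's ICMP response rate
> limiter, whose ceiling is the net.inet.icmp.icmplim sysctl; judging by the
> last few lines it was raised to 20000 partway through, i.e. something like
>
>     sysctl -w net.inet.icmp.icmplim=20000
>
> was run.)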
>
> The router is a dual 600 MHz PIII and had a peak load average of about 0.2
> during the entire event, but it was running out of buffer space. A ping would
> return "No buffer space available". Performance became atrocious, with high
> packet loss and latency, but it was entirely buffer related.
>
> The mbuf statistics (from netstat -m) are as follows:
>
> 1235/2640/67584 mbufs in use (current/peak/max):
> 1195 mbufs allocated to data
> 40 mbufs allocated to packet headers
> 592/1054/16896 mbuf clusters in use (current/peak/max)
> 2768 Kbytes allocated to network (5% of mb_map in use)
> 0 requests for memory denied
> 0 requests for memory delayed
> 0 calls to protocol drain routines
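>
> (For what it's worth, with 0 requests denied and only 5% of mb_map in use,
> the mbuf pool itself does not look exhausted. If it ever were, the knob on
> 4.x is the kern.ipc.nmbclusters loader tunable; a sketch, it cannot be
> changed at runtime:
>
>     # /boot/loader.conf
>     kern.ipc.nmbclusters="32768"
>
> The numbers above suggest the shortage was elsewhere.)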
>
>
> sysctl settings:
>
> net.inet.ip.redirect: 0
> net.local.stream.sendspace: 255360
> net.local.stream.recvspace: 8192
> net.inet.icmp.drop_redirect: 1
> net.inet.icmp.log_redirect: 1
> net.inet.icmp.bmcastecho: 0
> net.inet.tcp.sendspace: 524288
> net.inet.tcp.recvspace: 524288
> net.inet.udp.recvspace: 524288
>
>
> What settings need to be tweaked to allow more ICMP-related buffers so the
> system can discard these packets normally? ipfw neither helped nor hurt
> performance [i.e., blocking ICMPs or not gave the same result].
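>
> (The ipfw test was a plain ICMP block; an illustrative rule only, the exact
> one used is not quoted here:
>
>     ipfw add deny icmp from any to any via ti0
>
> Neither adding nor removing it changed the behavior.)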
>
> The solution was to install an ICMP filter on the Cisco feeding this
> customer.
>
> Under normal circumstances, this is what a netstat -i 1 returns:
>
>             input        (Total)           output
>    packets  errs      bytes    packets  errs      bytes colls
>      43001     0   12845737      42965     0   12715776     0
>      42589     0   12426503      42624     0   12299112     0
>      42485     0   12804047      42409     0   12675087     0
>      42059     0   12324347      42060     0   12197342     0
>      42989     0   13004977      42985     0   12875017     0
>      42331     0   12608670      42353     0   12481620     0
>      42327     0   12941571      42252     0   12815136     0
>      42435     0   12414956      42451     0   12288774     0
>      43408     0   13065007      43369     0   12932819     0
>      42849     0   12649420      42853     0   12521309     0
>      42328     0   12918886      42349     0   12788549     0
>      44085     0   13469072      44009     0   13337215     0
>      47849     0   14434350      47686     0   14272423     0
>
> Thanks for any assistance,
>
> Deepak Jain
> AiNET
>
>
>
>