On 8/25/15 10:07 PM, Evgeny Khorokhorin wrote:
Hi,
I'm running 10.2-STABLE on two Intel E5-2643v3 CPUs, with an Intel XL710
NIC using the 1.4.0 driver from Intel.
I know that routing table lookups are very fast (rn_match), but I
decided to try optimizing the routing table anyway.
I'm using 2 interfaces - ixl0 and ixl1.
Behind ixl0 I have 304 networks under 172.16.., ranging from /28 to /24,
all via the same gateway 1.1.1.1 (because the IP on ixl0 has a /30
mask). Behind ixl1 I have the default route via 2.2.2.2.
Those 304 172.16 networks I receive via OSPF (quagga). With this setup
everything is fine: on each interface I see up to 500kpps/395kpps and
4.5Gbps/1.57Gbps (rx/tx on ixl1 and tx/rx on ixl0).
If I instead disable OSPF and add a single static route 172.16.0.0/12
via 1.1.1.1 in zebra, the system works well until traffic grows to
251kpps/181kpps, 2.27Gbps/637Mbps. Beyond that the system degrades: the
ixl queue threads hit 100% CPU and I see many packet drops (netstat -i).
If I turn ospfd back on and receive the 304 more-specific routes, the
problem disappears.
Where is the problem? Or do I have a misunderstanding of how FreeBSD
uses the routing table?
P.S. I use this machine for NAT. I checked with both ipfw and pf; the
result is the same.
Without knowing anything more, it looks like the lock on the route entry
is the bottleneck; having lots of routes spreads the contention across
many locks.
Try two manually added static routes, 172.16.0.0/13 and 172.24.0.0/13
(I hope I split that correctly), and see if it changes things.
Then try 4.
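As a sanity check on the split (this snippet is my own illustration, not part of the original exchange), Python's ipaddress module can enumerate the subnets of the /12 aggregate:

```python
import ipaddress

# The aggregate static route from the report.
agg = ipaddress.ip_network("172.16.0.0/12")

# Split into two /13s (the first suggestion)...
halves = [str(n) for n in agg.subnets(new_prefix=13)]
print(halves)    # ['172.16.0.0/13', '172.24.0.0/13']

# ...and into four /14s (the "then try 4" step).
quarters = [str(n) for n in agg.subnets(new_prefix=14)]
print(quarters)  # ['172.16.0.0/14', '172.20.0.0/14', '172.24.0.0/14', '172.28.0.0/14']
```

So the two-way split in the suggestion is correct. On FreeBSD the routes would presumably be added along the lines of `route add -net 172.16.0.0/13 1.1.1.1` (or the equivalent static route entries in zebra).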
--
Cheers,
Evgeny
_______________________________________________
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"