On Feb 9, 2013, at 11:08 AM, Olivier Cochard-Labbé <oliv...@cochard.me> wrote:
> Regarding your sysctl.conf:
>
> Why "kern.ipc.nmbclusters = 512000", and not a smaller or bigger value?
> How did you choose this exact value?

By the way, I just checked two of my routers, and it seems I am reaching
the current BSDRP limit:

router1:
229742/4768/234510 mbufs in use (current/cache/total)
229320/3006/232326/262144 mbuf clusters in use (current/cache/total/max)
229320/3000 mbuf+clusters out of packet secondary zone in use (current/cache)
0/279/279/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
516096K/8320K/524416K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

router2:
229352/22678/252030 mbufs in use (current/cache/total)
229324/18988/248312/262144 mbuf clusters in use (current/cache/total/max)
229324/6321 mbuf+clusters out of packet secondary zone in use (current/cache)
0/0/0/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
515986K/43645K/559632K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

Both have 12 igb interfaces.

Also…

# netstat -Q
Configuration:
Setting                        Current        Limit
Thread count                         1            1
Default queue limit                256        10240
Dispatch policy                 direct          n/a
Threads bound to CPUs         disabled          n/a

Protocols:
Name   Proto QLimit Policy Dispatch Flags
ip         1    256   flow  default   ---
igmp       2    256 source  default   ---
rtsock     3   2048 source  default   ---
arp        7    256 source  default   ---
ether      9    256 source   direct   ---
ip6       10    256   flow  default   ---

Workstreams:
 WSID CPU   Name   Len WMark      Disp'd HDisp'd QDrops   Queued    Handled
    0   0   ip       0     3      443318       0      0  1302534    1745852
    0   0   igmp     0     0           4       0      0        0          4
    0   0   rtsock   0    10           0       0      0  4786852    4786852
    0   0   arp      0     0     6411282       0      0        0    6411282
    0   0   ether    0     0  2939522351       0      0        0 2939522351
    0   0   ip6      0     0      139165       0      0        0     139165

This is on a 4-core CPU, but because it has Hyper-Threading, igb binds a
queue to each of the 8 logical cores:

igb0: <Intel(R) PRO/1000 Network Connection version - 2.3.4> port 0x6020-0x603f mem 0xb2460000-0xb247ffff,0xb2440000-0xb245ffff,0xb2504000-0xb2507fff irq 37 at device 0.0 on pci13
igb0: Using MSIX interrupts with 9 vectors
igb0: Ethernet address: […]
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: Bound queue 2 to cpu 2
igb0: Bound queue 3 to cpu 3
igb0: Bound queue 4 to cpu 4
igb0: Bound queue 5 to cpu 5
igb0: Bound queue 6 to cpu 6
igb0: Bound queue 7 to cpu 7
001.000007 netmap_attach [1496] ok for igb0

To the original question: I believe that for a router it is better to use a
higher-frequency, lower-core-count CPU and lower-latency memory (bus). In
your example, the "new" CPU is actually slower per core, and its memory
bandwidth/latency may not be enough better to compensate. In this regard, I
am wondering whether Intel's SpeedStep might help here by forcing a higher
CPU frequency when you limit the number of loaded cores.
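For anyone else approaching the same ceiling, here is a rough sketch of how
to check and raise it (512000 is just the value from the quoted sysctl.conf,
not a tested recommendation, and the outputs shown are illustrative):

# sysctl kern.ipc.nmbclusters
kern.ipc.nmbclusters: 262144
# netstat -m | grep 'mbuf clusters in use'
229320/3006/232326/262144 mbuf clusters in use (current/cache/total/max)
# sysctl kern.ipc.nmbclusters=512000
kern.ipc.nmbclusters: 262144 -> 512000

To keep the new value across reboots, add kern.ipc.nmbclusters=512000 to
/etc/sysctl.conf (or set it in /boot/loader.conf on releases where it is
only a boot-time tunable).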
Daniel