On May 21, 2014, at 8:44 PM, Adam Thompson <[email protected]> wrote:

> On 14-05-21 08:27 PM, Joseph H wrote:
>> Hi Everyone,
>> 
>> I was having a debate with a new network engineer we have and we were 
>> discussing how pfSense performs and how it would handle 10G network 
>> connections, setup as a transparent firewall, using snort and a few other 
>> packages to help monitor and graph traffic.
>> 
>> I was saying that as long as it has plenty of CPU and Memory, plus Intel 
>> NIC's for the 10G then it would not have any problems doing transparent 
>> mode, and there would be no noticeable slowdown or sluggishness.
>> 
>> Does anyone have any statistics they would share or what size server to 
>> build, using Intel 10G nic cards?
>> 
>> Thanks in advance.
>> 
>> Joe
>> 
> 
> Jim just had this argument with Henning Brauer at BSDCan…

Were you in the room?  You should have said ‘hi’.

I wasn’t so much arguing with Henning as I was asserting that his statement 
(that OpenBSD, which is slower than FreeBSD (and thus pfSense), can forward at 
10Gbps rates) was… suspect.

> at those speeds, bandwidth doesn't really matter, packets-per-second matters.
> In most normal situations, pfSense can pass almost 10Gbit/sec of traffic.

As it stands today, on a fast box, pfSense will forward a bit more than 1Mpps.  
From there it’s easy math to get to 10Gbps: if you’re using 1500-byte frames, 
then presto, 10Gbps “throughput” without maxing things out.
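The arithmetic behind that “presto” is worth making explicit (a minimal sketch; the ~1Mpps figure is the forwarding rate quoted above, and the frame sizes are standard Ethernet):

```python
# Back-of-the-envelope: does ~1 Mpps at 1500-byte frames reach 10 Gbps?
FRAME_BYTES = 1500            # MTU-sized Ethernet payload per frame
PPS = 1_000_000               # ~1 Mpps, the forwarding rate quoted above

throughput_bps = PPS * FRAME_BYTES * 8
print(throughput_bps / 1e9)   # -> 12.0 (Gbps), comfortably past 10 Gbps

# The same forwarding rate with 64-byte (minimum-size) frames:
small_bps = PPS * 64 * 8
print(small_bps / 1e9)        # -> 0.512 (Gbps)
```

The second number is the point of the whole thread: at small frames, the same packet rate delivers a tiny fraction of the bandwidth, which is why pps, not Gbps, is the figure that matters.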

Good news, right?   Nope.   Not all the world is an FTP session.  So it’s 
actually an issue, and one that has never really been addressed.  So, we’re 
addressing it. 
pfSense needs to “grow up” from being mostly about people’s home networks into 
a real system that can stand up in the face of today’s high-packet-rate cloud 
environments.

The dev team behind pfSense spent a lot of time talking at BSDcan about where 
we want to go after pfSense 2.2 is released.

There are two main places we’re going to focus:
        - performance (because everyone enjoys doing performance work, and 
pfSense has some catching up to do)
        - manageability (basically this means that there will be an API for 
pfSense, so bolting it into our product (“pfCenter”), as well as various devops 
stacks (Puppet, Chef, Salt, Ansible, OpenStack, etc.), becomes possible)

As I type, to my immediate left, are a pair of Intel i5 NUCs running pkt-gen 
between themselves on a nearly dumb switch.  They’re passing 1.387 - 1.388 Mpps 
between them.
Put pfSense between them, and the throughput drops.  How much it drops depends 
on how much CPU can be thrown at it, and that’s a subject I’m not willing to 
delve into right now.
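For context on what those pkt-gen numbers mean, line rate on 10GbE can be computed from the on-wire frame size plus the 20 bytes of fixed per-frame overhead (8-byte preamble + 12-byte inter-frame gap) that standard Ethernet adds; a quick sketch:

```python
# Line-rate packets-per-second on a 10GbE link for a given frame size.
# Every frame costs 20 extra bytes on the wire: 8B preamble + 12B inter-frame gap.
LINK_BPS = 10_000_000_000

def line_rate_pps(frame_bytes: int) -> float:
    on_wire_bytes = frame_bytes + 20
    return LINK_BPS / (on_wire_bytes * 8)

print(round(line_rate_pps(64)))    # -> 14880952: ~14.88 Mpps at minimum-size frames
print(round(line_rate_pps(1518)))  # -> 812744: full-size (1518B incl. FCS) frames
```

So 1.387 Mpps between the NUCs is far short of the 14.88 Mpps worst case for 10GbE, but well above the ~813 kpps needed to saturate the link with full-size frames.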
Let’s just say “half” in the best scenario, and that a lot of my work @ home 
will be spent trying to make the APU (and systems like it) perform better.   

For larger systems, at work there are a set of about a dozen machines, with 
various Intel, Solarflare and Chelsio 10Gbps NICs installed, and a couple 
10Gbps switches, all in the “test rack”.  That is, none of this is in 
“production”.  There is a whole other set of hardware that constitutes the 
“production” network.  When fully-installed, both clusters of machines will be 
running at 10Gbps.


… Just so you know where we’re going ...

(As previously related, we have a pair of 10Gbps links between the office and 
the datacenter next door, so we’re prepared to dogfood the results.)

Jim

_______________________________________________
List mailing list
[email protected]
https://lists.pfsense.org/mailman/listinfo/list