
On 9/5/07 1:50 AM, Henning Brauer wrote:
> * Michael Gale <[EMAIL PROTECTED]> [2007-09-05 00:16]:
>> Hey,
>>
>>      It was suggested that we create an OpenBSD server with 9 GbE 
>>      interfaces to start. 7 will be used right off the bat.
>>
>> This would function as a core router bringing 7 GbE networks together on 
>> the inside of a main firewall. I suggested that maybe we would have some 
>> bandwidth issues with trying to push that much traffic through a single 
>> server.
> 
> you might have throughput issues, you might not. depends on the traffic 
> characteristics and hardware you choose.
> 
>> Can anyone comment on this? Would it not be better to use something 
>> like a Cisco layer 3 GbE switch?
> 
> sure it is better, assuming you call "I paid $100,000 for a $5 CPU that 
> falls over at 5000pps*" better.
> 
> *when the packets are just a tiny bit different from what cisco expects 
> and can handle in the fast path, they go to the main cpu, which is 
> incredibly slow on pretty much any cisco you can buy

Here you are referring to slow-path processing for packets with IP
options set. That's normal for all switches, not just Cisco's.

This also suggests 5000 pps is the expected performance, which is not
the case. Spending US$100k on a switch from Cisco, Foundry, or Force10
will get you fast-path processing in the tens of millions of pps or more
(which AFAIK even the studliest of server hardware doesn't do today) and
slow-path processing in the tens of thousands of pps or more.

OTOH I fully agree that lower end boxes (and even some higher ones such
as older Sup cards on Cat 65xxs) have relatively slow CPUs.

The key question is whether you have slow-path traffic to begin with.
This is a nonissue if you're not using IP options. Five minutes of
testing will tell if a switch is using its slow path.
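One way to run that five-minute test (a sketch only; the echo port, host, and helper name here are hypothetical, and the Record Route layout follows RFC 791): send UDP probes to an echo responder on the far side of the switch, first with no IP options and then with a Record Route option attached, and compare round-trip times. A large jump with the option set suggests those packets are being punted to the slow path.

```python
import socket
import time

def udp_rtt(host, port, ip_options=None, tries=5):
    """Median round-trip time of small UDP echo probes, in seconds.

    If ip_options is given, it is attached to outgoing packets via
    IP_OPTIONS so the switch must handle an options-bearing header.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)
    if ip_options:
        s.setsockopt(socket.IPPROTO_IP, socket.IP_OPTIONS, ip_options)
    rtts = []
    for _ in range(tries):
        t0 = time.monotonic()
        s.sendto(b"probe", (host, port))
        s.recvfrom(64)              # wait for the echo
        rtts.append(time.monotonic() - t0)
    s.close()
    return sorted(rtts)[len(rtts) // 2]

# Record Route option: type 7, length 39 (3 header bytes + 9 empty
# 4-byte address slots), pointer 4; the kernel pads it to 40 bytes.
RECORD_ROUTE = bytes([7, 39, 4]) + bytes(36)

# Hypothetical usage against an echo service beyond the switch:
#   plain = udp_rtt("10.0.7.1", 7)
#   slow  = udp_rtt("10.0.7.1", 7, ip_options=RECORD_ROUTE)
#   compare plain vs. slow; an order-of-magnitude gap means slow path.
```

The same comparison can be done with `ping` and `ping -R` if the target answers options-bearing ICMP, but a UDP probe lets you pick a port the switch actually forwards.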

dn
