Adrian Chadd wrote:
On Sun, Jul 20, 2008, Joel Jaeggli wrote:
Not saying that they couldn't benefit from it, however on one hand we
have a device with a 36Mbit CAM and on the other, one with 2GB of RAM; which
one fills up first?
Well, the actual data point you should look at is "a 160k-odd FIB from a
couple of years ago can fit in under 2 megabytes of memory."
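The arithmetic behind that 2MB figure is easy to reproduce. The sketch
below assumes a flat array of packed IPv4 entries (an illustrative layout
only, not whatever structure the original figure was based on), and it
ignores the lookup structure itself:

/* Rough sizing sketch; the entry layout is an assumption, not a
 * reference to any particular implementation. */
#include <stdint.h>
#include <stdio.h>

struct fib_entry {
    uint32_t prefix;      /* IPv4 prefix, host byte order              */
    uint8_t  plen;        /* prefix length, 0-32                       */
    uint8_t  pad;         /* alignment padding                         */
    uint16_t nexthop_idx; /* index into a (much smaller) nexthop table */
};                        /* 8 bytes per entry                         */

int main(void)
{
    size_t entries = 160000;
    printf("%zu entries -> %zu KB\n",
           entries, entries * sizeof(struct fib_entry) / 1024);
    /* prints ~1250 KB, well under 2 MB, before any trie overhead
       or compression is even considered */
    return 0;
}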
The random fetch time for dynamic RAM is pretty shocking compared to L2
cache access time, and you probably want to arrange your FIB to play well with
your cache.
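One illustrative way to do that (an assumed layout, not anyone's actual
implementation): a multibit trie whose nodes are sized to a single 64-byte
cache line, so each step of a lookup costs at most one memory fetch.

#include <stdint.h>

#define STRIDE   4                      /* bits consumed per node      */
#define FANOUT   (1u << STRIDE)         /* 16 children                 */
#define LEAF_BIT 0x80000000u            /* high bit marks a next hop   */

struct trie_node {
    uint32_t child[FANOUT];             /* 16 x 4 bytes = one cache line;
                                           pool indices, not pointers  */
} __attribute__((aligned(64)));

/* Walk the trie 4 bits at a time; a /24-heavy table touches at most
 * six nodes, i.e. six cache lines per lookup in the cold case.        */
static uint32_t lookup(const struct trie_node *pool, uint32_t addr)
{
    uint32_t idx = 0;                   /* root lives at pool[0]       */
    for (int shift = 32 - STRIDE; shift >= 0; shift -= STRIDE) {
        uint32_t next = pool[idx].child[(addr >> shift) & (FANOUT - 1)];
        if (next & LEAF_BIT)
            return next & ~LEAF_BIT;    /* next-hop index              */
        idx = next;
    }
    return 0;                           /* default route               */
}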
It's nice that the higher-end CPUs have megabytes and megabytes of L2 cache,
but placing a high-end Xeon on each of your interface processors is probably
asking a lot. So there's still room for optimising for sensibly-specced
hardware.
If you're putting it on a line card it's probably more like a RAZA XLR:
more memory bandwidth and less CPU relative to, say, the Intel arch
approach.
That said, I think you're heading to the high end again. It has been
routinely posited that FIB growth hurts the people on the edge more
than in the center. I don't buy that, for the reason outlined in my
original response: if my pps requirements are moderate, my software
router can carry a FIB of effectively arbitrary size at a lower cost
than carrying the same FIB in CAM.
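Rough numbers behind that tradeoff (the constants below are assumptions,
not measurements): a 36Mbit TCAM caps out at a fixed number of entries,
while a DRAM-resident FIB trades lookup rate for effectively unlimited
size.

#include <stdio.h>

int main(void)
{
    /* 36 Mbit TCAM at an assumed 72 bits/entry -> hard route ceiling */
    double tcam_bits = 36e6, bits_per_entry = 72;
    /* Worst-case DRAM-resident lookup: assume ~4 dependent fetches at
       ~80 ns each when the trie misses cache at every level.         */
    double fetches = 4, dram_ns = 80;

    printf("TCAM capacity  : %.0f routes (hard limit)\n",
           tcam_bits / bits_per_entry);
    printf("DRAM-bound rate: ~%.1f Mpps per core, FIB size limited "
           "only by RAM\n", 1e3 / (fetches * dram_ns));
    return 0;
}

With those assumptions you get roughly 500k routes out of the TCAM versus
about 3 Mpps per core out of DRAM, which is the "moderate pps, arbitrary
FIB size" regime described above.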
Of course, -my- applied CPU-cache clue comes from the act of parsing HTTP
requests/replies, not from building FIBs. I'm just going off the papers I've
read on the subject. :)
Adrian