Bruce M. Simpson wrote:
Claudio Jeker wrote:
Just because you believe that route caches are great doesn't mean it is
true. Show some real code and include benchmarks with various workloads
(e.g. a core router that is hit by a very large number of sessions).
It is a reasonable approach, for a uniprocessor design, to focus on
optimizing the route lookup as much as possible. Does this approach
scale to SMP, though? This is still very much an open question, and from
what I have seen of the OpenBSD implementation, it only addresses the
uniprocessor case - again, please correct me here if I have missed any
details.
There isn't much SMP routing going on. A particular interface can
only be served by one CPU. Other than that, the routing table lookup
is rather quick and contention seems to be low. It can use RW locks:
many reads against very few writes/changes.
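
A minimal sketch of that read-mostly pattern, in userspace C with POSIX
rwlocks for illustration (rt_lock, route_entry and rt_table_lookup() are
made-up names, not actual kernel code):

    #include <pthread.h>
    #include <netinet/in.h>

    struct route_entry;                         /* opaque, for the sketch */
    struct route_entry *rt_table_lookup(struct in_addr dst); /* radix walk */

    static pthread_rwlock_t rt_lock = PTHREAD_RWLOCK_INITIALIZER;

    struct route_entry *
    route_lookup(struct in_addr dst)
    {
            struct route_entry *re;

            pthread_rwlock_rdlock(&rt_lock);    /* many readers in parallel */
            re = rt_table_lookup(dst);
            pthread_rwlock_unlock(&rt_lock);
            return (re);
    }

    void
    route_change(void)
    {
            pthread_rwlock_wrlock(&rt_lock);    /* rare, exclusive writer */
            /* ... insert or delete a prefix ... */
            pthread_rwlock_unlock(&rt_lock);
    }

Readers never block each other; only a route change takes the lock
exclusively, which matches the many-reads/few-writes ratio above.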
I believe the Linux dst cache is strongly tied to the IBM-patented
Read-Copy-Update (RCU) algorithm, based on what I've read about their
LC-trie implementation.
LC-trie is a compiled trie, not a dynamic one. As long as the prefix
stays the same you can update the nexthop in place, but that's it.
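
To illustrate the "update the nexthop in place" point: a leaf of the
compiled trie can hold a pointer to its nexthop, readers load that
pointer without locks, and a writer publishes a new one with an atomic
swap - the core of the RCU pattern. A hedged sketch with C11 atomics;
all names are invented:

    #include <stdatomic.h>

    struct nexthop {
            int ifindex;        /* gateway, interface, ... */
    };

    struct trie_leaf {
            _Atomic(struct nexthop *) nh;   /* prefix itself never changes */
    };

    /* reader: lock-free, sees either the old or the new nexthop */
    struct nexthop *
    leaf_nexthop(struct trie_leaf *leaf)
    {
            return atomic_load_explicit(&leaf->nh, memory_order_acquire);
    }

    /* writer: swap in a new nexthop; the old one must not be freed
     * until all readers that might still hold it are done - the
     * "grace period" that RCU provides, omitted here */
    void
    leaf_set_nexthop(struct trie_leaf *leaf, struct nexthop *new_nh)
    {
            struct nexthop *old = atomic_exchange_explicit(&leaf->nh,
                new_nh, memory_order_acq_rel);
            (void)old;          /* defer the free past a grace period */
    }

Changing the prefix set itself still means recompiling the trie, which
is why this only covers nexthop changes.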
Until now, all caching solutions have resulted in very bad performance
on busy boxes. Remember ip_fastforward, or whatever it was called?
Another example is all the crappy L3 switches that burn down when the
CAM (cache) is flooded.
I assume you are referring to NetBSD's flow-based IP forwarding cache,
which was implemented without SMP in scope; spl-style interrupt
priority masking was still in use at that time.
The flow-based fastforward was horrible. The FreeBSD fastforward since
FreeBSD 5.3 is a real fast path and processes each packet to completion.
It is indeed quite a bit faster than normal forwarding.
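
For contrast, here is roughly what a flow-based cache looks like and why
it degrades under load: a small fixed-size table (like a CAM) fronts the
full lookup, and a flood of distinct destinations turns every packet
into a miss plus an eviction. A simplified, direct-mapped sketch; none
of these names come from any real implementation:

    #include <stdint.h>

    #define FLOW_CACHE_SIZE 4096        /* fixed, like a CAM: floodable */

    struct flow_entry {
            uint32_t dst;               /* IPv4 destination, host order */
            int      out_ifindex;       /* cached forwarding decision */
            int      valid;
    };

    static struct flow_entry flow_cache[FLOW_CACHE_SIZE];

    int slow_path_lookup(uint32_t dst); /* full route lookup */

    int
    forward_lookup(uint32_t dst)
    {
            struct flow_entry *fe = &flow_cache[dst % FLOW_CACHE_SIZE];

            if (fe->valid && fe->dst == dst)
                    return (fe->out_ifindex);   /* fast path: cache hit */

            /* Miss: take the slow path and evict. Under a flood of
             * distinct destinations every packet ends up here, which
             * is the failure mode described above. */
            fe->dst = dst;
            fe->out_ifindex = slow_path_lookup(dst);
            fe->valid = 1;
            return (fe->out_ifindex);
    }

Process-to-completion forwarding skips the table entirely and just makes
the full lookup fast, so there is no per-flow state to flood.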
It is established that saturating content-addressable memory is going to
lead to the slow path being taken; however, that's the trade-off one
makes with these designs.
IMO it is better to make the route lookup faster and forget about
caching.
My concern is that you may be comparing apples with oranges here.
In the case of SMP, locking does become a consideration, and caches, if
carefully implemented, are one way of addressing this.
On the other hand, CPU affinity has been proposed as a limited solution;
however, it depends on how this is implemented - affinity for lookups,
forwarding, or both?
Incoming traffic has per-interface affinity. The routing lookup
happens on the same CPU; no affinity there, just shared read access.
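
A userspace sketch of that model, assuming one worker thread per
interface pinned to a CPU (pthread_setaffinity_np() is a Linux/glibc
extension, used here only to illustrate the in-kernel idea; if_rx_loop()
is invented):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    static void *
    if_rx_loop(void *arg)
    {
            int cpu = *(int *)arg;
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            /* all of this interface's RX processing stays on one CPU */
            pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

            for (;;) {
                    /* receive packet, do the route lookup (a shared
                     * read, as in the rwlock sketch above), transmit -
                     * all on this CPU, no cross-CPU handoff */
            }
            return (NULL);
    }

The lookup itself carries no affinity: whichever CPU received the packet
performs the shared read.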
Perhaps there is something I am missing about how the OpenBSD
implementation deals with SMP, as I am not as familiar with their code
as FreeBSD's.
The OpenBSD kernel still has a big giant lock around pretty much
everything.
--
Andre