Robert Watson wrote:
On Sun, 30 Jul 2006, Sam Leffler wrote:
I have a fair amount of experience with the Linux model and it works
OK. The main complication I've seen is that things get more involved
when a driver needs to process multiple queues of packets. This comes
up in 802.11 drivers, where there are two queues: one for data frames
and one for management frames. With the current scheme you have two
separate queues, and the start method handles prioritization by
polling the management queue before the data queue. If instead the
packet is passed directly to the start method, then it needs to be
tagged in some way so that it's prioritized properly. Otherwise you
end up with multiple start methods, one per type of packet. I suspect
this will be OK, but the end result will be that we'll need to add a
priority field to mbufs (unless we pass it as an arg to the start
method).
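The two-queue prioritization described above can be sketched in user-space C; all names (`pktq`, `wlan_start_next`) are illustrative, not actual driver KPIs, and the ints stand in for queued mbufs:

```c
#include <stddef.h>

/* Minimal sketch of a driver start routine that services a
 * management-frame queue before the data queue. */

#define QLEN 8

struct pktq {
	int frames[QLEN];	/* stand-in for queued mbufs */
	int head, tail;
};

static int pktq_empty(const struct pktq *q) { return q->head == q->tail; }

static void pktq_enqueue(struct pktq *q, int frame)
{
	q->frames[q->tail] = frame;
	q->tail = (q->tail + 1) % QLEN;
}

static int pktq_dequeue(struct pktq *q)
{
	int f = q->frames[q->head];
	q->head = (q->head + 1) % QLEN;
	return f;
}

/* Return the next frame to transmit: management frames always win. */
int wlan_start_next(struct pktq *mgtq, struct pktq *dataq)
{
	if (!pktq_empty(mgtq))
		return pktq_dequeue(mgtq);
	if (!pktq_empty(dataq))
		return pktq_dequeue(dataq);
	return -1;		/* nothing queued */
}
```

The priority lives in the structure itself (which queue the frame sits on), so no per-packet tag is needed as long as the driver sees both queues.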
We have a priority tag in netgraph that we use to keep management
frames on time in the frame relay code; it seems to work OK.
All this is certainly doable, but I think just replacing one mechanism
with the other (as you specified) is insufficient.
Linux did a big analysis of what was needed back when they did most of
their networking, and their buffer scheme (last I looked) had all
sorts of fields for this and that. I wonder how it has held up over
time?
Hmm. This is something that I had overlooked. I was loosely aware
that the if_sl code made use of multiple queues, but was under the
impression that the classification to queues occurred purely in the
SLIP code. Indeed, it does, but structurally, SLIP is split over the
link layer (if_output) and driver layer (if_start), which I had
forgotten. I take it from your comments that 802.11 also does this,
which I was not aware of.
I'm a little uncomfortable with our current m_tag model, as it
requires significant numbers of additional allocations and frees for
each packet, as well as walking linked lists. It's fine for occasional
discretionary use (i.e., MAC labels), but I worry about cases where it
is used with every packet, and we start seeing moderately non-zero
numbers of tags on every packet. I think I would be more comfortable
with an explicit queue identifier argument to if_start, where the link
layer and driver layer agree on how to identify queues.
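The alternative suggested here can be sketched as follows; `if_start_q`, `if_output_classify`, and the `TXQ_*` names are hypothetical, purely to illustrate the link layer classifying once and handing the driver an explicit queue identifier instead of an m_tag:

```c
#include <stddef.h>

/* Hypothetical queue identifiers agreed on by link layer and driver. */
enum txq_id { TXQ_DATA = 0, TXQ_MGMT = 1 };

struct pkt { int len; };

static int last_queue_used = -1;	/* records where a packet landed */

/* Driver start method taking the queue id chosen by the link layer;
 * a real driver would enqueue to the corresponding hardware ring. */
void if_start_q(struct pkt *p, enum txq_id qid)
{
	(void)p;
	last_queue_used = qid;
}

/* Link-layer output path: classify once, then hand off the queue id
 * as a plain argument, with no tag allocation or list walk. */
void if_output_classify(struct pkt *p, int is_mgmt)
{
	if_start_q(p, is_mgmt ? TXQ_MGMT : TXQ_DATA);
}
```

The classification cost is paid once at if_output, and the driver never has to search per-packet metadata to find the priority.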
It would certainly be possible to (for example) have two tags
preallocated on each mbuf, or something similar, but it is hard to
know in advance what will be needed.
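One way to picture the preallocation idea is a fixed number of inline tag slots in the packet header, so common tags (e.g. a priority) need no separate allocation; everything here (`pkthdr`, `tag_set`, `tag_get`, two slots) is an illustrative assumption, and overflow would still have to fall back to an allocated m_tag chain (not shown):

```c
#define INLINE_TAGS 2

struct inline_tag {
	int type;		/* 0 = slot unused */
	int value;
};

struct pkthdr {
	struct inline_tag tags[INLINE_TAGS];
};

/* Store a tag in a free (or matching) inline slot;
 * return 0 on success, -1 when all slots are taken. */
int tag_set(struct pkthdr *h, int type, int value)
{
	for (int i = 0; i < INLINE_TAGS; i++) {
		if (h->tags[i].type == 0 || h->tags[i].type == type) {
			h->tags[i].type = type;
			h->tags[i].value = value;
			return 0;
		}
	}
	return -1;	/* would fall back to an m_tag allocation */
}

/* Look up a tag by type; return 0 and fill *value, or -1 if absent. */
int tag_get(const struct pkthdr *h, int type, int *value)
{
	for (int i = 0; i < INLINE_TAGS; i++) {
		if (h->tags[i].type == type) {
			*value = h->tags[i].value;
			return 0;
		}
	}
	return -1;
}
```

This keeps lookups to a tiny fixed scan rather than a linked-list walk, at the cost of guessing the slot count up front, which is exactly the "hard to know in advance" problem noted above.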
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"