I just read the first page of the paper so far, but it sounds like it is
heading in a good direction.
It would be interesting to apply this also to home access points/switches, especially
since they are now pushing 1 Gb/sec over the air.
I will put it on my "very interesting" stack.
On 10/10/14, 7:52 PM, dpr...@reed.com wrote:
The best approach to dealing with "locking overhead" is to stop thinking
that if locks are good, more locking (finer grained locking) is better.
OS designers (and Linux designers in particular) are still putting in
way too much locking.
I do know that. I would say that benchmarks rarely match the real-world problems of
real systems; they come from sources like academia and technical marketing
departments. My job for the last few years has been looking at systems with dozens of
processors across 2 and 4 sockets and multiple 10 GigE adapters.
I've been watching Linux kernel development for a long time, and they only add
finer-grained locks when benchmarks show that an existing lock is causing a
bottleneck. They don't just add them because they can.
They do also spend a lot of time working to avoid locks.
One thing that you are missing is that you are thinking ...
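(Aside, not part of the original mails: one common way the kernel avoids a shared
lock on hot paths is per-CPU data, e.g. this_cpu_inc() and the percpu_counter
helpers. Below is a minimal userspace sketch of the same idea, using one padded
counter slot per thread instead of per CPU; the thread count, padding size, and
names are made up for the illustration.)

/* Sketch: avoiding a shared lock by sharding a counter per thread.
 * Userspace analogy of the kernel's per-CPU counters; everything
 * here is illustrative, not kernel code. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    1000000

/* One counter per thread, padded to its own cache line so writers
 * never contend on the same line. */
struct slot {
    _Atomic long count;
    char pad[64 - sizeof(_Atomic long)];
};

static struct slot slots[NTHREADS];

static void *worker(void *arg)
{
    struct slot *s = arg;
    for (int i = 0; i < ITERS; i++)
        /* relaxed is enough: only this thread ever writes this slot */
        atomic_fetch_add_explicit(&s->count, 1, memory_order_relaxed);
    return NULL;
}

/* The rare reader pays the cost instead: it sums all the slots. */
static long total(void)
{
    long sum = 0;
    for (int i = 0; i < NTHREADS; i++)
        sum += atomic_load_explicit(&slots[i].count, memory_order_relaxed);
    return sum;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, &slots[i]);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    printf("total = %ld (expected %d)\n", total(), NTHREADS * ITERS);
    return 0;
}

The trade-off is deliberate: the hot write path touches only the writer's own
cache line and takes no lock, while the occasional reader does the extra work of
summing the shards.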
The best approach to dealing with "locking overhead" is to stop thinking that
if locks are good, more locking (finer grained locking) is better. OS
designers (and Linux designers in particular) are still putting in way too much
locking. I deal with this in my day job (we support systems with
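(Another aside for illustration: the "don't lock at all" alternative being argued
for often means structuring the data so that each side owns its half, as in a
single-producer/single-consumer ring; the kernel's kfifo works roughly this way
for one reader and one writer. The sketch below is a self-contained C11 version,
not kernel code, and the sizes and names are invented for the example.)

/* Sketch: a single-producer/single-consumer ring buffer that needs no lock.
 * Illustrative only; RING_SIZE must be a power of two. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 256

struct ring {
    void *buf[RING_SIZE];
    _Atomic unsigned head;  /* written only by the producer */
    _Atomic unsigned tail;  /* written only by the consumer */
};

/* Producer side: no lock; one release store publishes the new slot. */
static bool ring_push(struct ring *r, void *item)
{
    unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);

    if (head - tail == RING_SIZE)
        return false;                     /* full */
    r->buf[head % RING_SIZE] = item;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side: symmetric, also lock-free. */
static void *ring_pop(struct ring *r)
{
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);

    if (head == tail)
        return NULL;                      /* empty */
    void *item = r->buf[tail % RING_SIZE];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return item;
}

int main(void)
{
    static struct ring r;
    int v = 42;
    ring_push(&r, &v);
    printf("popped %d\n", *(int *)ring_pop(&r));
    return 0;
}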
I have some hope that the skb->xmit_more API could be used to make
aggregating packets in wifi on an AP saner. (my vision for it was that
the overlying qdisc would set xmit_more while it still had packets
queued up for a given station and then stop and switch to the next.
But the rest of the infrastructure
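(For concreteness, here is a toy userspace model of what the driver side of
xmit_more does, as I understand the API: the stack sets the hint on every packet
of a burst except the last, and the driver postpones the expensive doorbell/tail
register write until the hint is clear. All structs and functions below are
simplified stand-ins, not the real kernel API.)

/* Toy model of the xmit_more idea: the "driver" only rings the device
 * doorbell when the stack says no more packets are immediately queued. */
#include <stdbool.h>
#include <stdio.h>

struct fake_skb {
    int id;
    bool xmit_more;    /* stack's hint: more packets follow right behind */
};

static int ring_pending;   /* descriptors queued but not yet kicked */

static void ring_doorbell(void)
{
    /* In a real driver this would be an MMIO write telling the NIC to
     * fetch the new descriptors -- the expensive part worth batching. */
    printf("doorbell: NIC fetches %d descriptor(s)\n", ring_pending);
    ring_pending = 0;
}

static void fake_xmit(struct fake_skb *skb)
{
    printf("queue skb %d\n", skb->id);
    ring_pending++;
    /* Key point: skip the doorbell while the stack promises more packets. */
    if (!skb->xmit_more)
        ring_doorbell();
}

int main(void)
{
    /* A burst for one station: only the last packet rings the doorbell. */
    struct fake_skb burst[] = {
        { .id = 1, .xmit_more = true  },
        { .id = 2, .xmit_more = true  },
        { .id = 3, .xmit_more = false },
    };
    for (unsigned i = 0; i < sizeof(burst) / sizeof(burst[0]); i++)
        fake_xmit(&burst[i]);
    return 0;
}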