On Tue, Aug 11, 2015 at 06:13:49PM -0700, Alexei Starovoitov wrote:
> On Tue, Aug 11, 2015 at 06:23:35PM +0200, Phil Sutter wrote:
> >
> > I have an unfinished solution in the oven, but am being kept busy with
> > other things for now. The action plan is as follows:
> >
> > 1) Introduce an IFF_NO_QUEUE net_device->priv_flag.
> > 2) Have attach_default_qdiscs() and attach_one_default_qdisc() treat
> >    IFF_NO_QUEUE as an alternative to tx_queue_len == 0.
> > 3) Add a warning to register_netdevice() if tx_queue_len == 0.
> > 4) Change virtual NIC drivers to set IFF_NO_QUEUE and leave
> >    tx_queue_len alone.
> > 5) Eventually drop all special handling for tx_queue_len == 0.
> >
> > I am currently somewhere in 2) and need to implement 4) for veth as a
> > PoC to check whether 2) suffices in all situations we want. I am not
> > sure if 3) is desirable at all or if there are valid cases for a
> > literally zero TX queue length.
>
> Sounds like you want to change the default qdisc from pfifo_fast to
> noqueue for veth, right?
> In general, 'changing the default' may be an acceptable thing, but then
> it needs to be strongly justified. How much performance does it bring?
> Also, why introduce the flag? Why not just add 'tx_queue_len = 0;' to
> veth_setup() like a whole bunch of other devices do?
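To make 2) and 4) a bit more concrete, here is a rough, untested sketch of
the direction I have in mind. The body of attach_one_default_qdisc() below
is reproduced from memory, so the details may well differ in the end:

	/* 2) net/sched/sch_generic.c: treat IFF_NO_QUEUE like
	 *    tx_queue_len == 0 when picking the default qdisc for a TX
	 *    queue.
	 */
	static void attach_one_default_qdisc(struct net_device *dev,
					     struct netdev_queue *dev_queue,
					     void *_unused)
	{
		struct Qdisc *qdisc = &noqueue_qdisc;

		/* fall back to noqueue if the driver sets IFF_NO_QUEUE
		 * or, for compatibility, still zeroes tx_queue_len
		 */
		if (dev->tx_queue_len &&
		    !(dev->priv_flags & IFF_NO_QUEUE)) {
			qdisc = qdisc_create_dflt(dev_queue,
						  default_qdisc_ops,
						  TC_H_ROOT);
			if (!qdisc) {
				netdev_info(dev, "activation failed\n");
				return;
			}
			if (!netif_is_multiqueue(dev))
				qdisc->flags |= TCQ_F_ONETXQUEUE;
		}
		dev_queue->qdisc_sleeping = qdisc;
	}

	/* 4) drivers/net/veth.c: set the flag in veth_setup() instead of
	 *    zeroing tx_queue_len, the rest of the function stays as it
	 *    is.
	 */
	dev->priv_flags |= IFF_NO_QUEUE;

The point of the flag (as opposed to just zeroing tx_queue_len in
veth_setup()) is that drivers can ask for the noqueue default without
overloading tx_queue_len, which is what 5) then builds on to drop the
special-casing.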
As for the performance numbers: in a quick test on my local VM with veth
and netperf (netserver and the veth peer in different netns), I see a
throughput increase of about 5% when using noqueue instead of the default
pfifo_fast.

Cheers, Phil