Hi,

On Tue, 2018-04-17 at 17:07 -0400, Willem de Bruijn wrote:
> That said, for negotiated flows an inverse GRO feature could
> conceivably be implemented to reduce rx stack traversal, too.
> Though due to interleaving of packets on the wire, the aggregation
> would be best effort, similar to TCP TSO and GRO using the
> PSH bit as packetization signal.
Reviving this old thread before I forget again. I have some local patches implementing UDP GRO as the dual of the current GSO_UDP_L4 implementation: several datagrams with the same length are aggregated into a single one, and user space receives a single, larger packet instead of multiple ones. I hope QUIC can leverage such a scenario, but I really know nothing about the protocol.

I measure roughly a 50% performance improvement with udpgso_bench with respect to UDP GSO, ~100% with a pktgen sender, and reduced CPU usage on the receiver[1].

Some additional hacking on the generic GRO bits is required to avoid useless socket lookups for ingress UDP packets when UDP_GSO is not enabled.

If there is interest in this topic, I can share some RFC patches (hopefully sometime next week).

Cheers,

Paolo

[1] With udpgso_bench_tx, the bottleneck is again the sender, even with GSO enabled. With a pktgen sender, the bottleneck becomes the rx softirqd, and I see a lot of time consumed by retpolines in the GRO code. In both scenarios skb_release_data() becomes the topmost perf offender for the user space process.
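To make the intended user space side concrete, below is a minimal receive sketch. It is only a guess at the interface: it assumes a UDP_GRO socket option and a SOL_UDP cmsg carrying the original segment size, mirroring the existing UDP_SEGMENT tx interface. The UDP_GRO constant is a placeholder and is not part of the patches.

/* Hypothetical sketch only: assumes a UDP_GRO socket option and a
 * SOL_UDP/UDP_GRO cmsg with the segment size, mirroring UDP_SEGMENT
 * on the tx side. Constants are placeholders, not from the patches.
 */
#include <errno.h>
#include <netinet/in.h>
#include <netinet/udp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

#ifndef SOL_UDP
#define SOL_UDP 17
#endif
#ifndef UDP_GRO
#define UDP_GRO 104	/* placeholder value for this sketch */
#endif

static int recv_gro(int fd)
{
	char buf[65535];
	char ctrl[CMSG_SPACE(sizeof(int))];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = ctrl,
		.msg_controllen = sizeof(ctrl),
	};
	struct cmsghdr *cm;
	int gso_size = 0;
	ssize_t len;

	len = recvmsg(fd, &msg, 0);
	if (len < 0)
		return -errno;

	/* the aggregated datagram would carry the original segment size
	 * in a cmsg, so the application can split it back if needed
	 */
	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		if (cm->cmsg_level == SOL_UDP && cm->cmsg_type == UDP_GRO)
			memcpy(&gso_size, CMSG_DATA(cm), sizeof(gso_size));
	}

	printf("received %zd bytes, segment size %d\n", len, gso_size);
	return 0;
}

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(8000),
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int fd, one = 1;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0 ||
	    bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    setsockopt(fd, SOL_UDP, UDP_GRO, &one, sizeof(one)) < 0) {
		perror("setup");
		return 1;
	}
	return recv_gro(fd) < 0;
}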