On 01.12.2014 14:59, Zoltan Kiss wrote:
>
> On 01/12/14 13:36, David Vrabel wrote:
>> On 01/12/14 08:55, Stefan Bader wrote:
>>> On 11.08.2014 19:32, Zoltan Kiss wrote:
>>>> There is a long-known problem with the netfront/netback interface: if
>>>> the guest tries to send a packet which constitutes more than
>>>> MAX_SKB_FRAGS + 1 ring slots, it gets dropped. The reason is that
>>>> netback maps these slots to a frag in the frags array, which is
>>>> limited in size. Having so many slots can occur since compound pages
>>>> were introduced, as the ring protocol slices them up into individual
>>>> (non-compound) page aligned slots. The theoretical worst-case
>>>> scenario looks like this (note, skbs are limited to 64 KB here):
>>>> linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping a page
>>>> boundary, using 2 slots
>>>> first 15 frags: 1 + PAGE_SIZE + 1 bytes long, first and last bytes
>>>> at the end and the beginning of a page, therefore they use
>>>> 3 * 15 = 45 slots
>>>> last 2 frags: 1 + 1 bytes, overlapping a page boundary, 2 * 2 = 4 slots
>>>> Although I don't think this 51-slot skb can really happen, we need a
>>>> solution which can deal with every scenario. In real life there are
>>>> only a few slots overdue, but usually it causes the TCP stream to be
>>>> blocked, as the retry will most likely have the same buffer layout.
>>>> This patch solves this problem by linearizing the packet. This is not
>>>> the fastest way, and it can fail much more easily, as it tries to
>>>> allocate a big linear area for the whole packet, but it is probably
>>>> simpler by an order of magnitude than anything else. This code path
>>>> is probably not touched very frequently anyway.
>>>>
>>>> Signed-off-by: Zoltan Kiss <zoltan.k...@citrix.com>
>>>> Cc: Wei Liu <wei.l...@citrix.com>
>>>> Cc: Ian Campbell <ian.campb...@citrix.com>
>>>> Cc: Paul Durrant <paul.durr...@citrix.com>
>>>> Cc: net...@vger.kernel.org
>>>> Cc: linux-ker...@vger.kernel.org
>>>> Cc: xen-de...@lists.xenproject.org
>>>
>>> This does not seem to be marked explicitly as stable. Has someone
>>> already asked David Miller to put it on his stable queue? IMO it
>>> qualifies quite well, and the actual change should be simple to
>>> pick/backport.
>>
>> I think it's a candidate, yes.
>>
>> Can you expand on the user-visible impact of the bug this patch fixes?
>> I think it results in certain types of traffic not working (because the
>> domU always generates skbs with the problematic frag layout), but I
>> can't remember the details.
>
> Yes, this line in the comment talks about it: "In real life there are
> only a few slots overdue, but usually it causes the TCP stream to be
> blocked, as the retry will most likely have the same buffer layout."
> Maybe we can add what kind of traffic has triggered this so far; AFAIK
> NFS was one of them, and Stefan had another use case. But my memory of
> this is blurry.
We had a report about a web-app hitting packet losses. I suspect that one
was also streaming something. As an easy trigger, we found that
redis-benchmark (part of the redis keyserver) with a larger (IIRC 1 kB)
payload would cause the fragmentation to exceed the page limit. Though I
think it did not fail outright but showed a performance drop instead (from
memory, which also suffers from losing detail).

-Stefan

>
> Zoli
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel