On Fri, Jan 8, 2021 at 12:11 PM Willem de Bruijn
<willemdebruijn.ker...@gmail.com> wrote:
>
> From: Willem de Bruijn <will...@google.com>
>
> skb_seq_read iterates over an skb, returning pointer and length of
> the next data range with each call.
>
> It relies on kmap_atomic to access highmem pages when needed.
>
> An skb frag may be backed by a compound page, but kmap_atomic maps
> only a single page. There are not enough kmap slots to always map all
> pages concurrently.
>
> Instead, if kmap_atomic is needed, iterate over each page.
>
> As this increases the number of calls, avoid this unless needed.
> The necessary condition is captured in skb_frag_must_loop.
>
> I tried to make the change as obvious as possible. It should be easy
> to verify that nothing changes if skb_frag_must_loop returns false.
>
> Tested:
>   On an x86 platform with
>     CONFIG_HIGHMEM=y
>     CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y
>     CONFIG_NETFILTER_XT_MATCH_STRING=y
>
>   Run
>     ip link set dev lo mtu 1500
>     iptables -A OUTPUT -m string --string 'badstring' --algo bm -j ACCEPT
>     dd if=/dev/urandom of=in bs=1M count=20
>     nc -l -p 8000 > /dev/null &
>     nc -w 1 -q 0 localhost 8000 < in
>
> Signed-off-by: Willem de Bruijn <will...@google.com>
> ---
>  include/linux/skbuff.h |  1 +
>  net/core/skbuff.c      | 28 +++++++++++++++++++++++-----
>  2 files changed, 24 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index c858adfb5a82..68ffd3f115c1 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -1203,6 +1203,7 @@ struct skb_seq_state {
>         struct sk_buff  *root_skb;
>         struct sk_buff  *cur_skb;
>         __u8            *frag_data;
> +       __u16           frag_off;
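For anyone reading along without the tree handy: the gate for the new
per-page loop is skb_frag_must_loop() in include/linux/skbuff.h. From
memory it is roughly the following (a sketch; the exact guard, in
particular the kmap-local debug knob, may differ in your tree):

	/* kmap_atomic maps a single page and has a limited number of
	 * slots, so a highmem compound page cannot be mapped whole and
	 * must be walked page by page. CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
	 * forces the loop even for lowmem pages to exercise this path.
	 */
	static inline bool skb_frag_must_loop(struct page *p)
	{
	#if defined(CONFIG_HIGHMEM)
		if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) || PageHighMem(p))
			return true;
	#endif
		return false;
	}

Without CONFIG_HIGHMEM this is constant false and the compiler can
elide the loop entirely, which is why it is easy to verify that lowmem
behavior is unchanged.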
frags can exceed 64k, so frag_off needs to be __u32, like the other
offsets in skb_seq_state. I'll have to send a v2.

There is also something to be said for having a

	BUILD_BUG_ON(sizeof(struct skb_seq_state) > sizeof(skb->cb));

as the struct is getting close to that size. But I won't do that in
this stable fix.
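Concretely, the v2 delta over this patch would just be the type change
in the hunk quoted above, i.e. (a sketch of the intent, not the actual
v2):

	-       __u16           frag_off;
	+       __u32           frag_off;

If someone does pick up the assertion later, one possible spot (my
suggestion only, nothing settled) is skb_prepare_seq_read(), which has
both the skb and the state in scope:

	/* skb->cb is 48 bytes; catch skb_seq_state outgrowing it at
	 * build time, before a caller that stashes the state in a
	 * same-sized cb area overflows it.
	 */
	BUILD_BUG_ON(sizeof(struct skb_seq_state) > sizeof(skb->cb));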