On 1/7/21 1:39 AM, Dongseok Yi wrote:
skbs in the fraglist could be shared by a BPF filter loaded at TC. This
triggers skb_ensure_writable -> pskb_expand_head ->
skb_clone_fraglist -> skb_get on each skb in the fraglist.
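For context, skb_clone_fraglist() just takes a reference on each skb
hanging off the frag_list; roughly, from net/core/skbuff.c (quoted from
memory, may differ slightly by tree):

        static void skb_clone_fraglist(struct sk_buff *skb)
        {
                struct sk_buff *list;

                skb_walk_frags(skb, list)
                        skb_get(list);
        }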

While tcpdump is running, the sk_receive_queue of PF_PACKET holds the
original fraglist. But the same fraglist is queued to PF_INET (or PF_INET6)
as the fraglist chain made by skb_segment_list.

If the new skb (not the fraglist one) is queued to one of the
sk_receive_queues, multiple ptypes can see it. The skb could then be
released by any of those ptypes, which causes the use-after-free.
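(For reference, "shared" here means skb->users is above one; skb_shared()
in include/linux/skbuff.h is roughly:

        static inline int skb_shared(const struct sk_buff *skb)
        {
                return refcount_read(&skb->users) > 1;
        }

which is what the skb_shared() check in the patch below keys off.)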

[ 4443.426215] ------------[ cut here ]------------
[ 4443.426222] refcount_t: underflow; use-after-free.
[ 4443.426291] WARNING: CPU: 7 PID: 28161 at lib/refcount.c:190
refcount_dec_and_test_checked+0xa4/0xc8
[ 4443.426726] pstate: 60400005 (nZCv daif +PAN -UAO)
[ 4443.426732] pc : refcount_dec_and_test_checked+0xa4/0xc8
[ 4443.426737] lr : refcount_dec_and_test_checked+0xa0/0xc8
[ 4443.426808] Call trace:
[ 4443.426813]  refcount_dec_and_test_checked+0xa4/0xc8
[ 4443.426823]  skb_release_data+0x144/0x264
[ 4443.426828]  kfree_skb+0x58/0xc4
[ 4443.426832]  skb_queue_purge+0x64/0x9c
[ 4443.426844]  packet_set_ring+0x5f0/0x820
[ 4443.426849]  packet_setsockopt+0x5a4/0xcd0
[ 4443.426853]  __sys_setsockopt+0x188/0x278
[ 4443.426858]  __arm64_sys_setsockopt+0x28/0x38
[ 4443.426869]  el0_svc_common+0xf0/0x1d0
[ 4443.426873]  el0_svc_handler+0x74/0x98
[ 4443.426880]  el0_svc+0x8/0xc

Fixes: 3a1296a38d0c ("net: Support GRO/GSO fraglist chaining.")
Signed-off-by: Dongseok Yi <dseok...@samsung.com>
Acked-by: Willem de Bruijn <will...@google.com>
---
  net/core/skbuff.c | 20 +++++++++++++++++++-
  1 file changed, 19 insertions(+), 1 deletion(-)

v2: Expand the commit message to clarify the case of a BPF filter loaded at TC

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index f62cae3..1dcbda8 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3655,7 +3655,8 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
        unsigned int delta_truesize = 0;
        unsigned int delta_len = 0;
        struct sk_buff *tail = NULL;
-       struct sk_buff *nskb;
+       struct sk_buff *nskb, *tmp;
+       int err;
        skb_push(skb, -skb_network_offset(skb) + offset);

@@ -3665,11 +3666,28 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
                nskb = list_skb;
                list_skb = list_skb->next;
+               err = 0;
+               if (skb_shared(nskb)) {
+                       tmp = skb_clone(nskb, GFP_ATOMIC);
+                       if (tmp) {
+                               kfree_skb(nskb);

Should use consume_skb() here to not trigger the skb:kfree_skb tracepoint
when looking for drops in the stack.
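I.e. roughly the following instead (sketch of just the swap, untested):

+                       if (tmp) {
+                               consume_skb(nskb);
+                               nskb = tmp;

consume_skb() still drops the reference, but fires the skb:consume_skb
tracepoint instead of skb:kfree_skb, so it won't be counted as a drop.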

+                               nskb = tmp;
+                               err = skb_unclone(nskb, GFP_ATOMIC);

Could you elaborate why you also need to unclone? This looks odd here. The
tc layer (independent of BPF) from the ingress & egress side generally
assumes an unshared skb, so the above clone + dropping the ref of nskb
looks okay to make the main skb struct private for mangling attributes
(e.g. mark) & should suffice. What is the exact purpose of the additional
skb_unclone() in this context?
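(For reference, skb_unclone() is roughly the following, from
include/linux/skbuff.h, quoted from memory:

        static inline int skb_unclone(struct sk_buff *skb, gfp_t pri)
        {
                might_sleep_if(gfpflags_allow_blocking(pri));

                if (skb_cloned(skb))
                        return pskb_expand_head(skb, 0, 0, pri);

                return 0;
        }

i.e. it only copies the head if skb_cloned() reports the data is still
shared with another clone.)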

+                       } else {
+                               err = -ENOMEM;
+                       }
+               }
+
                if (!tail)
                        skb->next = nskb;
                else
                        tail->next = nskb;
+               if (unlikely(err)) {
+                       nskb->next = list_skb;
+                       goto err_linearize;
+               }
+
                tail = nskb;
                delta_len += nskb->len;

