Hi Willem,

On Tue, 2 Feb 2021 at 23:45, Willem de Bruijn
<willemdebruijn.ker...@gmail.com> wrote:
>
> On Tue, Feb 2, 2021 at 11:08 AM Loic Poulain <loic.poul...@linaro.org> wrote:
> >
> > When the device side MTU is larger than the host side MTU, the packets
> > (typically rmnet packets) are split over multiple MHI transfers.
> > In that case, fragments must be re-aggregated to recover the packet
> > before forwarding to the upper layer.
> >
> > A fragmented packet results in -EOVERFLOW MHI transaction status for
> > each of its fragments, except the final one. Such a transfer was
> > previously considered an error and the fragments were simply dropped.

[...]

> > +static struct sk_buff *mhi_net_skb_agg(struct mhi_net_dev *mhi_netdev,
> > +                                       struct sk_buff *skb)
> > +{
> > +       struct sk_buff *head = mhi_netdev->skbagg_head;
> > +       struct sk_buff *tail = mhi_netdev->skbagg_tail;
> > +
> > +       /* This is non-paged skb chaining using frag_list */
> > +
>
> no need for empty line?
>
> > +       if (!head) {
> > +               mhi_netdev->skbagg_head = skb;
> > +               return skb;
> > +       }
> > +
> > +       if (!skb_shinfo(head)->frag_list)
> > +               skb_shinfo(head)->frag_list = skb;
> > +       else
> > +               tail->next = skb;
> > +
> > +       /* data_len is normally the size of paged data, in our case there is no
>
> data_len is defined as the data excluding the linear len (ref:
> skb_headlen). That is not just paged data, but includes frag_list.
Ok, thanks for clarifying this, I'll remove the comment since it's then a
valid usage.

[...]

> >  static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
> >                                  struct mhi_result *mhi_res)
> >  {
> > @@ -142,19 +175,42 @@ static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
> >         free_desc_count = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
> >
> >         if (unlikely(mhi_res->transaction_status)) {
> > -               dev_kfree_skb_any(skb);
> > -
> > -               /* MHI layer stopping/resetting the DL channel */
> > -               if (mhi_res->transaction_status == -ENOTCONN)
> > +               switch (mhi_res->transaction_status) {
> > +               case -EOVERFLOW:
> > +                       /* Packet can not fit in one MHI buffer and has been
> > +                        * split over multiple MHI transfers, do re-aggregation.
> > +                        * That usually means the device side MTU is larger than
> > +                        * the host side MTU/MRU. Since this is not optimal,
> > +                        * print a warning (once).
> > +                        */
> > +                       netdev_warn_once(mhi_netdev->ndev,
> > +                                        "Fragmented packets received, fix MTU?\n");
> > +                       skb_put(skb, mhi_res->bytes_xferd);
> > +                       mhi_net_skb_agg(mhi_netdev, skb);
> > +                       break;
> > +               case -ENOTCONN:
> > +                       /* MHI layer stopping/resetting the DL channel */
> > +                       dev_kfree_skb_any(skb);
> >                         return;
> > -
> > -               u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
> > -               u64_stats_inc(&mhi_netdev->stats.rx_errors);
> > -               u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
> > +               default:
> > +                       /* Unknown error, simply drop */
> > +                       dev_kfree_skb_any(skb);
> > +                       u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
> > +                       u64_stats_inc(&mhi_netdev->stats.rx_errors);
> > +                       u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
> > +               }
> >         } else {
> > +               skb_put(skb, mhi_res->bytes_xferd);
> > +
> > +               if (mhi_netdev->skbagg_head) {
> > +                       /* Aggregate the final fragment */
> > +                       skb = mhi_net_skb_agg(mhi_netdev, skb);
> > +                       mhi_netdev->skbagg_head = NULL;
> > +               }
> > +
> >                 u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
> >                 u64_stats_inc(&mhi_netdev->stats.rx_packets);
> > -               u64_stats_add(&mhi_netdev->stats.rx_bytes, mhi_res->bytes_xferd);
> > +               u64_stats_add(&mhi_netdev->stats.rx_bytes, skb->len);
>
> might this change stats? it will if skb->len != 0 before skb_put. Even
> if so, perhaps it doesn't matter.

I don't get that point: skb is the received MHI buffer, and we simply set
its size here because the MHI core doesn't (skb->len is always 0 before
the skb_put). Then, if it is part of a fragmented transfer, we just do
the extra 'skb = mhi_net_skb_agg(..., skb)', so skb->len should always
be right at this point, whether it's a standalone/linear packet or a
multi-fragment packet.

Regards,
Loic