On Thu, Oct 14, 2021 at 07:12:29AM +0000, Xia, Chenbo wrote:
> > -----Original Message-----
> > From: Ivan Malov <ivan.ma...@oktetlabs.ru>
> > Sent: Friday, September 17, 2021 2:50 AM
> > To: dev@dpdk.org
> > Cc: Maxime Coquelin <maxime.coque...@redhat.com>; sta...@dpdk.org;
> > Andrew Rybchenko <andrew.rybche...@oktetlabs.ru>;
> > Xia, Chenbo <chenbo....@intel.com>;
> > Yuanhan Liu <yuanhan....@linux.intel.com>;
> > Olivier Matz <olivier.m...@6wind.com>
> > Subject: [PATCH v2] net/virtio: handle Tx checksums correctly for tunnel
> > packets
> >
> > Tx prepare method calls rte_net_intel_cksum_prepare(), which
> > handles tunnel packets correctly, but Tx burst path does not
> > take tunnel presence into account when computing the offsets.
> >
> > Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
> > Cc: sta...@dpdk.org
> >
> > Signed-off-by: Ivan Malov <ivan.ma...@oktetlabs.ru>
> > Reviewed-by: Andrew Rybchenko <andrew.rybche...@oktetlabs.ru>
> > ---
> >  drivers/net/virtio/virtqueue.h | 9 ++++++---
> >  1 file changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
> > index 03957b2bd0..b83ff32efb 100644
> > --- a/drivers/net/virtio/virtqueue.h
> > +++ b/drivers/net/virtio/virtqueue.h
> > @@ -620,19 +620,21 @@ static inline void
> >  virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
> >  {
> >  	uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
> > +	uint16_t o_l23_len = (cookie->ol_flags & PKT_TX_TUNNEL_MASK) ?
> > +			     cookie->outer_l2_len + cookie->outer_l3_len : 0;
> >
> >  	if (cookie->ol_flags & PKT_TX_TCP_SEG)
> >  		csum_l4 |= PKT_TX_TCP_CKSUM;
> >
> >  	switch (csum_l4) {
> >  	case PKT_TX_UDP_CKSUM:
> > -		hdr->csum_start = cookie->l2_len + cookie->l3_len;
> > +		hdr->csum_start = o_l23_len + cookie->l2_len + cookie->l3_len;
> >  		hdr->csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum);
> >  		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
> >  		break;
> >
> >  	case PKT_TX_TCP_CKSUM:
> > -		hdr->csum_start = cookie->l2_len + cookie->l3_len;
> > +		hdr->csum_start = o_l23_len + cookie->l2_len + cookie->l3_len;
> >  		hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
> >  		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
> >  		break;
> >
> > @@ -650,7 +652,8 @@ virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
> >  			VIRTIO_NET_HDR_GSO_TCPV6 :
> >  			VIRTIO_NET_HDR_GSO_TCPV4;
> >  		hdr->gso_size = cookie->tso_segsz;
> > -		hdr->hdr_len = cookie->l2_len + cookie->l3_len + cookie->l4_len;
> > +		hdr->hdr_len = o_l23_len + cookie->l2_len + cookie->l3_len +
> > +			       cookie->l4_len;
> >  	} else {
> >  		ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
> >  		ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
> > --
> > 2.20.1
>
> Reviewed-by: Chenbo Xia <chenbo....@intel.com>
>
I have one comment: from the application's perspective, it has to take
into account that the driver does not support outer tunnel offload
(this matches the advertised capabilities). For instance, in the case
of a VXLAN tunnel, if the outer checksum needs to be calculated, it
has to be done by the application. In short, the application can ask
to offload the inner part only when no offload is required on the
outer part.

Also, since grep "PKT_TX_TUNNEL" in drivers/net/ixgbe gives nothing,
it seems the ixgbe driver does not support the same offload request
as described in this patch:

  (m->ol_flags & PKT_TX_TUNNEL_MASK) == PKT_TX_TUNNEL_XXXXX
  m->outer_l2_len = outer l2 length
  m->outer_l3_len = outer l3 length
  m->l2_len = outer l4 length + tunnel len + inner l2 len
  m->l3_len = inner l3 len
  m->l4_len = inner l4 len

An alternative way of requesting the same thing (that would work with
ixgbe and the current virtio) is to give:

  (m->ol_flags & PKT_TX_TUNNEL_MASK) == 0
  m->l2_len = outer lengths + tunnel len + inner l2 len
  m->l3_len = inner l3 len
  m->l4_len = inner l4 len

I think a capability may be missing to differentiate which drivers
support which mode. Or all drivers could be fixed to support both
modes (which would make this patch valid).

Thanks,
Olivier