Hello Jerome,

You can disable checksum offload on a veth pair in Linux:
sudo ethtool -K veth0 tx off
sudo ethtool -K veth0 rx off

However, this will not resolve the underlying issue if an interface used in the future has offloads enabled. You need to compute the checksums in your custom node before encapsulating the packet in the IPv6 header. The vxlan encap node has an example of this today.

Best Regards,
Mohsin

From: <vpp-dev@lists.fd.io> on behalf of "jerome.bay...@student.uliege.be" <jerome.bay...@student.uliege.be>
Date: Tuesday, May 25, 2021 at 5:31 PM
To: Ole Troan <otr...@employees.org>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>, Justin Iurman <justin.iur...@uliege.be>
Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

Hello Ole,

I implemented the solution you suggested (i.e. chaining the buffers) and it seems to work correctly now, so thank you!

However, I ran into another issue: when some TCP or UDP packets arrive in VPP, the latter seems to set their checksum to zero and also sets the "offload" flag of the associated buffer. In the last VPP nodes the packet traverses, the checksum is recomputed just before the packet is forwarded, and everything is fine.

Firstly, I don't really understand why it does that. I create a veth interface on my Ubuntu machine and then link this interface to VPP using a "host-interface". Maybe I need to configure something on the interfaces to disable this behavior?

Secondly, in a "normal" case, as I said above, VPP is able to recompute the checksum at the end of the graph and nothing bad happens. The problem is that, in my case, I need to create a buffer chain, and when I do so, VPP is not able to recompute the checksums (probably because some information from the buffer metadata it usually uses is invalidated by the buffer chain?).
Thanks again for your help,

Jérôme

________________________________
From: "jerome bayaux" <jerome.bay...@student.uliege.be>
To: "Ole Troan" <otr...@employees.org>
Cc: vpp-dev@lists.fd.io, "Neale Ranns" <ne...@graphiant.com>, "Justin Iurman" <justin.iur...@uliege.be>
Sent: Friday, 21 May 2021 18:20:31
Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

Changing the PRE_DATA_SIZE value in src/vlib/CMakeLists.txt does not appear to be that easy. Indeed, it seems to require several other changes, such as the value of DPDK_RTE_PKTMBUF_HEADROOM in src/plugins/dpdk/CMakeLists.txt, and a static assert fails, saying: "save_rewrite_length member must be able to hold the max value of rewrite length". Thus, the best solution is probably the one given by Ole?

Could you help (guide) me a little by pointing me to files of interest or by redirecting me towards some examples, if any exist? For instance, I'm not sure which functions I should use to create a new buffer and then chain it to the "main" one.

Jérôme

________________________________
From: "Ole Troan" <otr...@employees.org>
To: "jerome bayaux" <jerome.bay...@student.uliege.be>
Cc: vpp-dev@lists.fd.io, "Neale Ranns" <ne...@graphiant.com>, "Justin Iurman" <justin.iur...@uliege.be>
Sent: Friday, 21 May 2021 17:21:32
Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

On 21 May 2021, at 17:15, Neale Ranns <ne...@graphiant.com> wrote:

Right, there's only so much space available. You'll need to recompile VPP to get more space. Change the PRE_DATA_SIZE value in src/vlib/CMakeLists.txt.

Alternatively, use a new buffer for the new IPv6 header and extension header chain, and chain the buffers together.

You might want to look at the ioam plugin too, btw.
Cheers,
Ole

/neale

From: jerome.bay...@student.uliege.be <jerome.bay...@student.uliege.be>
Date: Friday, 21 May 2021 at 17:06
To: Neale Ranns <ne...@graphiant.com>
Cc: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io>, Justin Iurman <justin.iur...@uliege.be>
Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

I've just run a few tests to be sure: it's exactly that! As long as the extension header is smaller than or exactly equal to 128 bytes, everything is fine. Once it gets bigger than 128 bytes, things start to go wrong.

Jérôme

________________________________
From: "Neale Ranns" <ne...@graphiant.com>
To: "jerome bayaux" <jerome.bay...@student.uliege.be>
Cc: vpp-dev@lists.fd.io, "Justin Iurman" <justin.iur...@uliege.be>
Sent: Friday, 21 May 2021 16:38:02
Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

Does it all start to go wrong when the extension header gets to about 128 bytes?

/neale

From: jerome.bay...@student.uliege.be <jerome.bay...@student.uliege.be>
Date: Friday, 21 May 2021 at 16:04
To: Neale Ranns <ne...@graphiant.com>
Cc: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io>, Justin Iurman <justin.iur...@uliege.be>
Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

Hi again Neale,

Here are some additional observations that could be useful in helping me:

1) The error only shows up when the Hop-by-Hop extension header I add is big enough (I can give you a more accurate definition of "enough" if you need). When it is quite small, everything seems fine.

2) The faulty MAC address seems to follow a "pattern": it is always of the form "X:00:00:00:e3:6e", where byte X is a number that increases for the following packets. Moreover, the bytes "e3:6e" (i.e. the last 16 bits of the MAC address) are correct and correspond to the last 16 bits of the expected, and thus correct, destination MAC address.
Thank you for the help,

Jérôme

________________________________
From: "jerome bayaux" <jerome.bay...@student.uliege.be>
To: "Neale Ranns" <ne...@graphiant.com>
Cc: vpp-dev@lists.fd.io, "Justin Iurman" <justin.iur...@uliege.be>
Sent: Friday, 21 May 2021 14:36:43
Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

Hi Neale,

Here is a trace of a simple ping packet entering into VPP (let me know if you need more information about the topology I used):

Packet 8

00:00:38:194824: af-packet-input
  af_packet: hw_if_index 1 next-index 4
    tpacket2_hdr:
      status 0x20000001 len 118 snaplen 118 mac 66 net 80
      sec 0x60a7a2bd nsec 0x35392422 vlan 0 vlan_tpid 0
00:00:38:194826: ethernet-input
  IP6: 8a:f6:cc:53:06:db -> 02:fe:6b:c4:db:06
00:00:38:194848: ip6-input
  ICMP6: db00::2 -> db03::2
    tos 0x00, flow label 0xaaa8d, hop limit 64, payload length 64
  ICMP echo_request checksum 0x26b9
00:00:38:194871: ip6-inacl
  INACL: sw_if_index 1, next_index 1, table 0, offset 1216
00:00:38:194916: ip6-add-hop-by-hop
  IP6_ADD_HOP_BY_HOP: next index 2
00:00:38:194955: ip6-lookup
  fib 0 dpo-idx 19 flow hash: 0x00000000
  IP6_HOP_BY_HOP_OPTIONS: db01::1 -> db02::2
    tos 0x00, flow label 0xaaa8d, hop limit 64, payload length 256
00:00:38:194993: ip6-load-balance
  fib 0 dpo-idx 6 flow hash: 0x00000000
  IP6_HOP_BY_HOP_OPTIONS: db01::1 -> db02::2
    tos 0x00, flow label 0xaaa8d, hop limit 64, payload length 256
00:00:38:195032: ip6-hop-by-hop
  IP6_HOP_BY_HOP: next index 5 len 152 traced 152
  namespace id 1, trace type 0xf0f000, 2 elts left, 44 bytes per node
  [0], ttl: 0x0, node id short: 0x0, ingress sw: 0, egress sw: 0, timestamp (s): 0x0, timestamp (sub-sec): 0x0, ttl: 0x0, node id wide: 0x0, ingress hw: 0, egress hw: 0, appdata wide: 0x0, buffers available: 0
  [1], ttl: 0x0, node id short: 0x0, ingress sw: 0, egress sw: 0, timestamp (s): 0x0, timestamp (sub-sec): 0x0, ttl: 0x0, node id wide: 0x0, ingress hw: 0, egress hw: 0, appdata wide: 0x0, buffers available: 0
  [2], ttl: 0x40, node id short: 0x1, ingress sw: 1, egress
sw: 2, timestamp (s): 0x60a7a2bd, timestamp (sub-sec): 0x60a7a2bd, ttl: 0x40, node id wide: 0x1, ingress hw: 1, egress hw: 2, appdata wide: 0x3, buffers available: 16288
  unrecognized option 172 length 240

Packet 9

00:00:38:195078: handoff_trace
  HANDED-OFF: from thread 225 trace index 10354178
00:00:38:195078: ip6-rewrite
  tx_sw_if_index 2 adj-idx 6 : ipv6 via db01::2 memif1/0: mtu:9000 next:4 flags:[] 02fe9de1e36e02fe666cf11e86dd flow hash: 0x00000000
  00000000: 08000000e36e02fe666cf11e86dd600aaa8d0100003fdb010000000000000000
  00000020: 000000000001db02000000000000000000000000000229120000318e00010001
  00000040: 5802f0f000000000000000000000000000000000000000000000000000000000
  00000060: 00000000000000000000000000000000000000000000000000000000
00:00:38:195130: memif1/0-output
  memif1/0
  IP6: 02:fe:66:6c:f1:1e -> 08:00:00:00:e3:6e
  IP6_HOP_BY_HOP_OPTIONS: db01::1 -> db02::2
    tos 0x00, flow label 0xaaa8d, hop limit 63, payload length 256

I've just noticed that the packet is shown as two packets in the vpp trace output, which seems weird; maybe it's a first clue about my issue. As you can see in the last few lines, the destination MAC address is set to "08:00:00:00:e3:6e", which is not the expected value in this case.

Jérôme

________________________________
From: "Neale Ranns" <ne...@graphiant.com>
To: "jerome bayaux" <jerome.bay...@student.uliege.be>, vpp-dev@lists.fd.io
Cc: "Justin Iurman" <justin.iur...@uliege.be>
Sent: Friday, 21 May 2021 13:34:32
Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

Hi Jérôme,

A packet trace would help us help you in this case 😊

/neale

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of jerome.bayaux via lists.fd.io <jerome.bayaux=student.uliege...@lists.fd.io>
Date: Friday, 21 May 2021 at 13:05
To: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io>
Cc: Justin Iurman <justin.iur...@uliege.be>
Subject: [vpp-dev] IPv6 in IPv6 Encapsulation

Hello all,

I'm trying to do some IPv6 in IPv6 encapsulation with no tunnel configuration.
The objective is to encapsulate the received packet in another IPv6 packet that will also "contain" a Hop-by-Hop extension header. In summary, the structure of the final packet will look like this: outer IPv6 header -> Hop-by-Hop extension header -> original packet.

To do so, I use an access list to redirect packets to my VPP node, which encapsulates the received packets. My node is located between the "ip6-inacl" node and the "ip6-lookup" node. Here is the path taken by the packet: "ethernet-input" -> "ip6-input" -> "ip6-inacl" -> "my VPP node" -> "ip6-lookup" -> etc.

The issue I have is the following: the packets that leave VPP after being encapsulated have problems with their MAC addresses. For example, the destination MAC is not the expected one (and is not even a valid one according to the topology I use). It looks like there is an issue with the resolution of the MAC addresses, but I'm not sure. Should I do anything in my implementation to "warn" VPP that I am performing an IPv6 encapsulation?

Thanks for your answers,

Jérôme
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#19472): https://lists.fd.io/g/vpp-dev/message/19472
Mute This Topic: https://lists.fd.io/mt/82983110/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-