> On 21 Feb 2020, at 11:48, chetan bhasin <chetan.bhasin...@gmail.com> wrote:
> 
> Thanks a lot Damjan for the quick response!
> 
> We will try latest stable/1908 that has the given patch.
> 
> With Mellanox Technologies MT27710 Family [ConnectX-4 Lx]:
> 1) stable/vpp1908: If we configure buffers (250k) and have 2048 huge pages
> of 2MB (4GB total), we see an issue with traffic ("l3 mac mismatch").
> 2) stable/vpp1908: If we configure 4 huge pages of 1GB via grub parameters,
> vpp works even with 400K buffers.
> 
> Could you please guide us on the best approach here?
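> For reference, case 1) above corresponds roughly to the following settings (the
> values shown are illustrative, not our exact config):
> 
>     # startup.conf
>     buffers {
>       buffers-per-numa 250000
>     }
> 
>     # grub kernel params, case 1): 2048 x 2MB huge pages (4GB total)
>     default_hugepagesz=2M hugepagesz=2M hugepages=2048
>     # grub kernel params, case 2): 4 x 1GB huge pages
>     default_hugepagesz=1G hugepagesz=1G hugepages=4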
> 
> For point 1) we see following logs in one of the vpp thread -
> 
> #5  0x00007f3375afbae2 in rte_vlog (level=<optimized out>, logtype=77,
>     format=0x7f3376768df8 "net_mlx5: port %u unable to find virtually 
> contiguous chunk for address (%p). rte_memseg_contig_walk() failed.\n%.0s", 
> ap=ap@entry=0x7f3379c4fac8)
>     at 
> /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:427
> #6  0x00007f3375ab2c12 in rte_log (level=level@entry=5, logtype=<optimized 
> out>,
>     format=format@entry=0x7f3376768df8 "net_mlx5: port %u unable to find 
> virtually contiguous chunk for address (%p). rte_memseg_contig_walk() 
> failed.\n%.0s")
>     at 
> /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:443
> #7  0x00007f3375dc47fa in mlx5_mr_create_primary 
> (dev=dev@entry=0x7f3376e9d940 <rte_eth_devices>,
>     entry=entry@entry=0x7ef5c00d02ca, addr=addr@entry=69384463936)
>     at 
> /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:627

No idea about the mlx5 PMD, it is a bit special. We currently encourage people to
use the rdma-core plugin instead; performance is lower, but we will have Direct
Verbs code merged soon...
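For illustration, a minimal sketch of bringing up a ConnectX-4 port through the
rdma-core plugin (the host interface name below is an assumption, and the device
must not be bound to DPDK):

    vpp# create interface rdma host-if enp94s0f0 name rdma-0
    vpp# set interface state rdma-0 up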

— 
Damjan

> 
> 
> Thanks,
> Chetan
> 
> 
> On Fri, Feb 21, 2020 at 3:13 PM Damjan Marion <dmar...@me.com 
> <mailto:dmar...@me.com>> wrote:
> 
> 
>> On 21 Feb 2020, at 10:31, chetan bhasin <chetan.bhasin...@gmail.com 
>> <mailto:chetan.bhasin...@gmail.com>> wrote:
>> 
>> Hi Nitin,Damjan,
>> 
>> For the 40G XL710 with 537600 buffers (500K+):
>> 1) vpp 19.08 (Sept 2019 release): it worked after removing intel_iommu=on from
>> the grub params.
>> 2) stable/vpp2001 (latest): it worked even with "intel_iommu=on" in the grub
>> params.
>> 
>> 
>> On stable/vpp2001, I found a check-in before which it did not work with
>> "intel_iommu=on" in the grub params, but after the below change-list it works
>> even with that grub param:
>> commit 45495480c8165090722389b08075df06ccfcd7ef
>> Author: Yulong Pei <yulong....@intel.com <mailto:yulong....@intel.com>>
>> Date:   Thu Oct 17 18:41:52 2019 +0800
>>     vlib: linux: fix wrong iommu_group value issue when using dpdk-plugin
>> 
>> Before the above change, when we bring up vpp 20.01 with vfio-pci, vpp changes
>> /sys/module/vfio/parameters/enable_unsafe_noiommu_mode to "Y" and we face
>> issues with traffic, but after the change the sysfs value remains "N" in
>> /sys/module/vfio/parameters/enable_unsafe_noiommu_mode and traffic works fine.
>> 
>> As it is bare metal, we can remove intel_iommu=on from grub to make it work
>> without any patches. Any suggestions?
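>> For reference, the two knobs mentioned above can be checked on a given box like
>> this:
>> 
>>     grep -o intel_iommu=on /proc/cmdline
>>     cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode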
> 
> The IOMMU gives you the following:
>  - protection and security - it prevents a misbehaving NIC from reading/writing,
> intentionally or unintentionally, memory it is not supposed to access
>  - VA -> PA translation
> 
> If you are running bare-metal, single-tenant, security is probably not a
> concern, but it can still protect against the NIC doing something bad eventually
> because of driver issues.
> VA -> PA translation helps with performance, as the driver doesn’t need to look
> up the PA when submitting descriptors, but this is not a critical perf issue.
> 
> So it is up to you to decide: work without the IOMMU, or patch your old VPP
> version….
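> A minimal sketch of where that setting lives on a typical distro (Debian/Ubuntu-style
> grub; the file name and update command may differ on yours):
> 
>     # /etc/default/grub
>     GRUB_CMDLINE_LINUX="... intel_iommu=on"
>     # then: update-grub (or grub2-mkconfig) and reboot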
> 
>> 
>> Regards,
>> Chetan
>> 
>> On Tue, Feb 18, 2020 at 1:07 PM Nitin Saxena <nsax...@marvell.com 
>> <mailto:nsax...@marvell.com>> wrote:
>> Hi Chetan,
>> 
>>
>> 
>> Your packet trace shows that the packet data is all zeros, and that’s why you are 
>> running into the l3 mac mismatch.
>> 
>> I am guessing something messed with the IOMMU, due to which the translation is not 
>> happening, although the packet length is correct.
>> 
>> You can try out the AVF plugin to iron out where the problem exists, in the dpdk 
>> plugin or in vlib.
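>> A minimal sketch of such an AVF test, assuming a VF has already been created on the
>> XL710 and bound to vfio-pci (the PCI addresses below are purely illustrative):
>> 
>>     # on the host: create one VF on the XL710 PF
>>     echo 1 > /sys/bus/pci/devices/0000:12:00.0/sriov_numvfs
>>     # in VPP, attach the AVF plugin to the VF and bring it up:
>>     vpp# create interface avf 0000:12:02.0
>>     vpp# set interface state <new-avf-interface> up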
>> 
>>
>> 
>> Thanks,
>> 
>> Nitin
>> 
>>
>> 
>> From: chetan bhasin <chetan.bhasin...@gmail.com 
>> <mailto:chetan.bhasin...@gmail.com>> 
>> Sent: Tuesday, February 18, 2020 12:50 PM
>> To: me <chetan.bhasin...@gmail.com <mailto:chetan.bhasin...@gmail.com>>
>> Cc: Nitin Saxena <nsax...@marvell.com <mailto:nsax...@marvell.com>>; vpp-dev 
>> <vpp-dev@lists.fd.io <mailto:vpp-dev@lists.fd.io>>
>> Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>> 
>>
>> 
>> Hi,
>> 
>> One more finding related to Intel NICs and the number of buffers (537600):
>> 
>>
>> 
>> vpp branch    driver           card         buffers  Traffic      Err
>> stable/1908   uio_pci_generic  X722(10G)    537600   Working
>> stable/1908   vfio-pci         XL710(40G)   537600   Not Working  l3 mac mismatch
>> stable/2001   uio_pci_generic  X722(10G)    537600   Working
>> stable/2001   vfio-pci         XL710(40G)   537600   Working
>> 
>> Thanks,
>> 
>> Chetan
>> 
>>
>> 
>> On Mon, Feb 17, 2020 at 7:17 PM chetan bhasin via Lists.Fd.Io
>> <chetan.bhasin017=gmail....@lists.fd.io> wrote:
>> 
>> Hi Nitin,
>> 
>>
>> 
>> https://github.com/FDio/vpp/commits/stable/2001/src/vlib
>> As per the stable/2001 branch, the given change was checked in around Oct 28,
>> 2019.
>> 
>>
>> 
>> df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of 
>> b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>> Yes (branch vpp 20.01)
>> 
>>
>> 
>> Thanks,
>> 
>> Chetan Bhasin
>> 
>>
>> 
>> On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena <nsax...@marvell.com 
>> <mailto:nsax...@marvell.com>> wrote:
>> 
>> Hi Damjan,
>> 
>> >> if you read Chetan’s email below, you will see that this one is already 
>> >> excluded…
>> Sorry I missed that part. After seeing the diffs between stable/1908 and 
>> stable/2001, commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only 
>> visible git commit in the dpdk plugin which is playing with mempool buffers. If 
>> it does not solve the problem, then I suspect the problem lies outside the dpdk 
>> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08.
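>> For reference, that kind of comparison can be reproduced with something like the
>> following; the exact refs depend on how your remotes are named:
>> 
>>     git log --oneline origin/stable/1908..origin/stable/2001 -- src/plugins/dpdk/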
>> 
>> Hi Chetan,
>> > > 3) I took previous commit of  "vlib: don't use vector for keeping buffer
>> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
>> > Everything looks fine with Buffers 537600.
>> In which branch is commit df0191ead2cf39611714b6603cdc5bdddc445b57 the 
>> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>> 
>> Thanks,
>> Nitin
>> > -----Original Message-----
>> > From: Damjan Marion <dmar...@me.com <mailto:dmar...@me.com>>
>> > Sent: Monday, February 17, 2020 3:47 PM
>> > To: Nitin Saxena <nsax...@marvell.com <mailto:nsax...@marvell.com>>
>> > Cc: chetan bhasin <chetan.bhasin...@gmail.com 
>> > <mailto:chetan.bhasin...@gmail.com>>; vpp-dev@lists.fd.io 
>> > <mailto:vpp-dev@lists.fd.io>
>> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>> > 
>> > 
>> > Dear Nitin,
>> > 
>> > if you read Chetan’s email below, you will see that this one is already
>> > excluded…
>> > 
>> > Also, it will not be easy to explain how this patch blows up the tx function in
>> > the dpdk mlx5 PMD…
>> > 
>> > —
>> > Damjan
>> > 
>> > > On 17 Feb 2020, at 11:12, Nitin Saxena <nsax...@marvell.com 
>> > > <mailto:nsax...@marvell.com>> wrote:
>> > >
>> > > Hi Prashant/Chetan,
>> > >
>> > > I would try the following change first to solve the problem in 1908:
>> > >
>> > > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
>> > > Author: Damjan Marion <damar...@cisco.com <mailto:damar...@cisco.com>>
>> > > Date:   Tue Mar 12 18:14:15 2019 +0100
>> > >
>> > >     vlib: don't use vector for keeping buffer indices in the pool
>> > >
>> > >     Type: refactor
>> > >
>> > >     Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
>> > >     Signed-off-by: Damjan Marion damar...@cisco.com 
>> > > <mailto:damar...@cisco.com>
>> > >
>> > > You can also try copying src/plugins/dpdk/buffer.c from the stable/2001
>> > branch to stable/1908.
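>> > > A minimal sketch of that, assuming the stable branches are fetched from origin:
>> > >
>> > >     git checkout origin/stable/2001 -- src/plugins/dpdk/buffer.c
>> > >     # then rebuild VPP 19.08 with the newer buffer.c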
>> > >
>> > > Thanks,
>> > > Nitin
>> > >
>> > > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan
>> > Marion via Lists.Fd.Io
>> > > Sent: Monday, February 17, 2020 1:52 PM
>> > > To: chetan bhasin <chetan.bhasin...@gmail.com 
>> > > <mailto:chetan.bhasin...@gmail.com>>
>> > > Cc: vpp-dev@lists.fd.io <mailto:vpp-dev@lists.fd.io>
>> > > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
>> > >
>> > > External Email
>> > >
>> > > On 17 Feb 2020, at 07:37, chetan bhasin <chetan.bhasin...@gmail.com 
>> > > <mailto:chetan.bhasin...@gmail.com>>
>> > wrote:
>> > >
>> > > Bottom line is that stable/vpp1908 does not work with a higher number of buffers
>> > but stable/vpp2001 does. Could you please advise which area we can look at,
>> > as it would be difficult for us to move to vpp2001 at this time.
>> > >
>> > > I really have no idea what caused this problem to disappear.
>> > > You may try to use “git bisect” to find out which commit fixed it….
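>> > > A rough sketch of such a bisect, using custom terms since we are hunting for
>> > > the commit that fixed things rather than one that broke them (the refs assume
>> > > both stable branches are fetched from origin):
>> > >
>> > >     git bisect start --term-old=broken --term-new=fixed
>> > >     git bisect broken origin/stable/1908
>> > >     git bisect fixed origin/stable/2001
>> > >     # at each step: rebuild, rerun the traffic test, then mark the checkout
>> > >     # with "git bisect broken" or "git bisect fixed"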
>> > >
>> > > —
>> > > Damjan
>> > >
>> > >
>> > >
>> > > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
>> > > <chetan.bhasin017=gmail....@lists.fd.io> wrote:
>> > > Thanks Damjan for the reply!
>> > >
>> > > Following are my observations on the Intel X710/XL710 PCI NICs:
>> > > 1) I took the latest code base from stable/vpp19.08: seeing the error
>> > "ethernet-input: l3 mac mismatch"
>> > >                         With Buffers 537600
>> > > vpp# show buffers
>> > > Pool Name            Index NUMA  Size  Data Size  Total  Avail  Cached   Used
>> > > default-numa-0         0     0   2496     2048   537600 510464   1319    25817
>> > > default-numa-1         1     1   2496     2048   537600 528896    390     8314
>> > >
>> > > vpp# show hardware-interfaces
>> > >               Name                Idx   Link  Hardware
>> > > BondEthernet0                      3     up   BondEthernet0
>> > >   Link speed: unknown
>> > >   Ethernet address 3c:fd:fe:b5:5e:40
>> > > FortyGigabitEthernet12/0/0         1     up   FortyGigabitEthernet12/0/0
>> > >   Link speed: 40 Gbps
>> > >   Ethernet address 3c:fd:fe:b5:5e:40
>> > >   Intel X710/XL710 Family
>> > >     carrier up full duplex mtu 9206
>> > >     flags: admin-up pmd rx-ip4-cksum
>> > >     rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
>> > >     tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
>> > >     pci: device 8086:1583 subsystem 8086:0001 address 0000:12:00.00 numa
>> > 0
>> > >     max rx packet len: 9728
>> > >     promiscuous: unicast off all-multicast on
>> > >     vlan offload: strip off filter off qinq off
>> > >     rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum 
>> > > qinq-strip
>> > >                        outer-ipv4-cksum vlan-filter vlan-extend 
>> > > jumbo-frame
>> > >                        scatter keep-crc
>> > >     rx offload active: ipv4-cksum
>> > >     tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum 
>> > > sctp-cksum
>> > >                        tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>> > >                        gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>> > >                        mbuf-fast-free
>> > >     tx offload active: none
>> > >     rss avail:         ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
>> > > ipv6-frag
>> > >                        ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
>> > >     rss active:        ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag 
>> > > ipv6-tcp
>> > >                        ipv6-udp ipv6-other
>> > >     tx burst function: i40e_xmit_pkts_vec_avx2
>> > >     rx burst function: i40e_recv_pkts_vec_avx2
>> > >     tx errors                                             17
>> > >     rx frames ok                                        4585
>> > >     rx bytes ok                                       391078
>> > >     extended stats:
>> > >       rx good packets                                   4585
>> > >       rx good bytes                                   391078
>> > >       tx errors                                           17
>> > >       rx multicast packets                              4345
>> > >       rx broadcast packets                               243
>> > >       rx unknown protocol packets                       4588
>> > >       rx size 65 to 127 packets                         4529
>> > >       rx size 128 to 255 packets                          32
>> > >       rx size 256 to 511 packets                          26
>> > >       rx size 1024 to 1522 packets                         1
>> > >       tx size 65 to 127 packets                           33
>> > > FortyGigabitEthernet12/0/1         2     up   FortyGigabitEthernet12/0/1
>> > >   Link speed: 40 Gbps
>> > >   Ethernet address 3c:fd:fe:b5:5e:40
>> > >   Intel X710/XL710 Family
>> > >     carrier up full duplex mtu 9206
>> > >     flags: admin-up pmd rx-ip4-cksum
>> > >     rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
>> > >     tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
>> > >     pci: device 8086:1583 subsystem 8086:0000 address 0000:12:00.01 numa
>> > 0
>> > >     max rx packet len: 9728
>> > >     promiscuous: unicast off all-multicast on
>> > >     vlan offload: strip off filter off qinq off
>> > >     rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum 
>> > > qinq-strip
>> > >                        outer-ipv4-cksum vlan-filter vlan-extend 
>> > > jumbo-frame
>> > >                        scatter keep-crc
>> > >     rx offload active: ipv4-cksum
>> > >     tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum 
>> > > sctp-cksum
>> > >                        tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>> > >                        gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>> > >                        mbuf-fast-free
>> > >     tx offload active: none
>> > >     rss avail:         ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
>> > > ipv6-frag
>> > >                        ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
>> > >     rss active:        ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag 
>> > > ipv6-tcp
>> > >                        ipv6-udp ipv6-other
>> > >     tx burst function: i40e_xmit_pkts_vec_avx2
>> > >     rx burst function: i40e_recv_pkts_vec_avx2
>> > >     rx frames ok                                        4585
>> > >     rx bytes ok                                       391078
>> > >     extended stats:
>> > >       rx good packets                                   4585
>> > >       rx good bytes                                   391078
>> > >       rx multicast packets                              4344
>> > >       rx broadcast packets                               243
>> > >       rx unknown protocol packets                       4587
>> > >       rx size 65 to 127 packets                         4528
>> > >       rx size 128 to 255 packets                          32
>> > >       rx size 256 to 511 packets                          26
>> > >       rx size 1024 to 1522 packets                         1
>> > >       tx size 65 to 127 packets                           33
>> > >
>> > >
>> > > As per packet trace -
>> > > Packet 4
>> > > 00:00:54:955863: dpdk-input
>> > >   FortyGigabitEthernet12/0/0 rx queue 0
>> > >   buffer 0x13fc728: current data 0, length 68, buffer-pool 0, ref-count 
>> > > 1,
>> > totlen-nifb 0, trace handle 0x1000003
>> > >                     ext-hdr-valid
>> > >                     l4-cksum-computed l4-cksum-correct
>> > >   PKT MBUF: port 0, nb_segs 1, pkt_len 68
>> > >     buf_len 2176, data_len 68, ol_flags 0x180, data_off 128, phys_addr
>> > 0xde91ca80
>> > >     packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
>> > >     rss 0x0 fdir.hi 0x0 fdir.lo 0x0
>> > >     Packet Offload Flags
>> > >       PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>> > >       PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
>> > >     Packet Types
>> > >       RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>> > >   0x0000: 00:00:00:00:00:00 -> 00:00:00:00:00:00
>> > > 00:00:54:955864: bond-input
>> > >   src 00:00:00:00:00:00, dst 00:00:00:00:00:00, 
>> > > FortyGigabitEthernet12/0/0 -
>> > > BondEthernet0
>> > > 00:00:54:955864: ethernet-input
>> > >   0x0000: 00:00:00:00:00:00 -> 00:00:00:00:00:00
>> > > 00:00:54:955865: error-drop
>> > >   rx:BondEthernet0
>> > > 00:00:54:955865: drop
>> > >   ethernet-input: l3 mac mismatch
>> > >
>> > > 2) I took the latest code base from the stable/vpp2001 branch: everything
>> > looks fine with 537600 buffers.
>> > >
>> > > 3) I took the commit previous to "vlib: don't use vector for keeping buffer
>> > indices in the pool", i.e. "df0191ead2cf39611714b6603cdc5bdddc445b57":
>> > everything looks fine with 537600 buffers.
>> > > So this clearly shows the above commit will not fix our problem.
>> > >
>> > >
>> > >
>> > > Thanks,
>> > > Chetan
>> > >
>> > > On Wed, Feb 12, 2020 at 9:07 PM Damjan Marion <dmar...@me.com 
>> > > <mailto:dmar...@me.com>>
>> > wrote:
>> > >
>> > > Shouldn’t be too hard to check out the commit prior to that one and test if
>> > the problem is still there…
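>> > > For example, something like this (the full hash is the one quoted earlier in
>> > > the thread):
>> > >
>> > >     git checkout b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b~1
>> > >     # rebuild and rerun the same traffic test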
>> > >
>> > > —
>> > > Damjan
>> > >
>> > >
>> > >
>> > > On 12 Feb 2020, at 14:50, chetan bhasin <chetan.bhasin...@gmail.com 
>> > > <mailto:chetan.bhasin...@gmail.com>>
>> > wrote:
>> > >
>> > > Hi,
>> > >
>> > > Looking into the changes in vpp 20.01, the below change looks like an
>> > important one related to buffer indices:
>> > >
>> > > vlib: don't use vector for keeping buffer indices in the pool
>> > > Type: refactor
>> > >
>> > > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
>> > > Signed-off-by: Damjan Marion <damar...@cisco.com 
>> > > <mailto:damar...@cisco.com>>
>> > >
>> > > https://github.com/FDio/vpp/commit/b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b#diff-2260a8080303fbcc30ef32f782b4d6df
>> > >
>> > > Can anybody suggest?
>> > > Shouldn’t be too hard to check out the commit prior to that one and test if
>> > the problem is still there…
>> > >
>> > > —
>> > > Damjan
>> > >
>> > >
>> > >
>> > >
>> 
>> 
> 
> 
