Re: [vpp-dev] question about l2 multicast #vpp

2020-04-08 Thread Neale Ranns via lists.fd.io

Hi Yan,

I’m not quite sure I understand the question. However, if you are asking 
whether VPP supports IGMP snooping in a bridge-domain to provide more efficient 
L2 multicast, the answer is no. L2 multicast is flooded within the BD.

/neale


From:  on behalf of "comeon...@outlook.com" 

Date: Wednesday 8 April 2020 at 08:58
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] question about l2 multicast #vpp


Hi guys,



I found "igmp" in the MPLS plugins. Does VPP support L2 multicast now?



Thank you very much for your reply.



Thanks,

Yan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16031): https://lists.fd.io/g/vpp-dev/message/16031
Mute This Topic: https://lists.fd.io/mt/72869222/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp&subid=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] question about l2 multicast #vpp

2020-04-08 Thread comeonyan
I'm sorry for bothering you; I didn't describe the question clearly.

I just wanted to know whether VPP supports IGMP snooping, and your answer helps me.

Thanks,
Yan
View/Reply Online (#16032): https://lists.fd.io/g/vpp-dev/message/16032


[vpp-dev] FW: vlib_buffer_copy(...) can return a NULL pointer

2020-04-08 Thread Dave Barach via lists.fd.io
Folks,

I built a test tool to make vlib_buffer_alloc(...) pretend that the system is 
running out of buffers. In so doing, I found a certain number of 1-off bugs. 
That's not the subject here.

I found a number of these floating around:

  c0 = vlib_buffer_copy (vm, p0);
  ci0 = vlib_get_buffer_index (vm, c0);

c0 can be NULL. This is a coredump waiting to happen. Please don't do this; it
is not correct.

I'll eventually add "ASSERT ( != 0)" to vlib_get_buffer_index, and I'm 
going to check every call to vlib_buffer_copy(...) to get rid of this class of 
bug. See https://gerrit.fd.io/r/c/vpp/+/26422

FWIW... Dave
View/Reply Online (#16033): https://lists.fd.io/g/vpp-dev/message/16033


[vpp-dev] query on pool memory usage

2020-04-08 Thread Satya Murthy
Hi,

Is there any way to get info on which vectors and pools each plugin is
using, and their corresponding memory usage?
We are chasing a memory leak, and if VPP has a built-in way of getting
this info, we would like to leverage it.

If VPP does not have a built-in way, do you think it's better to go
through each pool and print pool_bytes and the entries in use/free to track this?
Is this the way to go, or are there other tricks in this area?

--
Thanks & Regards,
Murthy
View/Reply Online (#16034): https://lists.fd.io/g/vpp-dev/message/16034


Re: [vpp-dev] query on pool memory usage

2020-04-08 Thread Dave Barach via lists.fd.io
VPP has a built-in leak finder which should help. Try:

“memory-trace on main-heap”

and then:

“show memory main-heap [verbose]”

which will display objects allocated since “memory-trace on”, complete with the
allocation backtrace.
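Put together, a typical session might look like this (commands as given above; in between, exercise whatever code path you suspect of leaking):

```
vpp# memory-trace on main-heap
  ... run the suspected workload ...
vpp# show memory main-heap verbose
```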

Suggest turning off debug CLI history to avoid looking at newly-allocated debug 
CLI history items.

Suggest changing the cmake option VPP_VECTOR_GROW_BY_ONE (which sets the
CLIB_VECTOR_GROW_BY_ONE define) to ON. There are multiple ways to do that;
the simplest / crudest is to edit src/vppinfra/CMakeLists.txt and rebuild.
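Alternatively, assuming a plain cmake invocation of the VPP source tree (adapt to whatever build wrapper you use), the option can be set on the configure line instead of editing the file:

```
# hypothetical invocation; adjust the source path to your checkout
cmake -DVPP_VECTOR_GROW_BY_ONE=ON /path/to/vpp/src
```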

Each time a vector or pool expands by a single element, you’ll see it in the 
memory allocator trace...

Don’t forget to turn off “grow_by_one”; it's definitely not to be used in
production.

HTH... Dave

View/Reply Online (#16035): https://lists.fd.io/g/vpp-dev/message/16035


Re: [vpp-dev] vpp 19.08 failed to load in CENTOS 7

2020-04-08 Thread Matthew Smith via lists.fd.io
On Tue, Apr 7, 2020 at 4:36 AM  wrote:

> maybe this is related also?
> https://github.com/spdk/spdk/issues/1012

Yes, it's probably the same issue.

I saw the same error using vfio-pci with the IOMMU enabled on a C3000 board a
couple of months ago. Setting the iova-mode to pa in the arguments to
rte_eal_init() made it work. One of my colleagues submitted a patch which
allows this to be set in startup.conf like so:

dpdk {
iova-mode pa
}

The patch was submitted after 20.01 was released, so it won't help you if
you have to run 19.08. You could build from the current master branch, try
using igb_uio (which should result in the IOVA mode being set to pa), or
wait for the next release.

-Matt
View/Reply Online (#16036): https://lists.fd.io/g/vpp-dev/message/16036


[vpp-dev] VPP not learning the MAC from RA (Router Advertisement)

2020-04-08 Thread vyshakh krishnan via lists.fd.io
Hi,

VPP is not learning the MAC address on receiving an RA. Even after
receiving multiple RAs from a neighbor, a ping to the link-local address of
that neighbor triggers an NS/NA exchange, resulting in the drop of the
first ping packet. Is this expected?
Just like learning the MAC from an NA/NS, why can't learning be triggered
by an RA?

Packet 3

00:30:08:261191: memif-input
  memif: hw_if_index 1 next-index 4
slot: ring 0
00:30:08:261208: ethernet-input
  frame: flags 0x1, hw-if-index 1, sw-if-index 1
  IP6: 7a:55:92:76:00:01 -> 33:33:00:00:00:01
00:30:08:261217: ip6-input
  ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP router_advertisement checksum 0xfc3d
00:30:08:261236: ip6-mfib-forward-lookup
  fib 0 entry 4
00:30:08:261246: ip6-mfib-forward-rpf
  entry 4 itf 1 flags Accept,
00:30:08:261250: ip6-replicate
  replicate: 2 via [@1]: dpo-receive
00:30:08:261255: ip6-local
ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
  tos 0x00, flow label 0x0, hop limit 255, payload length 32
ICMP router_advertisement checksum 0xfc3d
00:30:08:261279: ip6-icmp-input
  ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP router_advertisement checksum 0xfc3d
00:30:08:261288: icmp6-router-advertisement
  ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP router_advertisement checksum 0xfc3d
00:30:08:261299: ip6-drop
ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
  tos 0x00, flow label 0x0, hop limit 255, payload length 32
ICMP router_advertisement checksum 0xfc3d
00:30:08:261302: error-drop
  rx:memif11/11
00:30:08:261303: drop
  ip6-icmp-input: valid packets

Packet 4

00:30:17:270341: memif-input
  memif: hw_if_index 1 next-index 4
slot: ring 0
00:30:17:286366: ethernet-input
  frame: flags 0x1, hw-if-index 1, sw-if-index 1
  IP6: 7a:55:92:76:00:01 -> 33:33:00:00:00:01
00:30:17:286375: ip6-input
  ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP router_advertisement checksum 0xfc3d
00:30:17:286381: ip6-mfib-forward-lookup
  fib 0 entry 4
00:30:17:286388: ip6-mfib-forward-rpf
  entry 4 itf 1 flags Accept,
00:30:17:286392: ip6-replicate
  replicate: 2 via [@1]: dpo-receive
00:30:17:286396: ip6-local
ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
  tos 0x00, flow label 0x0, hop limit 255, payload length 32
ICMP router_advertisement checksum 0xfc3d
00:30:17:286404: ip6-icmp-input
  ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP router_advertisement checksum 0xfc3d
00:30:17:286407: icmp6-router-advertisement
  ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP router_advertisement checksum 0xfc3d
00:30:17:286415: ip6-drop
ICMP6: fe80::7855:92ff:fe76:1 -> ff02::1
  tos 0x00, flow label 0x0, hop limit 255, payload length 32
ICMP router_advertisement checksum 0xfc3d
00:30:17:286417: error-drop
  rx:memif11/11
00:30:17:286422: drop
  ip6-icmp-input: valid packets

DBGvpp# ping fe80::7855:92ff:fe76:1 source memif-0/0/0/10
76 bytes from fe80::7855:92ff:fe76:1: icmp_seq=2 ttl=63 time=28.0089 ms
76 bytes from fe80::7855:92ff:fe76:1: icmp_seq=3 ttl=63 time=28.0221 ms
76 bytes from fe80::7855:92ff:fe76:1: icmp_seq=4 ttl=63 time=24.0218 ms
76 bytes from fe80::7855:92ff:fe76:1: icmp_seq=5 ttl=63 time=32.0252 ms

Statistics: 5 sent, 4 received, 20% packet loss
DBGvpp#

Thanks
Vyshakh
View/Reply Online (#16037): https://lists.fd.io/g/vpp-dev/message/16037


Re: [vpp-dev] VPP not learning the MAC from RA (Router Advertisement)

2020-04-08 Thread Ole Troan
Vyshakh,

That’s certainly what the RFC says. 
Care to submit a patch?

Cheers 
Ole

View/Reply Online (#16038): https://lists.fd.io/g/vpp-dev/message/16038