[vpp-dev] New perfmon plugin

2020-12-11 Thread Damjan Marion via lists.fd.io

Guys,

I just submitted a patch with the new perfmon plugin: [1]

It takes a significantly different approach compared to the current one.

 - it supports multiple sources of perf counters (Linux, Intel core, Intel 
uncore) and is extensible to other vendors
 - it has a concept of instances, so it can monitor multiple instances of a 
specific PMU (DRAM channels, UPI/QPI links, ..)
 - it supports node, thread and system metrics
 - different metrics are organized in bundles, where a bundle consists of 
multiple counters and a format function which calculates and presents the metric. 
You can find an example of a bundle here [2]

To see how this looks in action, I captured a small asciinema video: [3]

As this new plugin is significantly different from the old one, I wonder if anyone 
thinks we should keep the old one.
Also, any other feedback is welcome.

Thanks,

Damjan


[1] https://gerrit.fd.io/r/c/vpp/+/30186
[2] 
https://gerrit.fd.io/r/c/vpp/+/30186/12/src/plugins/perfmon/intel/bundle/load_blocks.c
[3] https://asciinema.org/a/aFN5rMFYw0RPvGOZiFsziXV5w


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18309): https://lists.fd.io/g/vpp-dev/message/18309
Mute This Topic: https://lists.fd.io/mt/78882331/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] New perfmon plugin

2020-12-11 Thread Dave Barach
Looks really cool! Feel free to move the old one to deprecated... 😉... Dave

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion via 
lists.fd.io
Sent: Friday, December 11, 2020 11:14 AM
To: vpp-dev 
Subject: [vpp-dev] New perfmon plugin



-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#18310): https://lists.fd.io/g/vpp-dev/message/18310
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] New perfmon plugin

2020-12-11 Thread Florin Coras
+1

Really cool! Thanks!

Cheers,
Florin

> On Dec 11, 2020, at 9:33 AM, Dave Barach  wrote:
> 
> Looks really cool! Feel free to move the old one to deprecated... 😉... Dave


-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#18311): https://lists.fd.io/g/vpp-dev/message/18311
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] #vpp #vpp-memif #vppcom

2020-12-11 Thread tahir . a . sanglikar
In our application I have to send out multicast packets, but the packets are 
getting dropped in VPP.

The error message says *memif81/0-output: interface is down*, but I am 
forwarding the packet through another interface, 
"HundredGigabitEthernet12/0/0.501". Could you please help me understand why it's 
trying to go out on memif instead of using HundredGigabitEthernet12/0/0.501?
Thanks in advance for the help!

*These are the routes added to send the packet out:*
ip mroute add ff38:23:2001:5b0:2000::8000/128 via tuntap-0 Accept
ip mroute add ff38:23:2001:5b0:2000::8000/128 via 
HundredGigabitEthernet12/0/0.501 Forward
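For reference, the resolved paths and flags of that (S,G) entry, and the interface link states, can be inspected from the VPP CLI. A sketch, assuming a reasonably recent VPP build and vppctl access:

```shell
# Show which paths the IPv6 multicast entry resolved to,
# with their Accept/Forward flags
vppctl show ip6 mfib ff38:23:2001:5b0:2000::8000

# Compare admin state (shown by "sh int addr") with actual link state
vppctl show interface
vppctl show hardware-interfaces
```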

02:39:27:201439: ip6-input
  UDP: 2001:5b0::501:b883:31f:39e:7890 -> ff38:23:2001:5b0:2000::8000
    tos 0x00, flow label 0xaf8f3, hop limit 2, payload length 26
  UDP: 8000 -> 8000
    length 26, checksum 0x3e97
02:39:27:201439: ip6-mfib-forward-lookup
  fib 0 entry 9
02:39:27:201439: ip6-mfib-forward-rpf
  entry 9 itf 1 flags Accept,
02:39:27:201439: ip6-replicate
  replicate: 7 via [@2]: ipv4-mcast: HundredGigabitEthernet12/0/0.501: mtu:9000 
next:11 01005e00b883039e7890810001f586dd
  replicate: 7 via [@1]: dpo-receive
02:39:27:201439: ip6-rewrite-mcast
  tx_sw_if_index 6 adj-idx 54 : ipv4-mcast: HundredGigabitEthernet12/0/0.501: 
mtu:9000 next:11 01005e00b883039e7890810001f586dd flow hash: 0x
  0000: 01005e008000b883039e7890810001f586dd600af8f3001a1101200105b0
  0020: 0501b883031f039e7890ff380023200105b0200080001f401f40001a
  0040: 3e97010102000204000342af5fd3b38700210001005c000307d0
  0060: 
02:39:27:201440: ip6-local
  UDP: 2001:5b0::501:b883:31f:39e:7890 -> ff38:23:2001:5b0:2000::8000
    tos 0x00, flow label 0xaf8f3, hop limit 2, payload length 26
  UDP: 8000 -> 8000
    length 26, checksum 0x3e97
02:39:27:201440: memif81/0-output
  HundredGigabitEthernet12/0/0.501
  0000: 01005e008000b883039e7890810001f586dd600af8f3001a1101200105b0
  0020: 0501b883031f039e7890ff380023200105b0200080001f401f40001a
  0040: 3e97010102000204000342af5fd3b38700210001005c000307d0
  0060: 
02:39:27:201440: ip6-drop
  UDP: 2001:5b0::501:b883:31f:39e:7890 -> ff38:23:2001:5b0:2000::8000
    tos 0x00, flow label 0xaf8f3, hop limit 2, payload length 26
  UDP: 8000 -> 8000
    length 26, checksum 0x3e97
02:39:27:201440: error-drop
  rx:tuntap-0
  rx:tuntap-0
02:39:27:201441: drop
  *memif81/0-output: interface is down*
  ip6-input: valid ip6 packets

vpp# sh int addr
HundredGigabitEthernet12/0/0 (up):
HundredGigabitEthernet12/0/0.501 (up):
L3 2001:5b0::501:b883:31f:19e:7890/64
L3 2001:5b0::501:b883:31f:29e:7890/64
L3 2001:5b0::501:b883:31f:39e:7890/64
HundredGigabitEthernet12/0/0.1100 (up):
L3 192.168.115.11/24
L3 192.168.115.12/24
L3 2001:5b0::1100::11/64
L3 2001:5b0::1100::12/64
HundredGigabitEthernet12/0/0.1103 (up):
L3 192.168.118.11/24
L3 192.168.118.12/24
L3 2001:5b0::1103::11/64
L3 2001:5b0::1103::12/64
HundredGigabitEthernet12/0/1 (dn):
HundredGigabitEthernetd8/0/0 (dn):
HundredGigabitEthernetd8/0/1 (dn):
local0 (dn):
memif1/0 (up):
L3 192.168.1.3/24
memif11/0 (up):
L3 192.168.11.3/24
L3 fd11::11/64
memif2/0 (up):
L3 192.168.2.3/24
memif21/0 (up):
L3 192.168.21.3/24
L3 fd21::21/64
memif3/0 (up):
L3 192.168.3.3/24
memif31/0 (up):
L3 192.168.31.3/24
L3 fd31::31/64
memif4/0 (up):
L3 192.168.4.3/24
memif41/0 (up):
L3 192.168.41.3/24
L3 fd41::41/64
memif5/0 (up):
L3 192.168.5.3/24
memif51/0 (up):
L3 192.168.51.3/24
L3 fd51::51/64
memif6/0 (up):
L3 192.168.6.3/24
memif61/0 (up):
L3 192.168.61.3/24
L3 fd61::61/64
memif7/0 (up):
L3 192.168.7.3/24
memif71/0 (up):
L3 192.168.71.3/24
L3 fd71::71/64
memif8/0 (up):
L3 192.168.8.3/24
memif81/0 (up):
L3 192.168.81.3/24
L3 fd81::81/64
tuntap-0 (up):

-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#18312): https://lists.fd.io/g/vpp-dev/message/18312
Mute This Topic: https://lists.fd.io/mt/78889239/21656
Mute #vpp: https://lists.fd.io/g/vpp-dev/mutehashtag/vpp
Mute #vpp-memif: https://lists.fd.io/g/vpp-dev/mutehashtag/vpp-memif
Mute #vppcom: https://lists.fd.io/g/vpp-dev/mutehashtag/vppcom
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] #vpp #vpp-memif #vppcom

2020-12-11 Thread steven luong via lists.fd.io
Can you check the output of show hardware? I suspect the link is down for the 
corresponding memif interface.

Steven
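A sketch of the check Steven suggests, assuming vppctl access. A memif interface's link state is separate from its admin state ("sh int addr" only shows the latter), and it only comes up once the peer side of the shared-memory channel has connected:

```shell
# Show admin vs. link state for the interface that reported the drop
vppctl show hardware-interfaces memif81/0

# Show the memif plugin's own view of the connections to its peers
vppctl show memif
```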

From:  on behalf of "tahir.a.sangli...@gmail.com" 

Date: Friday, December 11, 2020 at 1:14 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] #vpp #vpp-memif #vppcom


-=-=-=-=-=-=-=-=-=-=-=-
View/Reply Online (#18313): https://lists.fd.io/g/vpp-dev/message/18313
-=-=-=-=-=-=-=-=-=-=-=-