-----Original Message-----
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Pawel Staszewski
Sent: Thursday, September 15, 2022 0:03
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mellanox mlx5 + rdma + lcpng + bond - performance (tuning? or just FIB/RIB processing limit)
Hi
So hmm...
Maybe there is some problem with rx/tx queue creation, because after these commands:
create interface rdma host-if enp59s0f0 name enp59s0f0-rdma rx-queue-size 4096 tx-queue-size 512 num-rx-queues 8 no-multi-seg
lcp create enp59s0f0-rdma host-if e0-rdma
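(Side note: when a tap is created by hand rather than through lcp, its queue count can be requested explicitly - a minimal sketch, assuming the stock tap CLI and a hypothetical id/name:
create tap id 10 host-if-name e0-test num-rx-queues 8 rx-ring-size 1024 tx-ring-size 1024
Whether an lcp-created tap exposes the same knob depends on the lcpng build.)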
When I show the tap interface:
Interface: tap3 (ifindex 5)
name "e0-rdma"
host-mtu-size "9000"
host-mac-addr: 02:fe:ad:1c:47:da
host-carrier-up: 1
vhost-fds 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75
tap-fds 58
gso-enabled 0
csum-enabled 0
packet-coalesce 0
packet-buffering 0
Mac Address: 02:fe:57:90:b6:d7
Device instance: 1
flags 0x1
admin-up (0)
features 0x110008000
VIRTIO_NET_F_MRG_RXBUF (15)
VIRTIO_RING_F_INDIRECT_DESC (28)
VIRTIO_F_VERSION_1 (32)
remote-features 0x33d008000
VIRTIO_NET_F_MRG_RXBUF (15)
VIRTIO_F_NOTIFY_ON_EMPTY (24)
VHOST_F_LOG_ALL (26)
VIRTIO_F_ANY_LAYOUT (27)
VIRTIO_RING_F_INDIRECT_DESC (28)
VIRTIO_RING_F_EVENT_IDX (29)
VIRTIO_F_VERSION_1 (32)
VIRTIO_F_IOMMU_PLATFORM (33)
Number of RX Virtqueue 1
Number of TX Virtqueue 17
Hmm:
Number of RX Virtqueue 1
Number of TX Virtqueue 17
Why? When I run:
vpp# show hardware-interfaces enp59s0f0-rdma
Name Idx Link Hardware
enp59s0f0-rdma 3 up enp59s0f0-rdma
Link speed: 100 Gbps
RX Queues:
queue thread mode
0 vpp_wk_1 (2) polling
1 vpp_wk_2 (3) polling
2 vpp_wk_3 (4) polling
3 vpp_wk_4 (5) polling
4 vpp_wk_5 (6) polling
5 vpp_wk_6 (7) polling
6 vpp_wk_7 (8) polling
7 vpp_wk_8 (9) polling
Ethernet address 0c:42:a1:a9:6c:a6
netdev enp59s0f0 pci-addr 0000:3b:00.0
product name: CX516A - ConnectX-5 QSFP28
part number: MCX516A-CCAT
revision: AC
serial number: MT2024K11216
flags: admin-up link-up mlx5dv
rss: ipv4-tcp ipv6-tcp
mlx5: version 0
device flags: cqe-v1 mpw-allowed enhanced-mpw cqe-128b-comp cqe-128b-pad
vpp#
Remember this is not dpdk - I need to create the rdma interfaces first.
But it looks like there is some problem with the tap interface queue assignment.
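(For contrast: with the dpdk driver the queue counts would instead be fixed in startup.conf - a sketch, assuming the stock dpdk stanza and this NIC's PCI address:
dpdk {
  dev 0000:3b:00.0 { num-rx-queues 8 }
}
With rdma the interface only exists once it is created at runtime, so all queue sizing has to go through the create command above.)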
On 9/12/22 12:20 PM, Benoit Ganne (bganne) via lists.fd.io wrote:
This is what we measure on CSIT with Intel Cascade Lake for 1 CPU core:
http://csit.fd.io/trending/#eNrVlMFuwyAMhp8mvVSWwC3JduihXd6jYuAu0UjKgFbtnn6kyuRE2ibtUm0HjNBn-0juU2hdkW1K7BqbQ7FarvM29l7wB6Mu4AU4oXQS3NRqtQQbKeh9Wso188gDVBqhlNe0WhHKF4h9Ba6EIZu-DR0s6c0a83EN1cm3wtygQ6kuSKrMkoUJzJf34uzD0F3FNt34pI8EXOT_WEkzVwnXf2EjsNW9S3jt2Y6nHiZT14n09zJQIc_-Pd5l79iGj1IYd5Anw_3eYCs92_en6oXMemQNigQQaxAlkshctoYFtTbkT2CxDnrj6G7fQeq_gAVCBnd
- red line is Mellanox Cx5 l2 patch (i.e. max perf we can sustain for this HW setup) and yields ~42Mpps
- black line is Mellanox Cx5 IPv4 forwarding with 20k prefixes and random traffic (should be close to your case I suppose) and yields ~15Mpps
- blue line is Intel Columbiaville IPv4 forwarding with 20k prefixes and random traffic (for reference) and yields ~18Mpps
Your results seem to be close to what we see *for a single core*.
So now I'm wondering: is your traffic distributed on all the cores?
To go further, can you share the output of:
~# vppctl clear run && sleep 1 && vppctl show run
while the traffic is going through? That should show us where CPU cycles are spent and what the traffic distribution looks like.
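(A hedged addition: snapshotting interface counters over the same window can help too, e.g.
~# vppctl clear interfaces && sleep 1 && vppctl show interface
so per-interface rx/tx rates can be read alongside the per-node runtime.)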
Best
ben
-----Original Message-----
From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Pawel Staszewski
Sent: Sunday, September 11, 2022 22:34
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] mellanox mlx5 + rdma + lcpng + bond - performance (tuning? or just FIB/RIB processing limit)
Hi
First I want to thank all the people who created native RDMA support in vpp, and also the people behind LCPNG / Linux-CP - it is working and looks stable :)
But...
I was testing some scenarios with rdma+vpp+lcpng+frr BGP with 200k routes - with mellanox mlx5 2x100G nics - where 24 cores are used for RX queues - and the vpp cpu configuration uses 25 cores (vpp only) for all workers (control + 24 workers), with 8 cores left for frr software routing and other stuff.
We are using isolcpus, where the last 24 cores are isolated, with nohz, nocbs, etc.
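(For reference, the usual isolation knobs on the kernel command line look roughly like this - a sketch with a hypothetical core range, not the exact one used here:
isolcpus=8-31 nohz_full=8-31 rcu_nocbs=8-31
)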
Hardware used is:
Intel 6246R
96GB RAM
Mellanox ConnectX-5 2x100G Ethernet NIC
--------
And the problem is that we can reach only about 7Mpps per port (2 ports are bonded) - so in total 14Mpps for the two ports in the bond.
Bonding is done via vppctl.
The bond is reporting drops in stats - but the error stats show "no error".
Example drops from the bond:
vpp# show interface
              Name        Idx   State  MTU (L3/IP4/IP6/MPLS)   Counter       Count
BondEthernet0              5     up        1500/0/0/0          rx packets        16140265734
                                                               rx bytes       14149099692091
                                                               tx packets        16136936492
                                                               tx bytes       14148733722122
                                                               drops                 3451254
                                                               punt                     1326
                                                               tx-error                    1
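(A quick hedged sanity check on these numbers: 3451254 drops against 16140265734 rx packets is about 0.02% over the interface lifetime, so the 2-5% icmp loss reported further down must be concentrated in the loaded periods.)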
And show errors:
vpp# show errors
    Count            Node                Reason                                  Severity
       16      ip6-icmp-input      neighbor solicitations for unknown            error
   952172      bond-input          no error                                      error
        1      null-node           blackholed packets                            error
        2      lacp-input          good lacp packets -- cache hit                error
       24      arp-reply           ARP replies sent                              info
        9      ip4-input           ip4 ttl <= 1                                  error
       32      ip6-icmp-input      neighbor solicitations for unknown            error
        1      ip6-icmp-input      neighbor advertisements sent                  info
        9      ip4-icmp-error      hop limit exceeded response sent              info
  1772997      bond-input          no error                                      error
        2      bond-input          pass through (CDP, LLDP, slow proto           error
      140      snap-input          unknown oui/snap protocol                     error
      180      ethernet-input      no error                                      error
        1      ethernet-input      unknown ethernet type                         error
       61      ethernet-input      unknown vlan                                  error
        1      null-node           blackholed packets                            error
       12      ip4-input           ip4 ttl <= 1                                  error
        1      ip6-icmp-input      neighbor solicitations from source            error
       30      ip6-icmp-input      neighbor solicitations for unknown            error
       12      ip4-icmp-error      hop limit exceeded response sent              info
  1853434      bond-input          no error                                      error
        1      ethernet-input      unknown vlan                                  error
        1      ip4-input           ip4 ttl <= 1                                  error
        1      ip4-local           ip4 source lookup miss                        error
       30      ip6-icmp-input      neighbor solicitations for unknown            error
        1      ip6-icmp-input      neighbor advertisements sent                  info
        1      ip4-icmp-error      hop limit exceeded response sent              info
  1864191      bond-input          no error                                      error
        1      null-node           blackholed packets                            error
        5      ip4-input           ip4 ttl <= 1                                  error
       35      ip6-icmp-input      neighbor solicitations for unknown            error
        5      ip4-icmp-error      hop limit exceeded response sent              info
  1743015      bond-input          no error                                      error
        3      ethernet-input      unknown vlan                                  error
        1      null-node           blackholed packets                            error
        4      ip4-input           ip4 ttl <= 1                                  error
        1      ip6-icmp-input      neighbor solicitations from source            error
       35      ip6-icmp-input      neighbor solicitations for unknown            error
        4      ip4-icmp-error      hop limit exceeded response sent              info
  1917745      bond-input          no error                                      error
        2      ip4-input           ip4 ttl <= 1                                  error
       18      ip6-icmp-input      neighbor solicitations for unknown            error
        2      ip4-icmp-error      hop limit exceeded response sent              info
   886249      bond-input          no error                                      error
        1      ethernet-input      unknown vlan                                  error
        7      ip4-input           ip4 ttl <= 1                                  error
        1      ip6-icmp-input      valid packets                                 info
       14      ip6-icmp-input      neighbor solicitations for unknown            error
        1      ip6-icmp-input      router advertisements received                info
This counter:
  1743015      bond-input          no error                                      error
is wild :) and it is increasing really fast - and also a simple icmp ping to or from this host reports loss of 2 to 5% - when both 100G nics are receiving and transmitting 7Mpps per port (so in total 14Mpps RX and 14Mpps TX).
Basically it looks like something can't handle more - and probably FIB/RIB processing is the case here - because without forwarding and frr+routes we were able to reach about 40Mpps per 100G port.
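(One hedged way to test that theory: compare the per-packet clocks of the ip4-lookup/ip4-rewrite nodes against the rest of the graph, and confirm the FIB size actually loaded - standard vppctl commands:
vppctl clear run && sleep 1 && vppctl show run
vppctl show ip fib summary
If ip4-lookup dominates the clocks, FIB size/locality is the suspect; if rdma-input or bond-input dominates, it is more likely an RX/bonding issue.)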
Bonded nics (m0rdma0 + m1rdma1 - created from the two physical mlx ports as RDMA pairs) - create commands:
create int rdma host-if ens1f1np1 name m1rdma1 num-rx-queues 12
create int rdma host-if ens1f0np0 name m0rdma0 num-rx-queues 12
create bond mode lacp load-balance l34
bond add BondEthernet0 m0rdma0
bond add BondEthernet0 m1rdma1
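(For completeness, the matching cpu stanza in startup.conf would look roughly like this - a sketch with assumed core numbers, not the exact config used here:
cpu {
  main-core 1
  corelist-workers 8-31
}
)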
NICs attached via rdma:
m0rdma0 4 up m0rdma0
Link speed: 100 Gbps
RX Queues:
queue thread mode
0 vpp_wk_13 (14) polling
1 vpp_wk_14 (15) polling
2 vpp_wk_15 (16) polling
3 vpp_wk_16 (17) polling
4 vpp_wk_17 (18) polling
5 vpp_wk_18 (19) polling
6 vpp_wk_19 (20) polling
7 vpp_wk_20 (21) polling
8 vpp_wk_21 (22) polling
9 vpp_wk_22 (23) polling
10 vpp_wk_23 (24) polling
11 vpp_wk_0 (1) polling
Ethernet address 98:03:9b:67:f6:1e
netdev ens1f0np0 pci-addr 0000:b3:00.0
product name: CX516A - ConnectX-5 QSFP28
part number: MCX516A-CCAT
revision: B2
serial number: MT2043K02404
flags: admin-up mlx5dv
rss: ipv4-tcp ipv6-tcp
mlx5: version 0
device flags: cqe-v1 mpw-allowed enhanced-mpw cqe-128b-comp cqe-128b-pad
m1rdma1 3 up m1rdma1
Link speed: 100 Gbps
RX Queues:
queue thread mode
0 vpp_wk_1 (2) polling
1 vpp_wk_2 (3) polling
2 vpp_wk_3 (4) polling
3 vpp_wk_4 (5) polling
4 vpp_wk_5 (6) polling
5 vpp_wk_6 (7) polling
6 vpp_wk_7 (8) polling
7 vpp_wk_8 (9) polling
8 vpp_wk_9 (10) polling
9 vpp_wk_10 (11) polling
10 vpp_wk_11 (12) polling
11 vpp_wk_12 (13) polling
Ethernet address 98:03:9b:67:f6:1e
netdev ens1f1np1 pci-addr 0000:b3:00.1
product name: CX516A - ConnectX-5 QSFP28
part number: MCX516A-CCAT
revision: B2
serial number: MT2043K02404
flags: admin-up mlx5dv
rss: ipv4-tcp ipv6-tcp
mlx5: version 0
device flags: cqe-v1 mpw-allowed enhanced-mpw cqe-128b-comp cqe-128b-pad
....
....
Same NIC, same queue/thread numbers (12 per card, from the 24 available cores that are isolated via isolcpus at kernel boot).
vpp# show interface rx-placement
Thread 1 (vpp_wk_0):
node rdma-input:
m0rdma0 queue 11 (polling)
node virtio-input:
tap1 queue 0 (polling)
Thread 2 (vpp_wk_1):
node rdma-input:
m1rdma1 queue 0 (polling)
node virtio-input:
tap5 queue 0 (polling)
Thread 3 (vpp_wk_2):
node rdma-input:
m1rdma1 queue 1 (polling)
Thread 4 (vpp_wk_3):
m1rdma1 queue 2 (polling)
Thread 5 (vpp_wk_4):
m1rdma1 queue 3 (polling)
Thread 6 (vpp_wk_5):
m1rdma1 queue 4 (polling)
Thread 7 (vpp_wk_6):
m1rdma1 queue 5 (polling)
Thread 8 (vpp_wk_7):
m1rdma1 queue 6 (polling)
Thread 9 (vpp_wk_8):
m1rdma1 queue 7 (polling)
Thread 10 (vpp_wk_9):
m1rdma1 queue 8 (polling)
Thread 11 (vpp_wk_10):
m1rdma1 queue 9 (polling)
Thread 12 (vpp_wk_11):
m1rdma1 queue 10 (polling)
Thread 13 (vpp_wk_12):
m1rdma1 queue 11 (polling)
Thread 14 (vpp_wk_13):
m0rdma0 queue 0 (polling)
Thread 15 (vpp_wk_14):
m0rdma0 queue 1 (polling)
Thread 16 (vpp_wk_15):
m0rdma0 queue 2 (polling)
Thread 17 (vpp_wk_16):
m0rdma0 queue 3 (polling)
Thread 18 (vpp_wk_17):
m0rdma0 queue 4 (polling)
Thread 19 (vpp_wk_18):
m0rdma0 queue 5 (polling)
Thread 20 (vpp_wk_19):
m0rdma0 queue 6 (polling)
Thread 21 (vpp_wk_20):
m0rdma0 queue 7 (polling)
Thread 22 (vpp_wk_21):
m0rdma0 queue 8 (polling)
Thread 23 (vpp_wk_22):
m0rdma0 queue 9 (polling)
Thread 24 (vpp_wk_23):
m0rdma0 queue 10 (polling)
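(Note that vpp_wk_0 polls both m0rdma0 queue 11 and tap1 queue 0, and vpp_wk_1 polls both m1rdma1 queue 0 and tap5 queue 0; if 'show run' showed those workers hot, a queue can be re-pinned by hand - a sketch, assuming the stock CLI:
vpp# set interface rx-placement m0rdma0 queue 11 worker 2
)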
Below are also some stats about memory usage:
vpp# show memory
show memory: Need one of api-segment, stats-segment, main-heap, numa-heaps or map
vpp# show memory main-heap
Thread 0 vpp_main
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.37M, free: 1.64G, trimmable: 1.62G
Thread 1 vpp_wk_0
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.37M, free: 1.64G, trimmable: 1.62G
Thread 2 vpp_wk_1
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 3 vpp_wk_2
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 4 vpp_wk_3
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 5 vpp_wk_4
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 6 vpp_wk_5
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 7 vpp_wk_6
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 8 vpp_wk_7
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 9 vpp_wk_8
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 10 vpp_wk_9
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 11 vpp_wk_10
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 12 vpp_wk_11
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 13 vpp_wk_12
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 14 vpp_wk_13
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 15 vpp_wk_14
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 16 vpp_wk_15
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 17 vpp_wk_16
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 18 vpp_wk_17
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 19 vpp_wk_18
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 20 vpp_wk_19
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 21 vpp_wk_20
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 22 vpp_wk_21
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 23 vpp_wk_22
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 24 vpp_wk_23
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 25 vpp_wk_24
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 26 vpp_wk_25
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.38M, free: 1.64G, trimmable: 1.62G
Thread 27 vpp_wk_26
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.39M, free: 1.64G, trimmable: 1.62G
vpp# show memory main-heap
Thread 0 vpp_main
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.58M, free: 1.64G, trimmable: 1.62G
Thread 1 vpp_wk_0
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.58M, free: 1.64G, trimmable: 1.62G
Thread 2 vpp_wk_1
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.58M, free: 1.64G, trimmable: 1.62G
Thread 3 vpp_wk_2
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.58M, free: 1.64G, trimmable: 1.62G
Thread 4 vpp_wk_3
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.58M, free: 1.64G, trimmable: 1.62G
Thread 5 vpp_wk_4
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.58M, free: 1.64G, trimmable: 1.62G
Thread 6 vpp_wk_5
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.58M, free: 1.64G, trimmable: 1.62G
Thread 7 vpp_wk_6
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 1G, total 2, mapped 2, not-mapped 0
      numa 0: 2 pages, 2g bytes
    total: 1.99G, used: 369.58M, free: 1.64G, trimmable: 1.62G
Thread 8 vpp_wk_7
  base 0x7f64c0000000, size 2g, locked, unmap-on-destroy, name 'main heap'
vpp# show memory ?
  show memory    show memory [api-segment][stats-segment][verbose] [numa-heaps][map]
vpp# show memory
show memory: Need one of api-segment, stats-segment, main-heap, numa-heaps or map
vpp# show memory numa-heaps
Numa 0 uses the main heap...
vpp# show memory map
StartAddr          size  FD  PageSz  Pages  Numa0  NotMap  Name
00007f64c0000000     2g      1G          2      2       0  main heap
00007f655b663000     2m      4K        512      9     503  thread stack: thread 0
00007f6440000000     1g   4  1G          1      1       0  stat segment
00007f655bc9a000    32k      4K          8      2       6  process stack: wg-timer-manager
00007f655b642000   128k      4K         32      2      30  process stack: vrrp-periodic-process
00007f655bc7a000    32k      4K          8      1       7  process stack: prom-scraper-process
00007f655b639000    32k      4K          8      2       6  process stack: nsh-md2-ioam-export-process
00007f655b630000    32k      4K          8      2       6  process stack: nat44-ei-ha-process
00007f655b627000    32k      4K          8      2       6  process stack: memif-process
00007f655b61e000    32k      4K          8      2       6  process stack: lldp-process
00007f655b5fd000   128k      4K         32      3      29  process stack: linux-cp-netlink-process
00007f655b5f4000    32k      4K          8      2       6  process stack: udp-ping-process
00007f655b5eb000    32k      4K          8      2       6  process stack: vxlan-gpe-ioam-export-process
00007f655b5e2000    32k      4K          8      2       6  process stack: ioam-export-process
00007f655b5d9000    32k      4K          8      2       6  process stack: ikev2-manager-process
00007f655b5d0000    32k      4K          8      2       6  process stack: igmp-timer-process
00007f655b5c7000    32k      4K          8      1       7  process stack: http-timer-process
00007f655b5be000    32k      4K          8      2       6  process stack: flowprobe-timer-process
00007f655b59d000   128k      4K         32      2      30  process stack: dpdk-process
00007f655b57c000   128k      4K         32      2      30  process stack: admin-up-down-process
00007f655b573000    32k      4K          8      2       6  process stack: send-dhcp6-pd-client-message-process
00007f655b56a000    32k      4K          8      2       6  process stack: dhcp6-pd-client-cp-process
00007f655b561000    32k      4K          8      2       6  process stack: dhcp6-client-cp-process
00007f655b558000    32k      4K          8      2       6  process stack: send-dhcp6-client-message-process
00007f655b54f000    32k      4K          8      2       6  process stack: dhcp6-pd-reply-publisher-process
00007f655b546000    32k      4K          8      2       6  process stack: dhcp6-reply-publisher-process
00007f655b535000    64k      4K         16      2      14  process stack: dhcp-client-process
00007f655b52c000    32k      4K          8      2       6  process stack: cnat-scanner-process
00007f655b523000    32k      4K          8      2       6  process stack: avf-process
00007f655b51a000    32k      4K          8      2       6  process stack: acl-plugin-fa-cleaner-process
00007f655b4d9000   256k      4K         64      3      61  process stack: api-rx-from-ring
00007f655b4d0000    32k      4K          8      2       6  process stack: rd-cp-process
00007f655b4c7000    32k      4K          8      2       6  process stack: ip6-ra-process
00007f655b4be000    32k      4K          8      2       6  process stack: ip6-rs-process
00007f655b4b5000    32k      4K          8      2       6  process stack: ip6-mld-process
00007f655b4ac000    32k      4K          8      2       6  process stack: fib-walk
00007f655b4a3000    32k      4K          8      1       7  process stack: session-queue-process
00007f655b49a000    32k      4K          8      2       6  process stack: vhost-user-process
00007f655b491000    32k      4K          8      2       6  process stack: vhost-user-send-interrupt-process
00007f655b488000    32k      4K          8      2       6  process stack: flow-report-process
00007f655b47f000    32k      4K          8      2       6  process stack: bfd-process
00007f655b476000    32k      4K          8      2       6  process stack: ip-neighbor-event
00007f655b46d000    32k      4K          8      2       6  process stack: ip6-neighbor-age-process
00007f655b464000    32k      4K          8      2       6  process stack: ip4-neighbor-age-process
00007f655b45b000    32k      4K          8      2       6  process stack: ip6-sv-reassembly-expire-walk
00007f655b452000    32k      4K          8      2       6  process stack: ip6-full-reassembly-expire-walk
00007f655b449000    32k      4K          8      2       6  process stack: ip4-sv-reassembly-expire-walk
00007f655b440000    32k      4K          8      2       6  process stack: ip4-full-reassembly-expire-walk
00007f655b437000    32k      4K          8      2       6  process stack: bond-process
00007f655b42e000    32k      4K          8      2       6  process stack: l2fib-mac-age-scanner-process
00007f655b425000    32k      4K          8      2       6  process stack: l2-arp-term-publisher
00007f655b41c000    32k      4K          8      2       6  process stack: vpe-link-state-process
00007f655b3db000   256k      4K         64      5      59  process stack: startup-config-process
00007f655b3d2000    32k      4K          8      2       6  process stack: statseg-collector-process
00007f655a5a8000     2m      4K        512      7     505  thread stack: thread 1
00007f655a3a7000     2m      4K        512     10     502  thread stack: thread 2
00007f655a1a6000     2m      4K        512     10     502  thread stack: thread 3
00007f6559fa5000     2m      4K        512     10     502  thread stack: thread 4
00007f6559da4000     2m      4K        512     10     502  thread stack: thread 5
00007f6559ba3000     2m      4K        512     10     502  thread stack: thread 6
00007f65599a2000     2m      4K        512     10     502  thread stack: thread 7
00007f65597a1000     2m      4K        512     11     501  thread stack: thread 8
00007f65595a0000     2m      4K        512     10     502  thread stack: thread 9
00007f655939f000     2m      4K        512     10     502  thread stack: thread 10
00007f655919e000     2m      4K        512     10     502  thread stack: thread 11
00007f6558f9d000     2m      4K        512     10     502  thread stack: thread 12
00007f6558d9c000     2m      4K        512     10     502  thread stack: thread 13
00007f6558b9b000     2m      4K        512     10     502  thread stack: thread 14
00007f655899a000     2m      4K        512     10     502  thread stack: thread 15
00007f6558799000     2m      4K        512     10     502  thread stack: thread 16
00007f6558598000     2m      4K        512     10     502  thread stack: thread 17
00007f6558397000     2m      4K        512     10     502  thread stack: thread 18
00007f6558196000     2m      4K        512     10     502  thread stack: thread 19
00007f6553e00000     2m      4K        512     10     502  thread stack: thread 20
00007f6553bff000     2m      4K        512     10     502  thread stack: thread 21
00007f65539fe000     2m      4K        512     10     502  thread stack: thread 22
00007f65537fd000     2m      4K        512     10     502  thread stack: thread 23
00007f65535fc000     2m      4K        512     10     502  thread stack: thread 24
00007f65533fb000     2m      4K        512     10     502  thread stack: thread 25
00007f65531fa000     2m      4K        512     10     502  thread stack: thread 26
00007f6552ff9000     2m      4K        512     10     502  thread stack: thread 27
00007f6552f98000    64k      4K         16      2      14  process stack: lacp-process
00007f6552f57000   256k      4K         64      3      61  process stack: unix-cli-local:0
00007f6552f46000    64k      4K         16      2      14  process stack: unix-cli-new-session
--
Paweł Staszewski
CEO at ITCare
Cell: +48 609911040
pstaszew...@itcare.pl