[vpp-dev] using vpp node

2020-11-09 Thread Merve
Hello friends,
I have created a new node and am sending packets with the TRex traffic generator
so that my node processes them. When all the packets I sent have been processed,
I want to perform a final task (for example, print the packet count once) and
exit VPP. (When the packets I send are finished, VPP just keeps running
normally. How can VPP know that my packets are finished?) How can I do that?
A second question: how can I perform a task at fixed intervals while VPP is
running? Which feature of my node should I use? For example, I want to count the
packets arriving at my node every ten seconds.
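For the periodic task, the usual VPP mechanism is a process node (VLIB_NODE_TYPE_PROCESS): a cooperative thread scheduled by the main loop that can sleep for a fixed interval and then do work. The same timeout can also serve as an idle detector for the first question: if your packet counter has not changed since the last wakeup, the traffic has stopped. A minimal sketch, assuming VPP's vlib process API; the node name and the `my_total_packets` counter are placeholders for whatever your plugin maintains:

```c
#include <vlib/vlib.h>

/* Hypothetical counter that your data-plane node increments per packet. */
extern volatile u64 my_total_packets;

static uword
my_periodic_process (vlib_main_t *vm, vlib_node_runtime_t *rt,
                     vlib_frame_t *f)
{
  u64 last = 0;
  while (1)
    {
      /* Sleep for 10 seconds (or until an event is signalled). */
      vlib_process_wait_for_event_or_clock (vm, 10.0);
      vlib_process_get_events (vm, 0);

      u64 now = my_total_packets;
      vlib_cli_output (vm, "packets in last 10s: %llu", now - last);

      if (now == last)  /* nothing new since last wakeup: traffic ended */
        vlib_cli_output (vm, "traffic finished, total packets: %llu", now);
      last = now;
    }
  return 0;
}

VLIB_REGISTER_NODE (my_periodic_node) = {
  .function = my_periodic_process,
  .type = VLIB_NODE_TYPE_PROCESS,
  .name = "my-periodic-process",
};
```

This sketch only prints when traffic stops; actually exiting VPP from a plugin is unusual, so the idle detection is the portable part.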

Thanks for your help!!

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#17972): https://lists.fd.io/g/vpp-dev/message/17972
Mute This Topic: https://lists.fd.io/mt/78139052/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Packet Generation with trex- Sending VPP

2020-11-10 Thread Merve
Hi everyone,

I generate packets with the TRex tool and send them to VPP, but the number of
packets I see in VPP is much lower than the number sent. Does anyone have any
suggestions on this topic?

Trex:

Per port stats table
ports |               0 |               1
-
opackets |       218175928 |       274757392
obytes |     61817310862 |    224941226070
ipackets |       217748905 |           14156
ibytes |     61673972630 |          934296
ierrors |               0 |               0
oerrors |               0 |               0
Tx Bw |       3.05 Gbps |       9.73 Gbps

-Global stats enabled
Cpu Utilization : 88.9  %  1.6 Gb/core
Platform_factor : 1.0
Total-Tx        :      12.78 Gbps
Total-Rx        :       2.98 Gbps
Total-PPS       :       2.75 Mpps
Total-CPS       :      52.89 Kcps

Expected-PPS    :      12.91 Mpps
Expected-CPS    :     247.36 Kcps
Expected-BPS    :      60.25 Gbps

Active-flows    :   241345  Clients :      504   Socket-util : 0.8963 %
Open-flows      : 11008918  Servers :     5616   Socket :   284497 
Socket/Clients :  564.5
Total_queue_full : 2430913857
drop-rate       :       9.79 Gbps
current time    : 194.9 sec
test duration   : 0.0 sec

-Latency stats enabled
Cpu Utilization : 0.3 %
if | tx_ok  | rx_ok  | rx check | error | avg latency (usec) | max latency (usec) | jitter (usec) | max window
0  | 184525 | 184082 |        0 |    92 |                186 |             172472 |            77 | 477 441 542 495 479 172472 3553 536 544 533 514 428 495
1  | 184525 |     26 |        0 | 14131 |              11770 |                  0 |           849 | 0 0 0 0 0 0 0 0 0 0 0 0 0
*** TRex is shutting down - cause: 'CTRL + C detected'
latency daemon has stopped
All cores stopped !!
**
vpp:

DBGvpp# show int
Name                      Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter     Count
TenGigabitEthernet1/0/0   1    up     9000/0/0/0             rx packets  20492976
                                                             rx bytes    16539337003
                                                             tx packets  20492738
                                                             tx bytes    1625267
                                                             drops       238
                                                             ip4         20492974
                                                             rx-miss     254406215
                                                             tx-error    1
TenGigabitEthernet1/0/1   2    up     9000/0/0/0             rx packets  217851963
                                                             rx bytes    60835065339
                                                             tx packets  217851960
                                                             tx bytes    60835065117
                                                             drops       5
                                                             ip4         217851960
                                                             rx-miss     426572
                                                             tx-error    1
local0                    0    down   0/0/0/0                drops       2

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#17976): https://lists.fd.io/g/vpp-dev/message/17976
Mute This Topic: https://lists.fd.io/mt/78161706/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Packet Generation with trex- Sending VPP

2020-11-10 Thread Merve
Thank you for the reply, Ben.
When I run "show errors" I see:

Count  Node                             Reason              Severity
1      TenGigabitEthernet1/0/0-output   interface is down   error
1      TenGigabitEthernet1/0/1-output   interface is down   error
1      dpdk-input                       no error            error
6      arp-reply                        ARP replies sent    error

What exactly does that mean?
I still can't receive enough packets.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#17982): https://lists.fd.io/g/vpp-dev/message/17982
Mute This Topic: https://lists.fd.io/mt/78161706/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Packet Generation with trex- Sending VPP

2020-11-11 Thread Merve
Then I see this (show errors):

Count     Node                             Reason              Severity
1         TenGigabitEthernet1/0/1-output   interface is down   error
11667420  null-node                        blackholed packets  error

What exactly does that mean?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#17983): https://lists.fd.io/g/vpp-dev/message/17983
Mute This Topic: https://lists.fd.io/mt/78161706/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] rx-no-buf and rx-miss :packet loss

2020-11-11 Thread Merve
Hi everyone,
When the RX and TX queue count is low, DPDK keeps up with the packets, but when
the RX and TX queue count is high, I get the result below: I see rx-no-buf and
rx-miss. So in both cases I have packet losses. How can I get rid of this
situation?

startup.conf

dev default {
num-rx-queues 2
num-tx-queues 2
num-rx-desc 1024
num-tx-desc 1024
}

Name                      Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter     Count
TenGigabitEthernet1/0/1   2    up     9000/0/0/0             rx packets  34285615
                                                             rx bytes    2057136900
                                                             tx packets  34285610
                                                             tx bytes    1577138074
                                                             drops       5
                                                             ip4         34285609
                                                             rx-miss     222380741
                                                             tx-error    4

**

startup.conf

dev default {
num-rx-queues 8
num-tx-queues 8
num-rx-desc 1024
num-tx-desc 1024
}

Name                      Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter     Count
TenGigabitEthernet1/0/1   2    up     9000/0/0/0             rx packets  7705
                                                             rx bytes    462300
                                                             tx packets  7702
                                                             tx bytes    354306
                                                             drops       3
                                                             ip4         7701
                                                             rx-no-buf   768530880
                                                             rx-miss     204257196
                                                             tx-error    2
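For context: rx-miss counts packets the NIC dropped because its descriptor ring filled up before a worker drained it, while rx-no-buf means the pool of VPP buffers backing those descriptors was exhausted. Two things worth checking are that the RX queue count matches the number of worker threads, and that enough buffers are allocated to back all queues on all ports. A sketch of a matching startup.conf (the values are illustrative, sized for a 2-port setup, not taken from this thread):

```
cpu {
  main-core 1
  corelist-workers 2-3      # one worker per RX queue
}

buffers {
  # should comfortably exceed: ports * num-rx-queues * num-rx-desc
  buffers-per-numa 65536
}

dpdk {
  dev default {
    num-rx-queues 2         # keep equal to the worker count
    num-tx-queues 2
    num-rx-desc 1024
    num-tx-desc 1024
  }
}
```

With 8 queues and the default buffer pool, the descriptors alone can consume nearly the whole pool, which would explain the rx-no-buf counter appearing only in the second configuration.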

thanks for your help!

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#17985): https://lists.fd.io/g/vpp-dev/message/17985
Mute This Topic: https://lists.fd.io/mt/78179815/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] dpdk buffer

2020-11-11 Thread Merve
Hi everyone. I want to increase num-mbufs in startup.conf. When I checked the DPDK
buffers:

DBGvpp# show dpdk buffer
name="vpp pool 0"  available =   0 allocated =   16800 total =   16800

I have no available buffers. What can I do in this situation?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#17986): https://lists.fd.io/g/vpp-dev/message/17986
Mute This Topic: https://lists.fd.io/mt/78180185/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] dpdk buffer

2020-11-11 Thread Merve
Thanks, I fixed my mistake.

DBGvpp# show buffers
Pool Name    Index NUMA  Size  Data Size  Total  Avail  Cached   Used
default-numa-0 0 0   2496 2048    16800  16800 0   0
DBGvpp#

I want to increase the number of buffers allocated. What can I do in this situation?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#17989): https://lists.fd.io/g/vpp-dev/message/17989
Mute This Topic: https://lists.fd.io/mt/78180185/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] increase number of buffers allocated

2020-11-11 Thread Merve
Hi everyone,

DBGvpp# show buffers
Pool Name    Index NUMA  Size  Data Size  Total  Avail  Cached   Used
default-numa-0 0 0   2496 2048    16800  16800 0   0
DBGvpp#

I want to increase the number of buffers allocated. What can I do in this situation?
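The pool size shown by "show buffers" comes from the buffers section of startup.conf and takes effect after a VPP restart. A sketch (the value is illustrative):

```
buffers {
  # raises the per-NUMA-node pool from the 16800 shown above
  buffers-per-numa 128000
}
```

On a multi-socket machine this is allocated per NUMA node, so the total memory cost is buffers-per-numa multiplied by the buffer size and the node count.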

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#17990): https://lists.fd.io/g/vpp-dev/message/17990
Mute This Topic: https://lists.fd.io/mt/78180599/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Total Packet Process Time in VPP

2020-11-12 Thread Merve
Hi everyone, I'm using VPP with TRex and sending and receiving packets. Is there
anywhere in VPP where I can view the total packet processing time? I see the
runtime statistics for the graph, but I haven't found anything about the time
spent there.

I see output like this; the "Time" here is not the total processing time.
(show runtime command)
Time 1603.9, 10 sec internal node vector rate 0.00 loops/sec 1367859.23
vector rates in 1.5238e5, out 1.5159e5, drop 0.e0, punt 0.e0

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18003): https://lists.fd.io/g/vpp-dev/message/18003
Mute This Topic: https://lists.fd.io/mt/78204265/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] _GI_raise Error with 8 worker thread

2020-11-18 Thread Merve
Hi Everyone,

I send packets with the TRex tool and process them with VPP. However, when I
increase the worker thread count to 8, I get an error like this:

double free or corruption (!prev)

Thread 9 "vpp_wk_6" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fff8bfff700 (LWP 18733)]
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51    ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb)

When my thread count is lower, I don't get any errors. How can I resolve this
issue?

nano /etc/vpp/startup.conf

cpu {
skip-cores 1
workers 8
}

buffers {

buffers-per-numa 128000
}

dpdk {

dev default {

num-rx-queues 8
num-tx-queues 8

num-rx-desc 1024
num-tx-desc 1024
}
}
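A "double free or corruption" abort inside a worker thread usually points at the plugin rather than the configuration: state that is shared between workers (and was harmless with one or two threads) being modified or freed concurrently will corrupt the heap once enough workers race on it. Shared plugin state needs either per-thread copies or a lock. A sketch assuming VPP's clib spinlock API; `my_shared_t` and `my_shared_update` are placeholders for your plugin's own data:

```c
#include <vppinfra/lock.h>

typedef struct {
  clib_spinlock_t lock;
  u64 counter;               /* example of state touched by all workers */
} my_shared_t;

static my_shared_t my_shared;

/* call once from the init function: clib_spinlock_init (&my_shared.lock); */

/* called from the node function on any worker */
static inline void
my_shared_update (u64 n)
{
  clib_spinlock_lock (&my_shared.lock);
  my_shared.counter += n;    /* safe: one worker at a time */
  clib_spinlock_unlock (&my_shared.lock);
}
```

Running the crash under gdb with all worker threads (`thread apply all bt`) should show which allocation is being freed twice.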

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18081): https://lists.fd.io/g/vpp-dev/message/18081
Mute This Topic: https://lists.fd.io/mt/78338173/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] pktgen- vpp-error

2020-11-20 Thread Merve
Hi everyone,
I use dpdk-pktgen to generate packets and then send them to VPP, but in VPP all
received packets are dropped:

rx packets    464384
drops         464384

When I look at the errors:

DBGvpp# show errors
Count    Node              Reason                                 Severity
464384   ip4-udp-lookup    No listener for dst port               error
464384   ip4-icmp-error    destination unreachable response sent  error

configuration in vpp:
DBGvpp# set int ip address TenGigabitEthernet1/0/1 10.99.204.8/24
DBGvpp# set int state TenGigabitEthernet1/0/1 up
DBGvpp#  set ip neighbor TenGigabitEthernet1/0/1 10.99.204.3 ac:1f:6b:ab:99:f2
DBGvpp# set interface mac address TenGigabitEthernet1/0/1 44:ec:ce:c1:a8:20

configuration in pktgen:

Pktgen:/> stop 0
Pktgen:/> set 0 rate 0.1
Pktgen:/> set 0 ttl 10
Pktgen:/> set 0 proto udp
Pktgen:/> set 0 dst mac 44:ec:ce:c1:a8:20
Pktgen:/> set 0 dst ip 10.99.204.8
Pktgen:/> set 0 src ip 10.99.204.3/30
Pktgen:/> set 0 size 64
Pktgen:/> start 0

Why does VPP drop the packets? Does anyone have a suggestion?
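The two error counters explain the drops: the pktgen stream is addressed to VPP's own interface address (10.99.204.8), so the packets terminate inside VPP, and since no application has registered a listener for that UDP destination port, ip4-udp-lookup drops each packet and ip4-icmp-error answers with ICMP destination unreachable. Either address the traffic to a prefix that VPP routes out another interface, or register a listener that hands the packets to your own node. A sketch assuming VPP's UDP registration API; the port number and node are placeholders:

```c
#include <vnet/udp/udp.h>

/* In the plugin's init function: deliver UDP packets with dst port 9999
   (placeholder) to our node instead of dropping them. */
udp_register_dst_port (vm, 9999 /* dst port */,
                       my_node.index /* node to receive the packets */,
                       1 /* is_ip4 */);
```

With a listener registered, the "No listener for dst port" counter should stop increasing for that port.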

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18106): https://lists.fd.io/g/vpp-dev/message/18106
Mute This Topic: https://lists.fd.io/mt/78386404/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] pktgen- vpp-error

2020-11-20 Thread Merve
When I ran the node I had just created to process the packets, it worked fine.
Thanks,

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18108): https://lists.fd.io/g/vpp-dev/message/18108
Mute This Topic: https://lists.fd.io/mt/78386404/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] vpp worker threads access in vpp plugin

2020-11-30 Thread Merve
Hi everyone,
Can I access worker threads inside a VPP node? For example:

worker_threads[id] = process_packet_function();

I want to define a callback function for the created threads. How can I do this
in VPP?
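VPP does not expose its workers for arbitrary callbacks the way a pthread pool does: the graph scheduler already invokes your node function on every worker that receives traffic, so the usual pattern is to keep a per-thread slot indexed by the current thread and do the per-thread work inside the node itself (or in a process node on the main thread). A sketch assuming VPP's vlib API; `my_per_thread` is a placeholder:

```c
#include <vlib/vlib.h>
#include <vppinfra/vec.h>

typedef struct { u64 pkts; } my_per_thread_t;
static my_per_thread_t *my_per_thread;  /* vector: one slot per thread */

/* init, on the main thread (slot 0 is main, 1..N are workers):
   vec_validate (my_per_thread, vlib_num_workers ()); */

static_always_inline void
my_count (vlib_frame_t *frame)
{
  u32 ti = vlib_get_thread_index ();           /* which thread am I on? */
  my_per_thread[ti].pkts += frame->n_vectors;  /* no locking needed */
}
```

Because each worker only touches its own slot, no synchronization is required; aggregate the slots from the main thread when reporting.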

Thanks,

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18199): https://lists.fd.io/g/vpp-dev/message/18199
Mute This Topic: https://lists.fd.io/mt/78605485/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Packages are not visible

2020-12-06 Thread Merve
Hi,
When I send packets from a DPDK-based generator to VPP, the packets are not
visible in "show int":

DBGvpp# show int
Name                      Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter  Count
TenGigabitEthernet1/0/0   1    down   9000/0/0/0
TenGigabitEthernet1/0/1   2    up     9000/0/0/0
local0                    0    down   0/0/0/0

However, in "show hardware-interfaces" the "rx_total_packets" counter does increase:

TenGigabitEthernet1/0/1    2    down  TenGigabitEthernet1/0/1
Link speed: unknown
Ethernet address ac:1f:6b:f8:1f:11
Intel 82599
carrier down
flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum
rx: queues 1 (max 128), desc 2048 (min 32 max 4096 align 8)
tx: queues 1 (max 64), desc 2048 (min 32 max 4096 align 8)
pci: device 8086:1528 subsystem 15d9:0734 address :01:00.01 numa 0
max rx packet len: 15872
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
macsec-strip vlan-filter vlan-extend jumbo-frame scatter
security keep-crc rss-hash
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
tcp-tso macsec-insert multi-segs security
tx offload active: udp-cksum tcp-cksum multi-segs
rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp
ipv6-udp ipv6-ex ipv6
rss active:    none
tx burst function: ixgbe_xmit_pkts
rx burst function: ixgbe_recv_scattered_pkts_vec

extended stats:
mac_local_errors    84
mac_remote_errors    3
rx_total_packets   637
rx_total_bytes  581074

What can I do in this situation? Are my packets not being delivered properly to 
my nodes?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18256): https://lists.fd.io/g/vpp-dev/message/18256
Mute This Topic: https://lists.fd.io/mt/78753720/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] can't reach at line rate

2020-12-13 Thread Merve
Hi everyone, I send 64-byte UDP packets via the TRex generator, but I can't reach
line rate in VPP. Although I increased the number of cores, I cannot get adequate
results. I have a high rx-miss rate.

TenGigabitEthernet1/0/1   2    up     9000/0/0/0             rx packets  283319158
                                                             rx bytes    16999149480
                                                             tx packets  283319157
                                                             tx bytes    16999149420
                                                             drops       1
                                                             ip4         283319156
                                                             rx-miss     135184448

/etc/vpp/startup.conf:
cpu {
main-core 1
corelist-workers 2-10
isolcpus=2-10
}
buffers {

buffers-per-numa 128000
default data-size 8192
}

dpdk {

dev default {

num-rx-queues 9
num-tx-queues 9
num-rx-desc 4096
num-tx-desc 4096
}
}

How can I fix this situation? Do you have any suggestions?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18320): https://lists.fd.io/g/vpp-dev/message/18320
Mute This Topic: https://lists.fd.io/mt/78925929/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Packages are not visible

2020-12-16 Thread Merve
DBGvpp# show hardware-interfaces 
 NameIdx   Link  Hardware
TenGigabitEthernet1/0/01 up   TenGigabitEthernet1/0/0
 Link speed: 10 Gbps
 Ethernet address ac:1f:6b:f8:1f:10
 Intel 82599
   carrier up full duplex mtu 9206 
   flags: admin-up pmd tx-offload intel-phdr-cksum rx-ip4-cksum
   rx: queues 1 (max 128), desc 2048 (min 32 max 4096 align 8)
   tx: queues 1 (max 64), desc 2048 (min 32 max 4096 align 8)
   pci: device 8086:1528 subsystem 15d9:0734 address :01:00.00 numa 0
   max rx packet len: 15872
   promiscuous: unicast off all-multicast on
   vlan offload: strip off filter off qinq off
   rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro 
  macsec-strip vlan-filter vlan-extend jumbo-frame scatter 
  security keep-crc rss-hash 
   rx offload active: ipv4-cksum 
   tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum 
  tcp-tso macsec-insert multi-segs security 
   tx offload active: udp-cksum tcp-cksum 
   rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp 
  ipv6-udp ipv6-ex ipv6 
   rss active:none
   tx burst function: ixgbe_xmit_pkts
   rx burst function: ixgbe_recv_pkts_vec

   extended stats:
 mac_local_errors   488
 mac_remote_errors3
 rx_total_packets   4130112
 rx_total_bytes   247806720
TenGigabitEthernet1/0/12 up   TenGigabitEthernet1/0/1
 Link speed: 10 Gbps
 Ethernet address ac:1f:6b:f8:1f:11
 Intel 82599
   carrier up full duplex mtu 9206 
   flags: admin-up pmd tx-offload intel-phdr-cksum rx-ip4-cksum
   rx: queues 1 (max 128), desc 2048 (min 32 max 4096 align 8)
   tx: queues 1 (max 64), desc 2048 (min 32 max 4096 align 8)
   pci: device 8086:1528 subsystem 15d9:0734 address :01:00.01 numa 0
   max rx packet len: 15872
   promiscuous: unicast off all-multicast on
   vlan offload: strip off filter off qinq off
   rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro 
  macsec-strip vlan-filter vlan-extend jumbo-frame scatter 
  security keep-crc rss-hash 
   rx offload active: ipv4-cksum 
   tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum 
  tcp-tso macsec-insert multi-segs security 
   tx offload active: udp-cksum tcp-cksum 
   rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp 
  ipv6-udp ipv6-ex ipv6 
   rss active:none
   tx burst function: ixgbe_xmit_pkts
   rx burst function: ixgbe_recv_pkts_vec

   extended stats:
 mac_local_errors   485
 mac_remote_errors4
 rx_total_packets 529673216
 rx_total_bytes 31780392960

DBGvpp# show int 
Name                      Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter  Count
TenGigabitEthernet1/0/0   1    up     9000/0/0/0
TenGigabitEthernet1/0/1   2    up     9000/0/0/0
local0                    0    down   0/0/0/0

I checked connectivity; the packets are still not visible in "show int".

I am trying packet forwarding through VPP, using the Pktgen tool. Pktgen sends
packets, but VPP does not forward them.






My vpp configuration:

set int ip address TenGigabitEthernet1/0/0 10.10.10.2/24

set int ip address TenGigabitEthernet1/0/1 10.10.11.2/24

set int state TenGigabitEthernet1/0/0 up

set int state TenGigabitEthernet1/0/1 up

set ip neighbor TenGigabitEthernet1/0/0 10.10.10.3 e4:43:4b:2e:b1:d1

set ip neighbor TenGigabitEthernet1/0/1 10.10.11.3 e4:43:4b:2e:b1:d3

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18378): https://lists.fd.io/g/vpp-dev/message/18378
Mute This Topic: https://lists.fd.io/mt/78753720/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] VPP Vectors/Call rate #dpdk

2020-12-17 Thread Merve
Hi everyone,
When I send mixed traffic of varying packet sizes (usually larger than 64 bytes)
at line rate, the miss rate is low and Vectors/Call is low.

But when I send 64-byte UDP packets (16K flows) at line rate, I get a high miss
rate and Vectors/Call is high, so VPP can't process enough packets.
Is something wrong here? Does anyone have any suggestions on this matter?
(The output below is for the mixed traffic of varying sizes.)

Thread 1 vpp_wk_0 (lcore 2)
Time 173.8, 10 sec internal node vector rate 0.00 loops/sec 1430309.12
  vector rates in 6.4933e5, out 6.2164e5, drop 2.8775e-2, punt 0.e0
Name                             State      Calls    Vectors  Suspends  Clocks  Vectors/Call
TenGigabitEthernet1/0/0-output   active   1388318   59870375         0  9.54e1         43.12
TenGigabitEthernet1/0/0-tx       active   1388317   55058452         0  2.60e2         39.66
TenGigabitEthernet1/0/1-output   active   1324302   52957837         0  1.05e2         39.99
TenGigabitEthernet1/0/1-tx       active   1324301   52957836         0  2.47e2         39.99
arp-input                        active         4          7         0  2.89e3          1.75
arp-reply                        active         4          7         0  2.13e4          1.75
dpdk-input                       polling 95342497  112828215         0  1.01e3          1.18
drop                             active         5          5         0  5.07e3          1.00
error-drop                       active         5          5         0  4.61e3          1.00
ethernet-input                   active   2341216  112828215         0  3.22e2         48.19
interface-output                 active         3          4         0  4.19e3          1.33
ip4-input-no-checksum            active   1598737  112828208         0  1.98e2         70.57
ip4-load-balance                 active   1598737  112828208         0  1.69e2         70.57
ip4-lookup                       active   1598737  112828208         0  3.14e2         70.57
ip4-rewrite                      active   1598737  112828208         0  8.14e2         70.57
unix-epoll-input                 polling    93018          0         0  1.69e3          0.00
---
Thread 2 vpp_wk_1 (lcore 3)
Time 173.8, 10 sec internal node vector rate 0.00 loops/sec 1454623.01
  vector rates in 6.4994e5, out 6.2287e5, drop 0.e0, punt 0.e0
Name                             State      Calls    Vectors  Suspends  Clocks  Vectors/Call
TenGigabitEthernet1/0/0-output   active   1227641   59821627         0  9.79e1         48.73
TenGigabitEthernet1/0/0-tx       active   1227641   55117639         0  2.63e2         44.89
TenGigabitEthernet1/0/1-output   active   1146502   53113046         0  1.07e2         46.33
TenGigabitEthernet1/0/1-tx       active   1146502   53113046         0  2.51e2         46.33
dpdk-input                       polling 94368347  112934673         0  9.87e2          1.19
ethernet-input                   active   2012117  112934673         0  2.85e2         56.13
ip4-input-no-checksum            active   1435435  112934673         0  2.00e2         78.68
ip4-load-balance                 active   1435435  112934673         0  1.73e2         78.68
ip4-lookup                       active   1435435  112934673         0  3.16e2         78.68
ip4-rewrite                      active   1435435  112934673         0  8.59e2         78.68
unix-epoll-input                 polling    92067          0         0  1.68e3          0.00
---
Thread 3 vpp_wk_2 (lcore 4)
Time 173.8, 10 sec internal node vector rate 0.00 loops/sec 1456729.77
  vector rates in 6.3335e5, out 6.1369e5, drop 0.e0, punt 0.e0
Name                             State      Calls    Vectors  Suspends  Clocks  Vectors/Call
TenGigabitEthernet1/0/0-output   active   2059086   58390205         0  9.89e1         28.36
TenGigabitEthernet1/0/0-tx       active   2059086   54975223         0  2.49e2         26.69
TenGigabitEthernet1/0/1-output   active   20509 [output truncated]

[vpp-dev] Blackholed packets after forwarding interface output

2020-12-20 Thread Merve
Hi everyone. I created a plugin to process packets; after processing, I forward
the packets to interface-output. For testing, I generate packets with TRex and
send them to VPP. TRex sends the packets to VPP, but after processing in my node
VPP does not send them back to TRex. Yet "show int" shows:

TenGigabitEthernet1/0/1   2    up     9000/0/0/0             rx packets  47034936
                                                             rx bytes    2822096160
                                                             tx packets  47034934
                                                             tx bytes    2163606978

so it appears to transmit the packets, but they are not seen in TRex.

vpp# show errors 
Count     Node                             Reason              Severity
1         TenGigabitEthernet1/0/0-output   interface is down   error
1         TenGigabitEthernet1/0/1-output   interface is down   error
11961361  null-node                        blackholed packets  error
1         dpdk-input                       no error            error
2         arp-reply                        ARP replies sent    error
1         TenGigabitEthernet1/0/0-output   interface is down   error
23381499  null-node                        blackholed packets  error
1         dpdk-input                       no error            error
2         arp-reply                        ARP replies sent    error
1         TenGigabitEthernet1/0/1-output   interface is down   error
11692073  null-node                        blackholed packets  error

These packets are "blackholed".

Intel 82599
   carrier down 
   flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum
   rx: queues 4 (max 128), desc 2048 (min 32 max 4096 align 8)
   tx: queues 4 (max 64), desc 2048 (min 32 max 4096 align 8)
   pci: device 8086:1528 subsystem 15d9:0734 address :01:00.01 numa 0
   max rx packet len: 15872
   promiscuous: unicast off all-multicast on
   vlan offload: strip off filter off qinq off
   rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro 
  macsec-strip vlan-filter vlan-extend jumbo-frame scatter 
  security keep-crc rss-hash 
   rx offload active: ipv4-cksum jumbo-frame scatter 
   tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum 
  tcp-tso macsec-insert multi-segs security 
   tx offload active: udp-cksum tcp-cksum multi-segs 
   rss avail: ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp 
  ipv6-udp ipv6-ex ipv6 
   rss active:ipv4-tcp ipv4-udp ipv4 ipv6-tcp-ex ipv6-udp-ex ipv6-tcp 
  ipv6-udp ipv6-ex ipv6 
   tx burst function: ixgbe_xmit_pkts
   rx burst function: ixgbe_recv_scattered_pkts_vec

   tx frames ok47034934
   tx bytes ok   2822096040
   rx frames ok47034936
   rx bytes ok   2822096160
   rx missed  384981066
   extended stats:
 rx_good_packets   47034936
 tx_good_packets   47034934
 rx_good_bytes   2822096160
 tx_good_bytes   2822096040
 rx_missed_errors 384981066
 rx_q0_packets 47034936
 rx_q0_bytes 2822096160
 tx_q0_packets 47034934
 tx_q0_bytes 2163606978
 mac_local_errors52
 mac_remote_errors2
 rx_size_64_packets   432016002
 rx_broadcast_packets 3
 rx_total_packets 432016002
 rx_total_bytes 25920960120
 tx_total_packets  47034934
 tx_size_64_packets47034934
 tx_multicast_packets  47034933
 out_pkts_untagged 47034934
 rx_priority0_dropped 384981066

It seems the packets are received; I can't understand the problem.

For the TRex configuration:

ip route add 16.0.0.0/8 via 10.10.1.2

ip route add 48.0.0.0/8 via 10.10.2.2

Do you have any advice about this problem?
Thanks,

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18400): https://lists.fd.io/g/vpp-dev/message/18400
Mute This Topic: https://lists.fd.io/mt/79101024/21656
Group Owner: vpp-dev

[vpp-dev] VPP New Plugin-Packet Forwarding

2021-01-08 Thread Merve
Hi everyone,
I added a new node to VPP and pass packets through this node during packet
forwarding. The forwarding is between TRex and VPP. Packets passing through the
new node reach VPP, but their transmission does not continue; packets that do
not pass through my node are transmitted without problems. With my new node, the
packets show up as blackholed and are never sent back to TRex. Since I pass the
packets through my own node, do I need to set up an additional route? I defined
"interface-output" as the next node for my new node.
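One detail that produces exactly this blackhole pattern: interface-output chooses the TX interface from the buffer's metadata, not from where the packet arrived, so a custom node that enqueues buffers to it must set the TX interface index on every packet. If it is left unset, the packets fall through to null-node and are counted as "blackholed packets". A sketch of the per-packet fix inside the node loop, assuming `b0` is the current `vlib_buffer_t *` and `tx_sw_if_index` is the interface you want to transmit on:

```c
#include <vnet/vnet.h>

/* inside the node's per-packet loop, before enqueueing the buffer
   to the "interface-output" next node: */
vnet_buffer (b0)->sw_if_index[VLIB_TX] = tx_sw_if_index;
```

Alternatively, hand the packets to ip4-lookup as the next node and let VPP's FIB choose the output interface, in which case no extra route is needed beyond what already forwards the untouched traffic.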

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#18491): https://lists.fd.io/g/vpp-dev/message/18491
Mute This Topic: https://lists.fd.io/mt/79521036/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-