
Hi,
We found that NSH causes a loss of TCP bandwidth in our setup. The scenario is as follows.
There are two hosts: vm1 (192.168.128.2) and vm2 (192.168.128.3).
1: the hosts (vm1/vm2) send traffic to vpp_server0
2: vpp_server0 is configured with a classify table and sessions that divert the traffic to vpp_server1
3: vpp_server1 is configured with a classify table and sessions that send the traffic back to vpp_server0
4: vpp_server0 sends the traffic back to the hosts
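
To summarize the path (a sketch of the intended service chain, matching the configuration below):
    vm1/vm2 --> vpp_server0 --(vxlan-gpe + NSH, nsp 1000)--> vpp_server1
    vm1/vm2 <-- vpp_server0 <--(vxlan-gpe + NSH, nsp 1001/1002)-- vpp_server1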

vpp_server0/1 NSH configuration (hit-next 27 means the nsh-classifier node):
vpp_server0:
create vxlan-gpe tunnel local 100.64.1.8 remote 100.64.1.9 vni 1000 next-nsh    (# if_name: vxlan_gpe_tunnel0, if_index: 19)
create nsh entry nsp 1000 nsi 255 md-type 1 c1 0 c2 0 c3 0 c4 0 next-ethernet
create nsh map nsp 1000 nsi 255 mapped-nsp 1000 mapped-nsi 255 nsh_action push encap-vxlan-gpe-intf 19

classify table mask l3 ip4 src    (# table-index 0)
classify session hit-next 27 table-index 0 match l3 ip4 src 192.168.128.2 opaque-index 256255
classify session hit-next 27 table-index 0 match l3 ip4 src 192.168.128.3 opaque-index 256255
set interface l2 input classify intfc pipe1000.0 ip4-table 0

create nsh map nsp 1001 nsi 255 mapped-nsp 1001 mapped-nsi 255 nsh_action pop encap-none 3 0
create nsh map nsp 1002 nsi 255 mapped-nsp 1002 mapped-nsi 255 nsh_action pop encap-none 3 0

vpp_server1:
create vxlan-gpe tunnel local 100.64.1.9 remote 100.64.1.8 vni 1000 next-nsh    (# if_name: vxlan_gpe_tunnel0, if_index: 7)

create nsh map nsp 1000 nsi 255 mapped-nsp 1000 mapped-nsi 255 nsh_action pop encap-none 1 0    (# nsh_tunnel0)
set interface feature nsh_tunnel0 ip4-not-enabled arc ip4-unicast disable

create nsh entry nsp 1001 nsi 255 md-type 1 c1 0 c2 0 c3 0 c4 0 next-ethernet
create nsh map nsp 1001 nsi 255 mapped-nsp 1001 mapped-nsi 255 nsh_action push encap-vxlan-gpe-intf 7
create nsh entry nsp 1002 nsi 255 md-type 1 c1 0 c2 0 c3 0 c4 0 next-ethernet
create nsh map nsp 1002 nsi 255 mapped-nsp 1002 mapped-nsi 255 nsh_action push encap-vxlan-gpe-intf 7
classify table mask l3 ip4 dst    (# table-index 0)
classify session hit-next 27 table-index 0 match l3 ip4 dst 192.168.128.2 opaque-index 256511
classify session hit-next 27 table-index 0 match l3 ip4 dst 192.168.128.3 opaque-index 256767
set interface l2 input classify intfc loop1000 ip4-table 0
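
For clarity, the opaque-index values above encode the target NSH path as (nsp << 8) | nsi (if we understand the nsh-classifier correctly, it uses this value to select the NSH entry to push):
    256255 = (1000 << 8) | 255   -> nsp 1000, nsi 255   (vpp_server0, both sessions)
    256511 = (1001 << 8) | 255   -> nsp 1001, nsi 255   (vpp_server1, dst 192.168.128.2)
    256767 = (1002 << 8) | 255   -> nsp 1002, nsi 255   (vpp_server1, dst 192.168.128.3)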

We use iperf to test the TCP bandwidth between vm1 and vm2, and it is only about 1 Gbps.
If we modify the configuration of vpp_server1 as follows, so that both classify sessions use the same NSH sp (1001) and si (255), the TCP bandwidth increases to the ~6 Gbps we expected:

classify session hit-next 27 table-index 0 match l3 ip4 dst 192.168.128.3 del
classify session hit-next 27 table-index 0 match l3 ip4 dst 192.168.128.3 opaque-index 256511
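
The iperf test itself is a plain TCP throughput test between the two hosts, roughly like this (a sketch; the exact options are not important):
    # on vm2 (192.168.128.3)
    iperf -s
    # on vm1 (192.168.128.2)
    iperf -c 192.168.128.3 -t 30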

The VPP version we use is 19.08. We do not understand why NSH causes this bandwidth drop.

B.R.
joseph