Here is my take: currently, in the IKE plugin, we expect all related IKE packets to arrive on the same NIC queue. I expect all IKE packets to use the same UDP 5-tuple, so I think this assumption is correct. If you can share a scenario (in the RFC, or even better with an existing IKE implementation) where it does not hold, we should probably reconsider it. If it is an RSS issue, that is what should be fixed; as a workaround, you can use 'vppctl set interface handoff' to do the RSS spreading in software.
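As a concrete example, something along these lines should enable the software handoff on the IKE-facing interface (untested sketch; the interface name is taken from the configuration quoted below, and the worker-list numbering should be double-checked against 'show threads' and the CLI help for 'set interface handoff'):

    # hash received packets in software and hand them off to a consistent
    # worker, so that all packets of one flow end up on the same thread
    vppctl set interface handoff GigabitEthernet0/14/0 workers 0-3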
Best ben

> -----Original Message-----
> From: zhangguangm...@baicells.com <zhangguangm...@baicells.com>
> Sent: Friday, 5 November 2021 02:55
> To: Benoit Ganne (bganne) <bga...@cisco.com>
> Cc: vpp-dev <vpp-dev@lists.fd.io>
> Subject: Re: RE: Is there a bug in IKEv2 when enable multithread ?
>
> Yes, flows with the same source/destination IP and ports being assigned
> to the same NIC queue is what I expected. But the result is that the
> init and auth packets were assigned to the same queue, while the
> informational reply (the informational request was sent by VPP) was
> assigned to a different queue.
>
> I also made another test: I captured all the IKEv2 packets and replayed
> the ones whose destination address is VPP, and all of them were
> assigned to the same queue.
>
> I think there are two possible causes: either NIC RSS is not working,
> or it is the IKEv2 code. The first is the most likely, but it cannot
> explain the result of the second test.
>
> I have reported the first cause through the Intel®
>
> > Hi,
> >
> > When I test IKEv2, I found that when multithreading is enabled, the
> > IKE SA is deleted quickly after the IKE negotiation completes. The
> > root cause is that the init and auth packets are handled by one
> > worker thread, but the informational packet is handled by another
> > thread. RSS is enabled.
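One way to confirm which worker (and hence which RX queue) receives each IKE exchange is a packet trace on the DPDK input node; a minimal sketch using the standard trace CLI (the packet count of 100 is arbitrary):

    vppctl clear trace
    vppctl trace add dpdk-input 100
    # run one full IKE negotiation, then inspect the trace:
    vppctl show trace
    # traced packets are grouped under "Start of thread N ..." banners, so
    # the IKE_SA_INIT, IKE_AUTH and INFORMATIONAL messages can be matched
    # to the worker thread that actually received them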
> >
> > The following is my configuration:
> >
> > cpu {
> >   ## In the VPP there is one main thread and optionally the user can
> >   ## create worker(s)
> >   ## The main thread and worker thread(s) can be pinned to CPU core(s)
> >   ## manually or automatically
> >
> >   ## Manual pinning of thread(s) to CPU core(s)
> >
> >   ## Set logical CPU core where main thread runs, if main core is not set
> >   ## VPP will use core 1 if available
> >   main-core 1
> >
> >   ## Set logical CPU core(s) where worker threads are running
> >   # corelist-workers 2-3,18-19
> >   corelist-workers 2-3,4-5
> >
> >   ## Automatic pinning of thread(s) to CPU core(s)
> >
> >   ## Sets number of CPU core(s) to be skipped (1 ... N-1)
> >   ## Skipped CPU core(s) are not used for pinning main thread and
> >   ## working thread(s).
> >   ## The main thread is automatically pinned to the first available CPU
> >   ## core and worker(s) are pinned to next free CPU core(s) after the
> >   ## core assigned to the main thread
> >   # skip-cores 4
> >
> >   ## Specify a number of workers to be created
> >   ## Workers are pinned to N consecutive CPU cores while skipping
> >   ## "skip-cores" CPU core(s) and main thread's CPU core
> >   # workers 2
> >
> >   ## Set scheduling policy and priority of main and worker threads
> >
> >   ## Scheduling policy options are: other (SCHED_OTHER), batch (SCHED_BATCH)
> >   ## idle (SCHED_IDLE), fifo (SCHED_FIFO), rr (SCHED_RR)
> >   # scheduler-policy fifo
> >
> >   ## Scheduling priority is used only for "real-time" policies (fifo and rr),
> >   ## and has to be in the range of priorities supported for a particular policy
> >   # scheduler-priority 50
> > }
> >
> > dpdk {
> >   ## Change default settings for all interfaces
> >   dev default {
> >     ## Number of receive queues, enables RSS
> >     ## Default is 1
> >     num-rx-queues 4
> >
> >     ## Number of transmit queues, Default is equal
> >     ## to number of worker threads or 1 if no worker threads
> >     num-tx-queues 4
> >
> >     ## Number of descriptors in transmit and receive rings
> >     ## increasing or reducing number can impact performance
> >     ## Default is 1024 for both rx and tx
> >     # num-rx-desc 512
> >     # num-tx-desc 512
> >
> >     ## VLAN strip offload mode for interface
> >     ## Default is off
> >     # vlan-strip-offload on
> >
> >     ## TCP Segment Offload
> >     ## Default is off
> >     ## To enable TSO, 'enable-tcp-udp-checksum' must be set
> >     # tso on
> >
> >     ## Devargs
> >     ## device specific init args
> >     ## Default is NULL
> >     # devargs safe-mode-support=1,pipeline-mode-support=1
> >
> >     # rss 3
> >     ## rss-queues
> >     ## set valid rss steering queues
> >     # rss-queues 0,2,5-7
> >     # rss-queues 0,1
> >   }
> >
> >   ## Whitelist specific interface by specifying PCI address
> >   # dev 0000:02:00.0
> >
> >   dev 0000:00:14.0
> >   dev 0000:00:15.0
> >   dev 0000:00:10.0
> >   dev 0000:00:11.0
> >   # vdev crypto_aesni_mb0,socket_id=1
> >   # vdev crypto_aesni_mb1,socket_id=1
> >
> >   ## Blacklist specific device type by specifying PCI vendor:device
> >   ## Whitelist entries take precedence
> >   # blacklist 8086:10fb
> >
> >   ## Set interface name
> >   # dev 0000:02:00.1 {
> >   #   name eth0
> >   # }
> >
> >   ## Whitelist specific interface by specifying PCI address and in
> >   ## addition specify custom parameters for this interface
> >   # dev 0000:02:00.1 {
> >   #   num-rx-queues 2
> >   # }
> >
> >   ## Change UIO driver used by VPP, Options are: igb_uio, vfio-pci,
> >   ## uio_pci_generic or auto (default)
> >   # uio-driver vfio-pci
> >   # uio-driver igb_uio
> >
> >   ## Disable multi-segment buffers, improves performance but
> >   ## disables Jumbo MTU support
> >   # no-multi-seg
> >
> >   ## Change hugepages allocation per-socket, needed only if there is
> >   ## need for larger number of mbufs. Default is 256M on each detected
> >   ## CPU socket
> >   # socket-mem 2048,2048
> >
> >   ## Disables UDP / TCP TX checksum offload. Typically needed for use of
> >   ## faster vector PMDs (together with no-multi-seg)
> >   # no-tx-checksum-offload
> >
> >   ## Enable UDP / TCP TX checksum offload
> >   ## This is the reverse of 'no-tx-checksum-offload'
> >   # enable-tcp-udp-checksum
> >
> >   ## Enable/Disable AVX-512 vPMDs
> >   # max-simd-bitwidth <256|512>
> > }
> >
> > DBGvpp# show threads
> > ID  Name       Type     LWP   Sched Policy (Priority)  lcore  Core  Socket  State
> > 0   vpp_main            2306  other (0)                1      1     0
> > 1   vpp_wk_0   workers  2308  other (0)                2      2     0
> > 2   vpp_wk_1   workers  2309  other (0)                3      3     0
> > 3   vpp_wk_2   workers  2310  other (0)                4      4     0
> > 4   vpp_wk_3   workers  2311  other (0)                5      5     0
> > DBGvpp#
> >
> > DBGvpp# show hardware-interfaces
> >               Name                Idx   Link  Hardware
> > 0: format_dpdk_device:598: rte_eth_dev_rss_hash_conf_get returned -95
> > GigabitEthernet0/14/0              1     up   GigabitEthernet0/14/0
> >   Link speed: 4294 Gbps
> >   RX Queues:
> >     queue thread         mode
> >     0     vpp_wk_0 (1)   polling
> >     1     vpp_wk_1 (2)   polling
> >     2     vpp_wk_2 (3)   polling
> >     3     vpp_wk_3 (4)   polling
> >   Ethernet address 5a:9b:03:80:93:cf
> >   Red Hat Virtio
> >     carrier up full duplex mtu 9206
> >     flags: admin-up pmd maybe-multiseg int-supported
> >     Devargs:
> >     rx: queues 4 (max 4), desc 256 (min 0 max 65535 align 1)
> >     tx: queues 4 (max 4), desc 256 (min 0 max 65535 align 1)
> >     pci: device 1af4:1000 subsystem 1af4:0001 address 0000:00:14.00 numa 0
> >     max rx packet len: 9728
> >     promiscuous: unicast off all-multicast on
> >     vlan offload: strip off filter off qinq off
> >     rx offload avail:  vlan-strip udp-cksum tcp-cksum tcp-lro vlan-filter
> >                        jumbo-frame scatter
> >     rx offload active: jumbo-frame scatter
> >     tx offload avail:  vlan-insert udp-cksum tcp-cksum tcp-tso multi-segs
> >     tx offload active: multi-segs
> >     rss avail:         none
> >     rss active:        none
> >     tx burst function: virtio_xmit_pkts
> >     rx burst function: virtio_recv_mergeable_pkts
> >
> > DBGvpp# show ikev2 profile
> > profile profile1
> >   auth-method shared-key-mic auth data foobarblah
> >   local id-type ip4-addr data 10.10.10.15
> >   remote id-type ip4-addr data 10.10.10.2
> >   local traffic-selector addr 10.10.20.0 - 10.10.20.255 port 0 - 65535 protocol 0
> >   remote traffic-selector addr 172.16.2.0 - 172.16.2.255 port 0 - 65535 protocol 0
> >   lifetime 0 jitter 0 handover 0 maxdata 0
> >
> > DBGvpp# show interface addr
> > GigabitEthernet0/14/0 (up):
> >   L3 10.10.10.15/24
> > GigabitEthernet0/15/0 (up):
> >   L3 10.10.20.15/24
> > local0 (dn):
> >
> > I also configured a flow with RSS:
> >
> > DBGvpp# show flow entry
> > flow-index 0 type ipv4 active 0
> >   match: src_addr any, dst_addr any, protocol UDP
> >   action: rss
> >     rss function default, rss types ipv4-udp
> >
> > Is it a bug or a misconfiguration on my side?
> >
> > Thanks
> > guangming
> >
> > ________________________________
> >
> > zhangguangm...@baicells.com
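For what it is worth, the "rss avail: none / rss active: none" lines from the virtio PMD above suggest that the RSS spreading may not be done by the NIC at all. A possible fallback (untested sketch; the PCI address is taken from the configuration above) is to keep a single RX queue on the IKE-facing interface and rely on the software handoff workaround instead of NIC RSS:

    dpdk {
      dev 0000:00:14.0 {
        ## a single RX queue: every packet on this interface is polled by
        ## the same worker, so the IKEv2 plugin's same-queue assumption
        ## holds even without working NIC RSS
        num-rx-queues 1
      }
    }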