> -----Original Message-----
> From: Hideyuki Yamashita <yamashita.hidey...@ntt-tx.co.jp>
> Sent: Thursday, October 31, 2019 11:52
> To: Slava Ovsiienko <viachesl...@mellanox.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on
> VLAN header
> 
> Dear Slava,
> 
> Your guess is correct.
> When I put the flow into ConnectX-5, it was successful.
Very nice.

> 
> General question.
As we know - general questions are the hardest ones to answer 😊.

> Are there any way to input flow to ConnectX-4?
As usual - with the RTE flow API. Just omit dv_flow_en, or specify
dv_flow_en=0, and the mlx5 PMD will handle the RTE flow API via the Verbs
engine, which is supported by ConnectX-4.
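For example (a sketch - assuming your ConnectX-4 is still at PCI 03:00.0),
the same testpmd invocation over the Verbs engine would be:

sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=0 --socket-mem 512,512 -- -i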

> In other words, is there any way to activate Verbs?
> And which types of flows are supported in Verbs?
Please see the flow_verbs_validate() routine in mlx5_flow_verbs.c;
it shows which RTE flow items and actions are actually supported by Verbs.
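You can also ask the PMD at runtime with rte_flow_validate(), which checks
whether a rule could be created without actually inserting it. A minimal,
untested sketch (the helper name is mine; it mirrors the testpmd rule shown
below, with the same destination MAC and VLAN ID):

#include <stdio.h>
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: validate an eth/vlan -> of_pop_vlan/queue rule on port_id.
 * On the Verbs engine, unsupported items/actions are rejected here,
 * with the reason reported through the rte_flow_error struct. */
static int
check_pop_vlan_rule(uint16_t port_id)
{
	struct rte_flow_attr attr = { .group = 1, .ingress = 1 };
	struct rte_flow_item_eth eth = {
		.dst.addr_bytes = { 0x00, 0x16, 0x3e, 0x2e, 0x7b, 0x6a },
	};
	struct rte_flow_item_vlan vlan = {
		.tci = RTE_BE16(1480), /* VLAN ID 1480, PCP/DEI = 0 */
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &vlan },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_OF_POP_VLAN },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err = { 0 };
	int ret = rte_flow_validate(port_id, &attr, pattern, actions, &err);

	if (ret)
		printf("rule not supported: %s\n",
		       err.message ? err.message : "(no details)");
	return ret;
}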

With best regards, Slava


> 
> -----------------------------------------------------------
> tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
> sudo ./testpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-mem 512,512
> --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1
> --nb-cores=2 --txq=16 --rxq=16
> [sudo] password for tx_h-yamashita:
> EAL: Detected 48 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1017 net_mlx5
> net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on device
> mlx5_1
> 
> Interactive-mode selected
> 
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> 
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
> 
> Configuring Port 0 (socket 0)
> Port 0: B8:59:9F:C1:4A:CE
> Checking link statuses...
> Done
> testpmd>
> testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan /
> queue index 0 / end
> Flow rule #0 created
> testpmd>
> ----------------------------------------------------------------------------
> 
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
> 
> > Hi, Hideyuki
> >
> > > -----Original Message-----
> > > From: Hideyuki Yamashita <yamashita.hidey...@ntt-tx.co.jp>
> > > Sent: Wednesday, October 30, 2019 12:46
> > > To: Slava Ovsiienko <viachesl...@mellanox.com>
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > action on VLAN header
> > >
> > > Hello Slava,
> > >
> > > Thanks for your help.
> > > I added the magic phrase, changing the PCI number to the proper one
> > > in my env.
> >
> > > It changed the situation but still results in an error.
> > >
> > > I used /usertools/dpdk-setup.sh to allocate hugepages dynamically.
> > > Your help is appreciated.
> > >
> > > I think it is getting closer.
> > > tx_h-yamashita@R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
> > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1 --socket-mem 512,512
> > > --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1
> > > --nb-cores=2
> >
> > The mlx5 PMD supports two flow engines:
> > - Verbs, the legacy one: almost no new features are being added, just
> >   bug fixes; it provides a slow rule insertion rate, etc.
> > - Direct Rules, the new one: all new features are being added here.
> >
> > (We had one more intermediate engine - Direct Verbs; it was dropped,
> > but the dv prefix in dv_flow_en remains 😊.)
> >
> > Verbs is supported on all NICs - ConnectX-4, ConnectX-4 Lx, ConnectX-5,
> > ConnectX-6, etc.
> > Direct Rules is supported on NICs starting from ConnectX-5.
> > The "dv_flow_en=1" parameter engages Direct Rules, but I see you run
> > testpmd over 03:00.0, which is a ConnectX-4 and does not support Direct
> > Rules. Please run over the ConnectX-5 you have on your host.
> >
> > As for the error - it is not related to memory; rdma-core just failed
> > to create the group table, because ConnectX-4 does not support DR.
> >
> > With best regards, Slava
> >
> > > --txq=16 --rxq=16
> > > EAL: Detected 48 lcore(s)
> > > EAL: Detected 2 NUMA nodes
> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: Probing VFIO support...
> > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on
> > > device
> > > mlx5_3
> > >
> > > Interactive-mode selected
> > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > size=2176, socket=0
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > size=2176, socket=1
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > >
> > > Warning! port-topology=paired and odd forward ports number, the last
> > > port will pair with itself.
> > >
> > > Configuring Port 0 (socket 0)
> > > Port 0: B8:59:9F:DB:22:20
> > > Checking link statuses...
> > > Done
> > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> > > 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan /
> > > queue index 0 / end
> > > Caught error type 1 (cause unspecified): cannot create table: Cannot
> > > allocate memory
> > >
> > >
> > > BR,
> > > Hideyuki Yamashita
> >
> 
