Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybche...@oktetlabs.ru>
> Sent: Monday, 3 October 2022 12:44
> 
> On 10/3/22 11:23, Ori Kam wrote:
> > Hi Andrew
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybche...@oktetlabs.ru>
> >> Sent: Monday, 3 October 2022 10:54
> >> On 9/29/22 17:54, Michael Savisko wrote:
> >>> In some cases an application may receive a packet that should have been
> >>> received by the kernel. In this case the application uses KNI or other
> >>> means to transfer the packet to the kernel.
> >>>
> >>> With a bifurcated driver we can have a rule to route packets matching
> >>> a pattern (example: IPv4 packets) to the DPDK application, while the rest
> >>> of the traffic is received by the kernel.
> >>> But if we want to receive most of the traffic in DPDK except for a
> >>> specific pattern (example: ICMP packets) that should be processed by
> >>> the kernel, then it's easier to re-route these packets with a single rule.
> >>>
> >>> This commit introduces a new rte_flow action which allows an application
> >>> to re-route packets directly to the kernel without software involvement.
> >>>
> >>> Add new testpmd rte_flow action 'send_to_kernel'. The application
> >>> may use this action to route packets to the kernel while they are
> >>> still in the HW.
> >>>
> >>> Example with testpmd command:
> >>>
> >>> flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
> >>> type mask 0xffff / end actions send_to_kernel / end
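> >>>
> >>> For reference, a minimal sketch of how an application might create the
> >>> equivalent rule through the C API (illustrative only: error handling is
> >>> omitted and port_id is assumed to be a valid port of a bifurcated
> >>> device):
> >>>
> >>> #include <rte_ether.h>
> >>> #include <rte_flow.h>
> >>>
> >>> static struct rte_flow *
> >>> send_ipv4_to_kernel(uint16_t port_id)
> >>> {
> >>>         /* Same attributes as the testpmd command above. */
> >>>         struct rte_flow_attr attr = {
> >>>                 .group = 1,
> >>>                 .priority = 0,
> >>>                 .ingress = 1,
> >>>         };
> >>>         struct rte_flow_item_eth eth_spec = {
> >>>                 .hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
> >>>         };
> >>>         struct rte_flow_item_eth eth_mask = {
> >>>                 .hdr.ether_type = RTE_BE16(0xffff),
> >>>         };
> >>>         struct rte_flow_item pattern[] = {
> >>>                 {
> >>>                         .type = RTE_FLOW_ITEM_TYPE_ETH,
> >>>                         .spec = &eth_spec,
> >>>                         .mask = &eth_mask,
> >>>                 },
> >>>                 { .type = RTE_FLOW_ITEM_TYPE_END },
> >>>         };
> >>>         /* The new action takes no configuration structure. */
> >>>         struct rte_flow_action actions[] = {
> >>>                 { .type = RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL },
> >>>                 { .type = RTE_FLOW_ACTION_TYPE_END },
> >>>         };
> >>>         struct rte_flow_error error;
> >>>
> >>>         return rte_flow_create(port_id, &attr, pattern, actions, &error);
> >>> }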
> >>>
> >>> Signed-off-by: Michael Savisko <michael...@nvidia.com>
> >>> Acked-by: Ori Kam <or...@nvidia.com>
> >>> ---
> >>> v4:
> >>> - improve description comment above RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL
> >>>
> >>> v3:
> >>> http://patches.dpdk.org/project/dpdk/patch/20220919155013.61473-1-michael...@nvidia.com/
> >>>
> >>> v2:
> >>> http://patches.dpdk.org/project/dpdk/patch/20220914093219.11728-1-michael...@nvidia.com/
> >>>
> >>> ---
> >>>    app/test-pmd/cmdline_flow.c                 |  9 +++++++++
> >>>    doc/guides/testpmd_app_ug/testpmd_funcs.rst |  2 ++
> >>>    lib/ethdev/rte_flow.c                       |  1 +
> >>>    lib/ethdev/rte_flow.h                       | 12 ++++++++++++
> >>>    4 files changed, 24 insertions(+)
> >>>
> >>> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> >>> index 7f50028eb7..042f6b34a6 100644
> >>> --- a/app/test-pmd/cmdline_flow.c
> >>> +++ b/app/test-pmd/cmdline_flow.c
> >>> @@ -612,6 +612,7 @@ enum index {
> >>>           ACTION_PORT_REPRESENTOR_PORT_ID,
> >>>           ACTION_REPRESENTED_PORT,
> >>>           ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
> >>> +         ACTION_SEND_TO_KERNEL,
> >>>    };
> >>>
> >>>    /** Maximum size for pattern in struct rte_flow_item_raw. */
> >>> @@ -1872,6 +1873,7 @@ static const enum index next_action[] = {
> >>>           ACTION_CONNTRACK_UPDATE,
> >>>           ACTION_PORT_REPRESENTOR,
> >>>           ACTION_REPRESENTED_PORT,
> >>> +         ACTION_SEND_TO_KERNEL,
> >>>           ZERO,
> >>>    };
> >>>
> >>> @@ -6341,6 +6343,13 @@ static const struct token token_list[] = {
> >>>                   .help = "submit a list of associated actions for red",
> >>>                   .next = NEXT(next_action),
> >>>           },
> >>> +         [ACTION_SEND_TO_KERNEL] = {
> >>> +                 .name = "send_to_kernel",
> >>> +                 .help = "send packets to kernel",
> >>> +                 .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
> >>> +                 .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> >>> +                 .call = parse_vc,
> >>> +         },
> >>>
> >>>           /* Top-level command. */
> >>>           [ADD] = {
> >>> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> >>> index 330e34427d..c259c8239a 100644
> >>> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> >>> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> >>> @@ -4189,6 +4189,8 @@ This section lists supported actions and their attributes, if any.
> >>>
> >>>      - ``ethdev_port_id {unsigned}``: ethdev port ID
> >>>
> >>> +- ``send_to_kernel``: send packets to kernel.
> >>> +
> >>>    Destroying flow rules
> >>>    ~~~~~~~~~~~~~~~~~~~~~
> >>>
> >>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> >>> index 501be9d602..627c671ce4 100644
> >>> --- a/lib/ethdev/rte_flow.c
> >>> +++ b/lib/ethdev/rte_flow.c
> >>> @@ -259,6 +259,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
> >>>           MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
> >>>           MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
> >>>           MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)),
> >>> +         MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
> >>>    };
> >>>
> >>>    int
> >>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> >>> index a79f1e7ef0..2c15279a3b 100644
> >>> --- a/lib/ethdev/rte_flow.h
> >>> +++ b/lib/ethdev/rte_flow.h
> >>> @@ -2879,6 +2879,18 @@ enum rte_flow_action_type {
> >>>            * @see struct rte_flow_action_ethdev
> >>>            */
> >>>           RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
> >>> +
> >>> +         /**
> >>> +          * Send packets to the kernel, without going to userspace at all.
> >>> +          * The packets will be received by the kernel driver sharing
> >>> +          * the same device as the DPDK port on which this action is
> >>> +          * configured. This action mostly suits the bifurcated driver
> >>> +          * model.
> >>> +          * This is an ingress non-transfer action only.
> >>
> >> Maybe we should not limit the definition to ingress only?
> >> It could be useful on egress as a way to reroute packets
> >> back to the kernel.
> >>
> >
> > Interesting, but there are no kernel queues on egress that can receive
> > packets (by definition of egress). Do you mean that this will also do
> > a loopback from the egress back to the ingress of the same port and
> > then send to the kernel?
> > If so, I think we need a new action "loop_back".
> 
> Yes, I meant intercepting a packet on egress and sending it to the kernel.
> But we would still need loopback+send_to_kernel, since loopback itself
> cannot send to the kernel. Moreover, it would require two rules:
> loopback on egress plus send-to-kernel on ingress. Is it really
> worth it? I'm not sure. Yes, it sounds a bit better from an
> architectural point of view, but I'm still unsure.
> I'd allow send-to-kernel on egress. Up to you.
> 

It looks more correct with loop_back on the egress and send_to_kernel on the ingress.
I suggest keeping the current design,
and if we later see that we can merge those two commands, we will change it.
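
Just to illustrate the alternative we are discussing (note: the
"loop_back" action below is hypothetical and does not exist in
rte_flow), it would take two rules instead of one, roughly:

flow create 0 egress pattern eth type spec 0x0800 type mask 0xffff / end actions loop_back / end
flow create 0 ingress pattern eth type spec 0x0800 type mask 0xffff / end actions send_to_kernel / end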

> >
> >>
> >>> +          *
> >>> +          * No associated configuration structure.
> >>> +          */
> >>> +         RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
> >>>    };
> >>>
> >>>    /**
> >
