On 7/7/2020 6:24 PM, Ori Kam wrote:
> Hi Ferruh,
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yi...@intel.com>
>>
>> On 7/7/2020 7:21 AM, Ori Kam wrote:
>>> Hi Jerin,
>>> Thank you for your quick reply.
>>>
>>>> -----Original Message-----
>>>> From: Jerin Jacob <jerinjac...@gmail.com>
>>>> Subject: Re: [dpdk-dev] [PATCH] add flow shared action API
>>>>
>>>> On Mon, Jul 6, 2020 at 7:02 PM Andrey Vesnovaty
>>>> <andrey.vesnov...@gmail.com> wrote:
>>>>>
>>>>> Hi, Jerin.
>>>>
>>>> Hi Ori and Andrey,
>>>>
>>>>>
>>>>> Please see Ori's suggestion below to implement your rte_flow_action_update() idea
>>>>> with some changes to the rte_flow_shared_action_xxx API.
>>>>>
>>>>> On Mon, Jul 6, 2020 at 3:28 PM Ori Kam <or...@mellanox.com> wrote:
>>>>>>
>>>>>> Hi Jerin,
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Jerin Jacob <jerinjac...@gmail.com>
>>>>>>> Sent: Monday, July 6, 2020 12:00 PM
>>>>>>> Subject: Re: [dpdk-dev] [PATCH] add flow shared action API
>>>>>>>
>>>>>>> On Sun, Jul 5, 2020 at 3:56 PM Ori Kam <or...@mellanox.com> wrote:
>>>>>>>>
>>>>>>>> Hi Jerin,
>>>>>>>> PSB,
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Ori
>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: Jerin Jacob <jerinjac...@gmail.com>
>>>>>>>>> Sent: Saturday, July 4, 2020 3:33 PM
>>>>>>>>> dpdk-dev <dev@dpdk.org>
>>>>>>>>> Subject: Re: [dpdk-dev] [PATCH] add flow shared action API
>>>>>>>>>
>>>>>>>>> On Sat, Jul 4, 2020 at 3:40 PM Andrey Vesnovaty
>>>>>>>>> <andrey.vesnov...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>> Andrey Vesnovaty
>>>>>>>>>> (+972)526775512 | Skype: andrey775512
>>>>>>>>>>
>>>>>>>>
>>>>>>>> [..snip..]
>>>>>>>>
>>>>>>>>>> I need to mention the locking issue once again.
>>>>>>>>>> If there is a need to maintain a "shared session" in the generic rte_flow layer,
>>>>>>>>>> all calls to flow_create() with a shared action, and all deletes, need to take
>>>>>>>>>> shared-session management locks at least for verification. Lock partitioning is
>>>>>>>>>> also a bit problematic since one flow may have more than one shared action.
>>>>>>>>>
>>>>>>>>> Then I think a better approach would be to introduce a public
>>>>>>>>> rte_flow_action_update() API which can take either "const struct rte_flow_action []"
>>>>>>>>> OR a shared context ID, to cater to both cases, or something along similar lines.
>>>>>>>>> This would allow HW without the shared context ID to use the action update.
>>>>>>>>
>>>>>>>> Can you please explain your idea?
>>>>>>>
>>>>>>> I see two types of HW schemes supporting action updates without going
>>>>>>> through a call to `rte_flow_destroy()` followed by `rte_flow_create()`:
>>>>>>> - The shared HW action context feature
>>>>>>> - HW that has "pattern" and "action" mapped to different HW objects, where the
>>>>>>>   action can be updated at any time.
>>>>>>> Other than the above-mentioned RSS use case, another use case would be to
>>>>>>> a) create the rte_flow and set the action as DROP (kind of reserving the HW object)
>>>>>>> b) update the action only when the rest of the requirements are ready.
>>>>>>>
>>>>>>> Any API schematic that supports both notions of HW is fine with me.
>>>>>>>
>>>>>> I have an idea: if the API is changed to something like this,
>>>>>> rte_flow_shared_action_update(uint16_t port, rte_shared_ctx *ctx, rte_flow_action *action, error)
>>>>>> this will enable the application to send a different action than the original
>>>>>> one to be switched, assuming the PMD supports this.
>>>>>> Does it answer your concerns?
>>>>>
>>>>> This allows both:
>>>>> 1. Updating the action configuration
>>>>> 2. Replacing the action by some other action
>>>>> For 2, a pure software implementation may create a shared action (that can be shared
>>>>> with one flow only, depending on the PMD) and later on rte_flow_shared_action_update()
>>>>> may replace this action with some other action via the handle returned from
>>>>> rte_flow_shared_action_create().
>>>>> The decision between 1 and 2 is per PMD.
>>>>
>>>> The struct rte_flow * object holds the driver representation of the
>>>> pattern + action.
>>>> So in order to update the action, we would need struct rte_flow * in the API.
>>>>
>>> Why is that? The idea is to change the action; the action itself is connected to flows.
>>> The PMD can save in the shared_ctx all flows that are connected to this action.
>>>
>>>> I think a simple API change to accommodate both the "rte_shared_ctx *ctx" and
>>>> "rte_flow_action *action" modes, without introducing emulation of one mode by the
>>>> other, would be:
>>>>
>>>> enum rte_flow_action_update_type {
>>>>         RTE_FLOW_ACTION_UPDATE_TYPE_SHARED_ACTION,
>>>>         RTE_FLOW_ACTION_UPDATE_TYPE_ACTION,
>>>> };
>>>>
>>>> struct rte_flow_action_update_type_param {
>>>>         enum rte_flow_action_update_type type;
>>>>         union {
>>>>                 struct rte_flow_action_update_type_shared_action_param {
>>>>                         rte_shared_ctx *ctx;
>>>>                 } shared_action;
>>>>                 struct rte_flow_action_update_type_action_param {
>>>>                         rte_flow *flow;
>>>>                         rte_flow_action *action;
>>>>                 } action;
>>>>         };
>>>> };
>>>>
>>> Thank you for the idea, but I fail to see how your suggested API is simpler than
>>> the one suggested by me.
>>> In my suggestion the PMD simply needs to check whether it is a new action and change
>>> the context to that action, or just change parameters in the action if it is the
>>> same action.
>>>
>>> Let's go with the original patch API, modified as you requested to also support
>>> changing the action, based on my comments.
>>>
>>>> rte_flow_action_update(uint16_t port, struct rte_flow_action_update_type_param *param, error)
>>>>
>>>>>>>> As I can see, if we use the flow_action array it may result in bugs.
>>>>>>>> For example, the application creates two flows with the same RSS (not using
>>>>>>>> the context).
>>>>>>>> Then it wants to change one flow to use a different RSS, but the result will be
>>>>>>>> that both flows are changed.
>>>>>>>
>>>>>>> Sorry. I don't quite follow this.
>>>>>>>
>>>>>> I was trying to show that there must be some context. But I don't think this is
>>>>>> relevant to your current ideas.
>>>>>>
>>>>>>>> Also this will force the PMD to keep track of all flows, which will have a
>>>>>>>> memory penalty for some PMDs.
>>
>> Hi Ori, Andrey,
>>
>> This is a set of new APIs and we are very close to -rc1, so we have only a
>> few days to close the feature and merge it for this release.
>>
>> Also, the accompanying PMD and testpmd implementations for the proposed API
>> changes look missing.
>>
>> We can either postpone the patchset to the next release to give time for more PMD
>> owners to participate, which can give a better API for the long term,
>> or try to squeeze it into this release, taking into account that the APIs will be
>> experimental.
>>
>> What do you think, what is your schedule for the feature, do you have room to
>> postpone it?
> Not so much, it is an important API for Mellanox.

Got it.

>
>> If not, the existing discussions first need to be resolved, and it is good to have the
>> PMD and testpmd implementations. Do you think this can be done in the next few
>> days?
>>
> I think that this is the correct API to implement. I fully agree that this API is experimental,
> just like any other new API, and might change based on comments and use cases.
> I know that Mellanox is committed to this feature and that Andrey is working around the clock
> to complete the missing parts, and should have a version by tomorrow (July 8th) evening.

OK

> (with an update to the flow filtering sample app; testpmd will not be ready by RC1,
> but it will be for RC2)
> We would like very much to push it in this version.

OK, please conclude the existing discussion before finalizing.
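
Purely as an illustration for readers following the API discussion above, here is a minimal C
sketch of what the two proposals could look like from an application's point of view, using
Jerin's "reserve the HW object with DROP, update the action later" case as the example.
Everything below is hypothetical: rte_shared_ctx, rte_flow_shared_action_update(),
rte_flow_action_update() and the parameter struct are taken from the proposals in this thread
(with the new-action pointer added to the shared branch, which the thread leaves implicit);
none of it is existing rte_flow API.

/*
 * Hypothetical sketch only: these prototypes restate the two proposals in
 * this thread and are NOT part of the existing rte_flow API.
 */
#include <stdint.h>
#include <rte_flow.h>

/* Opaque handle that rte_flow_shared_action_create() would return. */
struct rte_shared_ctx;

/* Proposal A (Ori): update the action behind an existing shared context. */
int
rte_flow_shared_action_update(uint16_t port_id, struct rte_shared_ctx *ctx,
			      const struct rte_flow_action *action,
			      struct rte_flow_error *error);

/* Proposal B (Jerin): one entry point covering both HW schemes. */
enum rte_flow_action_update_type {
	RTE_FLOW_ACTION_UPDATE_TYPE_SHARED_ACTION,
	RTE_FLOW_ACTION_UPDATE_TYPE_ACTION,
};

struct rte_flow_action_update_param {
	enum rte_flow_action_update_type type;
	union {
		struct {
			struct rte_shared_ctx *ctx;
			/* new action; left implicit in the thread */
			const struct rte_flow_action *action;
		} shared_action;
		struct {
			struct rte_flow *flow;
			const struct rte_flow_action *action;
		} action;
	};
};

int
rte_flow_action_update(uint16_t port_id,
		       const struct rte_flow_action_update_param *param,
		       struct rte_flow_error *error);

/*
 * Usage example for proposal B: a flow was created earlier with a DROP action
 * to reserve the HW object; once the application is ready, switch it to QUEUE.
 */
static int
switch_drop_to_queue(uint16_t port_id, struct rte_flow *flow, uint16_t queue)
{
	struct rte_flow_action_queue queue_conf = { .index = queue };
	struct rte_flow_action new_action = {
		.type = RTE_FLOW_ACTION_TYPE_QUEUE,
		.conf = &queue_conf,
	};
	struct rte_flow_action_update_param param = {
		.type = RTE_FLOW_ACTION_UPDATE_TYPE_ACTION,
	};
	struct rte_flow_error error;

	param.action.flow = flow;
	param.action.action = &new_action;
	return rte_flow_action_update(port_id, &param, &error);
}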