> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Wednesday, August 30, 2017 1:43 PM
> To: Ananyev, Konstantin <konstantin.anan...@intel.com>; Shahaf Shuler 
> <shah...@mellanox.com>; Thomas Monjalon
> <tho...@monjalon.net>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [RFC PATCH 4/4] ethdev: add helpers to move to the 
> new offloads API
> 
> On 8/30/2017 11:16 AM, Ananyev, Konstantin wrote:
> > Hi Ferruh,
> >
> >> -----Original Message-----
> >> From: Yigit, Ferruh
> >> Sent: Wednesday, August 30, 2017 8:51 AM
> >> To: Shahaf Shuler <shah...@mellanox.com>; Thomas Monjalon 
> >> <tho...@monjalon.net>; Ananyev, Konstantin
> >> <konstantin.anan...@intel.com>
> >> Cc: dev@dpdk.org
> >> Subject: Re: [dpdk-dev] [RFC PATCH 4/4] ethdev: add helpers to move to the 
> >> new offloads API
> >>
> >> On 8/30/2017 7:30 AM, Shahaf Shuler wrote:
> >>> Tuesday, August 29, 2017 3:55 PM, Ferruh Yigit:
> >>>>>> Considering that the re-configuration is risky, and without other
> >>>>>> ideas, I will need to fall back to the error flow case.
> >>>>>> Are we OK with that?
> >>>>>
> >>>>> I think we can take the risk of keeping this call to
> >>>>> rte_eth_dev_configure() in the middle of rte_eth_rx_queue_setup().
> >>>>> In theory it should be acceptable.
> >>>>> If we merge it soon, it can be better tested with all drivers.
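> >>>>>
> >>>>> Roughly, keeping that call could look like the sketch below
> >>>>> (pmd_uses_old_offload_api() and convert_rx_offloads() are invented
> >>>>> names for illustration, not actual code):
> >>>>>
> >>>>> int
> >>>>> rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
> >>>>>                        uint16_t nb_rx_desc, unsigned int socket_id,
> >>>>>                        const struct rte_eth_rxconf *rx_conf,
> >>>>>                        struct rte_mempool *mb_pool)
> >>>>> {
> >>>>>         struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> >>>>>         int ret;
> >>>>>
> >>>>>         if (rx_conf != NULL && pmd_uses_old_offload_api(dev)) {
> >>>>>                 /* PMD only understands port-level offloads: fold
> >>>>>                  * the per-queue request into rxmode and re-run
> >>>>>                  * configure(). This is the risky re-configuration
> >>>>>                  * in the middle of queue setup. */
> >>>>>                 convert_rx_offloads(rx_conf->offloads,
> >>>>>                                     &dev->data->dev_conf.rxmode);
> >>>>>                 ret = rte_eth_dev_configure(port_id,
> >>>>>                                 dev->data->nb_rx_queues,
> >>>>>                                 dev->data->nb_tx_queues,
> >>>>>                                 &dev->data->dev_conf);
> >>>>>                 if (ret < 0)
> >>>>>                         return ret;
> >>>>>         }
> >>>>>         /* ... the rest of the existing queue setup ... */
> >>>>> }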
> >>>>
> >>>> I have doubts about taking that risk. Some drivers do HW
> >>>> configuration via configure(), and the combination of start/stop,
> >>>> setup_queue and configure() can be complex.
> >>>>
> >>>> I am for generating an error in this case.
> >>>>
> >>>> Generating an error can also be a good motivation for PMDs to adopt
> >>>> the new method.
> >>>
> >>> Adding Ananyev's suggestion from the other thread:
> >>> For tx_prepare() work, we used the following approach:
> >>> 1. Submitted a patch with changes in rte_ethdev and the PMDs we are
> >>> familiar with (Intel ones).
> >>>     For other PMDs the patch contained just minimal changes to make
> >>> them build cleanly.
> >>> 2. Asked other PMD maintainers to review rte_ethdev changes and provide a 
> >>> proper patch
> >>>     for the PMD they own.
> >>
> >> tx_prepare() is a little different: it was not clear whether all PMDs
> >> needed updating, which is why PMD owners were asked, and the ones that
> >> required updating were already updated with the ethdev patch. Here we
> >> know all PMDs need updating, and they need proper time in advance.
> >>
> >>>
> >>> So I am OK with both suggestions. Meaning:
> >>> 1. Define the case where an application uses the new offloads API
> >>> with a PMD which supports only the old one as an error.
> >>> 2. Apply patches to ethdev with the above behavior.
> >>>
> >>> Just to emphasize, it means that PMDs which have not moved to the new
> >>> API by the end of 17.11 will not be able to run with any of the
> >>> examples and applications in the DPDK tree (nor with other
> >>> applications which have moved to the new API), as I plan to submit
> >>> patches which convert them all to the new API.
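> >>>
> >>> To give an idea of the conversion, the per-application change is
> >>> mostly mechanical, along the lines of the sketch below (the exact
> >>> DEV_RX_OFFLOAD_* flags depend on what each application enables
> >>> today):
> >>>
> >>> /* Before: per-feature bitfields in rxmode */
> >>> struct rte_eth_conf port_conf = {
> >>>         .rxmode = {
> >>>                 .hw_ip_checksum = 1,
> >>>                 .hw_strip_crc   = 1,
> >>>         },
> >>> };
> >>>
> >>> /* After: a single offloads bitmap */
> >>> struct rte_eth_conf port_conf = {
> >>>         .rxmode = {
> >>>                 .offloads = DEV_RX_OFFLOAD_CHECKSUM |
> >>>                             DEV_RX_OFFLOAD_CRC_STRIP,
> >>>         },
> >>> };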
> >>
> >> I think it is a good idea to update the samples/apps to the new
> >> method, but this may be short notice for PMD owners.
> >>
> >> Can we wait one more release to update the samples/apps, to give PMDs
> >> time to be updated? Since old applications will work with new PMDs
> >> (thanks to your helpers), I believe this won't be a problem.
> >
> > I am not sure what your suggestion is here?
> > Support both flavors of PMD API for 17.11?
> 
> Support both, with one exception: when an application uses the new
> method but the PMD uses the old one, throw an error (because of the
> technical issue discussed above).
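> 
> Something along these lines inside queue setup (a sketch only; the
> pmd_uses_old_offload_api() helper below is invented for illustration,
> and how we actually detect an old-API PMD is up to the patch):
> 
> 	/* Application asked for the new offloads API but the PMD has
> 	 * not been converted yet: refuse instead of re-configuring. */
> 	if (rx_conf->offloads != 0 && pmd_uses_old_offload_api(dev)) {
> 		RTE_PMD_DEBUG_TRACE(
> 			"port %u: PMD does not support the new offloads API\n",
> 			port_id);
> 		return -ENOTSUP;
> 	}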
> 
> This lets existing applications run without problems, and pushes PMDs
> to adopt the new method.
> 
> Your suggestion to have only the new method and to convert all PMDs to
> it is good, but I am not sure it is realistic for this release, since
> different PMDs update at different paces.
> 
> The ethdev updates can be done in this release, together with the PMDs
> that have already switched to the new method. The sample/app
> modifications Shahaf mentioned can be done at the beginning of the next
> release, which gives the remaining PMDs until the end of the next
> release. What do you think?

If it is just a timing concern, can we perhaps ask PMD maintainers at
least to:
- review the proposed new API and object if they have any concerns,
- provide an estimate of how long it would take to adopt the new one,
let's say within a week?
Then we could make a final decision: do things in one go, or prolong the
pain for two releases.
Konstantin

> 
> > Konstantin
> >
> >>
> >>>
> >>> Any objection to this approach?
> >>>
> >>>
> >
