[dpdk-dev] Proposal for a new Committer model
On Fri, Nov 18, 2016 at 01:09:35PM -0500, Neil Horman wrote:
> On Thu, Nov 17, 2016 at 09:20:50AM +, Mcnamara, John wrote:
> > Repost from the moving at dpdk.org mailing list to get a wider audience.
> > Original thread: http://dpdk.org/ml/archives/moving/2016-November/59.html
> >
> > Hi,
> >
> > I'd like to propose a change to the DPDK committer model. Currently we have
> > one committer for the master branch of the DPDK project.
> >
> > One committer to master represents a single point of failure and at times
> > can be inefficient. There is also no agreed cover for times when the
> > committer is unavailable, such as vacation, public holidays, etc. I propose
> > that we change to a multi-committer model for the DPDK project. We should
> > have three committers for each release that can commit changes to the
> > master branch.
> >
> > There are a number of benefits:
> >
> > 1. Greater capacity to commit patches.
> > 2. No single point of failure - a committer should always be available if
> >    we have three.
> > 3. More timely committing of patches. More committers should mean a faster
> >    turnaround - ideally, maintainers should also provide feedback on
> >    submitted patches within a 2-3 day period, as much as possible, to
> >    facilitate this.
> > 4. It follows best practice in creating a successful multi-vendor community
> >    - to achieve this we must ensure there is a level playing field for all
> >    participants; no single person should be required to make all of the
> >    decisions on patches to be included in the release.
> >
> > Having multiple committers will require some degree of co-ordination, but a
> > number of other communities successfully follow this model, such as Apache,
> > OVS, FD.io, OpenStack, etc., so the approach is workable.
> >
> > John
>
> I agree that the problems you are attempting to address exist and are
> worth finding a solution for. That said, I don't think the solution you
> are proposing is the ideal, or complete, fix for any of the issues being
> addressed.
>
> If I may, I'd like to enumerate the issues I think you are trying to
> address based on your comments above, then make a counter-proposal for a
> solution.
>
> Problems to address:
>
> 1) High availability - there is a desire to make sure that, when patches
>    are proposed, they are integrated in a timely fashion.
>
> 2) High throughput - DPDK has a large volume of patches, more than one
>    person can normally integrate. There is a desire to shard that work so
>    that it is handled by multiple individuals.
>
> 3) Multi-vendor fairness - there is a desire for multiple vendors to feel
>    that the project tree maintainer isn't biased toward any individual
>    vendor.
>
> To solve these I would propose the following solution (which is similar
> to, but not quite identical to, yours).
>
> A) Further promote subtree maintainership. This was a conversation that I
>    proposed some time ago, but my proposed granularity was discarded in
>    favor of something that hasn't worked as well (in my opinion). That is
>    to say, a few driver PMDs (i40e and fm10k come to mind) have their own
>    trees that send pull requests to Thomas. We should be sharding that at a
>    much higher granularity and using it much more consistently. That is to
>    say, we should have a maintainer for all the Ethernet PMDs, another for
>    the crypto PMDs, another for the core EAL layer, another for misc
>    libraries with low patch volumes, etc. Each of those subdivisions should
>    have its own list to communicate on, and each should have a tree that
>    integrates patches for its own subsystem and, on a regular cycle, sends
>    pull requests to Thomas. Thomas, in turn, should by and large only be
>    integrating pull requests. This should address our high-throughput
>    issue, in that it will allow multiple maintainers to share the workload,
>    and integration should be relatively easy.

+1

> B) Designate alternates to serve as backups for the maintainer when they
>    are unavailable. This provides high availability, and sounds very much
>    like your proposal, but in the interest of clarity there is still a
>    single maintainer at any one time; it just may change to ensure the
>    continued merging of patches if the primary maintainer isn't available.
>    Ideally, however, those backup alternates aren't needed, because most of
>    the primary maintainer's work is merging pull requests, which are done
>    based on the trust of the submaintainer, and during a very limited
>    window of time. This also partially addresses multi-vendor fairness if
>    your subtree maintainers come from multiple participating companies.

+1

> Regards
> Neil
[dpdk-dev] [PATCH 0/4] libeventdev API and northbound implementation
On Fri, Nov 18, 2016 at 04:04:29PM +, Bruce Richardson wrote:
> +Thomas
>
> On Fri, Nov 18, 2016 at 03:25:18PM +, Bruce Richardson wrote:
> > On Fri, Nov 18, 2016 at 11:14:58AM +0530, Jerin Jacob wrote:
> > > As previously discussed in RFC v1 [1] and RFC v2 [2], with changes
> > > described in [3] (also pasted below), here is the first non-draft series
> > > for this new API.
> > >
> > > [1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
> > > [2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
> > > [3] http://dpdk.org/ml/archives/dev/2016-October/048196.html
> > >
> > > Changes since RFC v2:
> > >
> > > - Updated the documentation to define the need for this library [Jerin]
> > > - Added RTE_EVENT_QUEUE_CFG_*_ONLY configuration parameters in
> > >   struct rte_event_queue_conf to enable an optimized SW implementation
> > >   [Bruce]
> > > - Introduced RTE_EVENT_OP_* ops [Bruce]
> > > - Added nb_event_queue_flows, nb_event_port_dequeue_depth and
> > >   nb_event_port_enqueue_depth in rte_event_dev_configure(), like the
> > >   ethdev and crypto libraries [Jerin]
> > > - Removed rte_event_release() and replaced it with the
> > >   RTE_EVENT_OP_RELEASE op, to reduce the number of fast-path APIs; it
> > >   was redundant too [Jerin]
> > > - In the interest of better application portability, removed pin_event
> > >   from rte_event_enqueue, as it is just a hint and Intel/NXP cannot
> > >   support it [Jerin]
> > > - Added rte_event_port_links_get() [Jerin]
> > > - Added rte_event_dev_dump [Harry]
> > >
> > > Notes:
> > >
> > > - This patch set is checkpatch clean, with the exception that 02/04 has
> > >   one WARNING:MACRO_WITH_FLOW_CONTROL
> > > - Looking forward to getting additional maintainers for libeventdev
> > >
> > > Possible next steps:
> > > 1) Review this patch set
> > > 2) Integrate Intel's SW driver [http://dpdk.org/dev/patchwork/patch/17049/]
> > > 3) Review the proposed examples/eventdev_pipeline
> > >    application [http://dpdk.org/dev/patchwork/patch/17053/]
> > > 4) Review the proposed functional
> > >    tests [http://dpdk.org/dev/patchwork/patch/17051/]
> > > 5) Cavium's HW based eventdev driver
> > >
> > > I am planning to work on (3), (4) and (5)
> >
> > Thanks Jerin,
> >
> > we'll review and get back to you with any comments or feedback (1), and
> > obviously start working on item (2) also! :-)
> >
> > I also wonder whether we should have a staging tree for this work to make
> > interaction between us easier. Although this may not be finalised enough
> > for the 17.02 release, do you think having a dpdk-eventdev-next tree
> > would help? My thinking is that once we get the eventdev library itself
> > into reasonable shape following our review, we could commit that and make
> > any changes thereafter as new patches, rather than constantly respinning
> > the same set. It also gives us a clean git tree to base the respective
> > driver implementations on from our two sides.
> >
> > Thomas, any thoughts here on your end - or from anyone else?

I was thinking more or less along the same lines. To avoid re-spinning the
same set, it is better to mark the libeventdev library as EXPERIMENTAL and
commit it somewhere, on dpdk-eventdev-next or the main tree. I think the
EXPERIMENTAL status can be changed only when:

- At least two event drivers are available
- The functional test applications are fine with at least two drivers
- A portable example application exists to showcase the features of the library
- eventdev is integrated with another DPDK subsystem such as ethdev

Jerin

> > Regards,
> > /Bruce
[dpdk-dev] [RFC PATCH 0/7] RFC: EventDev Software PMD
On Thu, Nov 17, 2016 at 10:05:07AM +, Bruce Richardson wrote:
> > 2) device stats API can be based on capability, HW implementations may
> >    not support all the stats
>
> Yes, this is something we were thinking about. It would be nice if we
> could at least come up with a common set of stats - maybe even ones
> tracked at an eventdev API level, e.g. nb enqueues/dequeues. As well as
> that, we think the idea of an xstats API, like in ethdev, might work
> well. For our software implementation, having visibility into the
> scheduler behaviour can be important, so we'd like a way to report out
> things like internal queue depths etc.

Since this is not very generic hardware, I am not sure how much sense a
fully generic stats API makes. But something similar to ethdev's xstats
(any capability-based scheme) works well. Looking forward to seeing the API
proposal with common code.

Jerin
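The capability-based, ethdev-xstats-style scheme discussed above can be
sketched as follows. This is a minimal model, not the eventdev API (which is
still under discussion here); all names are made up for illustration. The key
idea is that each driver exports a table of named 64-bit counters and generic
code looks them up by name, so a driver that cannot support a given stat
simply omits it from its table:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* One named counter, as an xstats-style table entry. */
struct xstat_entry {
	const char *name;   /* e.g. "sched_internal_queue_depth" */
	uint64_t    value;
};

/* Example table a software PMD might export; a HW PMD would export a
 * different (possibly shorter) table. Names and values are illustrative. */
static struct xstat_entry sw_pmd_xstats[] = {
	{ "nb_enqueues", 100 },
	{ "nb_dequeues", 95 },
	{ "sched_internal_queue_depth", 5 },
};

/* Generic lookup by name: returns 0 on success, -1 if the driver does
 * not expose that stat (the capability check falls out of the lookup). */
int xstat_get(const struct xstat_entry *tbl, size_t n,
	      const char *name, uint64_t *out)
{
	for (size_t i = 0; i < n; i++) {
		if (strcmp(tbl[i].name, name) == 0) {
			*out = tbl[i].value;
			return 0;
		}
	}
	return -1;
}
```

A caller never needs to know in advance which stats a given implementation
supports; an unsupported name is just a failed lookup rather than an API
error.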
[dpdk-dev] Question about time library
Hi everyone,

First of all, thank you for reading my email. While reading the code of the
timer library, I came across one point that I don't fully understand.

In rte_timer_manage(), when a periodic timer expires, its state is changed to
RTE_TIMER_PENDING before __rte_timer_reset() is called, and all levels of the
skiplist's pending_head are modified to point to the list of unexpired
timers; that is, the timer to be reset is no longer in the skiplist
priv_timer[tim_lcore].pending_head.

However, in __rte_timer_reset() called from rte_timer_manage(), if the
timer's previous state (after calling timer_set_config_state()) is
RTE_TIMER_PENDING, it will call timer_del() to remove the timer from the
skiplist priv_timer[tim_lcore].pending_head - but the timer is already not
in the skiplist at that point.

That is the part I don't fully understand. If anyone knows about it, please
let me know. Thanks.

Titen
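The situation the question describes can be modelled with a much simpler
structure. This is a single-level-list sketch, not the DPDK internals (the
real library uses a multi-level skiplist, and the function names below are
simplified stand-ins): manage() detaches the expired prefix of the pending
list in one step, so a detached timer can still carry the PENDING state even
though it is no longer reachable from pending_head:

```c
#include <stddef.h>

/* Simplified model: pending timers kept in a singly linked list sorted by
 * expiry time. Real DPDK uses a skiplist; states mirror RTE_TIMER_*. */
enum state { STOPPED, PENDING };

struct timer {
	struct timer *next;
	unsigned long expire;
	enum state state;
};

/* Detach all timers with expire <= now from the pending list and return
 * them; pending_head is left pointing at the unexpired remainder. The
 * detached timers still have state == PENDING although they are no longer
 * on the list - the apparent contradiction the question asks about. */
struct timer *detach_expired(struct timer **pending_head, unsigned long now)
{
	struct timer *expired = *pending_head;
	struct timer **p = pending_head;

	while (*p && (*p)->expire <= now)
		p = &(*p)->next;
	if (p == pending_head)
		return NULL;        /* nothing expired */
	*pending_head = *p;         /* unexpired remainder stays pending */
	*p = NULL;                  /* terminate the expired sublist */
	return expired;
}
```

In this model, re-inserting a detached periodic timer would not need a
delete-from-list step, which is essentially what the question observes about
the PENDING-state path in __rte_timer_reset().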
[dpdk-dev] [PATCH] ethdev: don't look for devices if none were found
Aside from avoiding doing useless work, this also fixes a segfault when
calling rte_eth_dev_get_port_by_name() whenever no devices were found yet,
and therefore rte_eth_dev_data wasn't yet allocated.

Fixes: 9c5b8d8b9feb ("ethdev: clean port id retrieval when attaching")

Signed-off-by: Anatoly Burakov
---
 lib/librte_ether/rte_ethdev.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index fde8112..76a6dbf 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -376,6 +376,9 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id)
 		return -EINVAL;
 	}

+	if (!nb_ports)
+		return -ENODEV;
+
 	*port_id = RTE_MAX_ETHPORTS;

 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
--
2.5.5
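The guard pattern in the patch can be shown with a self-contained toy model.
The names below are made up (the real function scans rte_eth_dev_data[], not
this array); the point is that the device table may still be unallocated when
no device has been attached, which is exactly the case the new early return
rejects with -ENODEV before any dereference can happen:

```c
#include <stddef.h>
#include <string.h>

#define MAX_PORTS 32

/* Toy stand-ins for the global device table; port_names stays NULL until
 * a first device is attached, mirroring the unallocated rte_eth_dev_data. */
static const char **port_names;
static unsigned int nb_ports;

/* Simplified lookup modelled on rte_eth_dev_get_port_by_name(). */
int get_port_by_name(const char *name, unsigned int *port_id)
{
	if (name == NULL || port_id == NULL)
		return -1;          /* models the -EINVAL parameter check */
	if (nb_ports == 0)
		return -2;          /* models the new -ENODEV early return:
				     * without it the loop below would
				     * dereference the NULL table */
	for (unsigned int i = 0; i < MAX_PORTS; i++) {
		if (port_names[i] && strcmp(port_names[i], name) == 0) {
			*port_id = i;
			return 0;
		}
	}
	return -2;                  /* no port with that name */
}
```

With zero ports the function now fails cleanly instead of crashing, which is
the behaviour the patch restores in ethdev.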
[dpdk-dev] Proposal for a new Committer model
Why aren't some patches marked as trivial and accepted right away?

On Fri, Nov 18, 2016 at 11:06 AM, Jerin Jacob <
jerin.jacob at caviumnetworks.com> wrote:

> [snip - full quote of the preceding message]