On Fri, 24 Feb 2017 13:25:46 +0000, Bruce Richardson
<bruce.richard...@intel.com> wrote:
> On Fri, Feb 24, 2017 at 02:17:31PM +0100, Olivier Matz wrote:
> > On Fri, 24 Feb 2017 11:33:11 +0000, Remy Horton
> > <remy.hor...@intel.com> wrote:
> > > On 22/02/2017 19:06, Dumitrescu, Cristian wrote:
> > > [..]
> > > > This essentially leads to the "other" repos becoming second
> > > > class citizens that can be broken at any time without prior
> > > > notice or the right to influence the change. The amount of
> > > > maintenance work becomes very difficult to quantify (e.g. we
> > > > all know what a ripple effect a change in the mbuf structure
> > > > can cause to any of those "other" DPDK libraries).
> > >
> > > +1 - In my experience anything other than a single repository
> > > ends up in tears sooner or later. At a previous company I worked
> > > on a project where each "module" went into its own repo, all
> > > forty-five of which were strung together using Gerrit/Jenkins,
> > > the result being that I spent more time on rebases and build
> > > breakages than on writing business logic. Patchsets that cross
> > > repo boundaries are a recipe for pain, and if DPDK goes down the
> > > same route, it will likely cripple development.
> >
> > On the other hand, I think we can agree that everything that
> > depends on dpdk cannot be hosted in the dpdk repository.
> >
> > Many applications are hosted in other repositories: for instance
> > pktgen or vpp. A given version of these applications runs on a
> > given version of dpdk. The same could apply to libraries.
> >
> > Having apps/libs outside the dpdk repo is more work for their
> > maintainers, because they may need to revalidate (compilation +
> > test) against each dpdk version. Having them inside the dpdk repo
> > is more work for the maintainers of the dpdk core libs, because
> > they need to update all of them when they make big changes. This
> > is sometimes not doable because they don't have a test platform
> > or the knowledge for each pmd or lib.
>
> Maybe not, and I wouldn't expect someone making a change to have to
> test every library affected by the change. However, I would expect
> them to have enough knowledge of the change being made to update the
> affected code in other libraries in a semi-mechanical way, and to
> ensure it compiles. If you are changing a core library and are not
> able to change all uses of the API yourself, then you really need to
> question whether the change is a good one, or whether you are
> qualified to make it if you don't understand how the old code was
> being used.
Yes. But the problem arises when the person who wants to make the
change is aware that he/she does not have the skills to update all
the libraries. In the end, the change may never be done because of
the amount of work. I can give some examples (see the sketch after
the quoted text below):

- removing the m->port field (requires updating some examples which
  hijack this field, and maybe some libs too)
- moving some mbuf fields to the second cache line (requires updating
  all the PMDs, including the vector ones)

However, I agree with what you said on the content :) I just want to
highlight that both approaches have their advantages and drawbacks.
And because there is no easy answer here, I think it's wise to leave
this decision to the technical board.

> I also think it's likely that many users of DPDK code will have
> legacy code using DPDK, for which the original programmer may not
> be the one making the updates. If, again, the change is such that
> it can't be done in a relatively mechanical way, that is going to
> be a problem for all those users.
>
> As for review and testing, that is the responsibility of the
> maintainers and validators responsible for the individual library
> being updated.
>
> /Bruce
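
To make the ripple effect concrete, here is a minimal, self-contained
sketch of the two failure modes listed above. The struct, its field
offsets, and the function names are hypothetical stand-ins, not the
real rte_mbuf definition or any actual PMD code; only m->port and the
cache-line split are taken from the discussion.

/* Toy stand-in for struct rte_mbuf (hypothetical layout, for
 * illustration only). Build with a C11 compiler. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct toy_mbuf {
	/* --- first cache line (bytes 0..63) --- */
	void     *buf_addr;
	uint64_t  buf_physaddr;
	uint16_t  port;      /* removing this breaks any code that
	                      * reads or hijacks it */
	uint16_t  data_len;
	uint32_t  pkt_len;
	uint8_t   pad0[40];  /* filler up to the cache-line boundary */
	/* --- second cache line (bytes 64..127) --- */
	struct toy_mbuf *next;
	uint8_t   pad1[56];
};

/* Failure mode 1: an application "hijacks" m->port to carry its own
 * metadata. Deleting the field breaks this code, which may live in a
 * repo the core maintainer never sees. */
static void app_tag_packet(struct toy_mbuf *m, uint16_t worker_id)
{
	m->port = worker_id; /* not an RX port at all */
}

/* Failure mode 2: a vector-style RX path assumes certain fields
 * share the first cache line so a single wide load can fill them.
 * Moving such a field to the second cache line trips this
 * compile-time check in every driver that carries one. */
static_assert(offsetof(struct toy_mbuf, pkt_len) < 64,
              "pkt_len must stay in the first cache line");

int main(void)
{
	struct toy_mbuf m = {0};

	app_tag_packet(&m, 3);
	return m.port == 3 ? 0 : 1;
}

Neither failure is visible from within the core library itself: the
first only shows up at run time in an external application, and the
second only at build time of each affected driver, which is exactly
why the cost of such changes is hard to quantify in advance.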