The time it takes for block layer patches to be reviewed and merged is a concern. Benoit raised this issue, and Kevin and I want to discuss it on the mailing list.
The volume of patches is so high that Kevin and I cannot review them all in a timely manner. We are adjusting the process as a result.

Summary
-------
From now on, 1 Reviewed-by from another contributor is required before Kevin and I will look at patches.

Hint: CC a regular contributor (not Kevin or me) who you think might be able to review your patches (see the example at the end of this mail).

This will:

1. Spread the load of patch review.
2. Let Kevin and me focus on patches that are ready to be merged.

Caveats:

1. Expect a few weeks before everyone has adjusted and improvement is visible.
2. Exceptions may be made, such as for security or critical bug fixes.

Read on for the rationale and more details...

Background
----------
Both regular contributors and newcomers should experience quick code review feedback so patches can be iterated and merged at a rapid pace.

Kevin and I co-maintain the block layer. Each week one of us is on full-time duty to review patches, investigate qemu.git build or test failures, and participate in the release process. The volume of patches is so high that we have a hard time consistently providing review feedback within a reasonable timeframe.

A reminder of why maintainers review patches: they are familiar with the code and often identify bugs before the patches are committed. They also take a big-picture view of the codebase and can advise on the design of code, whereas individual contributors may not be aware of the activities in other areas of the code.

The challenge is making this scale. There are two ways to achieve this:

1. Increase Kevin's and my productivity
2. Add more resources

Increasing productivity
-----------------------
There are 3 time-consuming activities:

1. Replying to patches that are fundamentally broken
2. Screening patches that do not meet the code submission guidelines
   http://qemu-project.org/Contribute/SubmitAPatch
3. Bisecting build or test failures

If Kevin and I spend our time dealing with patches that cannot be merged, we have less time to spend on realistic candidates.

#1 can be solved by requiring review from the community. This gets patches that need to be redone with more care out of the way.

#2 and #3 can be solved by automated builds and tests. I have a plan for a bot that evaluates every patch series on the mailing list (see the sketch below). This way no human time is spent verifying that a patch meets the basic requirements.

As a result of eliminating these time-consuming activities, Kevin and I can focus on merging the good patch series.
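To make the bot idea above concrete, here is a rough sketch of the kind of checks it could run against each series posted to the list. This is illustrative only: the script and file names are made up, and the exact set of checks has not been decided yet.

  #!/bin/sh
  # Illustrative sketch, not the actual bot: apply a series from an
  # mbox file and run basic sanity checks on it.
  set -e

  # Throwaway branch on top of current master
  git checkout -B series-test origin/master
  git am --3way series.mbox

  # Submission guideline / style checks on each patch in the series
  for p in $(git format-patch origin/master..HEAD); do
      ./scripts/checkpatch.pl "$p"
  done

  # Make sure the tree still builds and the test suite passes
  ./configure --target-list=x86_64-softmmu
  make -j8
  make check

Anything that fails a step like these would get an automated reply instead of taking up reviewer time. For #3, build or test failures that slip through can usually be narrowed down with "git bisect run make check" rather than by hand.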
Adding more resources
---------------------
Unfortunately, it is not easy to add more co-maintainers without losing the big picture and sacrificing code quality. Instead the focus is to delegate sections of the codebase, as was done successfully with nbd, sheepdog, vmdk, etc. Delegating has natural boundaries like block drivers or source files. It is not possible to add more people arbitrarily and still get good results.

The alternative is to scale patch review by requiring 1 Reviewed-by before Kevin and I will look at a patch. Contributors need to participate in code review amongst each other in order to get code merged.

To keep Reviewed-by tags meaningful, they can only be accepted from regular contributors. If Kevin or I find that the review quality from a person is consistently poor, we will talk to them and may stop counting their Reviewed-by tags.

Patch review priorities
-----------------------
Benoit asked what factors influence which patch series get reviewed. I think this is less relevant with the new 1 R-b rule, but others may be wondering the same thing.

Kevin and I use multiple factors and make a subjective decision when selecting patches to review. These include:

* Is it a bug fix?
* Is the patch series short?
* Does it touch familiar parts of the codebase?
* Does the contributor usually submit high-quality code?
* Is the design straightforward?
* Does the contributor participate in code review?

There is no formula or algorithm that Kevin or I want to commit to, because it would make the already tough review process even less motivating. The problem is that traffic is too high for the current process, not that the patch selection algorithm isn't fair. That is why it's necessary to scale the review process (which is what the rest of this email is about).

If you have feedback or questions, let us know. The process can be tweaked as time goes on so we can continue to improve.
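Example
-------
To make the hint in the Summary concrete, here is what the new process looks like in practice. The reviewer name, address, and patch file names below are made up; qemu-devel@nongnu.org is the real list address.

When sending a series, CC a regular contributor who knows the area you are touching:

  git send-email --to=qemu-devel@nongnu.org \
                 --cc="Jane Doe <jane.doe@example.com>" \
                 v2-*.patch

The reviewer replies to each patch they are happy with and adds a tag line:

  Reviewed-by: Jane Doe <jane.doe@example.com>

Once the series has a Reviewed-by from a regular contributor other than Kevin or me, it becomes a candidate for us to review and merge.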