I'm not saying that the Hadoop process is perfect, far from it, but
from where I sit (like you I'm a contributor but not yet a committer)
it seems to be working OK so far for both you and me.
It does not work OK for me. It's way too slow. I got just 2k LOC
committed and still have patches floating around. That is the real and sad
result of half a year of cooperation. I know that contributor patches are
low priority in every project, but this is too low a priority for me.
Some things could be better, but the current fairly conservative process
has the benefit of keeping trunk in a really sane, safe state.
If you want to keep code in a safe state, you need (a short example
follows the list):
1. good unit tests
2. high unit test coverage
3. clean code
4. documented code
5. good Javadoc
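
To make points 1, 4, and 5 concrete, here is a minimal sketch of what I
mean. ByteSizeParser is a made-up class for illustration, not code from
any real Hadoop patch:

// Hypothetical example class, not from any actual Hadoop patch.
/** Utility for parsing human-readable byte sizes such as "64m" or "1g". */
public final class ByteSizeParser {

  private ByteSizeParser() {} // static utility, not meant to be instantiated

  /**
   * Parses a size string with an optional single-letter suffix
   * (k, m, or g, case-insensitive) into a number of bytes.
   *
   * @param size the size string, e.g. "64m"
   * @return the number of bytes represented by {@code size}
   * @throws IllegalArgumentException if the string is malformed
   */
  public static long parse(String size) {
    if (size == null || size.trim().isEmpty()) {
      throw new IllegalArgumentException("empty size");
    }
    String s = size.trim();
    long multiplier = 1;
    char last = Character.toLowerCase(s.charAt(s.length() - 1));
    if (last == 'k' || last == 'm' || last == 'g') {
      multiplier = last == 'k' ? 1L << 10 : last == 'm' ? 1L << 20 : 1L << 30;
      s = s.substring(0, s.length() - 1);
    }
    try {
      return Long.parseLong(s) * multiplier;
    } catch (NumberFormatException e) {
      throw new IllegalArgumentException("malformed size: " + size, e);
    }
  }
}

And the matching unit test, covering both the normal and the error paths:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

/** Unit tests for ByteSizeParser (hypothetical example). */
public class TestByteSizeParser {

  @Test
  public void testParsesPlainBytes() {
    assertEquals(1024L, ByteSizeParser.parse("1024"));
  }

  @Test
  public void testParsesSuffixes() {
    assertEquals(64L * 1024 * 1024, ByteSizeParser.parse("64m"));
    assertEquals(1L << 30, ByteSizeParser.parse("1g"));
  }

  @Test(expected = IllegalArgumentException.class)
  public void testRejectsMalformedInput() {
    ByteSizeParser.parse("lots");
  }
}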
You've got plenty of successful JIRAs under your belt; let's just keep on
truckin' and build a better Hadoop.
The only successful work was the rework of Todd's patch, and only because
it made HBase about 30% faster.