On Oct 10, 2013, at 23:50, Russell Bryant <rbry...@redhat.com> wrote:

On 10/10/2013 02:20 PM, Alessandro Pilotti wrote:
Hi all,

With the Havana release date approaching fast, I'm sending this email
to sum up the situation for the pending bugs and reviews related to the
Hyper-V integration in OpenStack.

In the past weeks we diligently marked bugs related to Havana features
with the "havana-rc-potential" tag, which, at least as far as Nova is
concerned, had absolutely no effect.
Our code is sitting in the review queue as usual and, since it is not
tagged for a release or prioritised, there's no guarantee that anybody will
take a look at the patches in time for the release. Needless to say,
this is starting to feel like a Kafka novel. :-)
Our goal is to make sure that our efforts are directed at the
main project tree, avoiding the need to focus on a separate fork with
more advanced features and updated code, even if this means slowing down
our pace considerably. Due to the limited review bandwidth available in Nova we
had to postpone to Icehouse blueprints that were already implemented
for Havana, which is fine, but we definitely cannot leave bug fixes
behind (even if they are just a small number, as in this case).

Some of those bugs are critical for Hyper-V support in Havana, while the
related fixes typically consist of small patches that change very few lines.

Does the rant make you feel better?  :-)


Hi Russell,

This was definitely not meant to sound like a rant; I apologise if it came
across that way. :-)

With a more general view of nova review performance, our averages are
very good right now and are meeting our goals for review turnaround times:

http://russellbryant.net/openstack-stats/nova-openreviews.html

--> Total Open Reviews: 230
--> Waiting on Submitter: 105
--> Waiting on Reviewer: 125

--> Stats since the latest revision:
----> Average wait time: 3 days, 12 hours, 14 minutes
----> Median wait time: 1 day, 12 hours, 31 minutes
----> Number waiting more than 7 days: 19

--> Stats since the last revision without -1 or -2 (ignoring jenkins):
----> Average wait time: 5 days, 10 hours, 57 minutes
----> Median wait time: 2 days, 13 hours, 27 minutes
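
(Incidentally, these aggregates are easy to reproduce once the per-review
wait times have been extracted from Gerrit; here is a minimal sketch of just
the aggregation step, assuming the wait times are already available as
timedelta values -- the function name and input are hypothetical, not what
the stats script actually uses:)

    from datetime import timedelta
    from statistics import mean, median

    def wait_stats(wait_times):
        # wait_times: one timedelta per open review, measured since
        # the event of interest (e.g. the latest revision).
        secs = [wt.total_seconds() for wt in wait_times]
        avg = timedelta(seconds=mean(secs))
        med = timedelta(seconds=median(secs))
        over_week = sum(1 for wt in wait_times if wt > timedelta(days=7))
        return avg, med, over_week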


Usually when this type of discussion comes up, the first answer that I hear is 
some defensive data about how well project X ranks against some metric or 
against the OpenStack-wide average.
I'm not questioning how much or how well you guys are working (I actually 
firmly believe that you DO work very well); I'm just discussing the way in 
which blueprints and bugs get prioritised.

Working on areas like Hyper-V inside the OpenStack ecosystem is currently 
quite peculiar from a project management perspective, due to the fragmentation 
of our commits among a number of larger projects.
Our bits are spread all over Nova, Neutron, Cinder, Ceilometer and Windows 
Cloud-Init, and let's not forget Crowbar and Open vSwitch, although those are 
not strictly part of OpenStack. With the obvious exception of Windows 
Cloud-Init, in none of those projects does our contribution reach the critical 
mass required for the project to depend in any way on what we do, or for us to 
reach a "core" status that would grant sufficient autonomy. Furthermore, to 
complicate things even more, with every release we are adding features to more 
projects.

On the other hand, to get our code reviewed and merged we are always dependent 
on the good will and best effort of core reviewers who don't necessarily know 
or care about specific driver, plugin or agent internals. This leads to even 
longer review cycles, even though reviewers are clearly doing their best to 
understand the patches, and we couldn't be more thankful.

"Best effort" has also a very specific meaning: in Nova all the Havana Hyper-V 
blueprints were marked as "low priority" (which can be translated in: "the only 
way to get them merged is to beg for reviews or maybe commit them on day 1 of 
the release cycle and pray") while most of the Hyper-V bugs had no priority at 
all (which can be translated in "make some noise on the ML and IRC or nobody 
will care"). :-)

This reality unfortunately applies to most of the sub-projects (not only 
Hyper-V) and can IMHO be solved only by delegating more autonomy to the 
sub-project teams in their specific areas of competence across OpenStack as a 
whole. Hopefully we'll manage to find a solution during the design summit, as, 
judging by various threads on this ML, we are definitely not the only ones 
feeling this way.

I personally believe that in a large project like this one there are multiple 
ways to work towards the "greater good". Our calling obviously consists in 
bringing OpenStack to the Microsoft world, which has worked very well so far; 
I'd just prefer to be able to dedicate more resources to adding features, 
fixing bugs and making users happy instead of to useless waiting.

Also note that there are no hyper-v patches in the top 5 of any of the lists 
of patches waiting the longest.  So, you are certainly not being singled out 
here.

Please note that I never said that. Some people might say that in Nova "All 
hypervisors are equal but some are more equal than others", but this is not my 
point of view. :-)


Please understand that I only want to help here.  Perhaps a good way for
you to get more review attention is to get more karma in the dev community
by helping review other patches.  It looks like you don't really review
anything outside of your own stuff, or patches that touch hyper-v.  In
the absence of significant interest in hyper-v from others, the only way
to get more attention is by increasing your karma.


Frankly, the way you put it, you make karma sound more like a "do ut des" 
contract. Anyway, those are not only "my patches": while I coded most of them 
up to Grizzly for lack of other experienced devs in the domain, I'm now 
handing over the baton to an increasing number of community members who are 
experts in the many aspects of the Hyper-V and Windows domains without 
necessarily being confident in every single area of Nova, Cinder, Neutron, 
Ceilometer, Tempest and so on.

So when you say "It looks like you don't really review anything outside of 
your own stuff, or patches that touch hyper-v" (where "my own stuff", I guess, 
means Hyper-V related patches by other people), it is obviously true!
But the fact that you only see the subset of the reviews that I'm doing in 
Nova, and not the rest of my review work, does not mean that I don't do 
reviews!

Our domain is the area in which my sub-team and I can add the most value. 
Also, being an independent startup, we have now reached the stage at which we 
can sponsor some devs to do reviews continuously outside of our core domain, 
but this will take a few months, spanning one or two releases, as acquiring 
the necessary understanding of a project like e.g. Nova cannot be done 
overnight.

I hope that this email comes across as clearly constructive and that you won't 
take it as a rant; it is pretty far from being one. :-)


Thanks,

Alessandro


https://review.openstack.org/#/q/reviewer:3185+project:openstack/nova,n,z
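
(As an aside, the same reviewer query can be run against Gerrit's REST API
instead of the web UI; a minimal sketch, assuming anonymous access to the
standard /changes/ endpoint -- the fields printed are just illustrative:)

    import json
    import urllib.request

    # Same query as the web UI link above, against the REST endpoint.
    url = ("https://review.openstack.org/changes/"
           "?q=reviewer:3185+project:openstack/nova")
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")

    # Gerrit prefixes JSON responses with ")]}'" on its own line to
    # prevent XSSI; strip that first line before parsing.
    changes = json.loads(body.split("\n", 1)[1])
    for change in changes:
        print(change["status"], change["subject"])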

--
Russell Bryant

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
