On 2014-09-15 23:44:39 +1000 (+1000), Joshua Hesketh wrote:
[...]
> I'm not sure how this might look. For example, do we need to
> closer consider the relevancy of a project; or perhaps if two
> projects might be able to be joined together. These metrics can be
> ambiguous and require good insight into the proposed project(s) to
> tell. Additionally we could require a certain level of maturity in
> the codebase before accepting a project over to stackforge.
> However, this would hinder projects being started in the open
> (which would be a big deal).
I don't see how these distinctions could be anything other than
arbitrary, so whoever ends up tasked with making that judgement will
get to endure endless arguments over why some particular project is
or is not a good fit. I don't personally want to be part of that
jury--it seems like a lot of effort for little gain.

> It might be helpful to look at what we currently have project-wise
> to determine how tangential or relevant projects are on average.
> We may also wish to consider ways to mark projects as inactive to
> begin removing them from our configuration files as a tidy up.

This also seems like needless makework unless defunct projects are
themselves creating a burden or risk. The concerns you raised were
primarily about config review activity, and I don't think abandoned
projects are likely to generate much of that. While I might be
tempted by my own personal OCD demons to try to clean up stuff like
this, I always remind myself that the effort is unlikely to balance
the gains.

As for addressing the review burden, I gather an end goal of
http://specs.openstack.org/openstack-infra/infra-specs/specs/config-repo-split.html
is to be able to broaden the core reviewer team for these
repositories and so reduce the overall burden on the current
infra-core team.

> Another idea may also be to consider what tests are critical and
> what could be ran as 3rd party tests for non-incubated projects.
> Or perhaps working towards testing more tactical[0] on all
> projects.
[...]

We already basically draw the line that we don't adjust our general
infrastructure provisioning to accommodate unofficial projects:
additional node types not used for testing OpenStack itself,
additional packages installed by default on workers even though
nothing in OpenStack needs them, additional entries in global
requirements which are only there to serve StackForge projects, et
cetera.
If a StackForge project can't use our basic OpenStack test
infrastructure to run the jobs it needs, then third-party testing is
an available avenue. Some of them already take advantage of that
option, so I think this is working as designed.

Also, if we analyzed overall test system resource usage, I think
we'd find that StackForge project jobs make up a very small
percentage of it. They're mostly short-running, the projects tend to
have a limited number of jobs (if any), and their integration queues
are small or nonexistent (so the resource impact from their gate
resets is nearly nil). I'm not in favor of changing the testing
expectations for them unless we have actual statistics showing that
such a change would be worth the time we spend implementing it.
--
Jeremy Stanley

_______________________________________________
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra