Robert Collins wrote:
Most production systems I know don't run with open ended dependencies.
One of our contributing issues IMO is that we have the requirements
duplicated everywhere - and then ignore them for many of our test runs
(we deliberately override the in-tree ones with global requirements).
In particular, since the only reason unified requirements matter is for
distro packages, and they ignore our requirements files *anyway*, I'm
not sure our current aggregate system is needed in that light.
That said, making requirements be capped and auto adjust upwards would
be extremely useful IMO, but it's a chunk of work:
- we need the transitive dependencies listed, not just direct dependencies
Wouldn't a pip install of the requirements.txt from the requirements
repo itself get this? That would tell pip to download all the things and
their transitive dependencies (aka step #1).
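Roughly, and only as an untested sketch (virtualenv and pip assumed on
PATH; the /tmp path and the freeze.txt name are just placeholders):

    import subprocess

    # Throwaway environment: let pip resolve requirements.txt and pull in
    # the full transitive closure for us, then capture what it picked.
    subprocess.check_call(['virtualenv', '/tmp/reqs-venv'])
    pip = '/tmp/reqs-venv/bin/pip'
    subprocess.check_call([pip, 'install', '-r', 'requirements.txt'])
    with open('freeze.txt', 'wb') as out:
        out.write(subprocess.check_output([pip, 'freeze']))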
- we need a thing to find possible upgrades and propose bumps
This is an analysis of the '$ pip freeze' output after installing into that
virtualenv (aka step #2)?
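Something like the following, say, using the packaging library (a sketch
only; it assumes plain 'name>=X,<Y' lines with no markers, editables or
URLs, and the freeze.txt file written in step #1):

    from packaging.requirements import Requirement
    from packaging.version import Version

    # Compare what the resolver actually installed against the
    # specifiers we declare, and flag entries where a cap would
    # need to move.
    declared = {}
    for line in open('requirements.txt'):
        line = line.split('#')[0].strip()
        if line:
            req = Requirement(line)
            declared[req.name.lower()] = req

    for line in open('freeze.txt'):
        name, _, version = line.strip().partition('==')
        req = declared.get(name.lower())
        if req and Version(version) not in req.specifier:
            print('candidate bump: %s (installed %s)' % (req, version))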
- we would need to very very actively propagate those out from global
requirements
Sounds like an enhanced updater.py that uses the output from step #2?
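Very roughly, the updater side could be as dumb as rewriting the upper
bound on a managed line and letting the normal review/CI pipeline
validate the bump (bump_cap and the version numbers below are made up;
only plain '<X.Y.Z' caps are handled):

    import re

    def bump_cap(line, name, new_cap):
        # Only touch the line for the requested project.
        if not re.match(re.escape(name) + r'\s*[<>=!~]', line):
            return line
        # Replace the existing upper bound with the proposed one.
        return re.sub(r'<\d[\w.]*', '<' + new_cap, line)

    print(bump_cap('testtools>=0.9.36,<1.2.0', 'testtools', '1.5.0'))
    # -> testtools>=0.9.36,<1.5.0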
For now I think making 'react to the situation faster and easier' is a
good thing to push on.
One question I have is that not all things specify all their
dependencies, since some of them are pluggable. For example, kombu can
use couchdb (or at least a transport exists that seems like it could),
yet kombu doesn't list that dependency in its requirements; it gets
listed in https://github.com/celery/kombu/blob/master/setup.py#L122
under 'extras_require' instead. I'm sure other pluggable libraries
(sqlalchemy, taskflow, tooz...) are similar in this regard, so I wonder
how those kinds of libraries would work with this kind of proposal.
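For what it's worth, the way those optional backends usually get
declared looks something like this (made-up names, not kombu's actual
setup.py), which is exactly why they never show up in a plain install
or freeze:

    from setuptools import setup

    setup(
        name='example-messaging-lib',
        version='0.1',
        # Always installed:
        install_requires=['amqp'],
        # Only installed when the extra is explicitly requested:
        extras_require={
            'couchdb': ['couchdb'],
        },
    )

    # A consumer has to opt in explicitly, e.g.:
    #   pip install 'example-messaging-lib[couchdb]'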
-Rob
On 18 November 2014 12:02, Sean Dague <s...@dague.net> wrote:
As we're dealing with the fact that testtools 1.4.0 apparently broke
something with attribute additions to tests (needed by tempest for
filtering), it raises an interesting problem.
Our current policy on requirements is to leave them open ended; this
lets us take upstream fixes. It also breaks us a lot. But the maximum
version of our dependencies moves with zero code review or testing.
However, fixing these things takes a bunch of debug, code review, and
test time, as seen by the fact that the testtools 1.2.0 block didn't
even manage to fully merge this weekend.
This is an asymmetric break/fix path, which I think we need a better
plan for. If fixing is more expensive than breaking, then you'll tend
to be in a broken state quite a bit. We really want the other asymmetry
if we can get it.
There are a couple of things we could try here:
== Cap all requirements, require code reviews to bump maximums ==
Benefits, protected from upstream breaks.
Down sides, requires active energy to move forward. The SQLA 0.8
transition took forever.
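To make the trade-off concrete, a cap just narrows the range the
resolver will accept until a reviewed bump widens it again (a minimal
sketch using the packaging library, with made-up version numbers):

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    capped = SpecifierSet('>=0.9.36,<1.4.0')
    print(Version('1.3.0') in capped)  # True  -- inside the reviewed range
    print(Version('1.4.0') in capped)  # False -- waits for a reviewed bump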
== Provide Requirements core push authority ==
For blocks on bad versions, if we had a fast path to just merge known
breaks, we could right ourselves quicker. It would have reasonably
strict rules, like it could only be used to block individual versions.
Probably that should also come with sending email to the dev list any
time such a thing happened.
Benefits, fast to fix
Down sides, bypasses our testing infrastructure. Though realistically
the break bypassed it as well.
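For concreteness, such a fast-path block would normally just exclude
the single bad release rather than capping everything (again a sketch
with the packaging library and made-up versions):

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    blocked = SpecifierSet('>=0.9.36,!=1.4.0')
    print(Version('1.4.0') in blocked)  # False -- the broken release is skipped
    print(Version('1.4.1') in blocked)  # True  -- the next upstream fix flows in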
...
There are probably other ways to make this more symmetric. I had a
grand vision one time of building a system that kind of automated the
requirements bump, but there are other problems I think need to be
addressed in OpenStack.
The reason I think it's important to come up with a better way here is
that making our whole code gating system lock up for 12+ hrs because of
an external dependency that we are pretty sure is the crux of our break
becomes very discouraging for developers. They can't get their code
merged. They can't get accurate test results. It means that once we get
the fix done, everyone is rechecking their code, so now everyone is
waiting extra long for valid test results. People don't realize their
code can't pass and just keep pushing patches up, consuming resources,
which means that parts of the project that could pass tests are backed
up behind 100% guaranteed failing parts. All in all, not a great system.
-Sean
--
Sean Dague
http://dague.net
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev