On Thu, Feb 24, 2011 at 3:17 PM, Mark Washenberger
<mark.washenber...@rackspace.com> wrote:
>> We need an unstable trunk:
>
> I could not possibly disagree more. Trunk is about releasability and 
> stability. As developers we need a stable, well-protected trunk so that we can 
> actually work successfully in parallel on our own branches. My ideal for 
> trunk is that when it comes time to tag a release, the only work is 
> making the tag, not a huge QA regression process.
>
>> I agree. I propose we always keep lp:nova (or lp:<project>) stable,
>> and instead create "trunks" like lp:nova/testing that all the
>> test/regression systems can be run against. This is pretty similar to
>> how we did things with Drizzle, where a commit would bounce down the line,
>> finally landing in lp:<project> when it was verified.
>
> I don't think we need anything beyond traditional development branches. I 
> would like to see us work on tools to make a lot of integration testing 
> available to individual developers in their branches. For the most 
> hardware-intensive integration tests, perhaps we should provide an automated system 
> for developers to submit their branches for testing.

This is what we're working on, and what Justin is proposing, Mark.

Basically, in Drizzle-land, people propose a merge into trunk; Hudson
picks up that proposal, pulls the branch into lp:drizzle/staging,
builds Drizzle on all supported platforms (more than 12 OS/distro
combos), and then runs the full automated regression suite against the
proposed branch (which can take three or more hours).

We're proposing the same kind of automation for OpenStack.
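
For the curious, here is a rough sketch of what such a gate boils down to;
the branch names, working directory, and test command below are placeholders,
not the actual Hudson job:

    # Hypothetical gate: merge the proposal into a staging tree, run the
    # regression suite, and only push to the protected trunk if it all passes.
    import subprocess
    import sys

    STAGING = "lp:nova/staging"  # placeholder staging branch
    TRUNK = "lp:nova"            # protected trunk

    def gate(proposed_branch):
        subprocess.check_call(["bzr", "branch", STAGING, "staging"])
        subprocess.check_call(["bzr", "merge", proposed_branch], cwd="staging")
        subprocess.check_call(["bzr", "commit", "-m", "gate: " + proposed_branch],
                              cwd="staging")

        # Placeholder for the project's test runner; any failure blocks the
        # merge, so trunk never sees the breakage.
        if subprocess.call(["python", "run_tests.py"], cwd="staging") != 0:
            print "tests failed; %s does not land" % proposed_branch
            return False

        subprocess.check_call(["bzr", "push", TRUNK], cwd="staging")
        return True

    if __name__ == "__main__":
        sys.exit(0 if gate(sys.argv[1]) else 1)

In Drizzle the same flow is simply repeated across every supported platform,
which is where the hours come from.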

-jay

>> We just need the tests. No matter how many stability levels of trunk
>> we have, without better tests we can't guarantee stability.
>
> Strongly agree.
>
> "Eric Day" <e...@oddments.org> said:
>
>> I agree. I propose we always keep lp:nova (or lp:<project>) stable,
>> and instead create "trunks" like lp:nova/testing that all the
>> test/regression systems can be run against. This is pretty similar to
>> how we did things with Drizzle, where a commit would bounce down the line,
>> finally landing in lp:<project> when it was verified.
>>
>> -Eric
>>
>> On Thu, Feb 24, 2011 at 10:41:02AM -0800, Justin Santa Barbara wrote:
>>>    Sounds like we're actually in agreement, just disagreeing about the
>>>    topology of the branches!
>>>    My preferred topology is actually a series of increasingly stable trunks,
>>>    each of which has passed each level of the smoke tests (one platform,
>>>    multi-platform, torture tests). The nice thing about this is that then a
>>>    release can simply be seen as the next level up in the stability
>>>    hierarchy, with a manual pull. I think Google Chrome has a particularly
>>>    sophisticated process which we could learn from:
>>>    http://techcrunch.com/2011/01/11/google-chrome-release-cycle-slideshow/
>>>    I still think we should start simple with two trunks - stable and
>>>    unstable. But I'll be happy with per-branch testing too - that just
>>>    seems a lot harder to me!
>>>    Justin
>>>
>>>    On Thu, Feb 24, 2011 at 10:24 AM, Trey Morris <trey.mor...@rackspace.com>
>>>    wrote:
>>>
>>>      Instead of an unstable trunk, I think code should just be better vetted
>>>      before it lands in the trunk. If the difference between trunk and your
>>>      proposed unstable trunk is a set of automated tests, then those tests
>>>      can just as easily be run on an LP branch before it gets into current
>>>      trunk. We just need the tests. No matter how many stability levels of
>>>      trunk we have, without better tests we can't guarantee stability.
>>>
>>>      On Thu, Feb 24, 2011 at 11:15 AM, Justin Santa Barbara
>>>      <jus...@fathomdb.com> wrote:
>>>
>>>        Hi Jay,
>>>        I couldn't agree more. I had another bug come up yesterday on another
>>>        of my patches (I know - not a good day for me!) where I again broke
>>>        the OpenStack API by requiring the metadata attribute.
>>>        In this case, it was missed by the unit tests. I believe I was always
>>>        passing metadata, so I simply missed the real-world case. Here's the
>>>        bug report:
>>>        https://bugs.launchpad.net/nova/+bug/724143
>>>        This brings up a number of points, though:
>>>        On testing:
>>>         1. The bug reporter apparently knows how to program, but instead of
>>>            us getting a test case which we could immediately use, we got a
>>>            Ruby test case. I think we should do whatever we can so that
>>>            people who are moderately comfortable with code also feel
>>>            comfortable submitting a failing test case that we can use. I
>>>            think this means having some version of an API client, or even a
>>>            reference-implementation API client, in the source tree, and
>>>            using it.
>>>         2. It's not clear to me how we deal with unit test cases that are
>>>            failing - cases that represent a found bug that is not yet fixed.
>>>            Maybe it should be submitted on a bugNNN branch, with failing
>>>            tests, and then whoever works on the bug can branch from it? (Of
>>>            course, we'd be lucky if all bugs went like this, but it does
>>>            happen. I often find myself fixing tangential issues, and I'd
>>>            like to know what the 'right thing to do' is.)
>>>         3. On this particular metadata bug, it was my fault. I therefore
>>>            submitted a very rapid hotfix which just fixes the issue. I then
>>>            coded up unit tests that use the OpenStack API, which did in fact
>>>            hit the issue naturally, and then included my fix, which resolved
>>>            it. I submitted the 'full patch' as a separate branch.
>>>         4. This 'full patch' branch includes unit tests that bring up the
>>>            OpenStack API and various services in-process, and run tests just
>>>            like a user would. It would have caught the metadata issue.
>>>         5. The hope is that we can reuse the same tests as smoke tests, by
>>>            simply tweaking them to work against real services instead of
>>>            bringing up in-process stub services. These could be (some of)
>>>            the smoke tests in Hudson; a rough sketch of the pattern follows
>>>            this list.
>>>         6. I'd hope that we could have two levels of these smoke tests: one
>>>            that runs on a single configuration (e.g. KVM, Open-iSCSI,
>>>            Glance), and another that runs a matrix of configurations (and
>>>            might take an hour or more to run).
>>>         7. Ideally we'd have a torture test that would run overnight, be
>>>            randomized, and try to find obscure bugs, even if the issues
>>>            found are not necessarily repeatable in the way that
>>>            non-randomized tests are.
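
To make point 5 concrete, here is a minimal, self-contained sketch of the
pattern, with the trivial fake_api app standing in for the real OpenStack API
pipeline (it is an illustration, not Nova code):

    # Hypothetical sketch: bring a WSGI service up in-process and test it over
    # HTTP exactly as an external user would.
    import json
    import threading
    import unittest
    import urllib2  # Python 2, which Nova targets today
    from wsgiref.simple_server import make_server

    def fake_api(environ, start_response):
        # Stand-in service: always returns a server with empty metadata.
        start_response("200 OK", [("Content-Type", "application/json")])
        return [json.dumps({"server": {"id": 1, "metadata": {}}})]

    class ServersSmokeTest(unittest.TestCase):
        def setUp(self):
            # Start the service in-process on an ephemeral port.
            self.httpd = make_server("127.0.0.1", 0, fake_api)
            worker = threading.Thread(target=self.httpd.serve_forever)
            worker.daemon = True
            worker.start()

        def tearDown(self):
            self.httpd.shutdown()

        def test_server_has_metadata_field(self):
            # Talk to the service over HTTP, just like an external client.
            url = "http://127.0.0.1:%d/servers/1" % self.httpd.server_port
            server = json.loads(urllib2.urlopen(url).read())["server"]
            self.assertEqual({}, server.get("metadata"))

    if __name__ == "__main__":
        unittest.main()

Run against the in-process app this is a functional test; point the same test
at a deployed endpoint and it becomes a smoke test, which is the reuse point 5
is after.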
>>>        We need an unstable trunk:
>>>         1. In general, it seems that our end-users are using trunk for
>>>            unreleased functionality and treating it as if it were released.
>>>            I don't think we should be encouraging that, because I know I'll
>>>            make more mistakes in the future and some of them will make it
>>>            past the reviewers' defensive line into trunk; it's also simply
>>>            not realistic to require reviewers to review every combination -
>>>            e.g. how can a reviewer really review my HP SAN patch without an
>>>            HP SAN? There will be issues in trunk, and if we have to revert
>>>            them rather than just fixing them, it will slow us down. The
>>>            current situation is bad for our users and bad for developers.
>>>         2. One way we could keep everyone happy is by using our test suite
>>>            to auto-merge from an 'unstable trunk' into a 'stable trunk',
>>>            only once code passes the tests. Commits would initially merge
>>>            into 'unstable trunk', and we would try to keep that branch
>>>            moving forwards rather than reverting things that go wrong. Of
>>>            course, maintaining a good 'stable trunk' relies on having good
>>>            tests, but I think we're getting there. It's also a great
>>>            incentive to write good smoke tests.
>>>         3. Jay: I believe you've done this to great success on the Drizzle
>>>            project?
>>>        Justin
>>>        Justin
>>>
>>>        On Thu, Feb 24, 2011 at 6:13 AM, Jay Pipes <jaypi...@gmail.com> wrote:
>>>
>>>          Hi all,
>>>
>>>          I'd like to bring up an alternate reason why it was approved and
>>>          subsequently reverted.
>>>
>>>          The test cases for the OpenStack API (and much of the EC2 API)
>>>          assume way too many things and mock out too many things. In
>>>          addition, since there are zero smoke tests for the OpenStack API,
>>>          there were no functional tests that would have *immediately*
>>>          highlighted this problem (and many other recent EC2 vs OS API
>>>          problems).
>>>
>>>          In other words, sure, we should revert the patch to "fix things";
>>>          however, the priority should *not* be on refactoring the auth API
>>>          or the way the auth layer in Nova is handled. The priority should
>>>          be on writing a smoke test for the OpenStack API so that we can
>>>          link it into Hudson and these types of issues can be caught
>>>          automatically.
>>>          -jay
>>>          On Wed, Feb 23, 2011 at 10:03 PM, Paul Voccio
>>>          <paul.voc...@rackspace.com> wrote:
>>>          > Justin,
>>>          > I think you hit upon the reason why I think it was approved and
>>>          > reverted. Because it hadn't been talked about in a blueprint or
>>>          > a mail sent to the list (I think I'm up to date on the threads)
>>>          > before a patch landed, other alternatives weren't considered
>>>          > before pushing it through to begin with. I think we're all open
>>>          > to talking about how to better the auth system and make
>>>          > improvements. Dragon has already discussed some alternatives and
>>>          > suggestions on the BP page below. I think this is the right way
>>>          > to continue the dialog, and we all can agree on a good way
>>>          > forward.
>>>          > I'm confident we can figure it out.
>>>          > If I missed a conversation, my apologies.
>>>          > pvo
>>>          > From: Vishvananda Ishaya <vishvana...@gmail.com>
>>>          > Date: Wed, 23 Feb 2011 18:19:41 -0800
>>>          > To: Justin Santa Barbara <jus...@fathomdb.com>
>>>          > Cc: <openstack@lists.launchpad.net>
>>>          > Subject: Re: [Openstack] Should the OpenStack API re-use the EC2
>>>          > credentials?
>>>          >
>>>          > Hey Justin,
>>>          > Does it make any difference that the way the auth is
>>>          > (theoretically) supposed to work with the OS API is that the
>>>          > user gets an auth token from an external auth server and then
>>>          > uses username / authtoken to actually contact the API? I think
>>>          > it is just faked out right now to use the access_key instead of
>>>          > doing external auth, but I think the reason it works like it
>>>          > does is because the plan was to switch to external auth
>>>          > eventually.
>>>          > Vish
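
A minimal sketch of the two-step flow Vish describes, assuming the X-Auth-*
header convention of the early Rackspace/OpenStack 1.0 auth; the endpoint URL
and header names are illustrative, not definitive:

    # Hedged sketch of token-based auth: trade credentials for a token once,
    # then send only the token on API calls.
    import urllib2  # Python 2

    AUTH_URL = "http://auth.example.com/v1.0"  # placeholder external auth server

    # Step 1: exchange username + key for a short-lived token.
    auth_req = urllib2.Request(AUTH_URL, headers={
        "X-Auth-User": "username",
        "X-Auth-Key": "api_key",
    })
    auth_resp = urllib2.urlopen(auth_req)
    token = auth_resp.headers["X-Auth-Token"]
    compute_url = auth_resp.headers["X-Server-Management-Url"]

    # Step 2: the compute API only ever sees the token, never the key.
    servers_req = urllib2.Request(compute_url + "/servers",
                                  headers={"X-Auth-Token": token})
    print urllib2.urlopen(servers_req).read()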
>>>          > On Feb 23, 2011, at 5:56 PM, Justin Santa Barbara wrote:
>>>          >
>>>          > I previously fixed OpenStack authentication so it would use the
>>>          > same credentials as EC2. This bugfix was just reverted, because
>>>          > it caused OpenStack API users to have to enter different
>>>          > credentials (sorry!), but primarily because it hadn't been
>>>          > discussed on the mailing list. So here goes!
>>>          > Here's a blueprint:
>>>          > https://blueprints.launchpad.net/nova/+spec/authentication-consistency
>>>          > Here's an overview of the problem:
>>>          > EC2 uses an (api_key, api_secret) pair. Post-revert, OpenStack
>>>          > uses the api_key(!) as the password, but a different value
>>>          > entirely as the username: (username, api_key). The bugfix made
>>>          > it so that both APIs used the EC2 credentials (api_key,
>>>          > api_secret). This did mean that anyone who had saved the 'bad'
>>>          > OpenStack credentials was unable to continue to use those
>>>          > credentials. I also overlooked exporting the updated credentials
>>>          > in novarc (though a merge request was pending).
>>>          > I actually thought originally that this was a straight-up bug,
>>>          > rather than a design 'decision', so I should definitely have
>>>          > flagged it better. Again, sorry to those I impacted.
>>>          > As things stand now, post-revert, this is probably a security
>>>          > flaw, because the EC2 API does not treat the api_key as a
>>>          > secret. The EC2 API can (relatively) safely be run over non-SSL,
>>>          > because it uses signatures instead of passing the shared secret
>>>          > directly.
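
A simplified sketch of that difference (not the exact EC2 wire format): with
signed requests only an HMAC derived from api_secret crosses the wire, so a
non-SSL eavesdropper never sees the secret itself, whereas sending the key as
a password exposes it directly.

    # Illustrative only; parameter names and canonicalization are simplified.
    import base64
    import hashlib
    import hmac

    api_key = "access-key-id"     # public identifier; EC2 does not treat it as secret
    api_secret = "shared-secret"  # never sent on the wire by the EC2-style API

    def sign(params, secret):
        # Canonicalize the request and sign it with HMAC-SHA256.
        canonical = "&".join("%s=%s" % (k, params[k]) for k in sorted(params))
        digest = hmac.new(secret, canonical, hashlib.sha256).digest()
        return base64.b64encode(digest)

    request = {"Action": "DescribeInstances", "AWSAccessKeyId": api_key}
    request["Signature"] = sign(request, api_secret)  # the secret stays local

    # Passing the api_key itself as a password, by contrast, lets anyone who
    # can read the request reuse it anywhere.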
>>>          > This is also not very user-friendly. Post-revert, an end-user
>>>          > must know whether any particular cloud tool uses the EC2 API or
>>>          > the OpenStack API, so that they can enter the correct pair of
>>>          > credentials. That doesn't seem like a good idea; I think there
>>>          > should be one set of credentials.
>>>          >
>>>          > There is some discussion about the idea of having the api_key be
>>>          > user-friendly. I don't think it buys us anything, because the
>>>          > api_secret is still going to be unfriendly, but I have no
>>>          > objection as long as it is done in a way that does not break
>>>          > existing users of the EC2 API.
>>>          > I propose that:
>>>          > (1) the OpenStack API and EC2 credentials should be the same as
>>>          > each other (whatever they are), for the sake of our collective
>>>          > sanity;
>>>          > (2) we have to change the current configuration anyway, for
>>>          > security reasons;
>>>          > (3) we should not change the EC2 credentials, because we've
>>>          > shipped the EC2 API and our users have an expectation that we
>>>          > won't break them without good reason; so
>>>          > (4) we must change the credentials for users of the
>>>          > (non-shipped) OpenStack API.
>>>          > Estimated user impact: I believe there are two people who will
>>>          > be affected, and it will take them ~1 minute each, so the total
>>>          > impact is ~2 minutes.
>>>          > The longer we delay fixing this, the more people we break and
>>>          > the bigger the impact. It seems that we have no choice but to
>>>          > make a non-backwards-compatible authentication change, but I
>>>          > believe this is OK at the moment because the OpenStack API is
>>>          > not yet stable/released - i.e. we can still make fixes without
>>>          > worrying about backwards-compatibility shims. We're not in "The
>>>          > Old New Thing" land yet :-)
>>>          >
>>>          >
>>>          > As an aside, I am very unhappy about the way this revert was
>>>          > pushed through by Rackspace team members, seemingly without much
>>>          > consideration of alternatives. Perhaps we should consider
>>>          > changing from needing two core-approves to needing one Rackspace
>>>          > core-approve and one non-Rackspace core-approve.
>>>          >
>>>          > Justin
>>>          >

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
