I guess I was unaware that the test infrastructure tests various
deployment configs from the ground up, rebuilding from bare metal (I
guess?).  I was initially thinking "We already test KVM, let me just
run two commands on the host to set up a volume group, then tweak the
Marvin test to register it as a primary storage and put a loop in to
do volume tests for each registered primary storage". The test infra
sounds much more advanced than what I was thinking, and probably
harder to adjust. At any rate, I do think that having a DevCloud-type
basic sanity check for our support matrix would be relatively little
work and provide a lot of benefit in catching some of these big holes
in testing prior to creating an RC.
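
To make that concrete, here is roughly what I had in mind (an untested
sketch; the command and attribute names are from memory of the Marvin
cloudstackAPI bindings, and the device, volume group, zone, and disk
offering are placeholders, so treat it as an illustration rather than a
working patch):

    # On the KVM host, the "two commands" would be something like:
    #   pvcreate /dev/sdb
    #   vgcreate cloudvg /dev/sdb
    # and that VG would then get registered as a CLVM primary storage
    # pool (createStoragePool) alongside the existing NFS/RBD pools.
    from marvin.cloudstackAPI import (listStoragePools, createVolume,
                                      deleteVolume)

    def check_volume_on_each_pool(apiclient, zoneid, diskofferingid):
        """Create and delete a small data volume once per primary storage."""
        pools = apiclient.listStoragePools(
            listStoragePools.listStoragePoolsCmd()) or []
        for pool in pools:
            cmd = createVolume.createVolumeCmd()
            cmd.name = "sanity-%s" % pool.name
            cmd.zoneid = zoneid
            # A disk offering tagged for this pool's storage tag would be
            # needed to really pin the volume to the pool; untagged, this is
            # only a coarse check that volume operations still work while
            # each pool type is present.
            cmd.diskofferingid = diskofferingid
            vol = apiclient.createVolume(cmd)
            delcmd = deleteVolume.deleteVolumeCmd()
            delcmd.id = vol.id
            apiclient.deleteVolume(delcmd)

Nothing fancy, but looped over NFS, CLVM, and RBD pools on the same host it
would at least catch the "doesn't work at all on a supported platform" class
of regressions before an RC.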

On Wed, Sep 11, 2013 at 11:15 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
> I think the test infra as described is great, but I think we're
> hurting a little more for basics. For example, we don't need a full
> infrastructure with hardware to ensure that the support matrix works.
> I could bring up a VM with CentOS and one with Ubuntu, and test NFS,
> CLVM, and RBD on each. CLVM just needs a volume group, NFS can be
> exported locally, and RBD can run on the localhost node (Ceph has how-tos
> for this to get your feet wet; that also buys us S3-compatible object
> storage for testing secondary). Two VMs with maybe 2 cores, 4GB RAM
> each, and I think we could knock out a big swath of the basic "does it
> work on the supported platforms" checks that we're missing, with very
> simple automated testing. We can easily donate that much.
>
> I agree that third parties would need to plug in their own testing
> (SolidFire, as you mention). And certainly, testing full-blown
> deployment from the ground up, as it sounds like we are doing, is
> great and necessary; I just want to plug a few holes and add some
> basic sanity checking that we seem to keep getting tripped up on.
>
> On Wed, Sep 11, 2013 at 10:23 PM, Prasanna Santhanam <t...@apache.org> wrote:
>> As Sebastien said, it's easy to get you the credentials for Jenkins.
>> Anyone with commit rights can request an account. In fact, one is
>> created as soon as you commit. I just need to adjust the credentials.
>> (We'll move to git-based job configurations, but later.)
>>
>> Citrix is unable to test various configurations for lack of necessary
>> resources. For example, it would be hard to test something that requires
>> hardware resources like Nicira/Midokura/SolidFire. The current testbed
>> is also limited in that it only deploys standard zone models. I have
>> only one storage node to spare on which NFS is configured.
>>
>> CloudStack can be deployed and configured in so many ways that I don't
>> think a single testbed cycling through all models is going to be
>> effective in testing every possible configuration in time. This is why
>> I'd like every one of us to chip in and use each other's resources to
>> make the infrastructure better.
>>
>> The RBD store, at least, will require some time for us to bring up. It
>> would be best if we could roll a few hosts from different datacenters
>> up into Jenkins. Object-storage-backed CS with something like Riak is
>> another untested configuration. It is definitely tested within Citrix
>> Labs, but those testbeds are internal and cannot be exposed to the
>> community. We've got corporate IT, which wouldn't like that very much :)
>>
>> Ultimately, I'd want testbeds spanning companies contributing to
>> CloudStack. I wouldn't want any single company X to hold the resources
>> and control allocation for testing, even though that is not the case at
>> all.
>>
>> We still need to figure out how securely these deployments can be
>> brought into Jenkins and who holds the keys to the infrastructure. I'm no
>> security-conscious sysadmin, so I'm hoping for input from operators
>> deploying CloudStack.
>>
>> On Wed, Sep 11, 2013 at 11:12:34AM -0600, Marcus Sorensen wrote:
>>> Again, I'm not knocking Citrix. If anything, the issue is that they tend to
>>> be so generous and community-oriented that it surprises me when I find out
>>> that a certain donation is limited to their interests. It's perfectly
>>> reasonable; e.g., my own donations are mostly limited to KVM.
>>> On Sep 11, 2013 10:52 AM, "Marcus Sorensen" <shadow...@gmail.com> wrote:
>>>
>>> > I do understand that. The email I received just triggered warning bells
>>> > because it gave me the impression that the QA team as it stands isn't
>>> > testing anything that Citrix doesn't care about, regardless of what the
>>> > community has put on the support matrix. This includes even basic configs
>>> > that the community claims to support, like KVM on Ubuntu (as the 4.1
>>> > release shows), and other things that we may already have infra for but
>>> > just haven't implemented.
>>> >
>>> > That led me to wondering how much control the community really has over
>>> > testing. It's good to know that we can roll our own nodes up into Jenkins,
>>> > and/or modify tests if the infrastructure is already there. We just need to
>>> > raise awareness as a community that there are still holes in resources and
>>> > a need for donations to provide the minimum testing required for our
>>> > support matrix. I think David's email about release requirements is a good
>>> > step.
>>> >
>>> > If possible I'd like to modify the existing KVM testing to support testing
>>> > NFS, CLVM, and RBD. This can all be done with a single host (that
>>> > presumably already exists); we just need to set up the storage on the host
>>> > and add create-pool commands and volume create/delete tests. I'll have to
>>> > figure out how to go about getting admin rights on the KVM test hosts to
>>> > configure the storage types, or work with someone. If we can't do that due
>>> > to company logistics, I can easily stand up a VM or two to cover all of the
>>> > KVM mgmt/host hypervisor and storage configs if I can figure out how to
>>> > integrate.
>>> > On Sep 11, 2013 2:10 AM, "Sebastien Goasguen" <run...@gmail.com> wrote:
>>> >
>>> >>
>>> >> On Sep 11, 2013, at 3:02 AM, Prasanna Santhanam <t...@apache.org> wrote:
>>> >>
>>> >> > CloudStack API actions are agnostic of the underlying infrastructure, and
>>> >> > most cases can fall into such a category as you describe. But imagine
>>> >> > this - I want to test snapshots...
>>> >> >
>>> >> > So I take a snapshot and verify whether it backed up correctly against a
>>> >> > Ceph object store, NFS store, or iSCSI store. That sort of test is
>>> >> > going to involve more than just API actions.
>>> >> >
>>> >> > Or say - I want to test multiple shared networks a VM gets deployed
>>> >> > into. Do I assume the deployment has multiple shared networks? Can I
>>> >> > add my own network into the deployment?
>>> >> >
>>> >> > Or even - I want to exhaust all the public network IPs and check if
>>> >> > the next deployed VM picks up an IP in the new public range I've
>>> >> > added. This sort of test assumes that all the necessary networking is
>>> >> > in place and also hurts VM deployments of all tests that run at the
>>> >> > same time.
>>> >> >
>>> >> > It's a difficult balance to strike but we have to begin somewhere.
>>> >> > Start with the basic minimum that every infra can run. Infra-specific
>>> >> > tests skip if things are unsuitable, but will run for someone who wants
>>> >> > to test that feature.
>>> >>
>>> >> A small point here to make is that jenkins.cloudstack.org is open to
>>> >> anyone.
>>> >> Prasanna has created an account for me and I am (slowly) working on
>>> >> adding tests for clients, including AWS.
>>> >>
>>> >> Anyone could use this Jenkins instance, bring in slaves from "home" and
>>> >> set up tests?
>>> >>
>>> >> Back to the SolidFire example, I think Mike could easily contribute one
>>> >> node that has SolidFire storage, then contribute Marvin tests that would
>>> >> run on jenkins.c.o and target his slave specifically. Same for KVM on
>>> >> Ubuntu...
>>> >>
>>> >> -sebastien
>>> >>
>>> >>
>>> >> >
>>> >> > On Tue, Sep 10, 2013 at 11:53:15PM -0700, Ahmad Emneina wrote:
>>> >> >> That's a good question. I'm not sure how preconditions work with
>>> >> >> Marvin cases, but I know the tests are run generically. Say I run
>>> >> >> copyvolumeToPrimary (not sure this test exists, hypothetical at the
>>> >> >> moment); it gets run against a slew of infrastructure configurations
>>> >> >> using local storage as well as shared (NFS, iSCSI, Ceph...) back
>>> >> >> ends. So just dropping my test into a storage suite should give it
>>> >> >> some guarantee it's hitting a few different storage back-ends. That's
>>> >> >> how I understand it works today; I'll defer to Prasanna or Sudha,
>>> >> >> or anyone else that runs tests aggressively, to fill in the gaps and
>>> >> >> make corrections.
>>> >> >>
>>> >> >> Ahmad
>>> >> >>
>>> >> >> On Sep 10, 2013, at 11:43 PM, Marcus Sorensen <shadow...@gmail.com>
>>> >> >> wrote:
>>> >> >>
>>> >> >>> But if the test requires some sort of preconfiguration, what then
>>> >> >>> (e.g. testing NFS primary storage would need a local or remote NFS
>>> >> >>> server configured)? Do I need to roll my own, or can I touch the
>>> >> >>> existing test infra and do the preconfiguration?
>>> >> >>>
>>> >> >>> On Sep 11, 2013 12:34 AM, "Prasanna Santhanam" <t...@apache.org>
>>> >> >>> wrote:
>>> >> >>>> Yes - once your test goes into the repo, it should get picked up in
>>> >> >>>> the subsequent run.
>>> >> >>>>
>>> >> >>>> Jenkins installations from various companies can be combined into a
>>> >> >>>> single landing page. Jenkins itself doesn't support master/slave, but
>>> >> >>>> it does through the Gearman plugin. It's something I have tried using
>>> >> >>>> with VMs but not with real infra - but it is entirely possible.
>>> >> >>>>
>>> >> >>>> On Tue, Sep 10, 2013 at 11:17:53PM -0700, Ahmad Emneina wrote:
>>> >> >>>>> I think there are Jenkins slaves that run the Nicira plugins on
>>> >> >>>>> infrastructure housed at Schuberg Philis. The Citrix Jenkins nodes
>>> >> >>>>> also run as slaves that connect back to the Apache-owned/controlled
>>> >> >>>>> Jenkins. No reason why testing infra need be so consolidated; it just
>>> >> >>>>> so happens no one is putting their hardware where their mouth is.
>>> >> >>>>>
>>> >> >>>>> I also assume that if your Marvin tests get accepted upstream, they'll
>>> >> >>>>> be included in the nightly runs/reports. Prasanna, correct me if I'm
>>> >> >>>>> wrong.
>>> >> >>>>>
>>> >> >>>>>
>>> >> >>>>> On Tue, Sep 10, 2013 at 11:02 PM, Marcus Sorensen
>>> >> >>>>> <shadow...@gmail.com> wrote:
>>> >> >>>>>
>>> >> >>>>>> CloudStack Dev,
>>> >> >>>>>>    I was emailed about some of the testing questions I brought up
>>> >> >>>>>> over the last few threads, and a few things were pointed out to me
>>> >> >>>>>> that I think we should try to remedy. Primarily, that the testing
>>> >> >>>>>> environment is owned by Citrix, the QA team is primarily Citrix-run,
>>> >> >>>>>> and the testing done is focused on the use models that Citrix
>>> >> >>>>>> develops.
>>> >> >>>>>>    I've been assured that the test infrastructure is for everyone,
>>> >> >>>>>> and I'm not at all trying to say that there's a problem with Citrix
>>> >> >>>>>> focusing their work on their own interests, but I'm not sure that
>>> >> >>>>>> anyone outside of Citrix really knows how to add their own stuff to
>>> >> >>>>>> this testing infrastructure (perhaps for lack of trying, I don't
>>> >> >>>>>> know).
>>> >> >>>>>>    I haven't really put together enough thought to know how to tackle
>>> >> >>>>>> this, but my gut tells me that we need some sort of community-owned
>>> >> >>>>>> testing roll-up, where everyone can do their own testing in whatever
>>> >> >>>>>> infrastructure and submit hourly, daily, or weekly results. If my
>>> >> >>>>>> test fits into the Citrix test infrastructure and I can figure out
>>> >> >>>>>> how to get it there, great. If not, I can roll my own and integrate
>>> >> >>>>>> it via some API. For example, the SolidFire guys may want to run
>>> >> >>>>>> automated regression testing. That probably won't be doable in the
>>> >> >>>>>> Citrix infrastructure, but they may want to script a daily
>>> >> >>>>>> git-pull/build/deploy-zone/create-volume run, and it seems logical
>>> >> >>>>>> that we'd want to support it.
>>> >> >>>>>>    Thoughts? Anyone have experience with such things? Can we have a
>>> >> >>>>>> master/slave scenario with Jenkins? Perhaps the Citrix environment
>>> >> >>>>>> already supports something like this via the Jenkins API?
>>> >> >>>>>>
>>> >> >>>>
>>> >> >>>> --
>>> >> >>>> Prasanna.,
>>> >> >>>>
>>> >> >>>> ------------------------
>>> >> >>>> Powered by BigRock.com
>>> >> >
>>> >> > --
>>> >> > Prasanna.,
>>> >> >
>>> >> > ------------------------
>>> >> > Powered by BigRock.com
>>> >> >
>>> >>
>>> >>
>>
>> --
>> Prasanna.,
>>
>> ------------------------
>> Powered by BigRock.com
>>
