On Sep 11, 2013, at 3:02 AM, Prasanna Santhanam <t...@apache.org> wrote:

> CloudStack API actions are agnostic of the underlying infrastructure, and
> most cases can fall into the category you describe. But imagine
> this - I want to test snapshots.
> 
> So I take a snapshot and verify whether it backed up correctly against a
> Ceph object store, NFS store, or iSCSI store. That sort of test is
> going to involve more than just API actions.
> 
> Or say I want to test multiple shared networks that a VM gets deployed
> into. Do I assume the deployment has multiple shared networks? Can I
> add my own network to the deployment?
> 
> Or even - I want to exhaust all the public network IPs and check whether
> the next deployed VM picks up an IP in the new public range I've
> added. This sort of test assumes that all the necessary networking is
> in place, and it also hurts VM deployments for all the tests that run at
> the same time.
> 
> It's a difficult balance to strike, but we have to begin somewhere:
> start with the basic minimum that every infra can run. Infra-specific
> tests skip if things are unsuitable, but will run for someone who wants
> to test that feature.
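To make the "skip if unsuitable" idea concrete, a guard like the one below is
roughly what I would picture - just a sketch, the test and the RBD check are
made up, and the Marvin imports/helpers are my best guess:

    # Rough sketch only -- not an existing test; module paths and helper
    # names are my best guess at what Marvin exposes today.
    from nose.plugins.attrib import attr
    from marvin.cloudstackTestCase import cloudstackTestCase
    from marvin.integration.lib.base import StoragePool


    class TestSnapshotBackup(cloudstackTestCase):

        @attr(tags=["storage"])
        def test_snapshot_backed_up(self):
            """Snapshot a volume and verify the backup, only on suitable infra."""
            apiclient = self.testClient.getApiClient()
            pools = StoragePool.list(apiclient) or []

            # Infra-specific guard: only run where a Ceph/RBD primary pool
            # exists; skip cleanly on NFS- or iSCSI-only deployments so the
            # suite still passes there.
            if not any(getattr(p, "type", "").lower() == "rbd" for p in pools):
                self.skipTest("no Ceph/RBD primary storage in this deployment")

            # ... deploy a VM, take a volume snapshot, then verify the backup
            # landed on the secondary / object store ...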

A small point to make here is that jenkins.cloudstack.org is open to anyone.
Prasanna has created an account for me and I am (slowly) working on adding
tests for clients, including AWS.

Anyone could use this Jenkins instance, bring in slaves from "home", and set up
tests…

Back to the SolidFire example: I think Mike could easily contribute one node
that has SolidFire storage, then contribute Marvin tests that would run on
jenkins.c.o and target his slave specifically. Same for KVM on Ubuntu...
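For example - purely hypothetical tag names here - the SolidFire tests could
carry a nose attribute, and only the job bound to Mike's slave would select it:

    # Hypothetical tagging example -- "solidfire" is just an illustrative tag,
    # not one that exists in the suite today.
    from nose.plugins.attrib import attr
    from marvin.cloudstackTestCase import cloudstackTestCase


    class TestSolidFirePrimaryStorage(cloudstackTestCase):

        @attr(tags=["solidfire", "storage"])
        def test_create_volume_on_solidfire(self):
            """Runs only when a job explicitly selects the 'solidfire' tag."""
            # ... create a data volume on the SolidFire primary store,
            # attach it to a VM, and verify it is usable ...
            pass

    # The Jenkins job bound to Mike's slave would then run something like
    #   nosetests --with-marvin --marvin-config=solidfire.cfg -a tags=solidfire
    # while the generic jobs never select that tag.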

-sebastien


> 
> On Tue, Sep 10, 2013 at 11:53:15PM -0700, Ahmad Emneina wrote:
>> That's a good question. I'm not sure how preconditions work with
>> Marvin cases, but I know the tests are run generically. Say I run
>> copyVolumeToPrimary (not sure this test exists; it's hypothetical at the
>> moment): it gets run against a slew of infrastructure configurations
>> using local storage as well as shared (NFS, iSCSI, Ceph...) back
>> ends. So just dropping my test into a storage suite should give it
>> some guarantee it's hitting a few different storage back ends. That's
>> how I understand it works today; I'll defer to Prasanna or Sudha,
>> or anyone else that runs tests aggressively, to fill in the gaps and
>> make corrections.
>> 
>> Ahmad
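For what it's worth, the "run generically" part could look roughly like the
sketch below - copyVolumeToPrimary is still hypothetical, it just iterates over
whatever primary pools the deployment exposes:

    # Sketch only -- shows the storage-generic shape such a test could take.
    from marvin.cloudstackTestCase import cloudstackTestCase
    from marvin.integration.lib.base import StoragePool


    class TestCopyVolumeToPrimary(cloudstackTestCase):

        def test_copy_volume_to_each_primary_pool(self):
            apiclient = self.testClient.getApiClient()
            # Whatever the deployment happens to have -- local, NFS, iSCSI,
            # Ceph -- gets exercised, without the test naming any of them.
            for pool in StoragePool.list(apiclient) or []:
                # ... copy a volume onto `pool` and assert the copy succeeds ...
                self.debug("would exercise pool %s (type %s)" % (pool.name, pool.type))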
>> 
>> On Sep 10, 2013, at 11:43 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>> 
>>> But if the test requires some sort of preconfiguration, what then (e.g.,
>>> testing NFS primary storage would need a local or remote NFS configured)? Do I
>>> need to roll my own, or can I touch the existing test infra and do the
>>> preconfiguration?
>>> 
>>> On Sep 11, 2013 12:34 AM, "Prasanna Santhanam" <t...@apache.org> wrote:
>>>> Yes - once your test goes into the repo, it should get picked up in the
>>>> subsequent run.
>>>> 
>>>> Jenkins installations from various companies can be combined into a single
>>>> landing page. Jenkins itself doesn't support that kind of master/slave
>>>> arrangement natively, but it does through the Gearman plugin. It's something
>>>> I have tried with VMs but not with real infra - but it is entirely possible.
>>>> 
>>>> On Tue, Sep 10, 2013 at 11:17:53PM -0700, Ahmad Emneina wrote:
>>>>> I think there are jenkins slaves that run the nicera plugins on/at 
>>>>> Schuberg
>>>>> Philis housed infrastructure. The Citrix jenkins nodes also runs as slaves
>>>>> that connect back to the apache owned/controlled jenkins. No reason why
>>>>> testing infra need be so consolidated, it just so happens no one is 
>>>>> putting
>>>>> their hardware where their mouth is.
>>>>> 
>>>>> I also assume if your marvin tests get accepted upstream, they'll be
>>>>> included in the nightly runs/reports. Prasanna correct me if I'm wrong.
>>>>> 
>>>>> 
>>>>> On Tue, Sep 10, 2013 at 11:02 PM, Marcus Sorensen
>>>>> <shadow...@gmail.com> wrote:
>>>>> 
>>>>>> CloudStack Dev,
>>>>>>    I was emailed about some of the testing questions I brought up
>>>>>> over the last few threads, and a few things were pointed out to me
>>>>>> that I think we should try to remedy.  Primarily, that the testing
>>>>>> environment is owned by Citrix, the QA team is primarily Citrix-run,
>>>>>> and the testing done is focused on the use models that Citrix
>>>>>> develops.
>>>>>>    I've been assured that the test infrastructure is for everyone,
>>>>>> and I'm not at all trying to say that there's a problem with Citrix
>>>>>> focusing their work on their own interests, but I'm not sure that
>>>>>> anyone outside of Citrix really knows how to add their own stuff to
>>>>>> this testing infrastructure (perhaps for lack of trying, I don't
>>>>>> know).
>>>>>>    I haven't really put together enough thought to know how to tackle
>>>>>> this, but my gut tells me that we need some sort of community-owned
>>>>>> testing roll-up, where everyone can do their own testing in whatever
>>>>>> infrastructure and submit hourly, daily, weekly results. If my test
>>>>>> fits into the Citrix test infrastructure and I can figure out how to
>>>>>> get it there, great. If not, I can roll my own and integrate it via
>>>>>> some API. For example, the SolidFire guys may want to run automated
>>>>>> regression testing. That probably won't be doable in the Citrix
>>>>>> infrastructure, but they may want to script a daily
>>>>>> git-pull/build/deploy zone/create volume and it seems logical that
>>>>>> we'd want to support it.
>>>>>>    Thoughts? Anyone have experience with such things? Can we have a
>>>>>> master/slave scenario with Jenkins? Perhaps the Citrix environment
>>>>>> already supports something like this via the Jenkins API?
>>>>>> 
>>>> 
>>>> --
>>>> Prasanna.,
>>>> 
>>>> ------------------------
>>>> Powered by BigRock.com
> 
> -- 
> Prasanna.,
> 
> ------------------------
> Powered by BigRock.com
> 
