On Sat, Jun 15, 2013 at 01:48:57PM -0400, Chip Childers wrote:
> > There's also a couple of issues here -
> > 1. Does everyone know where the tests run?
> Nope
> > 2. Do people know how to spot the failures?
> Nope
> > 3. Do people know how to find the logs for the failures?
> Nope
> > 
> > If the answer is no to all this, I have more documentation on my
> > hands.

I'll have the documentation draft up soon. Thanks for pointing this
out. All the logs show up under the test-matrix(-extended) job on the
cloudstack-qa view. You can drill down from the "Test Result" shown by
Jenkins to see the stacktrace of each failure. The management server
log is a little hidden - it sits under the profile
(hypervisor, ms-distro). For now I'm only pulling in management server
logs; I'll expose the KVM agent debug logs too.
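
For anyone who'd rather script that drill-down than click through
Jenkins, here's a rough sketch against the Jenkins JSON API. The job
path and field names are my assumptions based on the standard JUnit
test report API - verify them against the actual job on the
cloudstack-qa view before relying on this.

# Sketch only: list failed cases (and their stacktraces) from the last
# completed run of the test-matrix job via the Jenkins JSON API.
import requests

JENKINS = "https://jenkins.buildacloud.org"
JOB = "test-matrix"  # or "test-matrix-extended"; path is an assumption

url = "%s/job/%s/lastCompletedBuild/testReport/api/json" % (JENKINS, JOB)
data = requests.get(url).json()

# Matrix jobs may nest per-configuration results under "childReports".
reports = [c["result"] for c in data["childReports"]] if "childReports" in data else [data]

for report in reports:
    for suite in report.get("suites", []):
        for case in suite.get("cases", []):
            if case.get("status") in ("FAILED", "REGRESSION"):
                print("%s.%s" % (case.get("className"), case.get("name")))
                if case.get("errorStackTrace"):
                    print(case["errorStackTrace"])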

> > Ideally, I'd like those interested in infra activities to form a
> > separate group for cloudstack-infra related topics. The group's focus
> > will be to maintain, test and add to the infrastructure of this
> > project. But that's in the future. Without such a group, building an
> > IaaS cloud is not much fun :)
> 
> +1 - and at least for now, perhaps we start getting more organized
> around this via dev@cs.a.o using [INFRA] tags.

I'll start using the [INFRA] tag.

> 
> Some thoughts I have are: I know that some stuff is being put to use for
> the project in Fremont, but I don't know what it is.  I also don't
> know what hardware donations might be helpful for the environment, so
> that perhaps I could help find something.
> 

Since every $company deploys CloudStack a different way, ideally the
environment should be a small mirror of what $company uses in
production. That environment can be behind a firewall. What is
required is a Jenkins slave that can be hooked into the
jenkins.buildacloud.org instance either through JNLP or SSH. It will
be labelled as a test slave there, and we can use it whenever we need
to run tests. The auth keys can be shared among those interested in
maintaining that infra.
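
To make the "hooked in through JNLP or SSH" part concrete, here's a
rough sketch (not a prescription) of registering such a slave via the
python-jenkins library. The library choice, node name and credentials
are my assumptions - the same can be done by hand from the Jenkins UI.

# Hypothetical sketch: register a slave on jenkins.buildacloud.org with a
# "test" label so the test-matrix jobs can target it. Node name,
# credentials and filesystem path are placeholders.
import jenkins

server = jenkins.Jenkins('https://jenkins.buildacloud.org',
                         username='infra-user', password='api-token')

server.create_node(
    'acme-test-slave',                  # placeholder node name
    numExecutors=1,
    nodeDescription='$company test environment (behind firewall)',
    remoteFS='/var/lib/jenkins',
    labels='test',
    exclusive=True,                     # only runs jobs tied to its label
    launcher=jenkins.LAUNCHER_JNLP)     # slave dials out, so a firewall is fine

# The slave side then connects outbound with the JNLP agent, roughly:
#   java -jar slave.jar \
#        -jnlpUrl https://jenkins.buildacloud.org/computer/acme-test-slave/slave-agent.jnlp \
#        -secret <secret shown on the node page>
# An SSH launcher works too, if the master can reach the slave directly.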

> In all seriousness, if there is a need, I could take up the question at
> $dayjob to provide some testing resources within one of our labs as
> well.  I actually think this would be easier to do then a "donation" of
> hardware that's not really a "donation" to the ASF.  The question is:
> *what's needed* that we don't have already?
> 

Right - donations are (IIUC) only required if ASF infra is going to
manage this. But if there's a group of people within the project
managing this infra, without it flouting any ASF infra rules, we're
good to go and can get started on this independently.

We have a single dedicated environment that I cycle through the
deployment styles most often used within Citrix. But obviously others
deploy differently - perhaps with RBD/Ceph, object stores, OVS,
Nicira, etc. Those configurations are not tested.

For specifics on the setup and internal resources - NFS, code
repositories, image repositories, pypi mirrors/caches, log gathering,
etc. - we can start a separate thread if there is interest.

> > 
> > > 17:44:17 [topcloud]: i can't imagine apache wanting bvt to only run 
> > > inside citrix all the time.
> > It doesn't run within Citrix. It runs in a DC in Fremont. There are
> > other environments within Citrix however that run their own tests for
> > their needs - eg: object_store tests, cloudplatform tests, customer
> > related issue tests etc.
> > 
> > /me beginning to think more doc work is on my way :/
> 
> Well, really, the key is for us to all know about which infra is being
> shared for the use of the project.  Stuff that's inside a corp that we
> can't all see isn't worth documenting for the project itself.
> 
But it should be worth documenting if that infra exposes all the
troubleshooting tools and logs needed to fix CloudStack bugs. If it's
running custom builds etc., then I agree it would not be of much use.

-- 
Prasanna.
