One thing that is important to consider here is the order in which we
clean up resources. Cleanup operations often fail because a resource
can't be deleted until certain other (child/dependent) resources have
been deleted first.
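To illustrate the ordering point, a last-in-first-out stack gives the right deletion order automatically, since dependents are normally created after their parents. This is only a rough sketch with made-up names (`CleanupStack`, `register`), not the actual Marvin API:

```python
class CleanupStack:
    """Sketch of ordered cleanup: delete resources in reverse creation order."""

    def __init__(self):
        self._stack = []

    def register(self, resource):
        # Record the resource at creation time; callers keep using it as normal.
        self._stack.append(resource)
        return resource

    def cleanup(self):
        errors = []
        # LIFO: the last resource created is the first one deleted, so
        # children/dependents go before the parents they depend on.
        while self._stack:
            resource = self._stack.pop()
            try:
                resource.delete()
            except Exception as exc:
                # Collect failures but keep deleting the remaining resources.
                errors.append(exc)
        return errors
```

So an account created before its VMs would be deleted after them, which is exactly the ordering the tests currently have to get right by hand.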

Alex, if you think this too can be handled efficiently with the
prototype you are likely to build, that would be great! If not, please
consider this as well.

Also, please let me know if I can help. I had started fixing the cleanup
problems in the test cases (identifying the missing items, fixing the
cleanup order), but due to other priorities I could not finish it. I will
take it up again soon to complete it. In any case, this change in the
framework itself to handle cleanup would be great!

Regards,
Gaurav

On Wed, Sep 10, 2014 at 9:13 AM, Prasanna Santhanam <t...@apache.org> wrote:

> On Wed, Sep 10, 2014 at 5:42 AM, Alex Brett <alex.br...@citrix.com> wrote:
> > Hello all,
> >
> > At the moment we have a lot of Marvin tests that follow a pattern that
> looks roughly like this:
> >
> > 1. Setup some resources (e.g. accounts, service offerings, VMs etc)
> > 2. Add the resources to a list in the testcase (often called
> self.cleanup)
> > 3. Do the test(s)
> > 4. Call cleanup_resources with the list of resources from 2
> >
> > (obviously in some cases resources get created/allocated during the
> actual test rather than in setup, but it's a similar principle)
> >
> > In theory this is fine; however, there are a number of cases where
> resources are created but never added to the cleanup list, which results
> in things being 'left behind', potentially using up resources and
> affecting future tests. For example, I'm currently attempting to run
> various tests in parallel (to speed up execution), and I'm hitting some
> issues I believe are caused by this.
> >
> > The thought that occurs to me here is: do we actually need the test
> case to manually add resources to a cleanup list, with the inherent risk
> of resources getting missed? Could we not make this something the
> framework does for us (at least by default, with the option to override
> the behaviour if needed)?
> >
> > I've got some ideas as to how this could be done. One example, which
> is a bit of a layer violation but might be acceptable, would be to
> wrap/extend the apiClient with a method that the various object-create
> methods can call to register the resulting object for cleanup. But
> before I go ahead and start trying to prototype something, I wanted to
> check: does anybody have reasons why this sort of automatic cleanup
> behaviour might be a bad idea, or has anyone investigated anything
> similar in the past?
> >
>
> +1 - the cleanup of cloud resources is nasty, especially when VMs and
> billable entities are left behind. Many tests that follow fail as a
> result of exhausted capacity. The nose framework more or less imposes
> the setUp - test - tearDown model of test case authoring, and that
> perhaps led us down the path of accumulating entities in cleanup lists.
> It would be elegant if you could identify the corresponding
> Object.destroy method and add the necessary finalizer automatically
> during create, sort of like the way unittest2 and py.test do it. The ACS
> API is quite expressive and is easily broken down into this model, where
> you can figure out the entity/resource you operate upon: each entity has
> an API for creation and corresponding ones for listing and
> destroy/deletion. If that's too involved, the overloaded apiClient works
> fine too.
>
> > Cheers,
> > Alex
> >
>
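For what it's worth, Alex's apiClient-wrapping idea quoted above might look roughly like this. All names (`TrackingApiClient`, `trackResource`) are made up for illustration, not the actual Marvin client API:

```python
class TrackingApiClient:
    """Hypothetical thin proxy around the real apiClient: normal API calls
    are delegated unchanged, but object-create helpers can call
    trackResource() to hand the framework each new resource for cleanup."""

    def __init__(self, api_client, cleanup_list):
        self._client = api_client
        self._cleanup = cleanup_list

    def __getattr__(self, name):
        # Only called for attributes not defined here, so every normal
        # API method falls through to the wrapped client.
        return getattr(self._client, name)

    def trackResource(self, resource):
        # Register the resulting object so the framework can destroy it
        # (in reverse order) after the test, then return it unchanged.
        self._cleanup.append(resource)
        return resource
```

Since the proxy only adds one method, existing helpers keep working unmodified; that is what makes the layer violation relatively contained.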
