On 05/30/2016 10:26 AM, Loo, Ruby wrote:
> Hi,
> 
>> But the issue here is just capacity. Whether or not we keep an instance
>> in a deleting state, or when we release quota, doesn't change the
>> Tempest failures from what I can tell. The suggestions below address
>> that.
>>
>>
>>>
>>>>>>
>>>>>> I think we should go with #1, but instead of erasing the whole disk
>>>>>> for real, maybe we should have a "fake" clean step that runs quickly
>>>>>> for test purposes only?
>>>>>>
>>>
>>> Disabling the cleaning step (or having a fake one that does nothing)
>>> for the gate would get around the failures at least. It would make
>>> things work again because the nodes would be available right after
>>> Nova deletes them.
> 
> I lost track of what we are trying to test. If we want to test that an
> ironic node gets cleaned, then add fake cleaning. If we don't care that
> the node gets cleaned (because e.g. we have a different test that will
> test for that), then disable the cleaning. [And if we don't care either
> way, but one is harder to do than the other, go with the easier ;)]

It seems like cleaning tests are probably something you want to run in a
dedicated job because of the cost associated with them. We run the
default gate jobs with secure_delete turned off for volumes for the same
reason: it just adds a ton of delay that impacts a lot of other,
unrelated code.
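
For comparison, the volumes side is just a config knob. A minimal
sketch of what our gate setting amounts to, assuming Cinder's
volume_clear option is the relevant one:

    [DEFAULT]
    # skip zeroing out deleted volumes; saves a lot of gate time
    volume_clear = none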

So if there is a flag to just disable it, I think that's fine,
especially given that the fake Ironic nodes are QEMU guests, right?
Killing and rebooting one should give you a fresh node anyway.
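
If Ironic has such a flag, I'd expect the gate setting to end up as a
one-liner in ironic.conf. A sketch only, assuming the [conductor]
automated_clean option is the right knob:

    [conductor]
    # skip automated cleaning between deployments in the gate
    automated_clean = False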

Just make sure that cleaning is still exercised in a normal,
Ironic-specific job.
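
And if the fake-cleaning route suggested upthread is taken instead, a
no-op clean step might look roughly like the following. This is a
sketch, not a tested patch; NoopCleanDeploy and the priority value are
illustrative, assuming Ironic's clean_step decorator and the fake
driver module:

    from ironic.drivers import base
    from ironic.drivers.modules import fake

    class NoopCleanDeploy(fake.FakeDeploy):
        """Fake deploy interface whose clean step finishes instantly,
        so gate nodes become available again right away."""

        @base.clean_step(priority=10)
        def erase_devices(self, task):
            # Stand-in for the real disk wipe: do nothing and return,
            # letting the node move through cleaning without delay.
            pass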

        -Sean

-- 
Sean Dague
http://dague.net
