Lucas, Andrew

Thanks for the fast response.

On Fri, May 27, 2016 at 4:53 PM, Andrew Laski <and...@lascii.com> wrote:

>
>
> On Fri, May 27, 2016, at 09:25 AM, Lucas Alvares Gomes wrote:
> > Hi,
> >
> > Thanks for bringing this up Vasyl!
> >
> > > At the moment Nova with the ironic virt_driver considers an instance
> > > deleted while, on the Ironic side, the server goes into cleaning, which
> > > can take a while. As a result, the current implementation of the Nova
> > > tempest tests doesn't work when Ironic is enabled.
>
> What is the actual failure? Is it a capacity issue because nodes do not
> become available again quickly enough?
>
>
The actual failure is that the tempest community doesn't want to accept
option #1: https://review.openstack.org/315422/
And I'm not sure it is the right way.
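To make option #1 concrete, the gist of the tempest-side change is a wait
loop on the Ironic node's provision state after the server delete. A rough
sketch only (the helper name, timeout and use of a python-ironicclient
client are illustrative, not the actual code in the review):

import time


def wait_for_node_available(ironic, node_uuid, timeout=1200, interval=10):
    """Poll an Ironic node until cleaning finishes and it is 'available'.

    ``ironic`` is assumed to be a python-ironicclient client; 'available'
    and 'clean failed' are the standard Ironic provision states.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        node = ironic.node.get(node_uuid)
        if node.provision_state == 'available':
            return
        if node.provision_state == 'clean failed':
            raise RuntimeError('Cleaning failed for node %s' % node_uuid)
        time.sleep(interval)
    raise RuntimeError('Node %s did not become available within %s seconds'
                       % (node_uuid, timeout))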

> >
> > > There are two possible options for fixing it:
> > >
> > > 1. Update the Nova tempest test scenarios, for the Ironic case, to wait
> > >    until cleaning is finished and the Ironic node goes to the 'available'
> > >    state.
> > >
> > > 2. Mark the instance as deleted in Nova only after cleaning is finished
> > >    on the Ironic side.
> > >
> > > I'm personally inclined towards option #2. From the user's side,
> > > successful instance termination means that no instance data is available
> > > any more and nobody can access or restore that data. The current
> > > implementation breaks this rule: the instance is marked as successfully
> > > deleted while in fact it may not be cleaned, it may fail to clean, and
> > > the user will not know anything about it.
> > >

>
> > I don't really like option #2; cleaning can take several hours
> > depending on the configuration of the node. I think it would be a
> > really bad experience if the user of the cloud had to wait a really
> > long time before their resources are available again once they delete
> > an instance. Marking the instance as deleted in Nova quickly is
> > aligned with our idea of making bare metal deployments look and feel
> > like VMs for the end user. It's also (one of) the reason(s) why we
> > have separate DELETING and CLEANING states in Ironic.
>

The resources will be available again only if there are other available
bare-metal nodes in the cloud.
The user has no way to track the status of available resources without
admin access.
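Just to illustrate the point: checking remaining bare-metal capacity goes
through admin-only APIs such as os-hypervisors, so a regular tenant can't
tell when a node freed by cleaning is usable again. A rough novaclient
sketch with placeholder credentials (illustrative only):

from novaclient import client as nova_client

# Placeholder credentials -- illustrative only.
nova = nova_client.Client('2', 'user', 'password', 'project',
                          'http://keystone:5000/v2.0')

# The os-hypervisors statistics API is admin-only under the default policy,
# so a regular tenant gets a 403 here and cannot see free capacity.
stats = nova.hypervisor_stats.statistics()
print(stats.count, stats.vcpus, stats.vcpus_used)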


> I agree. From a user perspective, once they've issued a delete, their
> instance should be gone. Any delay in that actually happening is purely
> an internal implementation detail that they should not care about.
>
> >
> > I think we should go with #1, but instead of erasing the whole disk
> > for real, maybe we should have a "fake" clean step that runs quickly,
> > for test purposes only?
> >
>

At the gate we just wait for the bootstrap and for the callback from the
node when cleaning starts; all the heavy operations are postponed. We can
disable automated_clean, but that means cleaning is not tested at all.
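(For the record, "disable automated_clean" above means the
[conductor]/automated_clean option in ironic.conf. If instead we went with
Lucas's "fake" clean step, it could look roughly like an out-of-tree
ironic-python-agent hardware manager that shadows the expensive erase step
with a no-op. The class, step details and entry point below are only a
sketch of the idea, not an existing implementation:)

# CI-only hardware manager; would be registered via the
# 'ironic_python_agent.hardware_managers' entry point.
from ironic_python_agent import hardware


class FakeCleanHardwareManager(hardware.HardwareManager):
    """Hardware manager whose 'erase_devices' step finishes instantly."""

    def evaluate_hardware_support(self):
        # Claim the highest support level so this manager's steps take
        # precedence over the generic ones with the same name.
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_clean_steps(self, node, ports):
        return [{
            'step': 'erase_devices',
            'priority': 10,
            'interface': 'deploy',
            'reboot_requested': False,
        }]

    def erase_devices(self, node, ports):
        # No-op replacement for the real disk erase, so cleaning
        # completes in seconds in the gate.
        return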


> > Cheers,
> > Lucas
> >
> >