Hi,
We have run into an issue where nodes' disks are wiped after stopping a
deployment. This happens because of the node removal logic (which is old
and, as I understand, no longer relevant). That logic contains a step
which calls erase_node [0], and there is also another method that wipes
the disks [1]. AFAIK this was needed for smooth Cobbler provisioning and
to ensure that nodes would not boot from disk when they shouldn't.
Instead of Cobbler we now use IBP from fuel-agent, where the current
partition table is wiped before the provisioning stage. Relying on disk
wiping as insurance that nodes will not boot from disk doesn't seem like
a good solution. I propose that we not wipe the disks and simply unset
the bootable flag on the node disks.
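
For illustration, here is a minimal sketch of what unsetting the
bootable flag could look like. The device enumeration and the parted
invocations are my assumptions for the example, not existing
fuel-astute or fuel-agent code:

#!/usr/bin/env ruby
# Hypothetical sketch of the proposed alternative to wiping disks:
# clear the boot flag on every partition of every disk instead.
# Assumes `parted` is available on the node; not actual Fuel code.
require 'open3'

# Enumerate whole block devices (e.g. sda, vda), skipping virtual ones.
disks = Dir.glob('/sys/block/*').map { |p| File.basename(p) }
           .reject { |d| d.start_with?('loop', 'ram', 'sr', 'dm-') }

disks.each do |disk|
  dev = "/dev/#{disk}"
  # Machine-readable partition listing; skip disks without a table.
  out, status = Open3.capture2('parted', '-sm', dev, 'print')
  next unless status.success?

  out.each_line do |line|
    num = line[/\A(\d+):/, 1]  # partition lines start with "<num>:"
    next unless num
    # Unset the boot flag; a no-op failure if the flag is not set.
    system('parted', '-s', dev, 'set', num, 'boot', 'off')
  end
end
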

Please share your thoughts. Perhaps some other components rely on the
fact that disks are wiped after node removal or after stopping a
deployment. If so, please let us know.

[0]
https://github.com/openstack/fuel-astute/blob/master/lib/astute/nodes_remover.rb#L132-L137
[1]
https://github.com/openstack/fuel-astute/blob/master/lib/astute/ssh_actions/ssh_erase_nodes.rb

Best regards,
Svechnikov Artur