> As Andrew said already, in such a case the LVM metadata will remain on the
> hard drive. So if you remove the partition table, reboot the node (env
> reset), then configure exactly the same partition table (like when you use
> the same default disk allocation in Fuel), then Linux will find the LVM
> info on the same partitions and may assemble the old logical volumes.
On Tue, Dec 29, 2015 at 5:35 AM Sergii Golovatiuk wrote:
> Hi,
>
> Let me comment inline.
>
>
> On Mon, Dec 28, 2015 at 7:06 PM, Andrew Woodward wrote:
>
>> In order to ensure that LVM can be configured as desired, it's necessary
>> to purge them and then reboot the node, otherwise the partitioning
>> commands will most likely fail on the next attempt as they will be
>> initialized before we can start partitioning the node.
Alex is right, wiping the partition table is not enough. A user can create a
partition table with exactly the same partition sizes as before. In this
case LVM may detect metadata on the untouched partitions and assemble a
logical volume. We should remove the LVM metadata from every partition (or
wipe the 1st megabyte of each partition).
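As an illustration of the per-partition wipe Sergii mentions, a minimal
sketch (not the actual fuel-agent code, and assuming util-linux's lsblk and
wipefs are available on the node) could look like this:

    # Sketch: remove LVM and filesystem signatures from every partition so
    # that stale metadata cannot be re-detected after repartitioning.
    import subprocess

    def list_partitions():
        # "lsblk -ln -o NAME,TYPE" prints one "NAME TYPE" pair per line.
        out = subprocess.check_output(["lsblk", "-ln", "-o", "NAME,TYPE"],
                                      text=True)
        parts = []
        for line in out.splitlines():
            fields = line.split()
            if len(fields) == 2 and fields[1] == "part":
                parts.append("/dev/" + fields[0])
        return parts

    for part in list_partitions():
        # wipefs -a erases all known signatures (LVM2_member, filesystems,
        # RAID superblocks) from the partition.
        subprocess.check_call(["wipefs", "-a", part])
        # Alternatively, zero the first megabyte where the LVM label lives:
        # subprocess.check_call(["dd", "if=/dev/zero", "of=" + part,
        #                        "bs=1M", "count=1"])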
> accurately wipe only the partition table and do not touch any other data
As Andrew said already, in such a case the LVM metadata will remain on the
hard drive. So if you remove the partition table, reboot the node (env
reset), then configure exactly the same partition table (like when you use
the same default disk allocation in Fuel), then Linux will find the LVM
info on the same partitions and may assemble the old logical volumes.
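That reassembly is easy to verify: leftover physical-volume labels show up
in the LVM tools even after the partition table has been recreated. A small
check along those lines (assuming the lvm2 tools are installed; this is an
illustration, not Fuel code):

    # Sketch: detect leftover LVM physical-volume labels on a node.
    import subprocess

    def leftover_pvs():
        # "pvs --noheadings -o pv_name,vg_name" prints one PV per line;
        # any output at all means old LVM metadata survived on disk.
        out = subprocess.check_output(
            ["pvs", "--noheadings", "-o", "pv_name,vg_name"], text=True)
        return [line.split() for line in out.splitlines() if line.strip()]

    stale = leftover_pvs()
    if stale:
        print("Stale LVM metadata found:", stale)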
Thank you all for the answers.
Well, it seems we should figure out how to do it another way. For now I see
only one solution - accurately wipe only the partition table and do not
touch any other data. But this solution requires huge changes in the volume
manager and fuel-agent, so I don't like it.
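For reference, the "wipe only the partition table" idea would be roughly the
sketch below (assuming sgdisk from the gdisk package is available); note
that this is exactly the scenario discussed above in which the LVM metadata
inside the partitions survives:

    # Sketch: destroy only the partition table(s), leaving all other
    # on-disk data untouched.
    import subprocess

    def wipe_partition_table(disk):
        # sgdisk --zap-all destroys the GPT structures (including the backup
        # header at the end of the disk) and the legacy/protective MBR, but
        # does not touch the partition contents themselves.
        subprocess.check_call(["sgdisk", "--zap-all", disk])

    wipe_partition_table("/dev/sda")  # example disk name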
Hi,
Let me comment inline.
On Mon, Dec 28, 2015 at 7:06 PM, Andrew Woodward wrote:
> In order to ensure that LVM can be configured as desired, it's necessary to
> purge them and then reboot the node, otherwise the partitioning commands
> will most likely fail on the next attempt as they will be initialized
> before we can start partitioning the node.
In order to ensure that LVM can be configured as desired, it's necessary to
purge them and then reboot the node, otherwise the partitioning commands
will most likely fail on the next attempt as they will be initialized
before we can start partitioning the node. Hence, when a node is removed
from the
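The purge-and-reboot step Andrew describes boils down to deactivating and
removing any leftover LVM objects before the node is rebooted. A rough
sketch (not fuel-agent itself, and assuming the standard lvm2 command-line
tools) could look like:

    # Sketch of the "purge LVM, then reboot" sequence described above.
    import subprocess

    def purge_lvm():
        # Deactivate every volume group so the kernel releases the devices.
        subprocess.check_call(["vgchange", "-an"])
        # Remove all volume groups (with their logical volumes), then the
        # physical-volume labels.
        vgs = subprocess.check_output(
            ["vgs", "--noheadings", "-o", "vg_name"], text=True).split()
        for vg in vgs:
            subprocess.check_call(["vgremove", "-ff", vg])
        pvs = subprocess.check_output(
            ["pvs", "--noheadings", "-o", "pv_name"], text=True).split()
        for pv in pvs:
            subprocess.check_call(["pvremove", "-ff", pv])

    purge_lvm()
    # A reboot is still needed afterwards so the kernel/udev state is clean
    # before the next partitioning attempt.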
>
>
> It's used in the stop_deployment provision stage [0] and for control
> reboot [1].
>
> > Is it a fallback mechanism if the mcollective fails?
>
> Yes, it's like a fallback mechanism, but it's always used [2].
>
As I remember it, the use of SSH for stopping provisioning was because of
our use of OS
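The fallback behaviour described in the quoted answer boils down to the
pattern below. Both helper names are hypothetical placeholders rather than
real Astute or fuel-agent functions, and as noted above the SSH path is in
practice always used during stop_deployment:

    # Hypothetical sketch of "SSH as a fallback for mcollective"; both
    # helpers are placeholders, not real Astute/fuel-agent APIs.
    def erase_via_mcollective(node):
        # Placeholder: ask the node's mcollective agent to wipe its disks.
        raise RuntimeError("mcollective agent unreachable on %s" % node)

    def erase_via_ssh(node):
        # Placeholder: run the same wipe over SSH instead.
        print("falling back to SSH erase on", node)

    def erase_node(node):
        try:
            erase_via_mcollective(node)
        except Exception:
            # The node may still be in bootstrap or mid-provisioning, where
            # the mcollective agent is not reachable.
            erase_via_ssh(node)

    erase_node("node-1")  # example node name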
Hi,
> I want to propose not to wipe disks and simply unset the bootable flag
> from the node's disks.
AFAIK, removing the bootable flag does not guarantee that the system won't
be booted from the local drive. This is why erase_node is needed.
Regards,
Alex
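For reference, "unset the bootable flag" would amount to something like the
sketch below (parted syntax; the disk and partition number are example
values). As Alex notes, firmware is free to ignore the flag, so this is not
a substitute for erase_node:

    # Sketch: clear the boot flag on one partition; firmware may still boot
    # the disk anyway, so this does not reliably disable local booting.
    import subprocess

    def unset_boot_flag(disk, partition_number):
        subprocess.check_call(
            ["parted", "-s", disk, "set", str(partition_number), "boot", "off"])

    unset_boot_flag("/dev/sda", 1)  # example values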
On Fri, Dec 25, 2015 at 8:59 AM, Artur Svechnikov wrote:
> When do we use the ssh_erase_nodes?
It's used in the stop_deployment provision stage [0] and for control reboot [1].
> Is it a fallback mechanism if the mcollective fails?
Yes, it's like a fallback mechanism, but it's always used [2].
> That might have been a side effect of cobbler and we should t
On Thu, Dec 24, 2015 at 1:29 AM, Artur Svechnikov wrote:
> Hi,
> We have faced the issue that nodes' disks are wiped after stop deployment.
> It occurs due to the node removal logic (this is old logic and it's no
> longer relevant, as I understand). This logic contains a step which calls
> erase
From my point of view there is no real security benefit, because the disks
are not wiped fully - only about 1MB at the beginning and end of each
partition is wiped. The other data is still stored on the disks.
Best regards,
Svechnikov Artur
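Roughly, that partial wipe amounts to the sketch below (an illustration of
the behaviour Artur describes, not the actual erase_node code); everything
between the two wiped regions survives, which is why it gives no real data
security:

    # Sketch: wipe about 1MB at the beginning and at the end of a partition.
    import subprocess

    MB = 1024 * 1024

    def wipe_partition_edges(partition):
        # Total size in bytes, via blockdev from util-linux.
        size = int(subprocess.check_output(
            ["blockdev", "--getsize64", partition]).strip())
        with open(partition, "r+b") as dev:
            dev.write(b"\0" * min(MB, size))   # ~1MB at the start
            if size > MB:
                dev.seek(size - MB)
                dev.write(b"\0" * MB)          # ~1MB at the end
        # Everything between those two regions is left intact.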
On Thu, Dec 24, 2015 at 12:34 PM, Oleg Gelbukh wrote:
> I guess that the original
I guess that the original idea behind the wipe was security, so that the
decommissioned node didn't contain any information about the cloud,
including configuration files and such.
--
Best regards,
Oleg Gelbukh
On Thu, Dec 24, 2015 at 11:29 AM, Artur Svechnikov wrote:
> Hi,
> We have faced the issue that nodes' disks are wiped after stop deployment.