OK, so I managed to delete the vzctl snapshot without destroying anything;
now for the ploop snapshot.
# pwd
/vz/private/2202/root.hdd
# ploop snapshot-list DiskDescriptor.xml
PARENT_UUID                                   C UUID                                   FNAME
{----}
{1ef87f8e-e5b7-4627-8424-d2
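For reference, once the right UUID has been identified in the snapshot-list output, a ploop-level snapshot can be removed by UUID roughly like this (the UUID below is only a placeholder, not the one truncated above):
# cd /vz/private/2202/root.hdd
# ploop snapshot-delete -u {UUID-from-snapshot-list} DiskDescriptor.xml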
Thanks Kir, yes, just after posting my question I realized I had misunderstood
the output from snapshot-list, and also that the snapshot without a UUID was
actually a snapshot to be deleted.
Thanks also for the explanation of the difference between ploop and vzctl
snapshots, that will come in handy!
One thi
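For reference, the vzctl-level counterparts operate on the container ID rather than on DiskDescriptor.xml directly; roughly like this (using CTID 2202 from this thread, with the --id value taken from the snapshot-list output):
# vzctl snapshot-list 2202
# vzctl snapshot-delete 2202 --id {UUID-from-snapshot-list}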
On 23 March 2015 at 05:35, Rene C. wrote:
> but if I go to /vz/private/2202/root.hdd I find
>
> 4.0K  DiskDescriptor.xml
> 0     DiskDescriptor.xml.lck
> 300G  root.hdd
> 161G  root.hdd.{8c40287b-2e17-45d1-b58f-1119b3b58b53}
> 138G  root.hdd.{fb7ba001-cb78-4dd3-9ac8-cb0c8cbab4f6}
>
> It
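For anyone else reading this: as far as I can tell, those root.hdd.{...} files are ploop deltas belonging to snapshots, and they only go away once the corresponding snapshots are deleted or merged. A rough way to see how big they are and whether they are still referenced, using the paths from the listing above:
# cd /vz/private/2202/root.hdd
# du -sh root.hdd*
# ploop snapshot-list DiskDescriptor.xml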
Any news on this?
> We know about this problem and are going to fix it in the near future.
I'm just going through our servers and am quite surprised at how much
disk space seems to be wasted.
Take, for example, this VE with a nominally 292G disk, 235G of it used:
# df -h
Filesystem            Size  Used Avail Use% M
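A rough way to see the gap per container is to compare the ploop image size on the host with what df reports inside the container; something like this (the CTID is a placeholder):
# du -sh /vz/private/<CTID>/root.hdd/
# vzctl exec <CTID> df -h /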
Good answer, thanks much!!
On Mon, Dec 23, 2013 at 4:47 PM, Andrew Vagin wrote:
> On Mon, Dec 23, 2013 at 04:28:14PM +0700, Rene C. wrote:
>> Indeed, that seems to have been at least part of the problem, thanks much.
>>
>> Still, after having removed all snapshots and rerun vzctl compact, I
>>
On Mon, Dec 23, 2013 at 04:28:14PM +0700, Rene C. wrote:
> Indeed, that seems to have been at least part of the problem, thanks much.
>
> Still, after having removed all snapshots and rerun vzctl compact, I
> still ended up with a pigz compressed backup of 40G for a container
> with 2G used disk
Indeed, that seems to have been at least part of the problem, thanks much.
Still, after having removed all snapshots and rerun vzctl compact, I
still ended up with a pigz compressed backup of 40G for a container
with 2G used disk space (shown by df -h within the container). Any
idea how that can
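For completeness, a quick way to check whether the compact step actually reclaimed anything is to compare the image's size on the host before and after the run (the CTID is a placeholder):
# vzctl compact <CTID>
# ls -lhs /vz/private/<CTID>/root.hdd/root.hdd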
Greetings,
- Original Message -
> What can we do to compress this down to the actual 2G used?
>
> e2fsprogs-resize2fs-static-1.42.3-3.el6.1.ovz.x86_64
> vzctl-4.5.1-1.x86_64
> vzkernel-2.6.32-042stab081.5.x86_64
> vzctl-core-4.5.1-1.x86_64
> vzquota-3.1-1.x86_64
> vzstats-0.5.2-1.noarch
>
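For comparing versions across hardware nodes, something along these lines should print the same set of packages as the list quoted above:
# rpm -qa | egrep 'vz|ploop|e2fsprogs' | sort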
On Sat, Dec 21, 2013 at 09:08:43PM +0700, Rene C. wrote:
> We have a container that needs moving to another hardware node, but a
> vzpbackup of it is 250G.
>
> Within the container only 1.9G is used:
>
> /dev/ploop33244p1 393G 1.9G 371G 1% /
>
> Tried running a vzctl compact but it just sho
We have a container that needs moving to another hardware node, but a
vzpbackup of it is 250G.
Within the container only 1.9G is used:
/dev/ploop33244p1 393G 1.9G 371G 1% /
Tried running a vzctl compact but it just shows a few lines and stops
without having done anything:
# vzctl compact 1
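When compact stops like that without doing anything, a more verbose run and the vzctl log are the first places to look; roughly (assuming the default log location):
# vzctl --verbose compact 1
# tail -n 50 /var/log/vzctl.log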