A new mountpoint property is added to the schema for ZFSPool storages.
When needed for the first time, the current mount point is determined and
written to the storage config.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* expanded eval around the zfs_request
* check if the returned
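The idea is to query ZFS once for the dataset's current mount point and persist it in the storage config; roughly like the following sketch, which uses the plain zfs CLI instead of the plugin's zfs_request wrapper and a made-up helper name:

# Sketch only: ask ZFS for the current mountpoint of the pool/dataset so it
# can be written to the storage configuration; undef if it cannot be determined.
sub guess_zfs_mountpoint {
    my ($pool) = @_;

    # -H: no header line, -o value: print only the property value
    my $output = `zfs get -H -o value mountpoint \Q$pool\E 2>/dev/null`;
    chomp $output if defined($output);

    # only an absolute path is usable; 'legacy' and 'none' are not
    return ($output && $output =~ m!^/!) ? $output : undef;
}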
On 11/7/19 9:34 AM, Fabian Grünbichler wrote:
On November 6, 2019 1:46 pm, Fabian Ebner wrote:
A new mountpoint property is added to the schema for ZFSPool storages.
When needed for the first time, the current mount point is determined and
written to the storage config.
Signed-off-by: Fabian
Thanks for the suggestions, I'll do a v3.
On 11/6/19 8:40 PM, Thomas Lamprecht wrote:
On 11/6/19 10:46 AM, Fabian Ebner wrote:
Signed-off-by: Fabian Ebner
---
Changes from v1:
* Reworded the part that describes when a special device is useful
* Moved that part to the top, so p
Signed-off-by: Fabian Ebner
---
Changes from v2:
* Better example of when a special device is useful
* Don't mention special_small_blocks property in the first section, so it
is explained right when we use it for the first time
* Explain possible values for size right
On 11/7/19 9:34 AM, Fabian Grünbichler wrote:
On November 6, 2019 1:46 pm, Fabian Ebner wrote:
A new mountpoint property is added to the schema for ZFSPool storages.
When needed for the first time, the current mount point is determined and
written to the storage config.
Signed-off-by: Fabian
On 11/11/19 4:58 PM, Thomas Lamprecht wrote:
On 10/10/19 12:25 PM, Fabian Ebner wrote:
This patch series introduces a new 'stop' command for ha-manager.
The command takes a timeout parameter; if it is 0, it performs a hard stop.
The series also includes a test for the new
The minimum value for timeout in vm_shutdown is changed from 0 to 1, since a
value of 0 would trigger a hard stop for HA-managed containers. This way, the
API description stays valid in all cases.
Signed-off-by: Fabian Ebner
---
src/PVE/API2/LXC/Status.pm | 6 +++---
1 file changed, 3
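The change itself amounts to raising the schema minimum; illustrative only, not the actual Status.pm hunk:

# Illustrative parameter description in the usual PVE JSONSchema style: the
# minimum is raised from 0 to 1 so a regular shutdown request can never be
# interpreted as the HA hard-stop case (timeout 0).
my $shutdown_timeout_param = {
    description => "Wait maximal timeout seconds.",
    type => 'integer',
    minimum => 1,   # was 0
    optional => 1,
};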
The minimum value for timeout in vm_shutdown is changed from 0 to 1, since a
value of 0 would trigger a hard stop for HA-managed VMs. This way, the API
description stays valid in all cases.
Signed-off-by: Fabian Ebner
---
In vm_shutdown we'd like to pass along the timeout parameter to t
On 11/7/19 12:59 PM, Fabian Grünbichler wrote:
On November 7, 2019 12:52 pm, Fabian Ebner wrote:
On 11/7/19 9:34 AM, Fabian Grünbichler wrote:
On November 6, 2019 1:46 pm, Fabian Ebner wrote:
A new mountpoint property is added to the schema for ZFSPool storages.
When needed for the first time
On 11/13/19 9:55 AM, Thomas Lamprecht wrote:
On 11/12/19 11:03 AM, Fabian Ebner wrote:
The minimum value for timeout in vm_shutdown is changed from 0 to 1, since a
value of 0 would trigger a hard stop for HA-managed VMs. This way, the API
description stays valid in all cases.
Signed-off-by
it can be determined.
path() does not assume the default mountpoint anymore, fixing #2085.
Signed-off-by: Fabian Ebner
---
Changes from previous versions:
* do the handling in the on_add_hook instead of path()
* change the property name from mountpoint to path
* modified the pool used b
Signed-off-by: Fabian Ebner
---
PVE/Storage/ZFSPoolPlugin.pm | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 16fb0d6..b8adf1c 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage
Signed-off-by: Fabian Ebner
---
pve-storage-zfspool.adoc | 6 ++
1 file changed, 6 insertions(+)
diff --git a/pve-storage-zfspool.adoc b/pve-storage-zfspool.adoc
index f53a598..0f213b0 100644
--- a/pve-storage-zfspool.adoc
+++ b/pve-storage-zfspool.adoc
@@ -32,6 +32,12 @@ sparse::
Use ZFS
Signed-off-by: Fabian Ebner
---
Changes from v1:
* don't change the API
PVE/API2/Qemu.pm | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c31dd1d..8e162aa 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2
Signed-off-by: Fabian Ebner
---
Changes from v1:
* don't change the API
src/PVE/API2/LXC/Status.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/PVE/API2/LXC/Status.pm b/src/PVE/API2/LXC/Status.pm
index 1b7a71d..166c731 100644
--- a/src/PVE/API2/LXC/Stat
When adding a zfspool storage with 'pvesm add' the mount point is now added
automatically to the storage configuration if it can be determined.
path() does not assume the default mountpoint anymore, fixing #2085.
Signed-off-by: Fabian Ebner
---
Changes from v3:
* create a new
Signed-off-by: Fabian Ebner
---
Changes from v3:
* 'path' renamed to 'mountpoint'
pve-storage-zfspool.adoc | 6 ++
1 file changed, 6 insertions(+)
diff --git a/pve-storage-zfspool.adoc b/pve-storage-zfspool.adoc
index f53a598..366a1f3 100644
--- a/pve-storage-zf
When 'content_types' included both 'images' and 'rootdir', a single volume
could appear twice in the volume list. This also fixes the same kind of
duplication in 'pvesm list'.
Signed-off-by: Fabian Ebner
---
PVE/Storage/Plugin.pm | 8 ++--
1 fi
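The fix boils down to listing each volid only once even when both content types match; a sketch with a made-up helper (the real change is in the volume listing code of Plugin.pm):

# Sketch: collapse duplicate entries by volid so a volume shows up once even
# if a storage has both 'images' and 'rootdir' enabled.
sub dedup_volume_list {
    my ($volumes) = @_;

    my %seen;
    return [ grep { !$seen{ $_->{volid} }++ } @$volumes ];
}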
On 11/19/19 10:13 AM, Fabian Ebner wrote:
When 'content_types' included both 'images' and 'rootdir', a single volume
could appear twice in the volume list. This also fixes the same kind of
duplication in 'pvesm list'.
Signed-off-by: Fabian Ebner
---
On 11/19/19 12:05 PM, Tim Marx wrote:
The bugfix for #2317 introduced a kind of odd api behavior, where each volume
was returned twice from our api if a storage has both 'rootdir' & 'images'
content
types enabled. To give the content type of the volume an actual meaning, it is
now inferred from
On 11/20/19 11:35 AM, Tim Marx wrote:
The bugfix for #2317 introduced a kind of odd api behavior, where each volume
was returned twice from our api if a storage has both 'rootdir' & 'images'
content
types enabled. To give the content type of the volume an actual meaning, it is
now inferred from
When a zpool is created the whole disks are used, so a user cannot set a size
limit in this case.
Signed-off-by: Fabian Ebner
---
proxinstall | 1 -
1 file changed, 1 deletion(-)
diff --git a/proxinstall b/proxinstall
index 5d02b34..93a61cb 100755
--- a/proxinstall
+++ b/proxinstall
@@ -2877,7
Signed-off-by: Fabian Ebner
---
proxinstall | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/proxinstall b/proxinstall
index 93a61cb..268bc91 100755
--- a/proxinstall
+++ b/proxinstall
@@ -678,7 +678,7 @@ sub read_cmap {
}
}
-# search for
On 11/21/19 12:48 PM, Thomas Lamprecht wrote:
On 11/21/19 12:35 PM, Fabian Ebner wrote:
When a zpool is created the whole disks are used, so a user cannot set a size
limit in this case.
are you sure?? AFAICR, this was added to ZFS so that one can leave some
free space to add a swap device
On 11/21/19 4:46 PM, Fabian Grünbichler wrote:
On November 4, 2019 11:23 am, Fabian Ebner wrote:
On 10/31/19 10:19 AM, Thomas Lamprecht wrote:
On 10/30/19 10:54 AM, Fabian Ebner wrote:
Doing an online migration with --targetstorage and two unused disks with the
same name on different storages
running container won't be able to re-add an unused
volume multiple times via the web GUI.
Signed-off-by: Fabian Ebner
---
src/PVE/LXC/Config.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm
index ffc5911..c2ae166 100644
--- a/src/PV
;t a
hotplug mount done straight away and hence doesn't land in the pending
section of the config? And so it can't be reverted either?
On Tue, Nov 26, 2019 at 12:51:38PM +0100, Fabian Ebner wrote:
This makes the behavior more similar to what we do for VM configs.
If we have a pending ch
The size of an unused volume is not visible to the user, and trying to resize
an unused volume runs into a 'parameter verification failed' anyway.
Signed-off-by: Fabian Ebner
---
www/manager6/lxc/Resources.js | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/www/ma
Signed-off-by: Fabian Ebner
---
src/PVE/HA/Manager.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index d1d70b8..1eb117f 100644
--- a/src/PVE/HA/Manager.pm
+++ b/src/PVE/HA/Manager.pm
@@ -607,7 +607,7 @@ sub next_state_stopped
Otherwise there is an issue when resizing a volume with pending changes:
1. Have a running container with a mount point
2. Edit the mount point and change the path
3. Resize the mount point
4. Reboot the container
Result: the old size is written to the config.
Signed-off-by: Fabian Ebner
---
An
so ZFS won't complain when we do things like 'qm resize 102 scsi1 +0.01G'
Signed-off-by: Fabian Ebner
---
PVE/Storage/ZFSPoolPlugin.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 456fb40..9bc3f
The size is required to be a multiple of volblocksize. Make sure
that the requirement is always met, so ZFS won't complain when we do
things like 'qm resize 102 scsi1 +0.01G'.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* Always align to 1M to avoid requesting vol
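The alignment itself is plain round-up arithmetic; a sketch, not the actual ZFSPoolPlugin code:

# Round a size given in KiB up to the next 1 MiB boundary. Rounding to 1 MiB
# covers any power-of-two volblocksize up to 1 MiB (the default is far smaller),
# so requests like 'qm resize 102 scsi1 +0.01G' no longer upset ZFS.
sub align_size_to_1m {
    my ($size_kib) = @_;

    my $chunk_kib = 1024; # 1 MiB in KiB
    return int(($size_kib + $chunk_kib - 1) / $chunk_kib) * $chunk_kib;
}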
The local versions of find_free_diskname retrieved the relevant disk list using
plugin-specific code and called get_next_vm_diskname. We can use list_images
instead to allow for a common interface and avoid having those similar methods.
Signed-off-by: Fabian Ebner
---
I did not test for
than the custom version, so we
keep the custom version.
Signed-off-by: Fabian Ebner
---
Changes from v1:
* Keep the custom versions in LVMPlugin and RBDPlugin
* Do not change the interface for get_next_vm_diskname
Thanks to Fabian for the suggestions!
PVE/Storage/GlusterfsPlu
Signed-off-by: Fabian Ebner
---
Only makes sense together with patch 3.
PVE/Storage.pm | 19 ++-
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index ae2ea53..3e65e06 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -103,6
to avoid a potential race between two processes trying to allocate the same volume.
Signed-off-by: Fabian Ebner
---
This is conceptually independent of patches 2+3 (but patch 3 modifies the same
hunk as this one).
PVE/Storage.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff
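The race-avoidance idea, shown with a plain flock for illustration only (the actual patch reuses the storage-level locking that vdisk_alloc and vdisk_clone already use):

use Fcntl qw(LOCK_EX);

# Illustration: hold one per-storage lock around the free-name search and the
# import, so two concurrent imports cannot pick the same volume name.
open(my $lock_fh, '>>', "/var/lock/pve-storage-$storeid.lock")
    or die "cannot open lock file: $!";
flock($lock_fh, LOCK_EX) or die "cannot lock: $!";
# ... find a free volume name and perform the import while holding the lock ...
close($lock_fh);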
Following the rationale in afdfbe5594be5a0a61943de10cc5671ac53cbf79, mask
these bits for 'clone_image' and 'volume_import'. Also mask in 'chmod' for
new base images for consistency.
Signed-off-by: Fabian Ebner
---
This would make the permissions more consistent, bu
utput leaves through a pipe. Upon importing, a second error was present since
the volid didn't match the format.
Signed-off-by: Fabian Ebner
---
Here are the error messages:
2020-01-08 10:34:47 found local disk 'myzfsdir:111/vm-111-disk-0.vmdk' (via
storage)
2020-01-08 10:34:47
since it is not just the name but a hash containing
information about the volume
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0353458..1de1540 100644
--- a/PVE/QemuMigrate.pm
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index a1e2dea..96ad3f4 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -319,6 +319,7 @@ sub sync_disks
AFAICT this one hasn't been in use since commit
'4530494bf9f3d45c4a405c53ef3688e641f6bd8e'
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 5 -
1 file changed, 5 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 0353458..49848e8 100644
--- a/PVE
;
if the VM isn't running
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 4
1 file changed, 4 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2b68d81..2c92c3b 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4668,6 +4668,10 @@ sub qemu_block_resi
On 1/9/20 5:59 PM, Thomas Lamprecht wrote:
On 1/9/20 11:20 AM, Fabian Ebner wrote:
Doing 'qm resize 111 scsi0 +0.2G' where scsi0 is a qcow2 disk
produced the following errors:
"VM 111 qmp command 'block_resize' failed - The new size must be a multiple of
512"
if t
On 1/13/20 10:49 AM, Fabian Ebner wrote:
On 1/9/20 5:59 PM, Thomas Lamprecht wrote:
On 1/9/20 11:20 AM, Fabian Ebner wrote:
Doing 'qm resize 111 scsi0 +0.2G' where scsi0 is a qcow2 disk
produced the following errors:
"VM 111 qmp command 'block_resize' failed - The new
Because of alignment and rounding in the storage backend, the effective
size might not match the 'newsize' parameter we passed along.
Signed-off-by: Fabian Ebner
---
Turns out that this happens in basically every storage backend that has
'volume_resize': LVM and RBD round d
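In practice that means reading the size back after the resize instead of trusting the requested value; a sketch assuming the existing volume_size_info helper:

use PVE::Storage;

# Query the effective size from the storage layer after volume_resize, since
# backends round up (LVM to extent size, ZFS to volblocksize, ...), and store
# that instead of the requested $newsize.
my ($effective_size) = PVE::Storage::volume_size_info($storecfg, $volid, 5);
$drive->{size} = $effective_size;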
For qcow2 this is required, and for raw the qmp command aligns to 512
implicitly anyway.
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 5 +
1 file changed, 5 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2b68d81..922f9b0 100644
--- a/PVE/QemuServer.pm
+++ b
since for qcow2, qemu-img expects a multiple of 512 and
for raw it aligns to 512 with a warning, which we avoid
Signed-off-by: Fabian Ebner
---
PVE/Storage/Plugin.pm | 5 +
1 file changed, 5 insertions(+)
diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm
index 0c39cbd..7382140
Could I get some feedback on this? The same locking is done for
'vdisk_alloc' and 'vdisk_clone' already (among others), so I thought it
makes sense for 'volume_import' as well.
On 12/12/19 11:17 AM, Fabian Ebner wrote:
to avoid a potential race for two process
Signed-off-by: Fabian Ebner
---
local-zfs.adoc | 27 ++-
1 file changed, 26 insertions(+), 1 deletion(-)
diff --git a/local-zfs.adoc b/local-zfs.adoc
index 15a88bb..69979b5 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -180,7 +180,7 @@ underlying disk.
zpool
Signed-off-by: Fabian Ebner
---
local-zfs.adoc | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/local-zfs.adoc b/local-zfs.adoc
index bb03506..71a4a4c 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -190,7 +190,7 @@ To activate compression (see section
Signed-off-by: Fabian Ebner
---
local-zfs.adoc | 84 --
1 file changed, 61 insertions(+), 23 deletions(-)
diff --git a/local-zfs.adoc b/local-zfs.adoc
index 7043a24..bb03506 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -178,41 +178,55 @@ To
Signed-off-by: Fabian Ebner
---
Thanks to Thomas and Aaron for their suggestions.
Changes from v1:
* Moved paragraph to serve as introduction
* ZFS might not compress blocks that aren't compressible
enough (7/8 of original size is the threshold)
so I used "tries t
As 'qemu_img_format' just matches a regex, this doesn't make much of
a difference, but AFAICT all other calls of 'qemu_img_format' use 'volname'.
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
d
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 9ef3b71..59335c5 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5376,13 +5376,16 @@ sub vm_start
On 1/13/20 11:47 AM, Fabian Ebner wrote:
Because of alignment and rounding in the storage backend, the effective
size might not match the 'newsize' parameter we passed along.
Signed-off-by: Fabian Ebner
---
Turns out that this happens in basically every storage backend that has
Signed-off-by: Fabian Ebner
---
pve-zsync | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/pve-zsync b/pve-zsync
index ea3178e..04c5c5a 100755
--- a/pve-zsync
+++ b/pve-zsync
@@ -53,6 +53,8 @@ my $HOSTRE = "(?:$HOSTv4RE1|\\[$IPV6RE\\])"; # ipv6
must al
Signed-off-by: Fabian Ebner
---
pve-zsync | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pve-zsync b/pve-zsync
index 04c5c5a..25add92 100755
--- a/pve-zsync
+++ b/pve-zsync
@@ -53,7 +53,7 @@ my $HOSTRE = "(?:$HOSTv4RE1|\\[$IPV6RE\\])"; # ipv6
must always
Since 'pvesm import' uses a new volume ID if the requested one is already
present, callers should have a way to get the new volume ID.
Signed-off-by: Fabian Ebner
---
PVE/CLI/pvesm.pm | 2 +-
PVE/Storage.pm | 41 +
2 files changed, 34 insert
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index d025b09..81b52d1 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -686,8 +686,10 @@ sub phase2 {
foreach
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 5fefa06..2b292f6 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -626,11 +626,21 @@ sub storage_migrate
es and telling
me what changes would be needed or if the same approach
as I took for QEMU would work for LXC?
guest-common:
Fabian Ebner (1):
Implement update_volume_ids and add required helpers: foreach_volume
and print_volume
storage:
Fabian Ebner (8):
Remove unused string
volume_im
This function is intended to be used after doing a migration where some
of the volume IDs changed.
Signed-off-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 61 +++
1 file changed, 61 insertions(+)
diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
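A rough sketch of the idea only (the real helper in PVE::AbstractConfig builds on the new foreach_volume/print_volume helpers): given a map from old to new volume IDs, rewrite every matching volume reference in the config.

sub update_volume_ids_sketch {
    my ($conf, $volid_map) = @_;

    foreach my $opt (keys %$conf) {
        my $value = $conf->{$opt};
        next if ref($value); # only plain option strings can hold volume IDs
        foreach my $old (keys %$volid_map) {
            # a drive/mount point string starts with the volid, options follow after ','
            $value =~ s/^\Q$old\E(,|$)/$volid_map->{$old}$1/;
        }
        $conf->{$opt} = $value;
    }
}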
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm | 1 -
1 file changed, 1 deletion(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index 0bd103e..5fefa06 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -573,7 +573,6 @@ sub storage_migrate {
my $target_volid = "${target_st
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 10 +-
PVE/QemuServer.pm | 4 ++--
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 49848e8..d025b09 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -491,7
the volume ID might
look different. E.g. 'mydir:102/vm-102-disk-0.raw' on a 'dir'
storage would be 'mylvm:vm-102-disk-0' on an 'lvm' storage.
Signed-off-by: Fabian Ebner
---
An alternative approach would be to translate the volids as mentioned
in the
This way it is possible to determine whether a volume can be transferred
without already having the name of the volume on the target storage. When
doing the import, 'volume_import' can then choose a new name automatically.
Signed-off-by: Fabian Ebner
---
For example, migration w
Signed-off-by: Fabian Ebner
---
Might make sense to combine this patch and patch 16.
PVE/QemuMigrate.pm | 22 +++---
1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 81b52d1..702fda0 100644
--- a/PVE/QemuMigrate.pm
an LVM storage.
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index d708c03..6aea2ef 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -561,7 +561,6 @@ sub storage_migrate {
m
Signed-off-by: Fabian Ebner
---
Not sure about this one. On the one hand it adds even more to the
migration logs, which are already rather long. On the other hand it
might contain useful information.
PVE/QemuMigrate.pm | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/PVE
Signed-off-by: Fabian Ebner
---
PVE/API2/Qemu.pm | 3 ---
1 file changed, 3 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 89e2477..f21fb69 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -3379,9 +3379,6 @@ __PACKAGE__->register_method({
$param->{
The volid contains the format and that's relevant information
for why migration is not possible.
For example, a raw volume can be migrated between an LVM storage
and a filesystem based storage, but a qcow2 volume cannot.
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm | 5 -
1 file ch
Signed-off-by: Fabian Ebner
---
PVE/QemuConfig.pm | 12
1 file changed, 12 insertions(+)
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 1ba728a..a983e52 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -130,6 +130,18 @@ sub get_replicatable_volumes {
return
Use 'update_volume_ids' for the live-migrated disks as well.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 23 +--
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index af1cf01..6a0f034 100644
The ID of the new volume is returned and pvesm import prints it. This is
useful for migration, since the storage on the target might already contain
unused/orphaned disks.
Signed-off-by: Fabian Ebner
---
Breaks the current migration in QEMU/LXC if there is a collision,
since the code doesn
The description for vm_config was out of date, and from the description
for vm_pending it was hard to tell how it differs from vm_config.
Signed-off-by: Fabian Ebner
---
PVE/API2/Qemu.pm | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2
On 2/5/20 10:29 AM, Fabian Grünbichler wrote:
On January 29, 2020 2:29 pm, Fabian Ebner wrote:
This function is intended to be used after doing a migration where some
of the volume IDs changed.
Signed-off-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 61
On 2/5/20 10:38 AM, Fabian Grünbichler wrote:
On January 29, 2020 2:29 pm, Fabian Ebner wrote:
This function is intended to be used after doing a migration where some
of the volume IDs changed.
forgot to ask this - this is in AbstractConfig because you intend to
also re-use this for a similar
On 2/5/20 11:50 AM, Fabian Grünbichler wrote:
On January 29, 2020 2:30 pm, Fabian Ebner wrote:
Extends the API so that 'volume' can also be just a storage identifier. In
that case the VMID needs to be specified as well. In 'import_volume' a new
name for the allocation is
Signed-off-by: Fabian Ebner
---
www/manager6/window/Snapshot.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/window/Snapshot.js b/www/manager6/window/Snapshot.js
index 3b1070a2..32a66fda 100644
--- a/www/manager6/window/Snapshot.js
+++ b/www/manager6/window
Signed-off-by: Fabian Ebner
---
www/manager6/window/Snapshot.js | 24
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/www/manager6/window/Snapshot.js b/www/manager6/window/Snapshot.js
index 32a66fda..e4355106 100644
--- a/www/manager6/window/Snapshot.js
Signed-off-by: Fabian Ebner
---
www/manager6/tree/SnapshotTree.js | 3 +++
www/manager6/window/Snapshot.js | 4 ++--
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/www/manager6/tree/SnapshotTree.js
b/www/manager6/tree/SnapshotTree.js
index 0636ef68..7b5ac3ed 100644
--- a/www
There is no need to display 'Include RAM' when the VM is not running.
I thought it would make sense to warn users when they take a snapshot
where a file system freeze would be needed but isn't possible.
Thanks to Oguz and Stefan for some JavaScript consulting.
Fabian Ebner (
snapshot.
Signed-off-by: Fabian Ebner
---
www/manager6/window/Snapshot.js | 42 +
1 file changed, 42 insertions(+)
diff --git a/www/manager6/window/Snapshot.js b/www/manager6/window/Snapshot.js
index 1a08637f..88f7248e 100644
--- a/www/manager6/window/Snapshot.js
+++ b
Avoid some problems with 'qemu-img resize', which expects
that the size is a multiple of 512 bytes for qcow2 images.
Since vdisk_alloc already uses KiB, this also improves
consistency a little.
The tests for ZFS are also adapted to the new interface.
Signed-off-by: Fabian Ebner
--
Signed-off-by: Fabian Ebner
---
src/PVE/API2/LXC.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 6652e2e..ebe8f18 100644
--- a/src/PVE/API2/LXC.pm
+++ b/src/PVE/API2/LXC.pm
@@ -1671,13 +1671,15 @@ __PACKAGE__
Signed-off-by: Fabian Ebner
---
PVE/Storage/ZFSPoolPlugin.pm | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index dbe0465..b29fba5 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage
Also gets rid of an error with qmp block_resize, which expects
that the size is a multiple of 512 bytes for qcow2 volumes.
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 23176dd
-mode-fails.61927/#post-284123
Signed-off-by: Fabian Ebner
---
PVE/Storage/ZFSPoolPlugin.pm | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index d72ee16..b538e3b 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++
hreads/lxc-backup-fails-unable-to-open-the-dataset-vzdump.64944/
Signed-off-by: Fabian Ebner
---
Hopefully there is nothing that relies on the old behavior
with $snapname. Or was it intended to be able to reach
the 'zfs set acl' call with $snapname set?
src/PVE/LXC.pm | 10 -
On 2/18/20 4:09 PM, Dominik Csapak wrote:
one comment inline
On 2/17/20 12:41 PM, Fabian Ebner wrote:
Also gets rid of an error with qmp block_resize, which expects
that the size is a multiple of 512 bytes for qcow2 volumes.
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 4 +++-
1
1. Avoids the error
"VM 111 qmp command 'block_resize' failed - The new size must be a multiple of
512"
for qcow2 disks.
2. Because volume_import expects disk sizes to be a multiple of 1 KiB.
Signed-off-by: Fabian Ebner
---
Changes from v3:
* No ABI change anymore
1. Avoids the error
qemu-img: The new size must be a multiple of 512
for qcow2 disks.
2. Because volume_import expects disk sizes to be a multiple of 1 KiB.
Signed-off-by: Fabian Ebner
---
PVE/Storage.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
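The rounding involved is simple; a sketch, not the actual Storage.pm hunk:

# Round a byte size up to a full KiB before allocation; this also satisfies
# the multiple-of-512 requirement that qemu-img imposes for qcow2.
my $size_kib = int(($size_bytes + 1023) / 1024);
my $aligned_bytes = $size_kib * 1024;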
On 2/18/20 5:55 PM, Aaron Lauterer wrote:
in some situations it is possible that a disk does not have a
/dev/disk/by-id path, mainly AFAICT inside VMs with virtio disks.
Commit e1b490865f750e08f6c9c6b7e162e7def9dcc411 forgot to handle this
situation, which resulted in a failed installation.
Sig
This function is intended to be used after doing a migration where some
of the volume IDs changed.
Signed-off-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 29 +
1 file changed, 29 insertions(+)
diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index 9ce3d12
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a4d2e4e..751a075 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3523,7 +3523,7 @@ sub config_to_command {
my $path
Signed-off-by: Fabian Ebner
---
PVE/QemuServer.pm | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 751a075..1580514 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1655,9 +1655,8 @@ sub parse_drive {
sub
Signed-off-by: Fabian Ebner
---
Not related to this series, but: this should also make
it easier to preserve meta-information (e.g. size, discard)
for unused disks without re-implementing add_unused_volume
in the modules.
PVE/AbstractConfig.pm | 23 +++
1 file changed, 23
Signed-off-by: Fabian Ebner
---
PVE/QemuServer/Drive.pm | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 35ac99d..e922392 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -378,16 +378,22
Signed-off-by: Fabian Ebner
---
PVE/QemuConfig.pm | 24
1 file changed, 24 insertions(+)
diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm
index 1ba728a..b0dc3b9 100644
--- a/PVE/QemuConfig.pm
+++ b/PVE/QemuConfig.pm
@@ -8,6 +8,7 @@ use PVE::INotify;
use PVE
Signed-off-by: Fabian Ebner
---
PVE/AbstractConfig.pm | 13 +
1 file changed, 13 insertions(+)
diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm
index bd43cbe..5c449f6 100644
--- a/PVE/AbstractConfig.pm
+++ b/PVE/AbstractConfig.pm
@@ -508,6 +508,19 @@ sub