[pve-devel] [PATCH v2 storage] fix #2085: Handle non-default mount point in path() by introducing new mountpoint property

2019-11-07 Thread Fabian Ebner
A new mountpoint property is added to the schema for ZFSPool storages. When needed for the first time, the current mount point is determined and written to the storage config. Signed-off-by: Fabian Ebner --- Changes from v1: * expanded eval around the zfs_request * check if the returned

Re: [pve-devel] [PATCH storage] fix #2085: Handle non-default mount point in path() by introducing new mountpoint property

2019-11-07 Thread Fabian Ebner
On 11/7/19 9:34 AM, Fabian Grünbichler wrote: On November 6, 2019 1:46 pm, Fabian Ebner wrote: A new mountpoint property is added to the schema for ZFSPool storages. When needed for the first time, the current mount point is determined and written to the storage config. Signed-off-by: Fabian

Re: [pve-devel] [PATCH v2 docs] Add section for ZFS Special Device

2019-11-07 Thread Fabian Ebner
Thanks for the suggestions, I'll do a v3. On 11/6/19 8:40 PM, Thomas Lamprecht wrote: On 11/6/19 10:46 AM, Fabian Ebner wrote: Signed-off-by: Fabian Ebner --- Changes from v1: * Reworded the part that describes when a special device is useful * Moved that part to the top, so p

[pve-devel] [PATCH v3 docs] Add section for ZFS Special Device

2019-11-07 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Changes from v2: * Better example of when a special device is useful * Don't mention special_small_blocks property in the first section, so it is explained right when we use it for the first time * Explain possible values for size right

Re: [pve-devel] [PATCH storage] fix #2085: Handle non-default mount point in path() by introducing new mountpoint property

2019-11-07 Thread Fabian Ebner
On 11/7/19 9:34 AM, Fabian Grünbichler wrote: On November 6, 2019 1:46 pm, Fabian Ebner wrote: A new mountpoint property is added to the schema for ZFSPool storages. When needed for the first time, the current mount point is determined and written to the storage config. Signed-off-by: Fabian

Re: [pve-devel] [PATCH v4 ha-manager 0/5] Implement a stop command for HA

2019-11-11 Thread Fabian Ebner
On 11/11/19 4:58 PM, Thomas Lamprecht wrote: On 10/10/19 12:25 PM, Fabian Ebner wrote: This patch series introduces a new 'stop' command for ha-manager. The command takes a timeout parameter and in case it is 0, it performs a hard stop. The series also includes a test for the new

[pve-devel] [PATCH container] Use crm-command stop to allow shutdown with timeout and hard stop for HA

2019-11-12 Thread Fabian Ebner
The minimum value for timeout in vm_shutdown is changed from 0 to 1, since a value of 0 would trigger a hard stop for HA-managed containers. This way the API description stays valid for all cases. Signed-off-by: Fabian Ebner --- src/PVE/API2/LXC/Status.pm | 6 +++--- 1 file changed, 3

[pve-devel] [PATCH qemu-server] Use crm-command stop to allow shutdown with timeout and hard stop for HA

2019-11-12 Thread Fabian Ebner
The minimum value for timeout in vm_shutdown is changed from 0 to 1, since a value of 0 would trigger a hard stop for HA-managed VMs. This way the API description stays valid for all cases. Signed-off-by: Fabian Ebner --- In vm_shutdown we'd like to pass along the timeout parameter to t

Re: [pve-devel] [PATCH storage] fix #2085: Handle non-default mount point in path() by introducing new mountpoint property

2019-11-12 Thread Fabian Ebner
On 11/7/19 12:59 PM, Fabian Grünbichler wrote: On November 7, 2019 12:52 pm, Fabian Ebner wrote: On 11/7/19 9:34 AM, Fabian Grünbichler wrote: On November 6, 2019 1:46 pm, Fabian Ebner wrote: A new mountpoint property is added to the schema for ZFSPool storages. When needed for the first time

Re: [pve-devel] [PATCH qemu-server] Use crm-command stop to allow shutdown with timeout and hard stop for HA

2019-11-13 Thread Fabian Ebner
On 11/13/19 9:55 AM, Thomas Lamprecht wrote: On 11/12/19 11:03 AM, Fabian Ebner wrote: The minimum value for timeout in vm_shutdown is changed from 0 to 1, since a value of 0 would trigger a hard stop for HA-managed VMs. This way the API description stays valid for all cases. Signed-off-by

[pve-devel] [PATCH storage 2/2] fix #2085: Handle non-default mount point in path() using storage property 'path' for mount point

2019-11-14 Thread Fabian Ebner
it can be determined. path() does not assume the default mountpoint anymore, fixing 2085. Signed-off-by: Fabian Ebner --- Changes from previous versions: * do the handling in the on_add_hook instead of path() * change the property name from mountpoint to path * modified the pool used b

[pve-devel] [PATCH storage 1/2] Introduce zfs_get_properties helper

2019-11-14 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/Storage/ZFSPoolPlugin.pm | 18 -- 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm index 16fb0d6..b8adf1c 100644 --- a/PVE/Storage/ZFSPoolPlugin.pm +++ b/PVE/Storage

[pve-devel] [PATCH docs] Add description for path property

2019-11-14 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- pve-storage-zfspool.adoc | 6 ++ 1 file changed, 6 insertions(+) diff --git a/pve-storage-zfspool.adoc b/pve-storage-zfspool.adoc index f53a598..0f213b0 100644 --- a/pve-storage-zfspool.adoc +++ b/pve-storage-zfspool.adoc @@ -32,6 +32,12 @@ sparse:: Use ZFS

[pve-devel] [PATCH v2 qemu-server] Use crm-command stop to allow shutdown with timeout and hard stop for HA

2019-11-14 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Changes from v1: * don't change the API PVE/API2/Qemu.pm | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm index c31dd1d..8e162aa 100644 --- a/PVE/API2/Qemu.pm +++ b/PVE/API2/Qemu.pm @@ -2

[pve-devel] [PATCH v2 container] Use crm-command stop to allow shutdown with timeout and hard stop for HA

2019-11-14 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Changes from v1: * don't change the API src/PVE/API2/LXC/Status.pm | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/PVE/API2/LXC/Status.pm b/src/PVE/API2/LXC/Status.pm index 1b7a71d..166c731 100644 --- a/src/PVE/API2/LXC/Stat

[pve-devel] [PATCH v4 storage] fix #2085: Handle non-default mount point in path() by introducing new mountpoint property

2019-11-18 Thread Fabian Ebner
When adding a zfspool storage with 'pvesm add' the mount point is now added automatically to the storage configuration if it can be determined. path() does not assume the default mountpoint anymore, fixing 2085. Signed-off-by: Fabian Ebner --- Changes from v3: * create a new

[pve-devel] [PATCH v4 docs] Add description for mountpoint property

2019-11-18 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Changes from v3: * 'path' renamed to 'mountpoint' pve-storage-zfspool.adoc | 6 ++ 1 file changed, 6 insertions(+) diff --git a/pve-storage-zfspool.adoc b/pve-storage-zfspool.adoc index f53a598..366a1f3 100644 --- a/pve-storage-zf

[pve-devel] [PATCH storage] Do not include a volume more than once in list_volumes

2019-11-19 Thread Fabian Ebner
When 'content_types' included both 'images' and 'rootdir', a single volume could appear twice in the volume list. This also fixes the same kind of duplication in 'pvesm list'. Signed-off-by: Fabian Ebner --- PVE/Storage/Plugin.pm | 8 ++-- 1 fi
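The de-duplication described above can be sketched as a single pass over the combined volume list, keeping only the first occurrence of each volume ID. This is an illustrative Python sketch; the dict layout and function name are assumptions, not the actual PVE::Storage::Plugin data structures:

```python
def dedup_volumes(volumes):
    """Keep only the first occurrence of each volume ID.

    `volumes` is a list of dicts with at least a 'volid' key; a volume
    listed for both the 'images' and 'rootdir' content types would
    otherwise appear twice in the result.
    """
    seen = set()
    result = []
    for vol in volumes:
        if vol['volid'] in seen:
            continue  # already listed under another content type
        seen.add(vol['volid'])
        result.append(vol)
    return result
```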

Re: [pve-devel] [PATCH storage] Do not include a volume more than once in list_volumes

2019-11-19 Thread Fabian Ebner
On 11/19/19 10:13 AM, Fabian Ebner wrote: When 'content_types' included both 'images' and 'rootdir', a single volume could appear twice in the volume list. This also fixes the same kind of duplication in 'pvesm list'. Signed-off-by: Fabian Ebner ---

Re: [pve-devel] [PATCH storage] fix #2467 remove duplicate volumes & tag with correct content type

2019-11-20 Thread Fabian Ebner
On 11/19/19 12:05 PM, Tim Marx wrote: The bugfix for #2317 introduced a kind of odd api behavior, where each volume was returned twice from our api if a storage has both 'rootdir' & 'images' content types enabled. To give the content type of the volume an actual meaning, it is now inferred from

Re: [pve-devel] [PATCH v2 storage 1/3] fix #2467 remove duplicate volumes & tag with correct content type

2019-11-20 Thread Fabian Ebner
On 11/20/19 11:35 AM, Tim Marx wrote: The bugfix for #2317 introduced a kind of odd api behavior, where each volume was returned twice from our api if a storage has both 'rootdir' & 'images' content types enabled. To give the content type of the volume an actual meaning, it is now inferred from

[pve-devel] [PATCH installer 1/2] Remove unused hdsize from zfs advanced options

2019-11-21 Thread Fabian Ebner
When a zpool is created the whole disks are used, so a user cannot set a size limit in this case. Signed-off-by: Fabian Ebner --- proxinstall | 1 - 1 file changed, 1 deletion(-) diff --git a/proxinstall b/proxinstall index 5d02b34..93a61cb 100755 --- a/proxinstall +++ b/proxinstall @@ -2877,7

[pve-devel] [PATCH installer 2/2] Fix typos

2019-11-21 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- proxinstall | 20 ++-- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/proxinstall b/proxinstall index 93a61cb..268bc91 100755 --- a/proxinstall +++ b/proxinstall @@ -678,7 +678,7 @@ sub read_cmap { } } -# search for

Re: [pve-devel] [PATCH installer 1/2] Remove unused hdsize from zfs advanced options

2019-11-21 Thread Fabian Ebner
On 11/21/19 12:48 PM, Thomas Lamprecht wrote: On 11/21/19 12:35 PM, Fabian Ebner wrote: When a zpool is created the whole disks are used, so a user cannot set a size limit in this case. are you sure?? AFAICR, this was added to ZFS so that one can leave some free space to add a swap device

Re: [pve-devel] [PATCH v2 qemu-server 2/3] Avoid collisions of unused disks when doing online migration with --targetstorage

2019-11-25 Thread Fabian Ebner
On 11/21/19 4:46 PM, Fabian Grünbichler wrote: On November 4, 2019 11:23 am, Fabian Ebner wrote: On 10/31/19 10:19 AM, Thomas Lamprecht wrote: On 10/30/19 10:54 AM, Fabian Ebner wrote: Doing an online migration with --targetstorage and two unused disks with the same name on different storages

[pve-devel] [PATCH container] Remove an unused volume from the config if it is pending to be re-added

2019-11-26 Thread Fabian Ebner
running container won't be able to re-add an unused volume multiple times via the web GUI. Signed-off-by: Fabian Ebner --- src/PVE/LXC/Config.pm | 11 +++ 1 file changed, 11 insertions(+) diff --git a/src/PVE/LXC/Config.pm b/src/PVE/LXC/Config.pm index ffc5911..c2ae166 100644 --- a/src/PV

Re: [pve-devel] [PATCH container] Remove an unused volume from the config if it is pending to be re-added

2019-11-27 Thread Fabian Ebner
Isn't a hotplug mount done straight away and hence doesn't land in the pending section of the config? And so it can't be reverted either? On Tue, Nov 26, 2019 at 12:51:38PM +0100, Fabian Ebner wrote: This makes the behavior more similar to what we do for VM configs. If we have a pending ch

[pve-devel] [PATCH manager] LXC: Disable resize button when volume is unused

2019-11-27 Thread Fabian Ebner
The size of an unused volume is not visible to the user, and trying to resize an unused volume fails with 'parameter verification failed' anyway. Signed-off-by: Fabian Ebner --- www/manager6/lxc/Resources.js | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/www/ma

[pve-devel] [PATCH ha-manager] Fix check for maintenance mode

2019-12-02 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- src/PVE/HA/Manager.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm index d1d70b8..1eb117f 100644 --- a/src/PVE/HA/Manager.pm +++ b/src/PVE/HA/Manager.pm @@ -607,7 +607,7 @@ sub next_state_stopped

[pve-devel] [PATCH container] Always determine the size of the volume in volume_rescan

2019-12-03 Thread Fabian Ebner
Otherwise there is an issue when resizing a volume with pending changes: 1. Have a running container with a mount point 2. Edit the mount point and change the path 3. Resize the mount point 4. Reboot the container Result: the old size is written to the config. Signed-off-by: Fabian Ebner --- An

[pve-devel] [PATCH storage] Automatically round up to the next valid size when resizing a ZFS volume

2019-12-05 Thread Fabian Ebner
so ZFS won't complain when we do things like 'qm resize 102 scsi1 +0.01G' Signed-off-by: Fabian Ebner --- PVE/Storage/ZFSPoolPlugin.pm | 11 +++ 1 file changed, 11 insertions(+) diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm index 456fb40..9bc3f

[pve-devel] [PATCH v2 storage] When resizing a ZFS volume, align size to 1M

2019-12-08 Thread Fabian Ebner
The size is required to be a multiple of volblocksize. Make sure that the requirement is always met, so ZFS won't complain when we do things like 'qm resize 102 scsi1 +0.01G'. Signed-off-by: Fabian Ebner --- Changes from v1: * Always align to 1M to avoid requesting vol
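The rounding described above can be sketched as a round-up to the next 1 MiB boundary: since valid volblocksize values are powers of two no larger than 1 MiB, a size aligned to 1 MiB is automatically a multiple of whatever volblocksize the volume uses. A minimal sketch (not the Perl implementation in ZFSPoolPlugin.pm):

```python
def align_size_up(size_bytes, alignment=1024 * 1024):
    """Round size_bytes up to the next multiple of `alignment`
    (1 MiB by default), using integer ceiling division."""
    return ((size_bytes + alignment - 1) // alignment) * alignment
```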

[pve-devel] [PATCH storage] Use a common find_free_diskname in all plugins

2019-12-09 Thread Fabian Ebner
The local versions of find_free_diskname retrieved the relevant disk list using plugin-specific code and called get_next_vm_diskname. We can use list_images instead to allow for a common interface and avoid having those similar methods. Signed-off-by: Fabian Ebner --- I did not test for

[pve-devel] [PATCH v2 storage] Use a common interface for find_free_diskname

2019-12-11 Thread Fabian Ebner
than the custom version, so we keep the custom version. Signed-off-by: Fabian Ebner --- Changes from v1: * Keep the custom versions in LVMPlugin and RBDPlugin * Do not change the interface for get_next_vm_diskname Thanks to Fabian for the suggestions! PVE/Storage/GlusterfsPlu
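The core idea behind a common find_free_diskname can be sketched as follows: collect the disk indices already in use from the existing volume names, then pick the lowest free one. The signature and naming scheme here are illustrative assumptions, not the actual plugin interface:

```python
import re

def find_free_diskname(existing_names, vmid, fmt='raw', add_fmt_suffix=False):
    """Return the first unused vm-<vmid>-disk-<n> name.

    `existing_names` is the list of volume names already present on
    the storage, e.g. as returned by a list_images-style helper.
    """
    used = set()
    pattern = re.compile(rf'vm-{vmid}-disk-(\d+)')
    for name in existing_names:
        m = pattern.search(name)
        if m:
            used.add(int(m.group(1)))
    n = 0
    while n in used:  # take the lowest free index
        n += 1
    suffix = f'.{fmt}' if add_fmt_suffix else ''
    return f'vm-{vmid}-disk-{n}{suffix}'
```

Directory-based storages would append a format suffix, while block-based ones (ZFS, LVM) would not — hence the flag.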

[pve-devel] [PATCH storage 2/3] Create run_with_umask helper

2019-12-12 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Only makes sense together with patch 3. PVE/Storage.pm | 19 ++- 1 file changed, 14 insertions(+), 5 deletions(-) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index ae2ea53..3e65e06 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -103,6

[pve-devel] [PATCH storage 1/3] Lock storage when calling volume_import

2019-12-12 Thread Fabian Ebner
to avoid a potential race for two processes trying to allocate the same volume. Signed-off-by: Fabian Ebner --- This is conceptually independent from patches 2+3 (but patch 3 modifies the same hunk as this one). PVE/Storage.pm | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff

[pve-devel] [PATCH storage 3/3] Mask world rwx and group wx for newly allocated images and when converting to base image

2019-12-12 Thread Fabian Ebner
Following the rationale in afdfbe5594be5a0a61943de10cc5671ac53cbf79, mask these bits for 'clone_image' and 'volume_import'. Also mask in 'chmod' for new base images for consistency. Signed-off-by: Fabian Ebner --- This would make the permissions more consistent, bu
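The run_with_umask helper from patch 2 can be sketched in Python as a save/restore wrapper around the process umask; the real helper lives in PVE/Storage.pm and this signature is an illustrative assumption:

```python
import os

def run_with_umask(umask, func, *args):
    """Run func(*args) with the given umask active, restoring the
    previous umask afterwards, even if func raises."""
    old = os.umask(umask)
    try:
        return func(*args)
    finally:
        os.umask(old)
```

Masking 0o037 (group wx plus world rwx) while allocating an image then yields at most 0o640 permissions on the new file, matching the consistency goal described above.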

[pve-devel] [PATCH qemu-server 2/3] Always set 'snapshots' for qcow2 and vmdk volumes

2020-01-08 Thread Fabian Ebner
output leaves through a pipe. Upon importing, a second error was present, since the volid didn't match the format. Signed-off-by: Fabian Ebner --- Here are the error messages: 2020-01-08 10:34:47 found local disk 'myzfsdir:111/vm-111-disk-0.vmdk' (via storage) 2020-01-08 10:34:47

[pve-devel] [PATCH qemu-server 1/3] Rename parameter 'volname' to 'volinfo'

2020-01-08 Thread Fabian Ebner
since it is not just the name but a hash containing information about the volume Signed-off-by: Fabian Ebner --- PVE/QemuMigrate.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index 0353458..1de1540 100644 --- a/PVE/QemuMigrate.pm

[pve-devel] [PATCH qemu-server 3/3] Consistently use format determined in 'PVE::Storage::foreach_volid'

2020-01-08 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/QemuMigrate.pm | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index a1e2dea..96ad3f4 100644 --- a/PVE/QemuMigrate.pm +++ b/PVE/QemuMigrate.pm @@ -319,6 +319,7 @@ sub sync_disks

[pve-devel] [PATCH qemu-server] Remove unused 'sharedvm' variable

2020-01-09 Thread Fabian Ebner
AFAICT this one hasn't been in use since commit '4530494bf9f3d45c4a405c53ef3688e641f6bd8e' Signed-off-by: Fabian Ebner --- PVE/QemuMigrate.pm | 5 - 1 file changed, 5 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index 0353458..49848e8 100644 --- a/PVE

[pve-devel] [PATCH qemu-server] qemu_block_resize: align size to 512

2020-01-09 Thread Fabian Ebner
if the VM isn't running Signed-off-by: Fabian Ebner --- PVE/QemuServer.pm | 4 1 file changed, 4 insertions(+) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 2b68d81..2c92c3b 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -4668,6 +4668,10 @@ sub qemu_block_resi

Re: [pve-devel] [PATCH qemu-server] qemu_block_resize: align size to 512

2020-01-13 Thread Fabian Ebner
On 1/9/20 5:59 PM, Thomas Lamprecht wrote: On 1/9/20 11:20 AM, Fabian Ebner wrote: Doing 'qm resize 111 scsi0 +0.2G' where scsi0 is a qcow2 disk produced the following errors: "VM 111 qmp command 'block_resize' failed - The new size must be a multiple of 512" if t

Re: [pve-devel] [PATCH qemu-server] qemu_block_resize: align size to 512

2020-01-13 Thread Fabian Ebner
On 1/13/20 10:49 AM, Fabian Ebner wrote: On 1/9/20 5:59 PM, Thomas Lamprecht wrote: On 1/9/20 11:20 AM, Fabian Ebner wrote: Doing 'qm resize 111 scsi0 +0.2G' where scsi0 is a qcow2 disk produced the following errors: "VM 111 qmp command 'block_resize' failed - The new

[pve-devel] [PATCH v2 qemu-server 1/2] resize_vm: request new size from storage after resizing

2020-01-13 Thread Fabian Ebner
Because of alignment and rounding in the storage backend, the effective size might not match the 'newsize' parameter we passed along. Signed-off-by: Fabian Ebner --- Turns out that this happens in basically every storage backend that has 'volume_resize': LVM and RBD round d

[pve-devel] [PATCH v2 qemu-server 2/2] qemu_block_resize: align size to 512 before issuing 'block_resize' qmp command

2020-01-13 Thread Fabian Ebner
For qcow2, this is required and for raw, the qmp command aligns to 512 implicitly anyways Signed-off-by: Fabian Ebner --- PVE/QemuServer.pm | 5 + 1 file changed, 5 insertions(+) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 2b68d81..922f9b0 100644 --- a/PVE/QemuServer.pm +++ b

[pve-devel] [PATCH v2 storage] Align size to 512 before calling 'qemu-img resize'

2020-01-13 Thread Fabian Ebner
since for qcow2, qemu-img expects a multiple of 512 and for raw it aligns to 512 with a warning, which we avoid Signed-off-by: Fabian Ebner --- PVE/Storage/Plugin.pm | 5 + 1 file changed, 5 insertions(+) diff --git a/PVE/Storage/Plugin.pm b/PVE/Storage/Plugin.pm index 0c39cbd..7382140
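The 512-byte alignment done before calling 'qemu-img resize' can be sketched with a standard round-up bit mask; this is an illustrative sketch, not the Perl code in Plugin.pm:

```python
def align_to_512(size_bytes):
    """Round size_bytes up to the next multiple of 512, the
    granularity qemu-img requires for resizing qcow2 images.
    Works because 512 is a power of two: adding 511 and clearing
    the low 9 bits is equivalent to ceiling division by 512."""
    return (size_bytes + 511) & ~511
```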

Re: [pve-devel] [PATCH storage 1/3] Lock storage when calling volume_import

2020-01-13 Thread Fabian Ebner
Could I get some feedback for this? The same locking is done for 'vdisk_alloc' and 'vdisk_clone' already (among others), so I thought it makes sense for 'volume_import' as well. On 12/12/19 11:17 AM, Fabian Ebner wrote: to avoid a potential race for two process

[pve-devel] [PATCH docs] Add section 'Compression in ZFS'

2020-01-16 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- local-zfs.adoc | 27 ++- 1 file changed, 26 insertions(+), 1 deletion(-) diff --git a/local-zfs.adoc b/local-zfs.adoc index 15a88bb..69979b5 100644 --- a/local-zfs.adoc +++ b/local-zfs.adoc @@ -180,7 +180,7 @@ underlying disk. zpool

[pve-devel] [PATCH v2 docs 3/3] Fix typos

2020-01-16 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- local-zfs.adoc | 10 +- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/local-zfs.adoc b/local-zfs.adoc index bb03506..71a4a4c 100644 --- a/local-zfs.adoc +++ b/local-zfs.adoc @@ -190,7 +190,7 @@ To activate compression (see section

[pve-devel] [PATCH v2 docs 2/3] Use consistent style for all shell commands

2020-01-16 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- local-zfs.adoc | 84 -- 1 file changed, 61 insertions(+), 23 deletions(-) diff --git a/local-zfs.adoc b/local-zfs.adoc index 7043a24..bb03506 100644 --- a/local-zfs.adoc +++ b/local-zfs.adoc @@ -178,41 +178,55 @@ To

[pve-devel] [PATCH v2 docs 1/3] Add section 'Compression in ZFS'

2020-01-16 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Thanks to Thomas and Aaron for their suggestions. Changes from v1: * Moved paragraph to serve as introduction * ZFS might not compress blocks that aren't compressible enough (7/8 of original size is the threshold) so I used "tries t

[pve-devel] [PATCH qemu-server 2/2] Use 'volname' instead of 'volid' for 'qemu_img_format'

2020-01-20 Thread Fabian Ebner
As 'qemu_img_format' just matches a regex, this doesn't make much of a difference, but AFAICT all other calls of 'qemu_img_format' use 'volname'. Signed-off-by: Fabian Ebner --- PVE/QemuServer.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) d

[pve-devel] [PATCH qemu-server 1/2] Fix 2070: vm_start: for a migrating VM, use current format of disk if possible

2020-01-20 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/QemuServer.pm | 9 ++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 9ef3b71..59335c5 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -5376,13 +5376,16 @@ sub vm_start

Re: [pve-devel] [PATCH v2 qemu-server 1/2] resize_vm: request new size from storage after resizing

2020-01-21 Thread Fabian Ebner
On 1/13/20 11:47 AM, Fabian Ebner wrote: Because of alignment and rounding in the storage backend, the effective size might not match the 'newsize' parameter we passed along. Signed-off-by: Fabian Ebner --- Turns out that this happens in basically every storage backend that has 

[pve-devel] [PATCH pve-zsync 1/2] Factor out the regular expression for disk keys as a variable

2020-01-27 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- pve-zsync | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/pve-zsync b/pve-zsync index ea3178e..04c5c5a 100755 --- a/pve-zsync +++ b/pve-zsync @@ -53,6 +53,8 @@ my $HOSTRE = "(?:$HOSTv4RE1|\\[$IPV6RE\\])"; # ipv6 must al

[pve-devel] [PATCH pve-zsync 2/2] Add efidisk as a valid disk key

2020-01-27 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- pve-zsync | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pve-zsync b/pve-zsync index 04c5c5a..25add92 100755 --- a/pve-zsync +++ b/pve-zsync @@ -53,7 +53,7 @@ my $HOSTRE = "(?:$HOSTv4RE1|\\[$IPV6RE\\])"; # ipv6 must always
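The factored-out disk-key pattern from patch 1, extended with efidisk in patch 2, can be sketched as a single anchored regex. The exact key list in pve-zsync may differ; this version is an illustrative assumption:

```python
import re

# Stand-in for the disk-key regex variable factored out in patch 1;
# 'efidisk' is the key newly accepted by patch 2.
DISK_KEY_RE = re.compile(r'^(?:ide|sata|scsi|virtio|mp|efidisk)\d+$')

def is_disk_key(key):
    """Return True if `key` names a disk slot in a guest config."""
    return DISK_KEY_RE.match(key) is not None
```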

[pve-devel] [RFC storage 5/16] storage_migrate: return volume ID of migrated volume

2020-01-29 Thread Fabian Ebner
Since 'pvesm import' uses a new volume ID if the requested one is already present, callers should have a way to get the new volume ID. Signed-off-by: Fabian Ebner --- PVE/CLI/pvesm.pm | 2 +- PVE/Storage.pm | 41 + 2 files changed, 34 insert

[pve-devel] [PATCH qemu-server 11/16] Extract volume ID before calling 'parse_volume_id'

2020-01-29 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/QemuMigrate.pm | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index d025b09..81b52d1 100644 --- a/PVE/QemuMigrate.pm +++ b/PVE/QemuMigrate.pm @@ -686,8 +686,10 @@ sub phase2 { foreach

[pve-devel] [PATCH storage 4/16] storage_migrate: also log with an insecure connection if there is a log function

2020-01-29 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/Storage.pm | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index 5fefa06..2b292f6 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -626,11 +626,21 @@ sub storage_migrate

[pve-devel] [RFC/PATCH] make storage migration more flexible

2020-01-29 Thread Fabian Ebner
es and telling me what changes would be needed or if the same approach as I took for QEMU would work for LXC? guest-common: Fabian Ebner (1): Implement update_volume_ids and add required helpers: foreach_volume and print_volume storage: Fabian Ebner (8): Remove unused string volume_im

[pve-devel] [RFC guest-common 1/16] Implement update_volume_ids and add required helpers: foreach_volume and print_volume

2020-01-29 Thread Fabian Ebner
This function is intended to be used after doing a migration where some of the volume IDs changed. Signed-off-by: Fabian Ebner --- PVE/AbstractConfig.pm | 61 +++ 1 file changed, 61 insertions(+) diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm

[pve-devel] [PATCH storage 2/16] Remove unused string

2020-01-29 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/Storage.pm | 1 - 1 file changed, 1 deletion(-) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index 0bd103e..5fefa06 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -573,7 +573,6 @@ sub storage_migrate { my $target_volid = "${target_st

[pve-devel] [PATCH qemu-server 10/16] rename 'volid' to 'drivestr' where it's not only a volume ID

2020-01-29 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/QemuMigrate.pm | 10 +- PVE/QemuServer.pm | 4 ++-- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index 49848e8..d025b09 100644 --- a/PVE/QemuMigrate.pm +++ b/PVE/QemuMigrate.pm @@ -491,7

[pve-devel] [RFC storage 6/16] pvesm import: allow specifying storage+vmid instead of full volumeid

2020-01-29 Thread Fabian Ebner
the volume ID might look different. E.g. 'mydir:102/vm-102-disk-0.raw' on a 'dir' storage would be 'mylvm:vm-102-disk-0' on an 'lvm' storage. Signed-off-by: Fabian Ebner --- An alternative approach would be to translate the volids as mentioned in the

[pve-devel] [RFC storage 7/16] volume_import_formats: if no volume name is specified, return all formats the storage supports

2020-01-29 Thread Fabian Ebner
This way it is possible to determine whether the transfer of a volume is possible without already having the name of the volume on the target storage. When doing the import, 'volume_import' can then choose a new name automatically. Signed-off-by: Fabian Ebner --- For example, migration w

[pve-devel] [RFC qemu-server 13/16] Take note of changes to volume ids when migrating and update config

2020-01-29 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Might make sense to combine this patch and patch 16. PVE/QemuMigrate.pm | 22 +++--- 1 file changed, 19 insertions(+), 3 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index 81b52d1..702fda0 100644 --- a/PVE/QemuMigrate.pm

[pve-devel] [RFC storage 8/16] storage_migrate: use only storeid when no volume name is specified

2020-01-29 Thread Fabian Ebner
an LVM storage. Signed-off-by: Fabian Ebner --- PVE/Storage.pm | 10 +++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/PVE/Storage.pm b/PVE/Storage.pm index d708c03..6aea2ef 100755 --- a/PVE/Storage.pm +++ b/PVE/Storage.pm @@ -561,7 +561,6 @@ sub storage_migrate { m

[pve-devel] [RFC qemu-server 15/16] sync_disks: log output of storage_migrate

2020-01-29 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Not sure about this one. On the one hand it adds even more to the migration logs, which are already rather long. On the other hand it might contain useful information. PVE/QemuMigrate.pm | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/PVE

[pve-devel] [RFC qemu-server 14/16] Allow specifying targetstorage for offline migration

2020-01-29 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/API2/Qemu.pm | 3 --- 1 file changed, 3 deletions(-) diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm index 89e2477..f21fb69 100644 --- a/PVE/API2/Qemu.pm +++ b/PVE/API2/Qemu.pm @@ -3379,9 +3379,6 @@ __PACKAGE__->register_method({ $param->{

[pve-devel] [RFC storage 9/16] storage_migrate: Make error message more verbose

2020-01-29 Thread Fabian Ebner
The volid contains the format and that's relevant information for why migration is not possible. For example, a raw volume can be migrated between an LVM storage and a filesystem based storage, but a qcow2 volume cannot. Signed-off-by: Fabian Ebner --- PVE/Storage.pm | 5 - 1 file ch

[pve-devel] [RFC qemu-server 12/16] Implement abstract foreach_volume and print_volume

2020-01-29 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/QemuConfig.pm | 12 1 file changed, 12 insertions(+) diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm index 1ba728a..a983e52 100644 --- a/PVE/QemuConfig.pm +++ b/PVE/QemuConfig.pm @@ -130,6 +130,18 @@ sub get_replicatable_volumes { return

[pve-devel] [RFC qemu-server 16/16] Update volume IDs in one go

2020-01-29 Thread Fabian Ebner
Use 'update_volume_ids' for the live-migrated disks as well. Signed-off-by: Fabian Ebner --- PVE/QemuMigrate.pm | 23 +-- 1 file changed, 9 insertions(+), 14 deletions(-) diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm index af1cf01..6a0f034 100644

[pve-devel] [RFC storage 3/16] volume_import: Use a new name when the name to import with already exists

2020-01-29 Thread Fabian Ebner
The ID of the new volume is returned and pvesm import prints it. This is useful for migration, since the storage on the target might already contain unused/orphaned disks. Signed-off-by: Fabian Ebner --- Breaks the current migration in QEMU/LXC if there is a collision, since the code doesn't

[pve-devel] [PATCH qemu-server] Fix description for vm_config and change description for vm_pending

2020-02-04 Thread Fabian Ebner
The description for vm_config was out of date and from the description for vm_pending it was hard to tell what the difference to vm_config was. Signed-off-by: Fabian Ebner --- PVE/API2/Qemu.pm | 7 --- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/PVE/API2/Qemu.pm b/PVE/API2

Re: [pve-devel] [RFC guest-common 1/16] Implement update_volume_ids and add required helpers: foreach_volume and print_volume

2020-02-06 Thread Fabian Ebner
On 2/5/20 10:29 AM, Fabian Grünbichler wrote: On January 29, 2020 2:29 pm, Fabian Ebner wrote: This function is intended to be used after doing a migration where some of the volume IDs changed. Signed-off-by: Fabian Ebner --- PVE/AbstractConfig.pm | 61

Re: [pve-devel] [RFC guest-common 1/16] Implement update_volume_ids and add required helpers: foreach_volume and print_volume

2020-02-06 Thread Fabian Ebner
On 2/5/20 10:38 AM, Fabian Grünbichler wrote: On January 29, 2020 2:29 pm, Fabian Ebner wrote: This function is intended to be used after doing a migration where some of the volume IDs changed. forgot to ask this - this is in AbstractConfig because you intend to also re-use this for a similar

Re: [pve-devel] [RFC storage 6/16] pvesm import: allow specifying storage+vmid instead of full volumeid

2020-02-06 Thread Fabian Ebner
On 2/5/20 11:50 AM, Fabian Grünbichler wrote: On January 29, 2020 2:30 pm, Fabian Ebner wrote: Extends the API so that 'volume' can also only be a storage identifier. In that case the VMID needs to be specified as well. In 'import_volume' a new name for the allocation is

[pve-devel] [PATCH manager 1/4] Fix error message

2020-02-06 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- www/manager6/window/Snapshot.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/www/manager6/window/Snapshot.js b/www/manager6/window/Snapshot.js index 3b1070a2..32a66fda 100644 --- a/www/manager6/window/Snapshot.js +++ b/www/manager6/window

[pve-devel] [PATCH manager 2/4] Use 'isCreate' instead of 'snapname' to determine window layout

2020-02-06 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- www/manager6/window/Snapshot.js | 24 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/www/manager6/window/Snapshot.js b/www/manager6/window/Snapshot.js index 32a66fda..e4355106 100644 --- a/www/manager6/window/Snapshot.js

[pve-devel] [PATCH manager 3/4] Hide 'Include RAM' when VM isn't running

2020-02-06 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- www/manager6/tree/SnapshotTree.js | 3 +++ www/manager6/window/Snapshot.js | 4 ++-- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/www/manager6/tree/SnapshotTree.js b/www/manager6/tree/SnapshotTree.js index 0636ef68..7b5ac3ed 100644 --- a/www

[pve-devel] [PATCH manager 0/4] Small improvements to snapshot GUI

2020-02-06 Thread Fabian Ebner
There is no need to display 'Include RAM' when the VM is not running. I thought it would make sense to warn users when they take a snapshot where a file system freeze would be needed, but isn't possible. Thanks to Oguz and Stefan for some JavaScript consulting. Fabian Ebner (

[pve-devel] [PATCH manager 4/4] Warn about file system state when a freeze would be needed, but isn't possible

2020-02-06 Thread Fabian Ebner
snapshot. Signed-off-by: Fabian Ebner --- www/manager6/window/Snapshot.js | 42 + 1 file changed, 42 insertions(+) diff --git a/www/manager6/window/Snapshot.js b/www/manager6/window/Snapshot.js index 1a08637f..88f7248e 100644 --- a/www/manager6/window/Snapshot.js +++ b

[pve-devel] [PATCH v3 storage 1/2] volume_resize: use KiB instead of bytes

2020-02-17 Thread Fabian Ebner
Avoid some problems with 'qemu-img resize', which expects that the size is a multiple of 512 bytes for qcow2 images. Since vdisk_alloc already uses KiB, this also improves consistency a little. The tests for ZFS are also adapted to the new interface. Signed-off-by: Fabian Ebner --

[pve-devel] [PATCH v3 container] volume_resize now uses KiB instead of bytes

2020-02-17 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- src/PVE/API2/LXC.pm | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm index 6652e2e..ebe8f18 100644 --- a/src/PVE/API2/LXC.pm +++ b/src/PVE/API2/LXC.pm @@ -1671,13 +1671,15 @@ __PACKAGE__

[pve-devel] [PATCH v3 storage 2/2] Avoid using extra variable

2020-02-17 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/Storage/ZFSPoolPlugin.pm | 10 -- 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm index dbe0465..b29fba5 100644 --- a/PVE/Storage/ZFSPoolPlugin.pm +++ b/PVE/Storage

[pve-devel] [PATCH v3 qemu-server] qemu_block_resize: volume_resize now uses KiB instead of bytes

2020-02-17 Thread Fabian Ebner
Also gets rid of an error with qmp block_resize, which expects that the size is a multiple of 512 bytes for qcow2 volumes. Signed-off-by: Fabian Ebner --- PVE/QemuServer.pm | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 23176dd

[pve-devel] [PATCH storage] Check whether 'zfs get mountpoint' returns a valid absolute path

2020-02-18 Thread Fabian Ebner
-mode-fails.61927/#post-284123 Signed-off-by: Fabian Ebner --- PVE/Storage/ZFSPoolPlugin.pm | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm index d72ee16..b538e3b 100644 --- a/PVE/Storage/ZFSPoolPlugin.pm +++

[pve-devel] [PATCH container] Fix mounting ZFS snapshots whose dataset is not mounted below '/'

2020-02-18 Thread Fabian Ebner
hreads/lxc-backup-fails-unable-to-open-the-dataset-vzdump.64944/ Signed-off-by: Fabian Ebner --- Hopefully there is nothing that relies on the old behavior with $snapname. Or was it intended to be able to reach the 'zfs set acl' call with $snapname set? src/PVE/LXC.pm | 10 -

Re: [pve-devel] [PATCH v3 qemu-server] qemu_block_resize: volume_resize now uses KiB instead of bytes

2020-02-18 Thread Fabian Ebner
On 2/18/20 4:09 PM, Dominik Csapak wrote: one comment inline On 2/17/20 12:41 PM, Fabian Ebner wrote: Also gets rid of an error with qmp block_resize, which expects that the size is a multiple of 512 bytes for qcow2 volumes. Signed-off-by: Fabian Ebner ---   PVE/QemuServer.pm | 4 +++-   1

[pve-devel] [PATCH v4 qemu-server] Align size to 1 KiB bytes before doing 'qmp block_resize'

2020-02-19 Thread Fabian Ebner
1. Avoids the error "VM 111 qmp command 'block_resize' failed - The new size must be a multiple of 512" for qcow2 disks. 2. Because volume_import expects disk sizes to be a multiple of 1 KiB. Signed-off-by: Fabian Ebner --- Changes from v3: * No ABI change anymore
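The rounding described in this patch (aligning a requested size up to the next 1 KiB boundary, which also satisfies the 512-byte multiple that qcow2 block_resize requires) can be sketched as follows. This is an illustrative helper, not the patch's actual Perl code; the function name is made up for the example:

```python
def align_size_up(size_bytes, alignment=1024):
    """Round size_bytes up to the next multiple of alignment (1 KiB here).

    Any multiple of 1024 is also a multiple of 512, so an aligned size
    passes both the qcow2/block_resize check and volume_import's
    expectation of whole-KiB disk sizes.
    """
    return ((size_bytes + alignment - 1) // alignment) * alignment

# An odd byte count gets rounded up, an already-aligned one is unchanged:
print(align_size_up(1000))  # 1024
print(align_size_up(2048))  # 2048
```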

[pve-devel] [PATCH v4 storage] volume_resize: align size to 1 KiB

2020-02-19 Thread Fabian Ebner
1. Avoids the error "qemu-img: The new size must be a multiple of 512" for qcow2 disks. 2. Because volume_import expects disk sizes to be a multiple of 1 KiB. Signed-off-by: Fabian Ebner --- PVE/Storage.pm | 3 +++ 1 file changed, 3 insertions(+) diff --git a/PVE/Storage.pm b/PVE/Storage.pm

Re: [pve-devel] [PATCH installer] fix behavior if zfs disks have no by-id path

2020-02-19 Thread Fabian Ebner
On 2/18/20 5:55 PM, Aaron Lauterer wrote: in some situations it is possible, that a disk does not have a /dev/disk/by-id path, mainly AFAICT inside VMs with virtio disks. Commit e1b490865f750e08f6c9c6b7e162e7def9dcc411 forgot to handle this situation which resulted in a failed installation. Sig

[pve-devel] [PATCH v2 guest-common 08/28] Add update_volume_ids

2020-02-24 Thread Fabian Ebner
This function is intended to be used after doing a migration where some of the volume IDs changed. Signed-off-by: Fabian Ebner --- PVE/AbstractConfig.pm | 29 + 1 file changed, 29 insertions(+) diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm index 9ce3d12

[pve-devel] [PATCH v2 qemu-server 02/28] Use parse_drive for EFI disk

2020-02-24 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/QemuServer.pm | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index a4d2e4e..751a075 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -3523,7 +3523,7 @@ sub config_to_command { my $path

[pve-devel] [PATCH v2 qemu-server 03/28] print_drive: Use $skip to avoid the need to copy the hash

2020-02-24 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/QemuServer.pm | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm index 751a075..1580514 100644 --- a/PVE/QemuServer.pm +++ b/PVE/QemuServer.pm @@ -1655,9 +1655,8 @@ sub parse_drive { sub

[pve-devel] [PATCH v2 guest-common 05/28] Add interface for volume-related helpers

2020-02-24 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- Not related to this series, but: this should also make it easier to preserve meta-information (e.g. size, discard) to unused disks without re-implementing add_unused_volume in the modules. PVE/AbstractConfig.pm | 23 +++ 1 file changed, 23

[pve-devel] [PATCH v2 qemu-server 09/28] parse_drive: Allow parsing vmstate volumes

2020-02-24 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/QemuServer/Drive.pm | 12 +--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm index 35ac99d..e922392 100644 --- a/PVE/QemuServer/Drive.pm +++ b/PVE/QemuServer/Drive.pm @@ -378,16 +378,22

[pve-devel] [PATCH v2 qemu-server 10/28] Implement volume-related helpers

2020-02-24 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/QemuConfig.pm | 24 1 file changed, 24 insertions(+) diff --git a/PVE/QemuConfig.pm b/PVE/QemuConfig.pm index 1ba728a..b0dc3b9 100644 --- a/PVE/QemuConfig.pm +++ b/PVE/QemuConfig.pm @@ -8,6 +8,7 @@ use PVE::INotify; use PVE

[pve-devel] [PATCH v2 guest-common 06/28] Add snapshot_foreach_unused_volume

2020-02-24 Thread Fabian Ebner
Signed-off-by: Fabian Ebner --- PVE/AbstractConfig.pm | 13 + 1 file changed, 13 insertions(+) diff --git a/PVE/AbstractConfig.pm b/PVE/AbstractConfig.pm index bd43cbe..5c449f6 100644 --- a/PVE/AbstractConfig.pm +++ b/PVE/AbstractConfig.pm @@ -508,6 +508,19 @@ sub
