This adds a dropdown box for iSCSI, LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the current node is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/Make
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Base.js| 10 +-
www/manager6/storage/IScsiEdit.js | 6 +++---
www/manager6/storage/LVMEdit.js | 14 +++---
www/manager6/storage/LvmThinEdit.js | 18 +-
www/manager6/storage/ZFSPoolEdit.js | 23
longer send to server
## (optional) pve-manager (2/3): cleanup related files
* var to let statement change
* some indentation
## ((very) optional) pve-manager (3/3): cleanup all var statements
* replaces all var with let statements
Stefan Hrdlicka (2):
fix #2822: add iscsi, lvm, lvmthin
They can already be set directly via the cluster.fw file. Net::IP is just a
bit more picky with what it allows:
For example:
error: 192.168.1.155/24
correct: 192.168.1.0/24
This cleans the entered IP and removes the non zero host bits.
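A minimal sketch of that normalization, using Python's standard ipaddress
module for illustration (the actual patch does this in Perl with Net::IP):

```python
import ipaddress

def clean_cidr(cidr: str) -> str:
    """Zero out the non-zero host bits of a CIDR entry."""
    # strict=False accepts e.g. 192.168.1.155/24 and masks it down to
    # the network address instead of raising ValueError
    return str(ipaddress.ip_network(cidr, strict=False))
```

So clean_cidr("192.168.1.155/24") yields "192.168.1.0/24", matching the
example above.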
Signed-off-by: Stefan Hrdlicka
---
src/PVE/API2
V2 -> V3
* review fix: removed closure from clean_cidr
V1 -> V2
* zero out host bits instead of ignoring error
* regex "cleanup"
Stefan Hrdlicka (2):
allow non zero ip address host bits to be entered
cleanup: don't capture "/xx" of CIDR
src/PVE/API2/Fir
Signed-off-by: Stefan Hrdlicka
---
src/PVE/Firewall.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index ae5f221..4924d51 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -68,7 +68,7 @@ PVE::JSONSchema
Signed-off-by: Stefan Hrdlicka
---
src/PVE/Firewall.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 3c35b44..73e940b 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Firewall.pm
@@ -68,7 +68,7 @@ PVE::JSONSchema
They can already be set directly via the cluster.fw file. Net::IP is just a
bit more picky with what it allows:
For example:
error: 192.168.1.155/24
correct: 192.168.1.0/24
This cleans the entered IP and removes the non zero host bits.
Signed-off-by: Stefan Hrdlicka
---
src/PVE/API2
V1 -> V2
* zero out host bits instead of ignoring error
* regex "cleanup"
Stefan Hrdlicka (2):
allow non zero ip address host bits to be entered
cleanup: don't capture "/xx" of CIDR
src/PVE/API2/Firewall/IPSet.pm | 2 +-
src/PVE
Tried this change locally. Didn't break smart output for me :).
Tested-by: Stefan Hrdlicka
On 11/28/22 12:29, Fiona Ebner wrote:
This reverts commit c3442aa5546b029a524928d10c7ecabe0024c137.
Nowadays, relying on 'readlink /sys/block/nvmeXnY/device' won't always
lead to th
Bump ...
Hi,
had a customer ticket this week with this issue. Couldn't test because
the patch doesn't apply :).
On 9/2/22 13:35, Fiona Ebner wrote:
This reverts commit c3442aa5546b029a524928d10c7ecabe0024c137.
Nowadays, relying on 'readlink /sys/block/nvmeXnY/device' won't always
lead to t
ted
Stefan Hrdlicka (1):
fix #1965: cache firewall/cluster.fw file
src/PVE/Firewall.pm | 108 ++--
1 file changed, 75 insertions(+), 33 deletions(-)
--
2.30.2
___
pve-devel mailing list
pve-de
for large IP sets (for example > 25k) it takes noticeably longer to parse the
files; this commit caches the cluster.fw file and reduces parsing time
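The caching pattern amounts to a parse cache keyed on the file's
modification time; a minimal sketch in Python (the real patch implements
this in Perl inside Firewall.pm, and the helper names here are illustrative):

```python
import os

_parse_cache = {}  # path -> (mtime, parsed rules)

def parse_fw(text):
    # stand-in for the expensive rule-parsing step
    return [l for l in text.splitlines() if l and not l.startswith('#')]

def load_fw(path):
    """Re-parse the file only when its mtime changed since the last call."""
    mtime = os.path.getmtime(path)
    cached = _parse_cache.get(path)
    if cached is not None and cached[0] == mtime:
        return cached[1]
    with open(path) as f:
        parsed = parse_fw(f.read())
    _parse_cache[path] = (mtime, parsed)
    return parsed
```

A second call for an unchanged file returns the cached result without
re-reading or re-parsing.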
Signed-off-by: Stefan Hrdlicka
---
src/PVE/Firewall.pm | 108 ++--
1 file changed, 75 insertions(+),
if a storage is not available a volume will be added to the container
config as unused. Before, it would just disappear from the config
Signed-off-by: Stefan Hrdlicka
---
PVE/QemuServer.pm | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE
volume from the storage. If this fails it writes a warning.
review fixes
- rename parameter to ignore-storage-errors
- move eval further up the call chain
Signed-off-by: Stefan Hrdlicka
---
src/PVE/API2/LXC.pm | 8
src/PVE/LXC.pm | 6 --
2 files changed, 12 insertions(+), 2
Signed-off-by: Stefan Hrdlicka
---
PVE/QemuServer.pm | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 51e9a51..331677f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2341,10 +2341,9 @@ sub destroy_vm
Add a checkbox to the remove dialog of LXC containers to force
deleting a container if the storage it uses has been removed.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/lxc/Config.js | 1 +
www/manager6/window/SafeDestroyGuest.js | 34 +
2 files changed
Signed-off-by: Stefan Hrdlicka
---
src/PVE/LXC.pm | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 7164462..7527106 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -668,7 +668,7 @@ sub update_lxc_config {
# some init
* removed ticket number from cleanup
* check if storage exists for unused disks
# qemu-server
* add same force option as for containers
* match detach/remove behavior between VM/container
* shorten line
# pve-manager
* added same option for VMs as for containers
Stefan Hrdlicka (4)
detach of a mount point with a removed underlying storage causes it to
be labeled as an 'unused disk'
removal of an 'unused disk' with a removed underlying storage causes it
to be removed from the configuration
Signed-off-by: Stefan Hrdlicka
---
src/PVE/LXC/Config.pm | 6 ++
prevent partial storage deletion if the template has a linked clone
container
Signed-off-by: Stefan Hrdlicka
---
src/PVE/LXC.pm | 12
1 file changed, 12 insertions(+)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index fe68f75..7164462 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE
On 11/15/22 13:17, Fiona Ebner wrote:
Am 15.11.22 um 11:55 schrieb Stefan Hrdlicka:
@@ -2341,10 +2346,10 @@ sub destroy_vm {
my $volid = $drive->{file};
return if !$volid || $volid =~ m|^/|;
-
- die "base volume '$volid' is still in us
Signed-off-by: Stefan Hrdlicka
---
src/PVE/LXC.pm | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index fe68f75..635cf44 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -668,7 +668,7 @@ sub update_lxc_config {
# some init
V1 -> V2:
# overall
* matched detaching/removing drives behavior for VM & containers
It currently works this way:
- Detach drive
- drive shows up as unused
- remove drive
- drive will be removed without removing data (obviously)
# pve-storage
* added storage_exists function for matching
detach of a mount point with a removed underlying storage causes it to
be labeled as an 'unused disk'
removal of an 'unused disk' with a removed underlying storage causes it
to be removed from the configuration
Signed-off-by: Stefan Hrdlicka
---
src/PVE/LXC/Config.pm | 6 ++
Add a checkbox to the remove dialog of LXC containers and VMs to force
deleting a container/VM if the storage it uses has been removed.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/lxc/Config.js | 1 +
www/manager6/qemu/Config.js | 1 +
www/manager6/window
if a storage is not available a volume will be added to the container
config as unused. Before, it would just disappear from the config
Signed-off-by: Stefan Hrdlicka
---
PVE/QemuServer.pm | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE
Signed-off-by: Stefan Hrdlicka
---
PVE/API2/Qemu.pm | 8
PVE/QemuServer.pm | 23 ---
2 files changed, 24 insertions(+), 7 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 30348e6..2a0806f 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
volume from the storage. If this fails it writes a warning.
review fixes
- rename parameter to ignore-storage-errors
- move eval further up the call chain
Signed-off-by: Stefan Hrdlicka
---
src/PVE/API2/LXC.pm | 8
src/PVE/LXC.pm | 6 --
2 files changed, 12 insertions(+), 2
add fields for additional settings required by ZFS dRAID
Signed-off-by: Stefan Hrdlicka
---
www/manager6/node/ZFS.js | 69
1 file changed, 69 insertions(+)
diff --git a/www/manager6/node/ZFS.js b/www/manager6/node/ZFS.js
index 5b3bdbda..75d7d8e1 100644
It is possible to set the number of spares and the size of
data stripes via draidspares & dreaddata parameters.
Signed-off-by: Stefan Hrdlicka
---
PVE/API2/Disks/ZFS.pm | 55 ++-
1 file changed, 54 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/D
add some basic explanation of how ZFS dRAID works including
links to openZFS for more details
add documentation for two dRAID parameters used in code
Signed-off-by: Stefan Hrdlicka
---
local-zfs.adoc | 44 +++-
1 file changed, 43 insertions(+), 1 deletion
the data & spares fields are now required to be selected in the GUI
** via the API the two config params are not required for now
# pve-docs
* openZFS replaced with OpenZFS
V3 -> V4:
# pve-docs
* added note to explain why the GUI expects one more disk than the
minimum that dRAID would al
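For context, the minimum a dRAID layout needs follows from its geometry:
each redundancy group holds data + parity disks, and the distributed
spares come on top. A rough sketch of that arithmetic (my own helper,
not code from the patch):

```python
def draid_min_children(parity: int, data: int, spares: int) -> int:
    """Smallest number of child disks a dRAID vdev can be built from."""
    # each redundancy group needs data + parity members, and the
    # distributed spares are spread across additional children
    return data + parity + spares

# e.g. draid1 with 4 data disks and 1 distributed spare needs 6 disks
```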
On 10/28/22 11:28, Thomas Lamprecht wrote:
> Some issue due to weird and
unmentioned dependence on $noerr and
> while at it some small comment and commit message style nits that
> I might have either ignored or "fixed" up myself other way.
>
> On 25/10/2022 16:
They can already be set directly via the cluster.fw file. Net::IP is just a
bit more picky with what it allows:
For example:
error: 192.168.1.155/24
correct: 192.168.1.0/24
also improves #3554
Signed-off-by: Stefan Hrdlicka
---
src/PVE/Firewall.pm | 8
1 file changed, 8
This patch adds the firewall/cluster.fw to caching. On my system with a
list of 25k IP sets CPU consumption for the process went from ~20 % to
~10 % with this caching enabled. Still pretty high but better than
before.
pve-firewall
---
src/PVE/Firewall.pm | 110 +++
for large IP sets (for example > 25k) it takes noticeably longer to parse the
files; this commit caches the cluster.fw file and reduces parsing time
Signed-off-by: Stefan Hrdlicka
---
src/PVE/Firewall.pm | 110 +++-
1 file changed, 77 insertions(+),
added file for cache from bugzilla case #1965
Signed-off-by: Stefan Hrdlicka
---
data/PVE/Cluster.pm | 1 +
data/src/status.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index abcc46d..2afae73 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE
V1 -> V2
* changed to optional chaining
Stefan Hrdlicka (2):
fix #1981: get next free disk id on change of bus/device
cleanup: style fix
www/manager6/form/ControllerSelector.js | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--
2.3
Signed-off-by: Stefan Hrdlicka
---
www/manager6/form/ControllerSelector.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/form/ControllerSelector.js
b/www/manager6/form/ControllerSelector.js
index 8a52737d..d7c2625d 100644
--- a/www/manager6/form
Signed-off-by: Stefan Hrdlicka
---
FYI: When IDE already has 4 devices and the user tries to add another one,
the device number isn't changed since there isn't any space
left.
www/manager6/form/ControllerSelector.js | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
di
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Base.js| 10 +-
www/manager6/storage/IScsiEdit.js | 6 +++---
www/manager6/storage/LVMEdit.js | 14 +++---
www/manager6/storage/LvmThinEdit.js | 18 +-
www/manager6/storage/ZFSPoolEdit.js | 23
This adds a dropdown box for iSCSI, LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the current node is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/Make
efault value of "Scan node" to Proxmox.NodeName
* don't allow empty "Scan node"
Stefan Hrdlicka (2):
fix #2822: add iscsi, lvm, lvmthin & zfs storage for all cluster nodes
cleanup: "var" to "let", fix some indentation in related files
www/m
Signed-off-by: Stefan Hrdlicka
---
FYI: When IDE already has 4 devices and the user tries to add another one,
the device number isn't changed since there isn't any space
left.
www/manager6/form/ControllerSelector.js | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
Signed-off-by: Stefan Hrdlicka
---
www/manager6/form/ControllerSelector.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/form/ControllerSelector.js
b/www/manager6/form/ControllerSelector.js
index 6daede95..e6baa893 100644
--- a/www/manager6/form
When adding a disk to an existing VM and switching between SCSI and IDE
(or any other bus) the GUI will now select the next free device id
automatically.
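The selection logic amounts to picking the smallest unused device number
for the chosen bus; a minimal sketch in Python (the patch implements this
in ExtJS, and the per-bus limits below are illustrative, not the
authoritative PVE values):

```python
# illustrative per-bus device limits
MAX_DEVICES = {'ide': 4, 'sata': 6, 'scsi': 31, 'virtio': 16}

def next_free_id(bus: str, used: set) -> int:
    """Return the smallest free device number for the bus, or -1 if full."""
    for i in range(MAX_DEVICES[bus]):
        if i not in used:
            return i
    return -1  # bus is full; the caller keeps the current number
```

For example, with ide0 and ide2 already taken, next_free_id('ide', {0, 2})
returns 1.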
Stefan Hrdlicka (2):
fix #1981: get next free disk id on change of bus/device
cleanup: style fix
www/manager6/form/ControllerSelector.js
This adds a dropdown box for iSCSI, LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the current node is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/Make
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Base.js| 10 +-
www/manager6/storage/IScsiEdit.js | 6 +++---
www/manager6/storage/LVMEdit.js | 14 +++---
www/manager6/storage/LvmThinEdit.js | 18 +-
www/manager6/storage/ZFSPoolEdit.js | 23
ookupReference
* moved used template literals for building path strings
V4 -> V5:
# pve-manager (1/2)
* s/lookupReference/lookup/
* move ComboBoxSetStoreNode & StrageScanNodeSelector
to www/manager6/form
* move array pushes to initialization of array
Stefan Hrdlicka (2):
fix #2822: add i
adds a function that can take a volume id and return the relevant
storage config
Signed-off-by: Stefan Hrdlicka
---
PVE/Storage.pm | 9 +
1 file changed, 9 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index b9c53a1..9e95e3d 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
Signed-off-by: Stefan Hrdlicka
---
PVE/QemuServer.pm | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 85b005e..558e8a9 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2361,7 +2361,11 @@ sub destroy_vm
Signed-off-by: Stefan Hrdlicka
---
PVE/API2/Qemu.pm | 8
PVE/QemuServer.pm | 27 ---
2 files changed, 28 insertions(+), 7 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index d9ef201..e51f777 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
V1 -> V2:
# overall
* matched detaching/removing drives behavior for VM & containers
It currently works this way:
- Detach drive
- drive shows up as unused
- remove drive
- drive will be removed without removing data (obviously)
# pve-storage
* added storage_exists function for matching
Add a checkbox to the remove dialog of LXC containers and VMs to force
deleting a container/VM if the storage it uses has been removed.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/lxc/Config.js | 1 +
www/manager6/qemu/Config.js | 1 +
www/manager6/window
volume from the storage. If this fails it writes a warning.
review fixes
- rename parameter to ignore-storage-errors
- move eval further up the call chain
Signed-off-by: Stefan Hrdlicka
---
src/PVE/API2/LXC.pm | 8
src/PVE/LXC.pm | 8 ++--
2 files changed, 14 insertions(+), 2
detach of a mount point with a removed underlying storage causes it to
be labeled as an 'unused disk'
removal of an 'unused disk' with a removed underlying storage causes it
to be removed from the configuration
Signed-off-by: Stefan Hrdlicka
---
src/PVE/LXC/Config.pm | 5 +
Signed-off-by: Stefan Hrdlicka
---
src/PVE/LXC.pm | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index e380b12..4e29e9b 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -668,7 +668,7 @@ sub update_lxc_config {
# some init
if a storage is not available a volume will be added to the container
config as unused. Before, it would just disappear from the config
Signed-off-by: Stefan Hrdlicka
---
PVE/QemuServer.pm | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE
On 7/25/22 15:31, Wolfgang Bumiller wrote:
On Mon, Jul 25, 2022 at 12:40:21PM +0200, Fiona Ebner wrote:
Am 20.07.22 um 16:49 schrieb Stefan Hrdlicka:
The patch adds a new option 'force-remove-storage' that stops pct
destroy from dying if the storage is not available. This also a
replace all "var" with "let" in files related to patch for ticket #2822
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Base.js| 10 +-
www/manager6/storage/IScsiEdit.js | 6 +++---
www/manager6/storage/LVMEdit.js | 14 +++---
ookupReference
* moved used template literals for building path strings
Stefan Hrdlicka (2):
fix #2822: add lvm, lvmthin & zfs storage for all cluster nodes
cleanup: "var" to "let", fix some indentation in related files
www/manager6/storage/Base.js| 59 +++
This adds a dropdown box for LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the current node is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Base.js
Hello :)
On 7/27/22 12:19, Fiona Ebner wrote:
Am 19.07.22 um 13:57 schrieb Stefan Hrdlicka:
This adds a dropdown box for LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the current node is
selected. It restricts the storage to be available only on
this makes it possible to add all mount options offered by mount.cifs
NFS & CIFS now share the options parameter since they use it for
the same purpose
Signed-off-by: Stefan Hrdlicka
Reviewed-by: Fiona Ebner
---
PVE/Storage/CIFSPlugin.pm | 16 +---
PVE/Storage/NFSPlugin.pm
Signed-off-by: Stefan Hrdlicka
---
pve-storage-cifs.adoc | 8
1 file changed, 8 insertions(+)
diff --git a/pve-storage-cifs.adoc b/pve-storage-cifs.adoc
index bb4b902..a8fc350 100644
--- a/pve-storage-cifs.adoc
+++ b/pve-storage-cifs.adoc
@@ -57,6 +57,13 @@ path::
The local mount
V1 -> V2:
# pve-storage (1/2)
* fixed nitpicks
# pve-docs (2/2)
* extended options explanation
* changed example option to `echo_interval=60` as second parameter
Stefan Hrdlicka (1):
fixes #2920: add options parameter to cifs plugin
PVE/Storage/CIFSPlugin.pm | 16 +---
volume from the storage. If this fails it writes a warning.
Signed-off-by: Stefan Hrdlicka
---
src/PVE/API2/LXC.pm | 8
src/PVE/LXC.pm | 20 +++-
2 files changed, 23 insertions(+), 5 deletions(-)
diff --git a/src/PVE/API2/LXC.pm b/src/PVE/API2/LXC.pm
index 589f96f
The patch adds a new option 'force-remove-storage' that stops pct
destroy from dying if the storage is not available. This also adds a
menu option for the delete dialog of containers.
Stefan Hrdlicka (2):
fix #3711: enable delete of LXC container via force option
fix #3711 clean
remove spaces where they are not needed
Signed-off-by: Stefan Hrdlicka
---
src/PVE/LXC.pm | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/src/PVE/LXC.pm b/src/PVE/LXC.pm
index 74c8d17..42d94ac 100644
--- a/src/PVE/LXC.pm
+++ b/src/PVE/LXC.pm
@@ -668,7 +668,7
Add a checkbox to the remove dialog of LXC containers to force
deleting a container if the storage it uses has been removed.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/lxc/Config.js | 1 +
www/manager6/window/SafeDestroyGuest.js | 34 +
2 files changed
replace all "var" with "let" in files related to patch for ticket 2822
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Base.js| 10 +-
www/manager6/storage/IScsiEdit.js | 6 +++---
www/manager6/storage/LVMEdit.js | 14 +++---
This adds a dropdown box for LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the current node is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Base.js
ssible to
select for example an iSCSI device only available on one node that
isn't available on the other ones. I wasn't sure if this should be
changed in this context as well.
Stefan Hrdlicka (2):
fix #2822: add lvm, lvmthin & zfs storage for all cluster nodes
cleanup: &qu
replace all "var" with "let" in files related to patch for ticket 2822
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Base.js| 10 +-
www/manager6/storage/IScsiEdit.js | 6 +++---
www/manager6/storage/LVMEdit.js | 14 +++---
This adds a dropdown box for iSCSI, LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the current node is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/storage/Bas
al) pve-manager (3/3): cleanup all var statements
* replaces all var with let statements
Stefan Hrdlicka (3):
fix #2822: add iscsi, lvm, lvmthin & zfs storage for all cluster nodes
cleanup: "var" to "let", fix some indentation in related files
cleanup: "var&quo
add some basic explanation of how ZFS dRAID works including
links to openZFS for more details
add documentation for two dRAID parameters used in code
Signed-off-by: Stefan Hrdlicka
---
local-zfs.adoc | 41 -
1 file changed, 40 insertions(+), 1 deletion
add fields for additional settings required by ZFS dRAID
Signed-off-by: Stefan Hrdlicka
---
www/manager6/node/ZFS.js | 69
1 file changed, 69 insertions(+)
diff --git a/www/manager6/node/ZFS.js b/www/manager6/node/ZFS.js
index 5b3bdbda..75d7d8e1 100644
It is possible to set the number of spares and the size of
data stripes via draidspares & dreaddata parameters.
Signed-off-by: Stefan Hrdlicka
---
PVE/API2/Disks/ZFS.pm | 55 ++-
1 file changed, 54 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/D
the data & spares fields are now required to be selected in the GUI
** via the API the two config params are not required for now
# pve-docs
* openZFS replaced with OpenZFS
Stefan Hrdlicka (1):
fix #3967: enable ZFS dRAID creation via API
PVE/API2/Disks/ZFS.pm | 55 +
add some basic explanation of how ZFS dRAID works including
links to openZFS for more details
add documentation for two dRAID parameters used in code
Signed-off-by: Stefan Hrdlicka
---
local-zfs.adoc | 41 -
1 file changed, 40 insertions(+), 1 deletion
add fields for additional settings required by ZFS dRAID
Signed-off-by: Stefan Hrdlicka
---
www/manager6/node/ZFS.js | 44
1 file changed, 44 insertions(+)
diff --git a/www/manager6/node/ZFS.js b/www/manager6/node/ZFS.js
index 5b3bdbda..5276ff84 100644
It is possible to set the number of spares and the size of
data stripes via draidspares & dreaddata parameters.
Signed-off-by: Stefan Hrdlicka
---
PVE/API2/Disks/ZFS.pm | 44 ++-
1 file changed, 43 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/D
os
* reword last paragraph to make it (hopefully :)) more helpful
----
Stefan Hrdlicka (1):
fix #3967: enable ZFS dRAID creation via API
PVE/API2/Disks/ZFS.pm | 44 ++-
1 file changed, 43 insertions(+), 1 deletion(-)
--
www/manager6/node/
On 6/3/22 14:24, Dominik Csapak wrote:
comments inline
On 6/2/22 13:22, Stefan Hrdlicka wrote:
add fields for additional settings required by ZFS dRAID
Signed-off-by: Stefan Hrdlicka
---
requires the changes in pve-storage to work
www/manager6/node/ZFS.js | 47
Tried it and works as expected.
I can't answer the original mail, since I joined the mailing list a bit later
:).
Tested-by: Stefan Hrdlicka
On 6/1/22 12:27, Aaron Lauterer wrote:
Can someone take a look at this? Patch should still apply.
On 5/2/22 16:05, Aaron Lauterer wrote:
By not
add some basic explanation of how ZFS dRAID works including
links to openZFS for more details
add documentation for two dRAID parameters used in code
Signed-off-by: Stefan Hrdlicka
---
local-zfs.adoc | 40 +++-
1 file changed, 39 insertions(+), 1 deletion
It is possible to set the number of spares and the size of
data stripes via draidspares & dreaddata parameters.
Signed-off-by: Stefan Hrdlicka
---
PVE/API2/Disks/ZFS.pm | 40 +++-
1 file changed, 39 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/D
add fields for additional settings required by ZFS dRAID
Signed-off-by: Stefan Hrdlicka
---
requires the changes in pve-storage to work
www/manager6/node/ZFS.js | 47
1 file changed, 47 insertions(+)
diff --git a/www/manager6/node/ZFS.js b/www/manager6
The patch series adds dRAID configuration to the API and WebGUI.
Besides that there is an update to the documentation adding some basic
info about dRAID.
--
PVE/API2/Disks/ZFS.pm | 40 +++-
1 file changed, 39 insertions(+), 1 deletion(-)
--
www/manager6/node
please ignore this, there was an old file in my folder
On 5/24/22 16:45, Stefan Hrdlicka wrote:
This adds a dropdown box for LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the first node in the list is
selected. It restricts the storage to be
This adds a dropdown box for LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the first node in the list is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
www/manager6/contro
ue is set to the current node.
The drop down sets the node parameter which is then used for a proxyto
call and routed to the correct node.
-- pve-storage
Stefan Hrdlicka (1):
fix #2822: add lvm, lvmthin & zfs storage for all cluster nodes
PVE/API2/Storage/Config.pm | 7 +++
1 file c
This adds a dropdown box for LVM, LVMThin & ZFS storage options where a
cluster node needs to be chosen. By default the current node is
selected. It restricts the storage to be available only on the
selected node.
Signed-off-by: Stefan Hrdlicka
---
Depends on the change in pve-storage
this enables forwarding of requests to the correct node if a node is set
Signed-off-by: Stefan Hrdlicka
---
PVE/API2/Storage/Config.pm | 7 +++
1 file changed, 7 insertions(+)
diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 6bd770e..82b73ca 100755
--- a/PVE/API2
Signed-off-by: Stefan Hrdlicka
---
this depends on 1/2 since this changes the documentation :)
pve-storage-cifs.adoc | 5 +
1 file changed, 5 insertions(+)
diff --git a/pve-storage-cifs.adoc b/pve-storage-cifs.adoc
index bb4b902..60775a4 100644
--- a/pve-storage-cifs.adoc
+++ b/pve-storage
This adds the options parameter to the CIFS plugin.
check propertyList is common for all plugins and it is therefore
not possible to have options twice with different descriptions. This patch moves
the options property from NFSPlugin.pm to the Plugin.pm level. The options
parameter has the same us
this makes it possible to add all mount options offered by mount.cifs
NFS & CIFS now share the options parameter since they use it for
the same purpose
Signed-off-by: Stefan Hrdlicka
---
PVE/Storage/CIFSPlugin.pm | 10 --
PVE/Storage/NFSPlugin.pm | 4
PVE/Storage/Plugi
since this output is printed to the command line it should
be encoded to avoid the wide character warnings
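The same pitfall exists in other languages: writing an already-decoded
string to a byte-oriented handle needs an explicit encode. A small Python
analogue for illustration (my own sketch, not the patch's Perl code):

```python
import io
import sys

def write_encoded(stream, text: str) -> None:
    # encode explicitly before writing to a byte stream; printing raw
    # wide characters through an unencoded handle is what triggers
    # Perl's "Wide character in print" warning
    stream.write(text.encode('utf-8'))

# e.g. write_encoded(sys.stdout.buffer, "Grüße\n")
```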
Signed-off-by: Stefan Hrdlicka
---
PVE/CLI/qm.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index cf0d6f3..6a2e161 100755
--- a/PVE