Re: [pve-devel] [PATCH manager] api: ceph: mon: make checking for duplicate addresses more robust

2022-06-07 Thread Fabian Ebner
Ping

On 04.03.22 at 14:09, Fabian Ebner wrote:
> Because $mon->{addr} might come with a port attached (affects monitors
> created with PVE 5.4 as reported in the community forum [0]), or even
> be a hostname (according to the code in Ceph/Services.pm). Although
> the latter shouldn't happen for configurations created by PVE.
> 
> [0]: https://forum.proxmox.com/threads/105904/
> 
> Fixes: 86ed64f9 ("api: ceph: mon: fix handling of IPv6 addresses in assert_mon_prerequisites")
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/API2/Ceph/MON.pm | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
> index 12c9caf0..5771bb46 100644
> --- a/PVE/API2/Ceph/MON.pm
> +++ b/PVE/API2/Ceph/MON.pm
> @@ -132,8 +132,10 @@ my $assert_mon_prerequisites = sub {
>  for my $mon (values %{$monhash}) {
>   next if !defined($mon->{addr});
>  
> - my $ip = PVE::Network::canonical_ip($mon->{addr});
> - $used_ips->{$ip} = 1;
> + for my $ip ($ips_from_mon_host->($mon->{addr})->@*) {
> + $ip = PVE::Network::canonical_ip($ip);
> + $used_ips->{$ip} = 1;
> + }
>  }
>  
>  for my $monip (@{$monips}) {
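For context, the kind of normalization needed for such addresses could look roughly like the sketch below. This only illustrates the port-stripping part and is not the actual $ips_from_mon_host helper from the patch (which presumably also has to split multiple comma-separated entries); each extracted address is still passed through PVE::Network::canonical_ip() afterwards, as in the hunk above.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Illustrative sketch only: strip an optional port from a monitor address.
    # Bare IPv6 addresses are left untouched, since a trailing ":<number>"
    # there is part of the address, not a port.
    sub strip_mon_port {
        my ($addr) = @_;

        if ($addr =~ m/^\[([^\]]+)\](?::\d+)?$/) {     # [IPv6] or [IPv6]:port
            return $1;
        } elsif ($addr =~ m/^([^:]+)(?::\d+)?$/) {     # IPv4 or hostname, optional :port
            return $1;
        }
        return $addr;                                  # bare IPv6, keep as-is
    }

    print strip_mon_port($_), "\n" for ('10.1.1.1:6789', '[fc00::1]:6789', 'fc00::1');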





Re: [pve-devel] [PATCH guest-common v2 1/2] ReplicationState: purge state from non local vms

2022-06-07 Thread Fabian Ebner
On 03.06.22 at 09:16, Dominik Csapak wrote:
> when running replication, we don't want to keep replication states for
> non-local vms. Normally this would not be a problem, since on migration,
> we transfer the states anyway, but when the ha-manager steals a vm, it
> cannot do that. In that case, having an old state lying around is
> harmful, since the code does not expect the state to be out-of-sync
> with the actual snapshots on disk.
> 
> One such problem is the following:
> 
> Replicate vm 100 from node A to node B and C, and activate HA. When node
> A dies, it will be relocated to e.g. node B and start replicating from
> there. If node B now had an old state lying around for its sync to node
> C, it might delete the common base snapshots of B and C and cannot sync
> again.

To be even more robust, we could ensure that the last_sync snapshot
mentioned in the job state is actually present before starting to remove
replication snapshots in prepare() on the source side, or change it to
only remove older snapshots. But prepare() is also used on the target
side to remove stale volumes, so we'd have to be careful not to break
the logic for that.
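To illustrate the idea, a minimal sketch (the snapshot naming scheme and the way the snapshot list is obtained are simplified assumptions here, not the actual replication code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch: only treat it as safe to remove replication snapshots if the
    # snapshot recorded as last_sync in the job state is actually present.
    sub last_sync_snapshot_present {
        my ($jobid, $last_sync, $snapshots_on_disk) = @_;

        return 1 if !$last_sync;    # no previous sync recorded, nothing to verify

        my $expected = "__replicate_${jobid}_${last_sync}__";
        return grep { $_ eq $expected } @$snapshots_on_disk;
    }

    my @snaps = ('__replicate_100-0_1651000000__');
    if (last_sync_snapshot_present('100-0', 1651000000, \@snaps)) {
        print "state matches disk - cleaning up older snapshots is safe\n";
    } else {
        print "state is out of sync - better not to remove anything\n";
    }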

I'm working on the v2 of a series for improving removal of stale volumes
anyway, so I'll see if I can add something there.

> 
> Deleting the state for all non-local guests fixes that issue, since it
> always starts fresh, and the potentially existing old state cannot be
> valid anyway since we just relocated the vm here (from a dead node).
> 
> Signed-off-by: Dominik Csapak 
> Reviewed-by: Fabian Grünbichler 

Both patches:
Reviewed-by: Fabian Ebner 




Re: [pve-devel] [PATCH pve-storage 1/2] fix #2822: add lvm, lvmthin & zfs storage on all cluster nodes

2022-06-07 Thread Fabian Ebner
On 24.05.22 at 16:45, Stefan Hrdlicka wrote:
> this enables forwarding of the request to the correct node if a node is set
> 
> Signed-off-by: Stefan Hrdlicka 
> ---
>  PVE/API2/Storage/Config.pm | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
> index 6bd770e..82b73ca 100755
> --- a/PVE/API2/Storage/Config.pm
> +++ b/PVE/API2/Storage/Config.pm
> @@ -146,6 +146,13 @@ __PACKAGE__->register_method ({
>  protected => 1,
>  path => '',
>  method => 'POST',
> +proxyto_callback => sub {
> + my ($rpcenv, $proxyto, $uri_param) = @_;
> + my $node = delete $uri_param->{node};

Messing with the uri params here is hacky. It destroys the consistency
between the API schema and what the API call actually handles.

It's not really required to proxy the call to a specific node. The only
downside is that we don't do an early storage access check if the
storage is not enabled for the current node.

> + $node = "localhost" if !$node;
> +
> + return $node;
> +},
>  description => "Create a new storage.",
>  permissions => {
>   check => ['perm', '/storage', ['Datastore.Allocate']],





Re: [pve-devel] [PATCH pve-manager 2/2] fix #2822: add lvm, lvmthin & zfs storage on all cluster nodes

2022-06-07 Thread Fabian Ebner
On 24.05.22 at 16:45, Stefan Hrdlicka wrote:
> This adds a dropdown box for LVM, LVMThin & ZFS storage options where a
> cluster node needs to be chosen. By default the current node is
> selected. It restricts the storage to be available only on the
> selected node.
> 

IMHO it's not immediately clear what the "Scan node" is for if the
selector is at the very top. I'd put it closer to the VG/thin pool/ZFS
pool selectors it's affecting. And maybe worth adding a tooltip too?

This patch changes the pre-selected nodes restriction to be the current
node. Previously, it was all nodes.

For LVM, when selecting an iSCSI storage as the base storage, the
content/LUN listing could also be proxied to the "Scan node", because
the storage might not be accessible from the local node.

> Signed-off-by: Stefan Hrdlicka 
> ---
> 
> Depends on the change in pve-storage
> 
>  www/manager6/controller/StorageEdit.js | 20 
>  www/manager6/storage/Base.js   | 25 +
>  www/manager6/storage/LVMEdit.js|  9 -
>  www/manager6/storage/LvmThinEdit.js| 14 +-
>  www/manager6/storage/ZFSPoolEdit.js| 24 +++-
>  5 files changed, 81 insertions(+), 11 deletions(-)
> 
> diff --git a/www/manager6/controller/StorageEdit.js b/www/manager6/controller/StorageEdit.js
> index cb73b776..25745a5b 100644
> --- a/www/manager6/controller/StorageEdit.js
> +++ b/www/manager6/controller/StorageEdit.js
> @@ -25,3 +25,23 @@ Ext.define('PVE.controller.StorageEdit', {
>   },
>  },
>  });
> +
> +Ext.define('PVE.storage.StorageLocalController', {
> +extend: 'PVE.controller.StorageEdit',
> +alias: 'controller.storageLocal',
> +apiBaseUrl: '/api2/json/nodes/',
> +
> +clearValueSetUrl: function(reference, path, nodeSelector) {

Nit: nodeSelector.value could be passed in directly. Also, from the method
name it's not clear that the node restrictions are updated as well.

> + var me = this;

Please use let or const instead of var, see:
https://pve.proxmox.com/wiki/Javascript_Style_Guide#Variables

> +
> + var refObj = me.lookupReference(reference);
> + refObj.clearValue();
> +
> + var refObjStore = refObj.getStore();
> + refObjStore.getProxy().setUrl(me.apiBaseUrl + nodeSelector.value + path);
> + refObjStore.load();

It's great that you were able to be so concise, but I'm not too happy
about the bit of coupling that comes with it. To avoid accessing the
internals of these other components, I'd teach them a setNodename()
method. The clear() could also happen as part of the setNodename() methods.

And we could call setNodename() and update the node restrictions
directly, avoiding the need for this controller. But maybe others prefer
the approach with the controller?
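Rough sketch of what such a setNodename() method could look like on one of the affected selector components (the class name and API path here are made up for illustration):

    // Illustrative sketch only - class name and API path are assumptions.
    Ext.define('PVE.storage.ScanNodeSelectorSketch', {
        extend: 'Ext.form.field.ComboBox',
        xtype: 'pveScanNodeSelectorSketch',

        apiSuffix: '/scan/lvm', // e.g. for a VG listing

        setNodename: function(nodename) {
            let me = this;
            nodename = nodename || 'localhost';

            // drop the stale selection and reload the listing from the new node
            me.clearValue();
            let store = me.getStore();
            store.getProxy().setUrl(`/api2/json/nodes/${nodename}${me.apiSuffix}`);
            store.load();
        },
    });

The edit panel would then only call refObj.setNodename(nodeSelector.value) and update the nodeRestriction field, without reaching into the store/proxy internals.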

> +
> + var nodeRestriction = me.lookupReference('nodeRestriction');
> + nodeRestriction.setValue(nodeSelector.value);
> +},
> +});
> diff --git a/www/manager6/storage/Base.js b/www/manager6/storage/Base.js
> index 7f6d7a09..92654131 100644
> --- a/www/manager6/storage/Base.js
> +++ b/www/manager6/storage/Base.js
> @@ -36,6 +36,7 @@ Ext.define('PVE.panel.StorageBase', {
>   {
>   xtype: 'pveNodeSelector',
>   name: 'nodes',
> + reference: 'nodeRestriction',
>   disabled: me.storageId === 'local',
>   fieldLabel: gettext('Nodes'),
>   emptyText: gettext('All') + ' (' + gettext('No restrictions') +')',
> @@ -76,6 +77,30 @@ Ext.define('PVE.panel.StorageBase', {
>  },
>  });
>  
> +Ext.define('PVE.panel.StorageBaseLocal', {

Not really accurate, because it's not a base for all local storage types
and it's a base for LVM, which can also be shared.

> +extend: 'PVE.panel.StorageBase',
> +controller: 'storageLocal',
> +
> +initComponent: function() {
> + var me = this;
> +
> + me.columnT = [
> + {
> + xtype: 'pveNodeSelector',
> + reference: 'pveNodeSelector',

A bit generic for a reference, I'd at least include the "scan" part.

> + name: 'node',
> + itemId: 'pveNodeSelector',
> + fieldLabel: gettext('Scan node'),
> + allowBlank: false,
> + disallowedNodes: undefined,
> + onlineValidator: true,
> + preferredValue: Proxmox.NodeName,
> + }];
> + me.callParent();
> +},
> +
> +});
> +
>  Ext.define('PVE.storage.BaseEdit', {
>  extend: 'Proxmox.window.Edit',
>  
> diff --git a/www/manager6/storage/LVMEdit.js b/www/manager6/storage/LVMEdit.js
> index 2a9cd283..d6f8da06 100644
> --- a/www/manager6/storage/LVMEdit.js
> +++ b/www/manager6/storage/LVMEdit.js
> @@ -84,7 +84,7 @@ Ext.define('PVE.storage.BaseStorageSelector', {
>  });
>  
>  Ext.define('PVE.storage.LVMInputPanel', {
> -extend: 'PVE.panel.StorageBase',
> +extend: 'PVE.panel.StorageBaseLocal',
>  
>  onlineHelp: 'storage_lvm',
>  
> @@ -105,6 +105,7 @@ Ext.define('PV

Re: [pve-devel] [PATCH pve-manager 2/3] fix #3967: enable ZFS dRAID creation in WebGUI

2022-06-07 Thread Stefan Hrdlicka



On 6/3/22 14:24, Dominik Csapak wrote:

comments inline

On 6/2/22 13:22, Stefan Hrdlicka wrote:

add fields for additional settings required by ZFS dRAID

Signed-off-by: Stefan Hrdlicka 
---
requires the changes in pve-storage to work

  www/manager6/node/ZFS.js | 47 
  1 file changed, 47 insertions(+)

diff --git a/www/manager6/node/ZFS.js b/www/manager6/node/ZFS.js
index 5b3bdbda..5f3bbfef 100644
--- a/www/manager6/node/ZFS.js
+++ b/www/manager6/node/ZFS.js
@@ -42,6 +42,9 @@ Ext.define('PVE.node.CreateZFS', {
  fieldLabel: gettext('RAID Level'),
  name: 'raidlevel',
  value: 'single',
+    listeners: {
+    change: 'onTargetChange',
+    },
  comboItems: [
  ['single', gettext('Single Disk')],
  ['mirror', 'Mirror'],
@@ -49,8 +52,29 @@ Ext.define('PVE.node.CreateZFS', {
  ['raidz', 'RAIDZ'],
  ['raidz2', 'RAIDZ2'],
  ['raidz3', 'RAIDZ3'],
+    ['draid', 'dRAID'],
+    ['draid2', 'dRAID2'],
+    ['draid3', 'dRAID3'],
  ],
  },
+    {
+    xtype: 'proxmoxintegerfield',
+    fieldLabel: gettext('DRAID data devices'),
+    minValue: 1,
+    disabled: true,
+    hidden: true,
+    reference: 'draiddata',
+    name: 'draiddata',
+    },
+    {
+    xtype: 'proxmoxintegerfield',
+    fieldLabel: gettext('DRAID spares'),
+    minValue: 0,
+    disabled: true,
+    hidden: true,
+    reference: 'draidspares',
+    name: 'draidspares',
+    },


Is that something someone always wants to configure, or more of an
advanced thing? In the latter case, I'd probably put them in the
advanced options.



I would say you nearly always want to change at least the number of
spares. The default values when creating a dRAID without setting anything
are 0 spares and 8 devices per redundancy group (if disks >= 8). The
advantage of dRAID is fast rebuilding of a broken disk, which you will
only get (as far as I read) when you have spares configured.


The data devices per redundancy group would be possible to hide. But
depending on the number chosen, they have an impact on performance as
well as storage usage. I'm not sure it makes sense to hide only one of
the two values. I will think about it ;).




  {
  xtype: 'proxmoxKVComboBox',
  fieldLabel: gettext('Compression'),
@@ -101,6 +125,29 @@ Ext.define('PVE.node.CreateZFS', {
  me.callParent();
  },
+
+    controller: {


nit: normally we put the controller on top of the class, not at the
bottom


+    xclass: 'Ext.app.ViewController',
+
+    onTargetChange: function(selection) {
+    var me = this;
+    var dataField = me.lookupReference('draiddata');
+    var sparesField = me.lookupReference('draidspares');
+    if (selection.value.startsWith("draid")) {
+    //show draid settings
+    dataField.setVisible(true);
+    dataField.setDisabled(false);
+    sparesField.setVisible(true);
+    sparesField.setDisabled(false);
+    } else {
+    //hide draid settings
+    dataField.setVisible(false);
+    dataField.setDisabled(true);
+    sparesField.setVisible(false);
+    sparesField.setDisabled(true);
+    }


this could be more elegantly solved in two other ways:
1. use a viewmodel with a formula that returns true/false
    depending on the startsWith('draid') and then use a
    bind on the hidden/disabled setting of the fields
    (rough sketch below, after point 2)

2. put the bool into a variable and use that, like this

let isDraid = ...startsWith('draid');
dataField.setVisible(isDraid);
dataField.setDisabled(!isDraid);
...
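For 1., a rough sketch (the class name is hypothetical and plain ExtJS xtypes are used to keep it self-contained; the field names are the ones from your patch):

    // Rough sketch of option 1 (viewmodel formula + bind).
    Ext.define('PVE.sketch.ZFSDraidPanel', {
        extend: 'Ext.form.Panel',

        viewModel: {
            data: {
                raidLevel: 'single',
            },
            formulas: {
                // true for 'draid', 'draid2' and 'draid3'
                isDraid: get => get('raidLevel').startsWith('draid'),
            },
        },

        items: [
            {
                xtype: 'combobox',
                name: 'raidlevel',
                store: ['single', 'mirror', 'raidz', 'draid', 'draid2', 'draid3'],
                bind: {
                    value: '{raidLevel}', // keeps the viewmodel in sync
                },
            },
            {
                xtype: 'numberfield',
                name: 'draiddata',
                bind: {
                    hidden: '{!isDraid}',
                    disabled: '{!isDraid}',
                },
            },
            {
                xtype: 'numberfield',
                name: 'draidspares',
                bind: {
                    hidden: '{!isDraid}',
                    disabled: '{!isDraid}',
                },
            },
        ],
    });

With that, the change listener and the manual setVisible/setDisabled calls are not needed at all.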




+    },
+    },
  });
  Ext.define('PVE.node.ZFSList', {








[pve-devel] applied-series: [PATCH guest-common v2 1/2] ReplicationState: purge state from non local vms

2022-06-07 Thread Thomas Lamprecht
On 03/06/2022 at 09:16, Dominik Csapak wrote:
> when running replication, we don't want to keep replication states for
> non-local vms. Normally this would not be a problem, since on migration,
> we transfer the states anyway, but when the ha-manager steals a vm, it
> cannot do that. In that case, having an old state lying around is
> harmful, since the code does not expect the state to be out-of-sync
> with the actual snapshots on disk.
> 
> One such problem is the following:
> 
> Replicate vm 100 from node A to node B and C, and activate HA. When node
> A dies, it will be relocated to e.g. node B and start replicating from
> there. If node B now had an old state lying around for its sync to node
> C, it might delete the common base snapshots of B and C and cannot sync
> again.
> 
> Deleting the state for all non-local guests fixes that issue, since it
> always starts fresh, and the potentially existing old state cannot be
> valid anyway since we just relocated the vm here (from a dead node).
> 
> Signed-off-by: Dominik Csapak 
> Reviewed-by: Fabian Grünbichler 
> ---
>  src/PVE/ReplicationState.pm | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
>

applied, with Fabi's R-b, thanks!




[pve-devel] applied: [PATCH manager] api: ceph: mon: make checking for duplicate addresses more robust

2022-06-07 Thread Thomas Lamprecht
On 04/03/2022 at 14:09, Fabian Ebner wrote:
> Because $mon->{addr} might come with a port attached (affects monitors
> created with PVE 5.4 as reported in the community forum [0]), or even
> be a hostname (according to the code in Ceph/Services.pm). Although
> the latter shouldn't happen for configurations created by PVE.
> 
> [0]: https://forum.proxmox.com/threads/105904/
> 
> Fixes: 86ed64f9 ("api: ceph: mon: fix handling of IPv6 addresses in assert_mon_prerequisites")
> Signed-off-by: Fabian Ebner 
> ---
>  PVE/API2/Ceph/MON.pm | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
>

applied, thanks!

