Hi...
I ran storage replication before updating the PVE server, and I could do
almost everything from the web interface.
Now, after upgrading to
pveversion pve-manager/5.0-30/5ab26bc (running kernel: 4.10.17-2-pve)
I can only migrate from one node to another if I append
--with-local-disks.
Is there
factor the code out into a new create_efidisk submethod, as otherwise this is
hardly readable; the efidisk0 case is a special case too, so refrain
from putting this specialised handling directly into the much shorter
code for all other cases.
Also, the disk was created with a specific format and then a format
de
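To illustrate the kind of refactor suggested above, a minimal sketch in Perl; the create_efidisk name comes from the review, while the closure arguments, the vdisk_alloc size and everything else here are assumptions rather than the actual QemuServer.pm code:

use strict;
use warnings;
use PVE::Storage;

# Sketch: keep the generic disk-allocation loop short and move the
# efidisk0 special handling into its own helper. Names and the 128 KB
# size are illustrative only.
my $create_efidisk = sub {
    my ($storecfg, $storeid, $vmid, $fmt) = @_;

    # allocate a small volume for the EFI vars disk; copying the OVMF
    # vars template into it is elided in this sketch
    my $volid = PVE::Storage::vdisk_alloc($storecfg, $storeid, $vmid, $fmt, undef, 128);

    return $volid;
};

# the caller then only needs one special-cased branch:
# $volid = $create_efidisk->($storecfg, $storeid, $vmid, $fmt) if $ds eq 'efidisk0';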
this was only kept for PVE 4.X, where the switch to the newer OVMF
image with actually working persistent EFIVARS was made.
We do not ship this old image in PVE 5.0 anymore, so remove this
legacy code as it can never trigger anyhow.
Signed-off-by: Thomas Lamprecht
---
PVE/QemuServer.pm | 38
On Thu, Aug 24, 2017 at 08:32:51AM +0200, Fabian Grünbichler wrote:
> looks good, I will apply this once 12.2.0 is out and we have a
> (hopefully stable) final output schema for the various ceph status and
> health commands.
>
just noticed two things:
- your patch does not apply (I think your mai
we sometimes want to pass the api call a parameter;
with this, we don't have to encode it into the url
ourselves every time, but can just pass a 'params' object
Signed-off-by: Dominik Csapak
---
www/manager6/window/SafeDestroy.js | 13 +++--
1 file changed, 11 insertions(+), 2 deletions(-)
d
with this, you can create a pveceph-managed storage for a ceph pool
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Pool.js | 67 ++-
1 file changed, 66 insertions(+), 1 deletion(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.j
to be able to automatically generate the ceph storages when creating a
pool
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Pool.js | 5 +
1 file changed, 5 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index ea142b47..7941324e 100644
--- a/www/manager
automatically remove the pveceph-managed storages when deleting the
pool in the gui
this is ok because we cannot delete the pool anyway while there are images
on it, and recreating the storages when creating a pool is now trivial
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Pool.js | 3 +++
1 file
to accompany fabian's patches for autocreating the storages for ceph
pools, this series adds those features to the gui
Dominik Csapak (4):
add a params object to the safedestroy window
add create storages checkbox to ceph pool creation
add remove_storages parameter to the pool destruction
a
___
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
On 08/25/2017 10:48 AM, Fabian Grünbichler wrote:
this patch series implements storage.cfg management for pveceph-managed ceph
clusters. the following is implemented:
- add new 'pveceph' flag to RBD storages
- pveceph addstorage/lsstorages/removestorage to add/list/remove storage
entries, per p
---
www/manager6/lxc/CreateWizard.js | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/www/manager6/lxc/CreateWizard.js b/www/manager6/lxc/CreateWizard.js
index c2f16a1a..86e710ee 100644
--- a/www/manager6/lxc/CreateWizard.js
+++ b/www/manager6/lxc/CreateWizard.js
@@ -148,7
Disables the quota checkbox for unprivileged containers in the creation wizard,
as well as when editing or adding mountpoints.
---
I figured I should actually fix *all* instances accordingly.
www/manager6/lxc/CreateWizard.js | 10 ++
www/manager6/lxc/ResourceEdit.js | 8 +++-
www/mana
Hello proxmox devs,
just a quick question: do you build / ship zfs on proxmox with
'--enable-debug'? If not, I guess it would be great to get more information
in case of a crash, right? Just asking.
Thanks
adds a single storage for the given pool and options, using
the current monitor information
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 46 ++
1 file changed, 46 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index a49fa9c
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 12 +++-
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 7690d7a1..918f9dd6 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -1780,6 +1780,8 @@ __PACKAGE__->register_met
to keep storage.cfg consistent with changes to the
pveceph-managed cluster. only storages with the 'pveceph'
flag are updated.
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 22 ++
1 file changed, 22 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
in
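As a rough sketch of what "only storages with the 'pveceph' flag are updated" boils down to: the filter below uses the real storage.cfg structure, but the helper itself and how the update is then persisted are assumptions, not the actual patch:

use strict;
use warnings;
use PVE::Storage;

# Sketch: collect only the RBD entries that carry the 'pveceph' flag, so
# user-created RBD storages are never touched when the cluster changes.
sub pveceph_managed_storeids {
    my $cfg = PVE::Storage::config();

    my @managed;
    foreach my $storeid (sort keys %{$cfg->{ids}}) {
        my $scfg = $cfg->{ids}->{$storeid};
        next if $scfg->{type} ne 'rbd';
        next if !$scfg->{pveceph};
        push @managed, $storeid;
    }
    return \@managed;
}

# updating e.g. the monhost of each returned entry and writing the
# configuration back is left out here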
and rename variable for consistency.
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 14 +-
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index d7877d8b..1134587b 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -202
lists storages configured for a pool, as well as their
configured monitors
Signed-off-by: Fabian Grünbichler
---
note: if the given pool name is not correct, this is currently not detected.
detecting it would make the call vastly more expensive, as we would need to
ask Ceph for a list of pools..
PVE/API2
Signed-off-by: Fabian Grünbichler
---
PVE/CLI/pveceph.pm | 20
1 file changed, 20 insertions(+)
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 00e45763..16df2584 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -167,6 +167,26 @@ our $cmddef = {
cr
introduce new API parameter 'add_storages'. if set, one
storage each is configured using the created pool:
- for containers using KRBD
- for VMs using librbd
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 27 ++-
1 file changed, 26 insertions(+), 1 deletion(-)
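For illustration, the two entries such an 'add_storages' call would end up with could look roughly like the following; the pool name, monitor list and comments are made up, only the krbd-for-containers vs. librbd-for-VMs split is from the message:

use strict;
use warnings;

# Sketch: one pool, two storage definitions. Keys match what an RBD entry
# in storage.cfg uses; the concrete values are placeholders.
my $pool    = 'mypool';              # hypothetical pool name
my $monhost = '10.0.0.1;10.0.0.2';   # hypothetical monitor list

my $ct_storage = {                   # for containers, mapped via KRBD
    type    => 'rbd',
    pool    => $pool,
    monhost => $monhost,
    content => 'rootdir',
    krbd    => 1,
};

my $vm_storage = {                   # for VMs, accessed via librbd
    type    => 'rbd',
    pool    => $pool,
    monhost => $monhost,
    content => 'images',
};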
Signed-off-by: Fabian Grünbichler
---
PVE/CLI/pveceph.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 422ac709..00e45763 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -166,6 +166,7 @@ our $cmddef = {
}],
createpool => [
Signed-off-by: Fabian Grünbichler
---
PVE/CLI/pveceph.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/CLI/pveceph.pm b/PVE/CLI/pveceph.pm
index 16df2584..4d58c966 100755
--- a/PVE/CLI/pveceph.pm
+++ b/PVE/CLI/pveceph.pm
@@ -187,6 +187,7 @@ our $cmddef = {
printf "%-${maxlen
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 24 +++-
1 file changed, 23 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 470ff216..d7877d8b 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -1998,7 +1998,13 @@ __PACKAGE__-
removes all storages configured for a pool, or a single
specified one.
Signed-off-by: Fabian Grünbichler
---
this one is where the {name} issue is most obvious..
PVE/API2/Ceph.pm | 46 ++
1 file changed, 46 insertions(+)
diff --git a/PVE/API2/Ceph.pm
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 18 ++
1 file changed, 18 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index a1f5bebe..e7df80a2 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -787,6 +787,24 @@ my $add_storage = sub {
}
};
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 41 +
1 file changed, 41 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index f7353884..a1f5bebe 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -12,6 +12,7 @@ use PVE::INotif
so it can be reused for modifying the storage definitions
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 72 +---
1 file changed, 37 insertions(+), 35 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index c4d6ffcb..d6bc
this patch series implements storage.cfg management for pveceph-managed ceph
clusters. the following is implemented:
- add new 'pveceph' flag to RBD storages
- pveceph addstorage/lsstorages/removestorage to add/list/remove storage
entries, per pool
- optionally adding/removing storages when creat
Signed-off-by: Fabian Grünbichler
---
PVE/CephTools.pm | 2 ++
1 file changed, 2 insertions(+)
diff --git a/PVE/CephTools.pm b/PVE/CephTools.pm
index 0c0d7c18..23f2f0f1 100644
--- a/PVE/CephTools.pm
+++ b/PVE/CephTools.pm
@@ -12,6 +12,7 @@ use PVE::Tools qw(extract_param run_command file_get_con
to allow differentiating between user-created RBD storage
entries, and those created and managed by pveceph.
Signed-off-by: Fabian Grünbichler
---
PVE/Storage/RBDPlugin.pm | 5 +
1 file changed, 5 insertions(+)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 250ee7c..
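To sketch what this looks like in the plugin: storage plugins declare new config keys in properties() and enable them per-plugin in options(); the description text below is an assumption, only the 'pveceph' name and its flag nature are from this series:

# excerpt-style sketch for PVE/Storage/RBDPlugin.pm, not the real diff
sub properties {
    return {
        # flag to tell pveceph-managed entries apart from user-created ones
        pveceph => {
            description => "Mark the storage as created and managed by pveceph.",
            type => 'boolean',
        },
        # ... existing rbd properties (pool, krbd, ...) stay as they are ...
    };
}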
$storeid must already be validated via the API/Config
parser
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 12
1 file changed, 12 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 7aee4b66..a49fa9c7 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
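In other words, the storage id arrives through a schema-validated parameter, so the handler never sees a malformed value; a minimal sketch of such a parameter declaration (the surrounding register_method call is omitted):

use PVE::JSONSchema qw(get_standard_option);

# Sketch: declaring the parameter via the standard 'pve-storage-id' option
# means the API layer validates it before the handler code runs.
my $parameters = {
    additionalProperties => 0,
    properties => {
        storage => get_standard_option('pve-storage-id'),
    },
};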
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 10 ++
1 file changed, 10 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index e7df80a2..7aee4b66 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -805,6 +805,16 @@ my $get_storages = sub {
return $res;
modified version of the one we use in the RBD storage
plugin, but input format is slightly different here.
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Ceph.pm | 13 +
1 file changed, 13 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index d6bc4c6a..f7353884 100644
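Presumably this is about turning the current monitor information into the monhost string for the generated storage entries; a minimal sketch under that assumption, with an invented input structure:

use strict;
use warnings;

# Sketch: join monitor addresses into the address list an RBD storage
# entry expects. The input format is made up for illustration; addresses
# in "mon dump" style carry a trailing "/nonce" that gets stripped.
sub monhost_from_monitors {
    my ($mons) = @_;    # e.g. [ { addr => '10.0.0.1:6789/0' }, ... ]

    my @addrs = map { $_->{addr} } @$mons;
    s|/\d+$|| for @addrs;   # drop the "/0" nonce suffix

    return join(';', @addrs);
}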
otherwise, pveceph cannot update its storage entries.
Signed-off-by: Fabian Grünbichler
---
PVE/Storage/RBDPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 8eb8d46..1a32663 100644
--- a/PVE/Storage/RBDPlugin.p
in luminous, the output of the status/health has changed (again),
so we have to access the correct properties
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Status.js | 10 --
www/manager6/ceph/StatusDetail.js | 8 +---
2 files changed, 13 insertions(+), 5 deletions(-)
d
this adds the summary as the first line,
and the long warning message as monospaced text (like on the console)
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/Status.js | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.
---
src/PVE/JSONSchema.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/src/PVE/JSONSchema.pm b/src/PVE/JSONSchema.pm
index 3295599..92d60b9 100644
--- a/src/PVE/JSONSchema.pm
+++ b/src/PVE/JSONSchema.pm
@@ -163,6 +163,17 @@ sub pve_verify_vmid {
return $vmid;
}
+registe
We allow uppercase characters in snapshot names.
pvesm import and export must allow uppercase characters too.
[PATCH V2 pve-storage]
Use JSON schema instead of hardcoding.
[PATCH V3 pve-storage]
Correct typo.
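As a rough illustration of "Use JSON schema instead of hardcoding": register a named format once and reference it from the pvesm parameter schema; the format name and the exact character class below are assumptions, the only fixed point is that uppercase letters must be accepted:

use strict;
use warnings;
use PVE::JSONSchema;

# Sketch: a reusable verifier instead of a pattern hardcoded in pvesm.
# Name and regex are illustrative; the key point is the [a-zA-Z] class.
PVE::JSONSchema::register_format('pve-snapshot-name', sub {
    my ($name, $noerr) = @_;

    if ($name !~ m/^[a-zA-Z][a-zA-Z0-9_\-]*$/) {
        return undef if $noerr;
        die "snapshot name '$name' contains illegal characters\n";
    }
    return $name;
});

# a parameter can then declare: format => 'pve-snapshot-name'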
---
PVE/CLI/pvesm.pm | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index 9455595..12e68fc 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -183,8 +183,7 @@ __PACKAGE__->register_method ({
base => {
d