Hi,
in /usr/share/perl5/PVE/Firewall.pm find
if ($ipfilter_ipset) {
    ruleset_addrule($ruleset, $chain, "-m set ! --match-set $ipfilter_ipset src -j DROP");
}
and try to add
if ($ipfilter_ipset) {
ruleset_addrule($ruleset, $chain, "-m set ! --m
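A rough, untested sketch of that idea (log non-matching packets right before the DROP, shown here with a plain iptables LOG target rather than PVE's NFLOG/pvefw-logger path), reusing the same three-argument ruleset_addrule call:

if ($ipfilter_ipset) {
    # log the packet first so ipfilter drops become visible, then drop it
    ruleset_addrule($ruleset, $chain, "-m set ! --match-set $ipfilter_ipset src"
        . " -j LOG --log-prefix \"ipfilter drop: \" --log-level 4");
    ruleset_addrule($ruleset, $chain, "-m set ! --match-set $ipfilter_ipset src -j DROP");
}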
From: Thomas Lamprecht
If on bootup one of our VMs is locked by a backup, we can safely
assume that this backup job does not run anymore, so the lock
has no reason to exist anymore and only hinders the uptime of services.
As at this time we (the node) have quorum, we may safely assume
that we have a c
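A minimal, hypothetical sketch of the idea (load_config/write_config are the standard PVE::QemuConfig helpers; the rest is illustrative, not the actual patch):

# hypothetical sketch: clear a stale 'backup' lock before starting a guest at boot
my $conf = PVE::QemuConfig->load_config($vmid);
if (($conf->{lock} // '') eq 'backup') {
    delete $conf->{lock};
    PVE::QemuConfig->write_config($vmid, $conf);
}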
From: Thomas Lamprecht
small refactoring in get_filtered_vmlist: save a VM's config in its
own subhash to avoid collisions with other data which we want to save
in the vmid list; for now this is only `type`, but in the next patch
I want to also save the class
Signed-off-by: Thomas Lamprecht
---
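A rough illustration of the resulting structure (field names are only illustrative, not taken from the patch):

# before: the config and other per-VM data shared one flat hash;
# after: the config lives in its own subhash next to the other fields
$vmlist->{$vmid} = {
    type     => $type,   # 'qemu' or 'lxc'
    vmconfig => $conf,   # the guest configuration, kept separate to avoid key collisions
};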
From: Fabian Grünbichler
fix the fix for #1024
Signed-off-by: Fabian Grünbichler
---
PVE/API2/Nodes.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index eb1ef69b..8d828436 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@
> This change would allow selecting the iothread option on VM creation if
> the user wants that.
I am quite unsure about that - this is usually just a waste of resources ...
Hi there,
today I had to find out the hard way that the Firewall option 'IP filter'
(at least in PVE 5.0 for containers) drops packets silently without any
logging at all, even if log_level_* is set.
If I set the log_level, I'd expect to see _all_ dropped packets in the
log. (This gave me a hel
Signed-off-by: Dominik Csapak
---
www/manager6/storage/RBDEdit.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/storage/RBDEdit.js b/www/manager6/storage/RBDEdit.js
index 55ac8541..f8ddef90 100644
--- a/www/manager6/storage/RBDEdit.js
+++ b/www/manager6/storage/
this is for adding a pve-managed ceph rbd storage, so that the user
just has to select the pool, and does not need to enter the monitor
hosts or copy the keyring
the old "RBD" is renamed to "RBD (external)"
Signed-off-by: Dominik Csapak
---
www/manager6/Utils.js | 9 --
www/mana
this allows us to give the user a list of pve managed ceph pools
Signed-off-by: Dominik Csapak
---
www/manager6/Makefile | 1 +
www/manager6/form/CephPoolSelector.js | 42 +++
2 files changed, 43 insertions(+)
create mode 100644 www/manager6/form
Signed-off-by: Dominik Csapak
---
PVE/API2/Ceph.pm | 2 --
1 file changed, 2 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 9fef0487..e9211325 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -243,8 +243,6 @@ __PACKAGE__->register_method ({
if ($param->{jour
this series does three things:
* do not automatically create a wal device anymore
* make bluestore the default in the gui
* add the option to add pve-managed ceph storage in the gui
Dominik Csapak (5):
ceph: do not automatically use wal if only journal is given
add CephPoolSelector
add new "RB
Signed-off-by: Dominik Csapak
---
www/manager6/ceph/OSD.js | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
index 7abda7f9..490c9789 100644
--- a/www/manager6/ceph/OSD.js
+++ b/www/manager6/ceph/OSD.js
@@ -116,7 +116,8 @@ Ex
Hi
I would like to change the default scsi controller we use in the Wizard
from virtio-scsi-pci to virtio-scsi-single in a forthcoming patch series
which touches the HD edit window.
The difference between the two types is that with virtio-scsi-single
each new disk is added on its own controller on
Signed-off-by: Fabian Grünbichler
---
identical to v2
www/manager6/ceph/Pool.js | 3 +++
1 file changed, 3 insertions(+)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index fefddc9d..9f37a745 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -2,6 +2,
modeled after the mechanism used in window/Edit.js
Signed-off-by: Fabian Grünbichler
---
changed in v3:
- close view before showing alert in failure case
- drop trailing , to satisfy jslint
new in v2
www/manager6/window/SafeDestroy.js | 26 --
1 file changed, 24 insertio
From: Dominik Csapak
automatically remove the pveceph managed storages when deleting the
pool on the gui
this is ok because we cannot delete the pool anyway when we have images
on it, and recreating them when creating a pool is now trivial
Signed-off-by: Dominik Csapak
---
identical to Dominik
Signed-off-by: Fabian Grünbichler
---
new in v3
www/manager6/ceph/Pool.js | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index 9f37a745..f633f43e 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.j
From: Dominik Csapak
we sometimes want to give the api call a parameter;
with this, we don't have to encode it into the url
ourselves every time, but can just pass a 'params' object
Signed-off-by: Dominik Csapak
---
identical to Dominik's v1
www/manager6/window/SafeDestroy.js | 13 +++--
1
in order to get task log entries and easily accessible
task/error logs.
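A minimal sketch of the general pattern (assuming the usual PVE::RPCEnvironment fork_worker helper; the task type and surrounding variables are illustrative, not the actual diff):

use PVE::RPCEnvironment;

my $rpcenv = PVE::RPCEnvironment::get();
my $authuser = $rpcenv->get_user();

my $worker = sub {
    my $upid = shift;
    # long-running work (e.g. creating the pool) goes here; everything it
    # prints or dies with ends up in the task log
};

return $rpcenv->fork_worker('cephcreatepool', $pool, $authuser, $worker);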
Signed-off-by: Fabian Grünbichler
---
rebased for v3/v4
new in v2
note: git show -U1 -w is recommended to view this ;)
PVE/API2/Ceph.pm | 148 ---
1 file changed, 76 inse
From: Dominik Csapak
to be able to automatically generate the ceph storages when creating a
pool
Signed-off-by: Dominik Csapak
Signed-off-by: Fabian Grünbichler
---
v3: changed text to 'Add Storages'
v1/2: identical to Dominik's v1
www/manager6/ceph/Pool.js | 5 +
1 file changed, 5 inse
vdisk_list can potentially take very long, and we don't want
the API request to time out.
Signed-off-by: Fabian Grünbichler
---
new in v3
PVE/API2/Ceph.pm | 32 +---
1 file changed, 17 insertions(+), 15 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
i
only storages which don't have the 'monhost' option set are removed
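A rough sketch of that filter (assuming the usual PVE::Storage::config() layout; variable names are illustrative):

my $cfg = PVE::Storage::config();
my $to_remove = {};
foreach my $storeid (keys %{$cfg->{ids}}) {
    my $scfg = $cfg->{ids}->{$storeid};
    next if $scfg->{type} ne 'rbd';            # only rbd storages
    next if ($scfg->{pool} // '') ne $pool;    # only those using this pool
    next if defined($scfg->{monhost});         # externally managed, leave it alone
    $to_remove->{$storeid} = $scfg;
}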
Signed-off-by: Fabian Grünbichler
---
changes since v3:
- use monhost instead of pveceph
changes since v2:
- adapted for $get_storages changes
- inlined $remove_storage
changes since v1:
- die if any of the storages could not b
Signed-off-by: Fabian Grünbichler
---
changes since v2:
- merged and rebased patches 15 and 17
PVE/API2/Ceph.pm | 22 +-
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 0776f3c7..7b1f5293 100644
--- a/PVE/API2/Ceph.pm
+
Signed-off-by: Fabian Grünbichler
---
new in v2, unchanged in v3/v4
www/manager6/Utils.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 0b850977..fecf9aff 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -612,6 +612,7 @@
introduce new API parameter 'add_storages'. if set, one
storage each is configured using the created pool (roughly as
sketched below):
- for containers using KRBD
- for VMs using librbd
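A rough sketch of the two entries this sets up for a new pool (storage names, options and the helper are illustrative, not the actual patch):

# hypothetical helper that creates one rbd storage entry for $pool
my $add_storage = sub {
    my ($pool, $storeid, $opts) = @_;
    # ... calls the storage API to add an 'rbd' entry using $opts ...
};

my $storages = {
    "${pool}_ct" => { content => 'rootdir', krbd => 1 },  # containers via KRBD
    "${pool}_vm" => { content => 'images' },               # VMs via librbd
};

foreach my $storeid (sort keys %$storages) {
    $add_storage->($pool, $storeid, $storages->{$storeid});
}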
Signed-off-by: Fabian Grünbichler
---
rebased for v3
changes since v1:
- drop monitor info retrieval (no longer needed)
- die if any of th
Signed-off-by: Fabian Grünbichler
---
changes since v3:
- drop pveceph parameter
changes since v2:
- drop keyring handling
changes since v1:
- drop $monhash parameter
- don't generate and set monhost storage parameter
PVE/API2/Ceph.pm | 15 +++
1 file changed, 15 insertions(+)
dif
Signed-off-by: Fabian Grünbichler
---
re-introduced in v3
PVE/API2/Ceph.pm | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 53483dde..171a6131 100644
--- a/PVE/API2/Ceph.pm
+++ b/PVE/API2/Ceph.pm
@@ -1846,10 +1846,8 @@ __PACKAGE_
add /etc/pve/ceph.conf to commands / option strings instead
of the monitor list provided via the 'monhost' option.
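A minimal sketch of that fallback (helper name and structure are illustrative; the rbd CLI accepts -m for explicit monitors and -c for a config file):

my $pveceph_config = '/etc/pve/ceph.conf';

# hypothetical helper: pick the connection options for a given storage config
sub rbd_connect_option {
    my ($scfg) = @_;
    return ('-m', $scfg->{monhost}) if defined($scfg->{monhost});  # external cluster
    return ('-c', $pveceph_config);                                # pveceph-managed cluster
}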
Signed-off-by: Fabian Grünbichler
---
changes since v3:
- adapt to pveceph flag no longer existing
PVE/Storage/RBDPlugin.pm | 40 +++-
1 file ch
Signed-off-by: Fabian Grünbichler
---
changes since v2:
- return all rbd storages of the pool, not only pveceph ones
PVE/API2/Ceph.pm | 16
1 file changed, 16 insertions(+)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
index 52b425f6..0776f3c7 100644
--- a/PVE/API2/Ceph.pm
+++ b/
Signed-off-by: Fabian Grünbichler
---
changes since v3:
- adapt to pveceph flag no longer existing
PVE/API2/Storage/Config.pm | 24 ++--
1 file changed, 22 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Storage/Config.pm b/PVE/API2/Storage/Config.pm
index 4668af6..6c9b3
this patch series implements storage.cfg management for pveceph-managed ceph
clusters.
the following is implemented:
- allow rbd storages without a hard-coded monitor list, using
/etc/pve/ceph.conf instead for pveceph-managed clusters and their storages
- optionally adding/removing storages whe
these were line by line identical except for the binary path
Signed-off-by: Fabian Grünbichler
---
PVE/Storage/RBDPlugin.pm | 38 +-
1 file changed, 9 insertions(+), 29 deletions(-)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 30f769
to allow differentiating between user-created external RBD storage
entries (WITH monhost), and those created and managed by pveceph
(without).
making monhost non-fixed allows easily opting into the managed behaviour via
'pvesm set STORAGE -delete monhost', but is also helpful for external clusters
>> yes, let's keep the current approach using iso images.
for lxc, lxd seems to use the cloud-init NoCloud provider:
http://lxd.readthedocs.io/en/latest/cloud-init/#custom-network-configuration-with-cloud-init
so it's a simple file written in /var/lib/cloud/seed/nocloud-net/ on the rootfs.
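A hypothetical sketch of writing such a NoCloud seed into a container rootfs (paths follow the cloud-init NoCloud layout; $rootfs, $vmid and the content variables are illustrative):

use File::Path qw(make_path);
use PVE::Tools;

my $seed_dir = "$rootfs/var/lib/cloud/seed/nocloud-net";
make_path($seed_dir);
PVE::Tools::file_set_contents("$seed_dir/meta-data", "instance-id: ct-$vmid\n");
PVE::Tools::file_set_contents("$seed_dir/user-data", $user_data);
PVE::Tools::file_set_contents("$seed_dir/network-config", $network_config_yaml);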
We only use it to send commands faster, like resume
Signed-off-by: Alexandre Derumier
---
PVE/QemuMigrate.pm | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index fc847cc..5e18520 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrat
applied
On Mon, Aug 28, 2017 at 04:12:34PM +0200, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler
> ---
> this seems in line with our other VMA modifications, and I get the following
> stacktrace without:
>
> Thread 1 (Thread 0x7fa27cffd700 (LWP 29427)):
> #0 blk_bs (blk=0x0) at
On Tue, Sep 05, 2017 at 09:32:57AM +0200, Thomas Lamprecht wrote:
> On 08/31/2017 11:38 AM, Fabian Grünbichler wrote:
> > only storages which have the 'pveceph' flag set are removed
> >
> > Signed-off-by: Fabian Grünbichler
> > ---
> > changes since v2:
> > - adapted for $get_storages changes
> >
any comment?
any comment on this patch?
On 08/31/2017 11:38 AM, Fabian Grünbichler wrote:
this patch series implements storage.cfg management for pveceph-managed ceph
clusters.
the following is implemented:
- add new 'pveceph' flag to RBD storages, using /etc/pve/ceph.conf instead of
a hard-coded monitor list
- optionally adding/r
subject has a doubled "add": "add add ..."
I only saw it on the tenth time reading this
On 08/31/2017 11:38 AM, Fabian Grünbichler wrote:
From: Dominik Csapak
to be able to automatically generate the ceph storages when creating a
pool
Signed-off-by: Dominik Csapak
Signed-off-by: Fabian Grünbichler
On 08/31/2017 11:38 AM, Fabian Grünbichler wrote:
only storages which have the 'pveceph' flag set are removed
Signed-off-by: Fabian Grünbichler
---
changes since v2:
- adapted for $get_storages changes
- inlined $remove_storage
changes since v1:
- die if any of the storages could not be remove
On 08/31/2017 11:38 AM, Fabian Grünbichler wrote:
to ensure the XOR-like connection between monhost and pveceph
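A rough sketch of such a check (modeled on the $check_monhost_pveceph helper mentioned in the notes below; the exact messages are illustrative):

my $check_monhost_pveceph = sub {
    my ($scfg) = @_;
    die "either 'monhost' or the 'pveceph' flag must be set\n"
        if !defined($scfg->{monhost}) && !$scfg->{pveceph};
    die "'monhost' and the 'pveceph' flag are mutually exclusive\n"
        if defined($scfg->{monhost}) && $scfg->{pveceph};
};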
Signed-off-by: Fabian Grünbichler
---
new in v3, based on $check_monhost_pveceph from patch 4
PVE/Storage/RBDPlugin.pm | 12
1 file changed, 12 insertions(+)
diff --