I am not sure if JSON is a good idea here. We use colon-separated lists for
everything else, so I would prefer that. It is easier to parse inside C, which
is important when you want to generate RRD databases from inside pmxcfs ...
Also, consider that it is quite hard to change that format later, b
On 6/24/19 1:56 PM, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler
> ---
> Notes:
> leftout on purpose:
> - checking of sources.list (no parser, lots of false negatives, needs to
> happen after upgrade to corosync 3)
>
> still missing for PVE 6.x / post-upgrade ver
details in commit
Alexandre Derumier (1):
add new status sub and move code from test
PVE/Network/SDN.pm | 59 +++--
test/statuscheck.pl | 37 +---
2 files changed, 58 insertions(+), 38 deletions(-)
--
2.20.1
- based on my last patch series on pve-manager
- need my last patch for pve-network
Alexandre Derumier (2):
pvestatd : broadcast sdn transportzone status
api : cluster resources : add sdn
PVE/API2/Cluster.pm | 28 ++--
PVE/Service/pvestatd.pm | 22 +++
Signed-off-by: Alexandre Derumier
---
PVE/API2/Cluster.pm | 28 ++--
1 file changed, 26 insertions(+), 2 deletions(-)
diff --git a/PVE/API2/Cluster.pm b/PVE/API2/Cluster.pm
index bfeaf784..fd083359 100644
--- a/PVE/API2/Cluster.pm
+++ b/PVE/API2/Cluster.pm
@@ -182,7 +182,
Signed-off-by: Alexandre Derumier
---
PVE/Service/pvestatd.pm | 22 ++
1 file changed, 22 insertions(+)
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index e138b2e8..bad1b73d 100755
--- a/PVE/Service/pvestatd.pm
+++ b/PVE/Service/pvestatd.pm
@@ -37,6 +37,12 @
old status sub was renamed ifquery_check
also check if local config exists or if local config is too old.
(fixme: compare mtime, maybe we could use some kind of version for this?)
we can have 4 status codes:
- pending : local config is absent but sdn.cfg exists
- unknown : local config is too old,
> On June 24, 2019 20:52, Thomas Lamprecht wrote:
>
>
> On 6/24/19 1:56 PM, Fabian Grünbichler wrote:
> > Signed-off-by: Fabian Grünbichler
> > ---
> > Notes:
> > leftout on purpose:
> > - checking of sources.list (no parser, lots of false negatives, needs
> > to happen a
On 6/19/19 10:23 AM, Stefan Reiter wrote:
> Kernels 4.18+ (4.17+ for evmcs) support new Hyper-V enlightenments for
> Windows KVM guests. QEMU supports these since 3.0 and 3.1 respectively.
> tlbflush and ipi improve performance on overcommitted systems, evmcs
> improves nested virtualization.
>
>
>>You need to know that for now the data is always node dependent,
>>so currently there's no cluster wide kv-status where one can easily
>>save things node-independently, or is the SDN status node dependent
>>anyway?
Yes, I need to have it for each node.
(It's the status of local network conf
Signed-off-by: Christian Ebner
---
PVE/API2/Backup.pm | 108 +
1 file changed, 59 insertions(+), 49 deletions(-)
diff --git a/PVE/API2/Backup.pm b/PVE/API2/Backup.pm
index 3dfe8a0d..141402b1 100644
--- a/PVE/API2/Backup.pm
+++ b/PVE/API2/Backup
On 6/24/19 10:33 AM, Stefan Reiter wrote:
> The "guest-shutdown" guest agent call is blocking for some reason, so if
> it fails (e.g. agent not installed on guest) only the default timeout of
> 10 minutes (see QMPClient.pm, sub cmd) would apply.
>
> With this change, if (and only if) a timeout is
Hi,
On 6/24/19 4:53 PM, Alexandre DERUMIER wrote:
> for SDN, I'm looking to use broadcast_node_kv, to send the status of
> transportzones (not vnets directly, as it could be really huge with a lot of
> vnets).
>
You need to know that for now the data is always node dependent,
so currently there's no
Hi,
for SDN, I'm looking to use broadcast_node_kv, to send the status of
transportzones (not vnets directly, as it could be really huge with a lot of
vnets).
I would like to know if it's possible to broadcast a hash instead of a string.
(the code seems to refuse refs)
something like "kv/sdn":$myhash->{
> On June 24, 2019 3:27 PM Thomas Lamprecht wrote:
>
>
> On 6/24/19 2:37 PM, Stefan Reiter wrote:
> > Migration tests ran OK, works fine in both directions (old <-> new), as
> > long as QEMU version stays the same and both systems have a kernel
> > supporting the hv_tlbflush feature.
>
> sorr
On 6/24/19 2:37 PM, Stefan Reiter wrote:
> Migration tests ran OK, works fine in both directions (old <-> new), as
> long as QEMU version stays the same and both systems have a kernel
> supporting the hv_tlbflush feature.
sorry, can you clarify what you mean by "as long as QEMU version stays
the
While we can nowadays handle package upgrades relating to the cluster stack
much better, it still happens that a pve-cluster upgrade can produce a
false-positive 401 (auth failure) code for a currently valid ticket, e.g.,
because a pmxcfs lock was requested but the pmxcfs was currently not mounted d
On 6/24/19 2:16 PM, Thomas Lamprecht wrote:
> Signed-off-by: Thomas Lamprecht
> ---
> ...estart-pmxcfs-and-trigger-pve-api-up.patch | 58 +++
> patches/series                               | 1 +
> 2 files changed, 59 insertions(+)
> create mode 100644
> patches/0003-PVE-d-pos
> On June 21, 2019 10:53 AM Thomas Lamprecht wrote:
>
>
> On 6/19/19 at 10:23 AM, Stefan Reiter wrote:
> > Kernels 4.18+ (4.17+ for evmcs) support new Hyper-V enlightenments for
> > Windows KVM guests. QEMU supports these since 3.0 and 3.1 respectively.
> > tlbflush and ipi improve performanc
we cannot bump the part before ~, to ensure the upgrade to Debian pulls in
the version built for Buster (it's 1.0.5-1 there; the tilde makes ours
always the lesser choice), so add a +2 which sorts above the older
"1.0.5-1~bpo9" but still below 1.0.5-1
Signed-off-by: Thomas Lamprecht
Signed-off-by: Thomas Lamprecht
---
...estart-pmxcfs-and-trigger-pve-api-up.patch | 58 +++
patches/series                               | 1 +
2 files changed, 59 insertions(+)
create mode 100644
patches/0003-PVE-d-postinst-restart-pmxcfs-and-trigger-pve-api-up.patch
diff --
Signed-off-by: Fabian Grünbichler
---
Notes:
leftout on purpose:
- checking of sources.list (no parser, lots of false negatives, needs to
happen after upgrade to corosync 3)
still missing for PVE 6.x / post-upgrade version:
- modification of checked versions
- ceph-volume
to make 'ceph versions' and 'ceph XX versions' accessible.
Signed-off-by: Fabian Grünbichler
---
PVE/Ceph/Tools.pm | 8
1 file changed, 8 insertions(+)
diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
index 617aba66..319e2ddd 100644
--- a/PVE/Ceph/Tools.pm
+++ b/PVE/Ceph/Tools.pm
@@
Signed-off-by: Alexandre Derumier
---
PVE/API2/Cluster.pm | 17 +
1 file changed, 17 insertions(+)
diff --git a/PVE/API2/Cluster.pm b/PVE/API2/Cluster.pm
index 8af5f3f0..bfeaf784 100644
--- a/PVE/API2/Cluster.pm
+++ b/PVE/API2/Cluster.pm
@@ -24,6 +24,12 @@ use PVE::API2::Firewall
need my last patch series for pve-network
Changelog v2:
- dynamically include SDN
Alexandre Derumier (2):
api2 : cluster: add sdn api endpoint
api2: network reload : generate local sdn config
PVE/API2/Cluster.pm | 17 +
PVE/API2/Network.pm | 11 +++
2 files changed,
Signed-off-by: Alexandre Derumier
---
PVE/API2/Network.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/API2/Network.pm b/PVE/API2/Network.pm
index 00337fe2..58adfbfb 100644
--- a/PVE/API2/Network.pm
+++ b/PVE/API2/Network.pm
@@ -16,6 +16,12 @@ use IO::File;
use base qw(
The "guest-shutdown" guest agent call is blocking for some reason, so if
it fails (e.g. agent not installed on guest) only the default timeout of
10 minutes (see QMPClient.pm, sub cmd) would apply.
With this change, if (and only if) a timeout is specified via CLI/API,
it is used instead. In case i