On 27.04.22 17:04, Alexandre Derumier wrote:
> Hi,
>
> 2 users have reported problems when vlan-aware bridges are present.
> It seems that frr is not parsing netlink messages correctly, because
> frr currently uses a static buffer size and netlink messages can be
> too big when a lot of VLANs are defined.
>
On 26.04.22 12:13, Dominik Csapak wrote:
> this series improves the behaviour of the file-restore when some mount
> operations take longer than the 30-second pveproxy timeout, and improves
> the startup speed of the restore VM
>
> we do this by moving the disk init into the background of the daemo
On 11.03.22 20:05, Stoiko Ivanov wrote:
> to iterate over all configured ESPs and refresh the boot-loader
> installations.
>
> the init function was changed to not run refresh directly - to prevent
> refresh from running once for each ESP
>
> currently reinit does not imply refresh
>
> Signed-of
On 11.03.22 20:05, Stoiko Ivanov wrote:
> was forgotten during the general renaming of pve-efiboot ->
> proxmox-boot.
>
> follows commit 8c0a22adfe15dc00cf2194647bb254201d8d187b
>
> Signed-off-by: Stoiko Ivanov
> ---
> debian/pve-kernel-helper.postinst | 4
> proxmox-boot/functions
On 28.03.22 15:07, Matthias Heiserer wrote:
> The same code is used once in widget toolkit and twice in PVE already,
> so it makes sense to add it as a separate button.
>
> Signed-off-by: Matthias Heiserer
> ---
> changes from v1:
> move into separate class
> rename vars to something a little bit
From: Thomas Lamprecht
Signed-off-by: Thomas Lamprecht
---
www/manager6/dc/Backup.js | 49 +++
1 file changed, 34 insertions(+), 15 deletions(-)
diff --git a/www/manager6/dc/Backup.js b/www/manager6/dc/Backup.js
index 9b129266..df4a70fd 100644
--- a/www/mana
Signed-off-by: Fabian Ebner
---
Technically this should come earlier, but I didn't want to write a
second version for before Thomas's patch and then rebase afterwards.
www/manager6/dc/Backup.js | 12
1 file changed, 12 insertions(+)
diff --git a/www/manager6/dc/Backup.js b/www/manager6/dc/Backup
Add a tooltip to the comment field, to better distinguish it from the
notes-template.
Signed-off-by: Fabian Ebner
---
www/manager6/dc/Backup.js | 18 ++
1 file changed, 18 insertions(+)
diff --git a/www/manager6/dc/Backup.js b/www/manager6/dc/Backup.js
index 2b892c6f..9b129266 1
Introduce 'protected' to automatically mark a backup as protected
upon completion, and 'notes-template' to generate notes from a
template string with certain variables.
Add both to the UI for manual backups and add 'notes-template' to the
UI for backup jobs.
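As an illustration of the notes-template idea, here is a small Perl sketch of such a template expansion. The variable names ({{guestname}}, {{node}}, {{vmid}}) and the escaping rules are assumptions for this sketch, not necessarily what the patch implements.

use strict;
use warnings;

# Hypothetical expansion of a notes-template string; variable names and
# escaping below are illustrative assumptions.
sub expand_notes_template {
    my ($template, $vars) = @_;

    # un-escape \\ and \n in a single left-to-right pass
    $template =~ s/\\(\\|n)/$1 eq 'n' ? "\n" : '\\'/eg;

    # replace {{variable}} tokens, failing loudly on unknown ones
    $template =~ s/\{\{([a-z]+)\}\}/
        defined($vars->{$1}) ? $vars->{$1} : die "unknown variable '$1'\n"
    /eg;

    return $template;
}

print expand_notes_template(
    "{{guestname}} on {{node}} (VMID {{vmid}})\\nprotected backup",
    { guestname => 'web01', node => 'pve1', vmid => 100 },
), "\n";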
Changes from v3:
* dropped already
Signed-off-by: Fabian Ebner
---
www/manager6/Utils.js | 16
1 file changed, 16 insertions(+)
diff --git a/www/manager6/Utils.js b/www/manager6/Utils.js
index 4611ff0f..08778f5c 100644
--- a/www/manager6/Utils.js
+++ b/www/manager6/Utils.js
@@ -1784,6 +1784,22 @@ Ext.define('PVE.
Signed-off-by: Fabian Ebner
---
PVE/VZDump.pm | 22 +++---
1 file changed, 15 insertions(+), 7 deletions(-)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index 5f78746d..fcbd87d5 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -80,16 +80,20 @@ my $generate_notes = sub {
vm
Setting a width, so the text area can fill the horizontal space.
Suggested-by: Thomas Lamprecht
Signed-off-by: Fabian Ebner
---
www/manager6/window/Backup.js | 34 +-
1 file changed, 33 insertions(+), 1 deletion(-)
diff --git a/www/manager6/window/Backup.js b/ww
Signed-off-by: Fabian Ebner
---
Changes from v3:
* also escape \\ and \n
PVE/VZDump.pm | 30 ++
1 file changed, 30 insertions(+)
diff --git a/PVE/VZDump.pm b/PVE/VZDump.pm
index de29ca60..5f78746d 100644
--- a/PVE/VZDump.pm
+++ b/PVE/VZDump.pm
@@ -70,6 +70,29 @@
---
src/PVE/HA/Balancer/Nodes.pm| 36 +++--
src/PVE/HA/Balancer/Services.pm | 40 +++--
src/PVE/HA/Manager.pm | 24
3 files changed, 67 insertions(+), 33 deletions(-)
diff --git a/src/PVE/HA/Balancer/Nodes.pm b
---
src/test/test-basic0-balance-affinity/README | 1 +
src/test/test-basic0-balance-affinity/cmdlist | 3 ++
.../datacenter.cfg| 5 ++
.../hardware_status | 5 ++
.../test-basic0-balance-affinity/log.expect | 40 ++
.../mana
---
debian/pve-ha-manager.install | 1 +
src/PVE/HA/Config.pm | 22 +
src/PVE/HA/Env.pm | 6 +++
src/PVE/HA/Env/PVE2.pm| 6 +++
src/PVE/HA/Makefile | 2 +-
src/PVE/HA/Manager.pm | 1 +
src/PVE/HA/ResourcesGroups.pm | 90 ++
---
src/PVE/HA/Sim/Hardware.pm | 150 +
1 file changed, 150 insertions(+)
diff --git a/src/PVE/HA/Sim/Hardware.pm b/src/PVE/HA/Sim/Hardware.pm
index 96a4064..3c3622b 100644
--- a/src/PVE/HA/Sim/Hardware.pm
+++ b/src/PVE/HA/Sim/Hardware.pm
@@ -110,6 +110,46 @@ s
For offline VMs in recovery state, we look at the RRD average over the last 20
minutes (excluding spikes via the 90th percentile).
For online VMs, we use the last streamed RRD value.
We still need to implement a method to compute the last-minute average for CPU
usage without needing to re-read the RRD file.
For other metrics, we can u
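For illustration, a small Perl sketch of such a spike-excluding average over an in-memory sample list; the 90th-percentile cut-off is taken from the text above, everything else is a simplification (no RRD access).

use strict;
use warnings;

# Average a list of samples (e.g. 20 minutes of CPU values) while excluding
# spikes above the 90th percentile.
sub percentile_trimmed_avg {
    my ($samples, $percentile) = @_;
    $percentile //= 0.90;

    my @sorted = sort { $a <=> $b } grep { defined } @$samples;
    return undef if !@sorted;

    # index of the percentile cut-off; values above it are treated as spikes
    my $cut = int($percentile * $#sorted);
    my @kept = @sorted[0 .. $cut];

    my $sum = 0;
    $sum += $_ for @kept;
    return $sum / scalar(@kept);
}

my @cpu = (0.10, 0.12, 0.11, 0.95, 0.13, 0.12, 0.10, 0.11, 0.12, 0.14);
printf "trimmed avg: %.3f\n", percentile_trimmed_avg(\@cpu);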
Use a new method to find the destination node for the service recovery.
First, we order services by their Topsis score.
Then we try to find the best target node.
FILTERING
-
1)
We check whether the node is able to start the VM (see the sketch below):
- the host has enough cores
- the host has enough memory
- storage availability
- not
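A minimal sketch of that filtering predicate, with made-up field names and storage availability reduced to a simple hash lookup:

use strict;
use warnings;

# Hypothetical filter: can $node start $vm? Checks cores, memory and storage
# availability only; field names are assumptions for illustration.
sub node_can_start_vm {
    my ($node, $vm) = @_;

    return 0 if $vm->{cores}  > $node->{total_cores};
    return 0 if $vm->{maxmem} > $node->{free_mem};

    # every storage the guest uses must be available on the target node
    for my $storeid (@{ $vm->{storages} }) {
        return 0 if !$node->{storages}->{$storeid};
    }

    return 1;
}

my $node = {
    total_cores => 16,
    free_mem    => 32 * 1024**3,
    storages    => { 'local-lvm' => 1, 'cephpool' => 1 },
};
my $vm = { cores => 4, maxmem => 8 * 1024**3, storages => ['cephpool'] };

print node_can_start_vm($node, $vm) ? "candidate\n" : "filtered out\n";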
This is a VM-centric load balancer with some inspiration from the
VMware DRS 2.0 scheduler.
https://blogs.vmware.com/vsphere/2020/05/vsphere-7-a-closer-look-at-the-vm-drs-score.html
This looks at badly performing VMs and gives each bad VM a cluster Topsis
score.
For each VM (CTs are skipped as we can't
Hi,
This is a big rework of my previous patch series.
It is currently a work in progress, so you don't need to review it for now
(but if you want to, comments are still welcome ;)
Biggest changes:
- This now uses a new ranking algorithm: AHP-Topsis (details are in the patch).
- Added a balancer. The main
Topsis:
https://www.youtube.com/watch?v=kfcN7MuYVeI
AHP:
https://www.youtube.com/watch?v=J4T70o8gjlk
AHP-Topsis implementation in VM balancing:
https://arxiv.org/pdf/1002.3329.pdf
https://meral.edu.mm/record/4285/files/9069.pdf
Topsis (Technique for Order Preference by Similarity to Ideal Solution)
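For readers unfamiliar with the method, a compact, self-contained Perl sketch of plain Topsis scoring follows (equal weights, all criteria treated as "lower is better"; the AHP-derived weights used in the series are not reproduced here).

use strict;
use warnings;
use List::Util qw(min max sum);

# Plain Topsis over a matrix of alternatives x criteria.
sub topsis_scores {
    my ($matrix) = @_;
    my $ncrit = scalar(@{ $matrix->[0] });
    my $w = 1 / $ncrit; # equal weights as a simplification

    # vector-normalize each criterion column, then apply the weight
    my @norm;
    for my $j (0 .. $ncrit - 1) {
        my $len = sqrt(sum(map { $_->[$j] ** 2 } @$matrix)) || 1;
        for my $i (0 .. $#$matrix) {
            $norm[$i][$j] = $matrix->[$i][$j] / $len * $w;
        }
    }

    # per criterion: ideal (min) and anti-ideal (max), treating all as costs
    my @best  = map { my $j = $_; min(map { $_->[$j] } @norm) } 0 .. $ncrit - 1;
    my @worst = map { my $j = $_; max(map { $_->[$j] } @norm) } 0 .. $ncrit - 1;

    my @scores;
    for my $row (@norm) {
        my $dbest  = sqrt(sum(map { ($row->[$_] - $best[$_]) ** 2 }  0 .. $ncrit - 1));
        my $dworst = sqrt(sum(map { ($row->[$_] - $worst[$_]) ** 2 } 0 .. $ncrit - 1));
        push @scores, ($dbest + $dworst) ? $dworst / ($dbest + $dworst) : 0;
    }
    return \@scores; # closeness to the ideal: higher is better
}

# three candidate nodes, two cost criteria: cpu pressure, memory usage ratio
my $scores = topsis_scores([ [0.10, 0.50], [0.80, 0.90], [0.30, 0.40] ]);
printf "node%d: %.3f\n", $_, $scores->[$_] for 0 .. $#$scores;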
Hi,
2 users have reported problems when vlan-aware bridges are present.
It seems that frr is not parsing netlink messages correctly, because
frr currently uses a static buffer size and netlink messages can be
too big when a lot of VLANs are defined.
https://forum.proxmox.com/threads/implementations-of-sdn-
Signed-off-by: Alexandre Derumier
---
debian/changelog | 6 ++
1 file changed, 6 insertions(+)
diff --git a/debian/changelog b/debian/changelog
index 05d1646..763c9a9 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+frr (8.2.2-1+pve1) bullseye; urgency=medium
+
+ * upda
https://github.com/FRRouting/frr/pull/10482
This fixes bugs for 2 Proxmox users: when vlan-aware bridges exist
with a lot of VLANs, netlink messages are too big.
Signed-off-by: Alexandre Derumier
---
debian/patches/frr/0001-zebra-buffering.patch | 92
debian/patches/frr/0002-zebra-buffering
On 26.04.22 14:35, Daniel Tschlatscher wrote:
> Creating the filestream for the task log download is factored out into its
> own function to avoid redundant implementations in pmg-api and pve-manager.
>
> Signed-off-by: Daniel Tschlatscher
> ---
> src/PVE/Tools.pm | 27 +++
>
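A hypothetical shape for such a shared helper is sketched below; the exact return format expected by the pve-manager/pmg-api REST handlers may well differ from this.

use strict;
use warnings;

# Hypothetical shared helper: open the task log and return a filehandle plus
# download metadata. Hash layout and path are illustrative only.
sub tasklog_download_info {
    my ($filename) = @_;

    open(my $fh, '<', $filename)
        or die "could not open task log '$filename' - $!\n";

    return {
        download => {
            fh => $fh,
            stream => 1,
            'content-type' => 'text/plain',
            'content-disposition' => 'attachment; filename="task-log.txt"',
        },
    };
}

my $info = tasklog_download_info('/path/to/task-log'); # path is only an example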
Hi,
I have sent a v4 with fixes for all your comments.
On Thursday, 31 March 2022 at 15:01 +0200, Fabian Ebner wrote:
> > + #add cloud-init drive
>
> Is there a reason to care about pending changes on the drive itself
> here?
About this, I'm currently returning the config drive in get_pending_confi
---
PVE/API2/Qemu.pm| 68
PVE/CLI/qm.pm | 1 +
PVE/QemuServer/Cloudinit.pm | 78 +
3 files changed, 147 insertions(+)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 3af2132..5608ebb 100644
---
This allows regenerating the config drive with 1 API call.
It also avoids having to delete the drive first and recreate it again.
As it's a read-only drive, we can simply live-update it,
and eject/replace it with the qemu monitor
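A rough sketch of that eject/replace step, assuming the QMP 'eject' and 'blockdev-change-medium' commands and qemu-server's mon_cmd helper; the device id and image path below are made-up examples, not taken from the patch.

use strict;
use warnings;

use PVE::QemuServer::Monitor qw(mon_cmd);

# Sketch: after regenerating the cloud-init image on disk, swap the medium of
# the (read-only) cloud-init CD drive via the QEMU monitor.
sub replace_cloudinit_medium {
    my ($vmid, $deviceid, $image_path) = @_;

    # drop the old medium, then insert the freshly generated one
    mon_cmd($vmid, 'eject', id => $deviceid);
    mon_cmd($vmid, 'blockdev-change-medium',
        id => $deviceid,
        filename => $image_path,
        format => 'qcow2',
    );
}

replace_cloudinit_medium(100, 'ide2', '/var/lib/vz/images/100/vm-100-cloudinit.qcow2');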
---
PVE/API2/Qemu.pm | 43 +++
PVE/CLI/
---
PVE/QemuServer.pm | 31 +--
1 file changed, 5 insertions(+), 26 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 53be830..998f7c8 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4837,6 +4837,10 @@ sub vmconfig_hotplug_pending {
This allows regenerating the config drive if pending values exist
when we change VM options.
---
PVE/QemuServer.pm | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 2710f53..56d77f4 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/Q
Hi,
This is an attempt to clean up the current behaviour of cloud-init online changes.
Currently, we set up cloud-init options as pending until we generate the config
drive.
This is not 100% true, because some options like the VM name or NIC MAC address
can be changed
without going through pending, so the user can'
Instead of using VM pending options for the pending cloud-init generated config,
write the currently generated cloud-init config into a new [special:cloudinit] SECTION.
Currently, some options like the VM name or NIC MAC address can be hotplugged,
so there is no way to know if the cloud-init disk is already updated.
-
Currently we only generate it at VM start
---
PVE/QemuServer.pm | 11 +++
1 file changed, 11 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 8aa550b..53be830 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5088,6 +5088,8 @@ sub vmconfig_apply_pending {
On 27.04.2022 14:33, Matthias Heiserer wrote:
Previously, it was in the advanced section.
In the qemu wizard, bind iothread to the SCSI controller, so that
the unlikely/impossible combination of anything other than SCSI-single
with iothread can't be accidentally selected.
However, in the guest
sent some replies to the relevant parts,
all in all seems to work ok (nothing major functionally)
regarding cli, ecprofile is fine imo, i don't think we have to write out
'erasure-code-profile' in our cli (should be clear from context)
the only thing we might want to do is to (optionally?) creat
some comments inline
On 4/8/22 12:14, Aaron Lauterer wrote:
Allow setting 'k' and 'm' values and optionally the device class and
failure domains.
Implemented in a way that also exposes the functionality via the API.
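For context, the underlying Ceph call could be sketched like this, using the plain CLI via run_command; the actual patch presumably goes through PVE's Ceph API layer rather than shelling out, and the profile name below is made up.

use strict;
use warnings;

use PVE::Tools qw(run_command);

# Sketch only: build a 'ceph osd erasure-code-profile set' call from the
# parameters described above (k, m, optional device class and failure domain).
sub create_ec_profile {
    my ($name, $k, $m, $device_class, $failure_domain) = @_;

    my $cmd = ['ceph', 'osd', 'erasure-code-profile', 'set', $name, "k=$k", "m=$m"];
    push @$cmd, "crush-device-class=$device_class" if defined($device_class);
    push @$cmd, "crush-failure-domain=$failure_domain" if defined($failure_domain);

    run_command($cmd);
}

create_ec_profile('pve-ec-4-2', 4, 2, 'ssd', 'host');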
Signed-off-by: Aaron Lauterer
---
PVE/API2/Ceph.pm| 6 +
PVE/A
some comments inline
aside from those, i think there are still some parts missing.
from my test, i always got an error that it could not set 'size' for
the ecpool when i wanted to edit it (e.g. autoscale on/off)
so there seem to be some checks missing (can we know beforehand
if a pool is an ecpoo
On 27.04.22 14:32, DERUMIER, Alexandre wrote:
> I didn't see, but Thomas has reworked it:
> https://git.proxmox.com/?p=pve-manager.git;a=commit;h=640c0b26891c408d0456c355b3724c1be18cc75f
>
> and the behaviour seems to be different:
argh, sorry, the can_access_vnet sub should also have had a:
return 1
> given that
> - we can't require some new ACL path/priv for regular bridges until the
>   next major release (as that would be quite the breaking change ;))
> - removing access to the last VNET would suddenly make all regular
> bridges available (again) with your original patch, which is
On April 27, 2022 2:32 pm, DERUMIER, Alexandre wrote:
> Hi Fabian
> On Wednesday, 27 April 2022 at 13:36 +0200, Fabian Grünbichler wrote:
>> commit 052fbb2a4d1bdeb490b2e3b67cd7555e460ebe93 introduced permission
>> checks here that caused all regular bridges to be removed from the
>> returned li
Previously, only a plaintext line in the task log showed something was off.
Now, the GUI will show it as a warning.
Signed-off-by: Matthias Heiserer
---
PVE/QemuServer.pm | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 5366df2..d
Existing installs are not changed by this.
Especially in benchmarks, SCSI with iothreads is significantly faster
than normal SCSI.
Signed-off-by: Matthias Heiserer
---
www/manager6/qemu/OSDefaults.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/qemu/OSDefaults
Previously, it was in the advanced section.
In the qemu wizard, bind iothread to the SCSI controller, so that
the unlikely/impossible combination of anything other than SCSI-single
with iothread can't be accidentally selected.
However, in the guest options, iothread and SCSI controller can still
Hi Fabian
On Wednesday, 27 April 2022 at 13:36 +0200, Fabian Grünbichler wrote:
> commit 052fbb2a4d1bdeb490b2e3b67cd7555e460ebe93 introduced permission
> checks here that caused all regular bridges to be removed from the
> returned list as soon as the SDN package is installed, unless the
> us
On 18.03.22 11:21, Harikrishnan R wrote:
> While NixOS generally overrides any static contents in /etc/hostname
> with the hostname defined in `networking.hostname`, it can use the
> contents of `/etc/hostname` provided by PVE if this option is not set.
>
> Signed-off-by: Harikrishnan R
> ---
>
with two small follow-ups:
- re-order conditions for fallback to old path (ideally avoiding one
stat())
- drop aliased private sub
please don't forget to submit the udev change upstream as well ;)
On April 13, 2022 11:43 am, Aaron Lauterer wrote:
> By adding our own customized rbd udev rules a
On 27.04.22 12:19, Fabian Grünbichler wrote:
> run_workers is responsible for updating the state after workers have
> exited. if the current LRM state is 'active', but a shutdown_request was
> issued in 'restart' mode (like on package upgrades), this call is the
> only one made in the LRM work() lo
On 27.04.22 13:36, Fabian Grünbichler wrote:
> commit 052fbb2a4d1bdeb490b2e3b67cd7555e460ebe93 introduced permission
> checks here that caused all regular bridges to be removed from the
> returned list as soon as the SDN package is installed, unless the user
> is root@pam or there exists a VNET wit
commit 052fbb2a4d1bdeb490b2e3b67cd7555e460ebe93 introduced permission
checks here that caused all regular bridges to be removed from the
returned list as soon as the SDN package is installed, unless the user
is root@pam or there exists a VNET with the same ID.
this is arguably a breaking change, s
run_workers is responsible for updating the state after workers have
exited. if the current LRM state is 'active', but a shutdown_request was
issued in 'restart' mode (like on package upgrades), this call is the
only one made in the LRM work() loop.
skipping it if there are active services means t
On 16.03.22 12:34, Matthias Heiserer wrote:
> When clicking on a column to sort it, the filter doesn't reset.
> Previously, it forgot the filter until the value was changed.
>
> Signed-off-by: Matthias Heiserer
> ---
> Changes from v1:
> Introduce a config property to (en|dis)able clearing the
On 06.03.22 13:46, Alexandre Derumier wrote:
> Hi,
>
> Currently, if a guest VM allocates a memory page and frees it later in the
> guest, the memory is not freed on the host side.
>
> The balloon device has a new option since qemu 5.1, "free-page-reporting"
> (it also needs host kernel 5.7)
>
> http
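Purely as an illustration of the quoted option, a minimal Perl sketch of how 'free-page-reporting' could end up on the QEMU command line; the property name comes from the text above, while the device id and the rest of the command are made up.

use strict;
use warnings;

# Illustration only: enabling the QEMU 5.1+ balloon property on the command line.
my $vmid = 100;
my $free_page_reporting = 1;

my $balloon = 'virtio-balloon-pci,id=balloon0';
$balloon .= ',free-page-reporting=on' if $free_page_reporting;

my @cmd = ('qemu-system-x86_64', '-name', "vm$vmid", '-device', $balloon);
print join(' ', @cmd), "\n";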
On 24.02.22 15:21, Stefan Sterz wrote:
> To be consistent with PBS's implementation of multi-line comments
> remove "\s*" here too. Since the regex isn't lazy, .* matches
> everything \s* would anyway. (Note that newlines occur after "$").
>
> Signed-off-by: Stefan Sterz
> ---
> PVE/QemuServer.
On 07.04.22 12:05, Fabian Ebner wrote:
> Text enclosed in unescaped curly braces will be interpreted as an
> attribute reference, breaking things and e.g. leading to the description not
> showing up at all in a generated man page further down the line.
>
> Signed-off-by: Fabian Ebner
> ---
>
> New in v3.
>
>
On 19.04.22 10:45, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
> PVE/VZDump.pm | 19 ++-
> 1 file changed, 10 insertions(+), 9 deletions(-)
>
>
applied both patches, thanks!
On 20.04.22 16:19, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier
> ---
> www/manager6/sdn/zones/EvpnEdit.js | 4
> 1 file changed, 4 insertions(+)
>
>
applied, thanks!
On 20.04.22 16:19, Alexandre Derumier wrote:
> Currently, when multiple exit-nodes are defined, each exit-node exchanges
> its own default route, so traffic loops between both exit nodes
> instead of going out.
>
> This adds a new route-map to filter received type-5 routes on the exit node
>
> Signed-of
On 22.04.22 14:15, Fabian Ebner wrote:
> but rather multiple times becoming exponentially less frequent.
>
> Suggested-by: Thomas Lamprecht
> Signed-off-by: Fabian Ebner
> ---
> PVE/API2/Replication.pm | 15 ++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
>
applied, thanks
On 27.04.22 at 09:05, Thomas Lamprecht wrote:
> On 25.04.22 09:28, Fabian Ebner wrote:
>> On 23.04.22 at 11:38, Thomas Lamprecht wrote:
>>> On 21.04.22 13:26, Fabian Ebner wrote:
Namely, if there is a storage in the backup configuration that's not
available on the current node.
>>>
>>>
On 12.04.22 15:34, Dominik Csapak wrote:
> for getting multiple properties from the in memory config of the
> guests. I added a new CSF_IPC_ call to maintain backwards compatibility.
"guests in one go. Keep the existing IPC call as-is for backward
compatibility and add this as a separate, new one".
On 25.04.22 09:28, Fabian Ebner wrote:
> On 23.04.22 at 11:38, Thomas Lamprecht wrote:
>> On 21.04.22 13:26, Fabian Ebner wrote:
>>> Namely, if there is a storage in the backup configuration that's not
>>> available on the current node.
>>
>> Better than the status quo, but in the long run all the