On 17.11.22 at 15:00, Fiona Ebner wrote:
> Signed-off-by: Fiona Ebner
> ---
>
> New in v2.
>
> src/PVE/HA/Resources/PVECT.pm | 2 ++
> src/PVE/HA/Resources/PVEVM.pm | 2 ++
> 2 files changed, 4 insertions(+)
>
> diff --git a/src/PVE/HA/Resources/PVECT.pm b/src/PVE/HA/Resources/PVECT.pm
> inde
Signed-off-by: John Hollowell
---
src/PVE/APIServer/AnyEvent.pm | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/src/PVE/APIServer/AnyEvent.pm b/src/PVE/APIServer/AnyEvent.pm
index d958642..ed1321d 100644
--- a/src/PVE/APIServer/AnyEvent.pm
> +++ b/src/PVE/APIServer/AnyEvent.pm
Signed-off-by: John Hollowell
---
src/PVE/APIServer/AnyEvent.pm | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/src/PVE/APIServer/AnyEvent.pm b/src/PVE/APIServer/AnyEvent.pm
index f397a8c..d958642 100644
--- a/src/PVE/APIServer/AnyEvent.pm
+++ b/src/PVE/APIServer/AnyEvent.pm
This fixes an issue where an upload request without a Content-Type in
the file's multipart part would prevent the upload and throw
misleading errors. This patch removes the requirement and ignores
all multipart headers once the needed information has been extracted.
I have tested these changes ag
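A rough Perl sketch of the relaxed handling described above (illustrative only, not the actual AnyEvent.pm change; the header lines are made up):

use strict;
use warnings;

# hypothetical header block of one multipart part -- note the missing
# Content-Type line, which must no longer be fatal
my @header_lines = (
    'Content-Disposition: form-data; name="filename"; filename="image.iso"',
);

my ($field, $filename);
for my $line (@header_lines) {
    if ($line =~ m/^Content-Disposition:.*\bname="([^"]*)"(?:.*\bfilename="([^"]*)")?/i) {
        ($field, $filename) = ($1, $2);
        next;
    }
    # once the needed information is extracted, all remaining part
    # headers (Content-Type included) are simply ignored
}
print "field=$field file=" . ($filename // '-') . "\n";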
On 10/11/2022 at 14:24, Stefan Hrdlicka wrote:
> add fields for additional settings required by ZFS dRAID
>
> Signed-off-by: Stefan Hrdlicka
> ---
> www/manager6/node/ZFS.js | 69
> 1 file changed, 69 insertions(+)
>
>
applied, thanks! Note that I rewo
On 10/11/2022 at 14:24, Stefan Hrdlicka wrote:
> add some basic explanation how ZFS dRAID works including
> links to openZFS for more details
>
> add documentation for two dRAID parameters used in code
>
> Signed-off-by: Stefan Hrdlicka
> ---
> local-zfs.adoc | 44 +
On 25/03/2022 at 11:55, Aaron Lauterer wrote:
> librados2-perl: Aaron Lauterer (2):
> mon_command: refactor to pass all data to perl
> mon_command: optionally ignore errors
>
> PVE/RADOS.pm | 24 +++-
> RADOS.xs | 18 ++
> 2 files changed, 25 insertion
On 17/11/2022 at 15:09, Aaron Lauterer wrote:
> The main motivation behind this series is to leverage several safety
> checks that Ceph has to make sure it is ok to stop or destroy a service.
>
> A new cmd-safety endpoint is added which is called from the GUI wherever
> possible to show a warning
On 16/11/2022 at 16:47, Dominik Csapak wrote:
> pve-manager:
>
> Dominik Csapak (13):
> api: /cluster/resources: add tags to returned properties
> api: allow all users to (partially) read datacenter.cfg
> ui: save ui options from /cluster/options instead of version
> ui: parse and save ta
On 17/11/2022 at 15:56, Dominik Csapak wrote:
> overall ui improvements of the tags ui, see the individual patches
> for details
>
> intended as follow up to my v11 tags ui series
>
> Dominik Csapak (5):
> ui: rework inline tag editing
> ui: tags: make sorting more natural
> ui: tags: hide
things that changed:
* replaced the 'add Tag' inline button with a proper button that adds an
empty tag
* don't require confirming each tag, simply update the color "live"
* set a minimum width for the editing box, so that it's easier to click
* replace cancel/finish icons with proper buttons
* fix tagChar
and make it more like the 'traffic control' time grid in pbs
Signed-off-by: Dominik Csapak
---
www/manager6/dc/RegisteredTagsEdit.js | 6 +-
www/manager6/dc/UserTagAccessEdit.js | 6 +-
www/manager6/form/ListField.js | 86 ---
3 files changed, 60 insertions(+),
on every change, collect all tags and update the filter of all tag
fields
Signed-off-by: Dominik Csapak
---
www/manager6/form/Tag.js | 2 +-
www/manager6/form/TagEdit.js | 18 ++
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/www/manager6/form/Tag.js b/www/ma
overall ui improvements of the tags ui, see the individual patches
for details
intended as follow up to my v11 tags ui series
Dominik Csapak (5):
ui: rework inline tag editing
ui: tags: make sorting more natural
ui: tags: hide already set tags in dropdown
ui: change style of ListField
u
by sorting the lower-cased variants first, and only if those are
identical, sorting the original values with 'localeCompare'
Signed-off-by: Dominik Csapak
---
www/manager6/Utils.js | 6 +-
www/manager6/form/TagEdit.js | 4 +++-
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/www/m
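The patch itself is JavaScript using localeCompare; the same two-level comparison, sketched in Perl with a plain byte-wise tie-break as a stand-in:

use strict;
use warnings;

my @tags = qw(Web web Alpha alpha beta);

# compare the lower-cased variants first; only when they are identical
# fall back to comparing the original values
my @sorted = sort { lc($a) cmp lc($b) || $a cmp $b } @tags;

print join(', ', @sorted), "\n";    # Alpha, alpha, beta, Web, web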
with a combogrid and the example text 'preview'
Signed-off-by: Dominik Csapak
---
www/manager6/dc/OptionView.js | 37 ---
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/www/manager6/dc/OptionView.js b/www/manager6/dc/OptionView.js
index aeab024e4..
On 17/11/2022 at 14:33, Fabian Grünbichler wrote:
> pve-container:
>
> Fabian Grünbichler (3):
> migration: add remote migration
> pct: add 'remote-migrate' command
> migrate: print mapped volume in error
>
> debian/control | 3 +-
> src/PVE/API2/LXC.pm | 635
Ceph provides us with several safety checks to verify that an action is
safe to perform. This endpoint provides a means to access them.
The actual mon commands are not exposed directly. Instead, the two
actions "stop" and "destroy" are offered.
In case it is not okay to perform an action, Ceph provide
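A hypothetical sketch of the idea for the OSD case (command and parameter names here are assumptions, not the actual cmd-safety endpoint code):

use strict;
use warnings;
use PVE::RADOS;

my $rados = PVE::RADOS->new();

# map the two offered actions onto Ceph's safety checks instead of
# exposing the mon commands directly
sub osd_action_safety {
    my ($action, $osdid) = @_;
    my $prefix =
        $action eq 'stop'    ? 'osd ok-to-stop'
      : $action eq 'destroy' ? 'osd safe-to-destroy'
      : die "unknown action '$action'\n";
    # Ceph's answer explains why an action is not okay to perform
    return $rados->mon_command({ prefix => $prefix, ids => ["$osdid"] });
}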
Check whether stopping a service (OSD, MON, MDS) will be problematic for
Ceph. The warning still allows the user to proceed.
Ceph also has a check whether destroying a MON is okay, so let's use
it.
Instead of the common OK button, label it with `Stop OSD` and so forth
to hopefully reduce the "cl
The main motivation behind this series is to leverage several safety
checks that Ceph has to make sure it is ok to stop or destroy a service.
A new cmd-safety endpoint is added which is called from the GUI wherever
possible to show a warning.
This series needs commit 80deebd or newer from the lib
If an OSD is removed under the wrong conditions, it could lead to
blocked IO or, in the worst case, data loss.
Check against global flags that limit Ceph's ability to heal itself
(norebalance, norecover, noout) and check whether there are degraded
objects.
Unfortunately, the 'safe-to-destroy' Ceph API end
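A simplified stand-in for the checks described above (the JSON field names are assumptions based on 'ceph osd dump' and 'ceph status' output):

use strict;
use warnings;

sub osd_removal_looks_safe {
    my ($osd_dump, $pg_info) = @_;

    # global flags limiting Ceph's self-healing make removal risky
    for my $flag (qw(norebalance norecover noout)) {
        return (0, "global flag '$flag' is set")
            if ($osd_dump->{flags} // '') =~ /\b\Q$flag\E\b/;
    }
    # the same goes for degraded objects
    return (0, "cluster has degraded objects")
        if ($pg_info->{degraded_objects} // 0) > 0;

    return (1, undef);
}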
for calculating node usage of services based upon static CPU and
memory configuration as well as scoring the nodes with that
information to decide where to start a new or recovered service.
For getting the service stats, it's necessary to also consider the
migration target (if present), because th
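A toy version of the accounting this describes (field names are illustrative; the real series scores the collected usage with TOPSIS):

use strict;
use warnings;

my %usage;    # node => { cpu => ..., mem => ... }

# add a service's static usage to a node's total
sub add_service_usage {
    my ($node, $stats) = @_;
    $usage{$node}->{cpu} = ($usage{$node}->{cpu} // 0) + $stats->{cpu};
    $usage{$node}->{mem} = ($usage{$node}->{mem} // 0) + $stats->{mem};
}

my $stats = { cpu => 2, mem => 4 * 1024**3 };
add_service_usage('node1', $stats);    # current node
add_service_usage('node2', $stats);    # migration target, if present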
Right now, the online node usage calculation for the HA manager only
considers the number of active services on each node. This patch
series allows switching to a 'static' scheduler mode instead, where
static usage information from the nodes and guest configurations is
used instead.
With this vers
briefly describing the 'basic' and 'static' modes and with a note
mentioning plans for balancers.
Signed-off-by: Fiona Ebner
---
Changes from v1:
* Mention that it also affects shutdown policy migrations.
* Describe static mode in more detail.
ha-manager.adoc | 45 +
no functional change is intended.
One test needs adaptation too, because it created its own version of
$online_node_usage.
Signed-off-by: Fiona Ebner
---
No changes from v1.
src/PVE/HA/Manager.pm | 35 +--
src/test/test_failover1.pl | 19 ++
if something goes wrong with the TOPSIS scoring. Not expected to
happen, but it's rather cheap to be on the safe side.
Signed-off-by: Fiona Ebner
---
New in v2.
src/PVE/HA/Usage/Static.pm | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/src/PVE/HA/Usage/Static.pm
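The shape of such a fallback, sketched (score_nodes() stands in for the real TOPSIS call; this is not the actual Usage::Static code):

use strict;
use warnings;

sub pick_lowest {
    my ($nodes, $service_count, $score_nodes) = @_;
    my $scores = eval { $score_nodes->($nodes) };
    if (my $err = $@) {
        # fall back to plain per-node service counts, so a target can
        # still be chosen even if scoring dies
        warn "unable to score nodes, using service counts - $err";
        $scores = { map { $_ => $service_count->{$_} // 0 } @$nodes };
    }
    my @sorted = sort { $scores->{$a} <=> $scores->{$b} } @$nodes;
    return $sorted[0];
}

my $node = pick_lowest(
    [qw(node1 node2)],
    { node1 => 3, node2 => 1 },
    sub { die "scoring failed\n" },    # simulate a scoring error
);
print "$node\n";    # node2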
In HA manager, the function recompute_online_node_usage() is called
very often currently and the 'static' mode needs to read the guest
configs which adds a bit of overhead.
Signed-off-by: Fiona Ebner
---
New in v2.
ha-manager.adoc | 3 +++
1 file changed, 3 insertions(+)
diff --git a/ha-manag
Suggested-by: Thomas Lamprecht
Signed-off-by: Fiona Ebner
---
Changes from v1:
* Extend existing method rather than introducing a new one.
src/PVE/HA/Env/PVE2.pm | 10 +-
src/PVE/HA/LRM.pm | 4 ++--
src/PVE/HA/Sim/Env.pm | 5 -
3 files changed, 11 insertions(+), 8 delet
Signed-off-by: Fiona Ebner
---
New in v2.
src/PVE/HA/Resources/PVECT.pm | 2 ++
src/PVE/HA/Resources/PVEVM.pm | 2 ++
2 files changed, 4 insertions(+)
diff --git a/src/PVE/HA/Resources/PVECT.pm b/src/PVE/HA/Resources/PVECT.pm
index 4c9530d..e77d98c 100644
--- a/src/PVE/HA/Resources/PVECT.pm
++
The method will be extended to include other HA-relevant settings from
datacenter.cfg.
Suggested-by: Thomas Lamprecht
Signed-off-by: Fiona Ebner
---
New in v2.
src/PVE/HA/Env.pm | 4 ++--
src/PVE/HA/Env/PVE2.pm | 2 +-
src/PVE/HA/LRM.pm | 2 +-
src/PVE/HA/Sim/Env.pm | 2 +-
4 files
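A sketch of what the widened helper could return (the 'crs' key for the scheduler settings is an assumption based on this series):

use strict;
use warnings;

sub get_datacenter_settings {
    my ($datacenterconfig) = @_;
    # hand out all HA-relevant sections together instead of just 'ha'
    return {
        ha  => $datacenterconfig->{ha}  // {},
        crs => $datacenterconfig->{crs} // {},
    };
}

my $settings = get_datacenter_settings({ ha => { shutdown_policy => 'migrate' } });
print $settings->{ha}->{shutdown_policy}, "\n";    # migrate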
See the READMEs for more information about the tests.
Signed-off-by: Fiona Ebner
---
New in v2.
src/test/test-crs-static1/README | 4 +
src/test/test-crs-static1/cmdlist | 4 +
src/test/test-crs-static1/datacenter.cfg | 6 +
src/test/test-crs-static1/hardwar
to be used for static resource scheduling.
In container's vmstatus(), the 'cores' option takes precedence over
the 'cpulimit' one, but it felt more accurate to prefer 'cpulimit'
here.
Signed-off-by: Fiona Ebner
---
Changes from v1:
* Properly add it to the simulation environment.
src/PVE/
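The described precedence, illustrated (the default of 1 is an assumption):

use strict;
use warnings;

# for static scheduling, prefer 'cpulimit' over 'cores' -- the reverse
# of what the container's vmstatus() does
sub static_cpu_count {
    my ($conf) = @_;
    return $conf->{cpulimit} || $conf->{cores} || 1;
}

print static_cpu_count({ cores => 4, cpulimit => 2 }), "\n";    # 2
print static_cpu_count({ cores => 4 }), "\n";                   # 4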
in preparation to also support static resource scheduling via another
such Usage plugin.
The interface is designed in anticipation of the Usage::Static plugin,
the Usage::Basic plugin doesn't require all parameters.
In Usage::Static, the $haenv will be necessary for logging and getting
the static no
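A sketch of what such a plugin interface could look like (method names assumed for illustration):

package PVE::HA::Usage::Basic;

use strict;
use warnings;

sub new {
    my ($class, $haenv) = @_;
    # Usage::Basic has no use for $haenv itself, but accepting it keeps
    # the constructor signature shared with Usage::Static
    return bless { 'node-services' => {} }, $class;
}

sub add_node {
    my ($self, $nodename) = @_;
    $self->{'node-services'}->{$nodename} = 0;
}

sub add_service_usage_to_node {
    my ($self, $nodename, $sid, $service_node) = @_;
    # basic mode just counts services per node
    $self->{'node-services'}->{$nodename}++;
}

1;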
Signed-off-by: Fiona Ebner
---
Changes from v1:
* Switch to get_datacenter_settings() replacing the previous
get_crs_settings() in v1.
src/PVE/HA/Manager.pm | 5 +
1 file changed, 5 insertions(+)
diff --git a/src/PVE/HA/Manager.pm b/src/PVE/HA/Manager.pm
index 63e6c8a..1638442 10
With the Usage::Static plugin, scoring is not as cheap anymore and
select_service_node() is called for each running service.
This should cover most calls of select_service_node().
Signed-off-by: Fiona Ebner
---
No changes from v1.
src/PVE/HA/Manager.pm | 4 ++--
1 file changed, 2 insertions(+
With the Usage::Static plugin, scoring is not as cheap anymore and
select_service_node() is called for each running service.
Signed-off-by: Fiona Ebner
---
No changes from v1.
src/PVE/HA/Manager.pm | 11 +++
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/src/PVE/HA/Manag
In preparation for scheduling based on static information, where the
scoring of nodes depends on information from the service's
VM/CT configuration file (and the $sid is required to query that).
Signed-off-by: Fiona Ebner
---
No changes from v1.
src/PVE/HA/Manager.pm | 4 +++-
src/test/te
Note that recompute_online_node_usage() becomes much slower when the
'static' resource scheduler mode is used. Tested it with ~300 HA
services (minimal containers) running on my virtual test cluster.
Timings with 'basic' mode were between 0.0004 - 0.001 seconds
Timings with 'static' mode were betw
to be used for static resource scheduling. In the simulation
environment, the information can be added in hardware_status.
Signed-off-by: Fiona Ebner
---
Changes from v1:
* Properly add it to the simulation environment.
src/PVE/HA/Env.pm | 6 ++
src/PVE/HA/Env/PVE2.pm | 1
On November 17, 2022 2:33 pm, Fabian Grünbichler wrote:
> this series adds remote migration for VMs and CTs.
>
> both live and offline migration of VMs including NBD and
> storage-migrated disks should work, containers don't have any live
> migration so both offline and restart mode work identically
which wraps the remote_migrate_vm API endpoint, but itself does the
precondition checks that can be done up front.
this now just leaves the FP retrieval and target node name lookup to the
sync part of the API endpoint, which should be do-able in <30s ..
an example invocation:
$ qm remote-migrate
remote migration uses a websocket connection to a task worker running on
the target node instead of commands via SSH to control the migration.
this websocket tunnel is started earlier than the SSH tunnel, and allows
adding UNIX-socket forwarding over additional websocket connections
on-demand.
the
no semantic changes intended, except for:
- no longer passing the main migration UNIX socket to SSH twice for
forwarding
- dropping the 'unix:' prefix in start_remote_tunnel's timeout error message
Signed-off-by: Fabian Grünbichler
---
Notes:
v6:
- rport/port
- properly conditionaliz
in case of remote migration, we use the `update_vm_api` helper for
checking permissions on the incoming config. this would also cause an
incoming cloud-init image to be overwritten, since the VM is not running
yet at this point.
provide a parameter which can be set by an incoming *remote* migratio
the following two endpoints are used for migration on the remote side
POST /nodes/NODE/qemu/VMID/mtunnel
which creates and locks an empty VM config, and spawns the main qmtunnel
worker which binds to a VM-specific UNIX socket.
this worker handles JSON-encoded migration commands coming in via thi
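Illustrative shape of one such JSON exchange (the concrete command set and reply fields of the real protocol are not spelled out here):

use strict;
use warnings;
use JSON;

# a command sent over the VM-specific UNIX socket...
my $request = encode_json({ cmd => 'version' });

# ...and a possible worker reply
my $reply = decode_json('{"success":true}');
die "tunnel command failed\n" if !$reply->{success};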
modelled after the VM migration, but folded into a single commit since
the actual migration changes are a lot smaller here.
Signed-off-by: Fabian Grünbichler
---
Notes:
v7:
- fix order of parsing parameters (thanks Stefan Hanreich!)
- add libpve-access-control dependency (for Sys.Inc
entry point for the remote migration on the source side, mainly
preparing the API client that gets passed to the actual migration code
and doing some parameter parsing.
querying of the remote side's resources (like available storages, free
VMIDs, lookup of endpoint details for specific nodes, ...)
since that is the ID on the target node..
Signed-off-by: Fabian Grünbichler
---
src/PVE/LXC/Migrate.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index 82305c0..35455e1 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@
this series adds remote migration for VMs and CTs.
both live and offline migration of VMs including NBD and
storage-migrated disks should work, containers don't have any live
migration so both offline and restart mode work identically except for the
restart part.
groundwork for extending to pvesr a
Signed-off-by: Fabian Grünbichler
---
Notes:
new in v7
PVE/QemuServer.pm | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index dea5f251..9a62b29d 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5226,7 +5226,7 @@ sub
works the same as `qm remote-migrate`, with the addition of `--restart`
and `--timeout` parameters.
Signed-off-by: Fabian Grünbichler
---
Notes:
v6: new
src/PVE/CLI/pct.pm | 124 +
1 file changed, 124 insertions(+)
diff --git a/src/PVE/CLI/pct.p
Previously, cloning a stopped VM didn't respect bwlimit. Passing the -r
(ratelimit) parameter to qemu-img convert fixes this issue.
Signed-off-by: Leo Nunner
---
Changes from v1:
- Remove unneeded "undef"s, so as not to unnecessarily touch unrelated
lines
- Add test for bwlimit
PVE/Q
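How the limit could end up on the qemu-img command line, sketched (the KiB/s unit suffix is an assumption):

use strict;
use warnings;

my $bwlimit = 10240;    # KiB/s
my @cmd = ('/usr/bin/qemu-img', 'convert');
# only rate-limit when a bwlimit is actually configured
push @cmd, '-r', "${bwlimit}K" if $bwlimit;
push @cmd, '-O', 'raw', 'source.qcow2', 'target.raw';
print "@cmd\n";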
On Thursday, 17 November 2022 at 13:52 +0100, Wolfgang Bumiller wrote:
> ^ This is the same for VMs and the MTU is *sort of* tied to the
> bridge's
> mtu anyway. (Tbh I don't see the point in even having a setting for
> it
> in the first place...)
In production, all my Proxmox vmbr (VLAN-aware), ethX
On Wed, Nov 16, 2022 at 08:11:22PM +0100, Thomas Lamprecht wrote:
> On 11/11/2022 at 12:14, Stefan Hanreich wrote:
> > Some Notes:
> > - Setting the MTU while the container is running does not update the MTU of
> > the running container. If this is intended behavior it might be smart to
> > d
the "fix" is unsual for non #bugids, rather tag the subsystem you're modifying,
e.g.:
restore: clean up config when invalid source archive is given
On 17/11/2022 at 10:39, Daniel Tschlatscher wrote:
> Before, if an invalid/non-existant ostemplate parameter was passed,
s/existant/existent/
And
On 16/11/2022 at 16:48, Dominik Csapak wrote:
> 'get_allowed_tags':
> returns the allowed tags for the given user
>
> 'assert_tag_permissions'
> helper to check permissions for tag setting/updating/deleting
> for both container and qemu-server
>
> gets the list of allowed tags from the DataCente
On 10/11/2022 at 15:37, Fiona Ebner wrote:
> Initially, with a setting for HA to switch between basic (just count
> services) and static (use static node and resource information).
>
> Signed-off-by: Fiona Ebner
> ---
> data/PVE/DataCenterConfig.pm | 25 +
> 1 file chang
On 24/10/2022 at 16:33, Stefan Hrdlicka wrote:
> added file for cache from bugzilla case #1965
>
> Signed-off-by: Stefan Hrdlicka
> ---
> data/PVE/Cluster.pm | 1 +
> data/src/status.c | 1 +
> 2 files changed, 2 insertions(+)
>
>
applied this one already, thanks!
On 17.11.22 at 11:50, Markus Frank wrote:
>>> @@ -2113,6 +2171,17 @@ sub parse_guest_agent {
>>> return $res;
>>> }
>>> +sub parse_memory_encryption {
>>> + my ($value) = @_;
>>> +
>>> + return if !$value;
>>> +
>>> + my $res = eval { parse_property_string($memory_encryption_fmt,
On 11/14/22 09:51, Fiona Ebner wrote:
On 22.09.22 at 13:54, Stefan Hanreich wrote:
Signed-off-by: Stefan Hanreich
---
Should there be a third hook that's called when the snapshot fails? That
would allow doing cleanup in all cases. Could still be added later when
actually requested by user
ah ok, thanks for letting me know. i did not expect that it's difficult
to handle...but that explains a lot.
On 17.11.22 at 11:38, Wolfgang Bumiller wrote:
On Thu, Nov 17, 2022 at 11:23:19AM +0100, Roland wrote:
nice!
btw, what about this one ?
https://bugzilla.proxmox.com/show_bug.cgi?id=3909#c3
Thanks for the feedback. I will send a v3 once I have been able to test
it on an EPYC CPU.
On 11/14/22 14:06, Fiona Ebner wrote:
On 11.11.22 at 15:27, Markus Frank wrote:
This Patch is for enabling AMD SEV (Secure Encrypted
Virtualization) support in QEMU and enabling future
memory encryption technol
On Thu, Nov 17, 2022 at 11:23:19AM +0100, Roland wrote:
> nice!
>
> btw, what about this one ?
>
> https://bugzilla.proxmox.com/show_bug.cgi?id=3909#c3
>
> actually, the firewall stuff is getting blindly executed every 10
> seconds, that's causing a lot of noise.
>
> couldn't/shouldn't this be
nice!
btw, what about this one ?
https://bugzilla.proxmox.com/show_bug.cgi?id=3909#c3
actually, the firewall stuff is getting blindly executed every 10
seconds, that's causing a lot of noise.
couldn't/shouldn't this be handled more intelligently ?
roland
On 17.11.22 at 1
On 07/11/2022 at 14:18, Aaron Lauterer wrote:
> If we are okay with the way this would change RADOS.xs and RADOS.pm, I can
> send a follow up for patch 3 and 4 as a few other places started to issue mon
> commands in the meantime.
how about renaming mon_command to mon_cmd and re-introducing mon_c
On 16/11/2022 at 16:47, Dominik Csapak wrote:
> pve-cluster:
>
> Dominik Csapak (5):
> add CFS_IPC_GET_GUEST_CONFIG_PROPERTIES method
> Cluster: add get_guest_config_properties
> datacenter.cfg: add option for tag-style
> datacenter.cfg: add tag rights control to the datacenter config
>
sorry for the delayed reply
some nits & please rebase ;-)
On Mon, Oct 24, 2022 at 04:33:58PM +0200, Stefan Hrdlicka wrote:
> for large IP sets (for example > 25k) it takes noticeably longer to parse the
> files, this commit caches the cluster.fw file and reduces parsing time
>
> Signed-off-by: St
Before, if an invalid/non-existent ostemplate parameter was passed,
the task would abort, but would leave an empty config file behind.
This also applies to errors for invalid mount point configurations.
In both cases, the empty config will now be removed.
Signed-off-by: Daniel Tschlatscher
---
s
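The rough shape of that cleanup (error handling details are assumptions, not the actual patch):

use strict;
use warnings;
use PVE::LXC::Config;

my $vmid = 100;    # hypothetical
eval {
    # ... validate ostemplate, parse mount point configuration, ...
    die "archive does not exist\n";
};
if (my $err = $@) {
    # remove the empty config left behind before re-raising
    PVE::LXC::Config->destroy_config($vmid);
    die $err;
}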
Before, a failed restore would only remove the container config, but
the firewall config would remain.
Now, the firewall config is also removed, except for the case when the
user only has the VM.Backup permission. In this case the firewall
would not have been restored/changed by us and is left as i
Some users have a more complicated CRUSH hierarchy, for example with a
stretched cluster. The additional hierarchy steps (datacenter, rack,
room, ...) are shown in the OSD panel. Showing a generic icon for any
CRUSH types that do not have a specific icon configured will make it easier
to navigate the
On 16/11/2022 at 05:13, Alexandre Derumier wrote:
> Signed-off-by: Alexandre Derumier
> ---
> www/manager6/qemu/NetworkEdit.js | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
applied, with 128 changed to 64 as in qemu-server, thanks!
applied, thanks
made a short follow-up to also disable the remove button for cloud-init
drives if one does not have cloud-init permissions