Suggested-by: Thomas Lamprecht
Signed-off-by: Stoiko Ivanov
---
did some minimal testing (ztest for a while, containers with replication
and a migration between 2 nodes) - looked ok
The changelog also seems harmless from a quick glance.
debian/patches/0005-Enable-zed-emails.patch | 2 +-
It seems like the mentioned clippy bug has since been fixed.
Signed-off-by: Lukas Wagner
---
proxmox-sys/src/fs/dir.rs | 4 ----
proxmox-sys/src/fs/mod.rs | 2 --
2 files changed, 6 deletions(-)
diff --git a/proxmox-sys/src/fs/dir.rs b/proxmox-sys/src/fs/dir.rs
index 6aee316..0b409d7 100644
---
Suggested-by: Wolfgang Bumiller
Signed-off-by: Lukas Wagner
---
proxmox-shared-memory/src/lib.rs | 4 ++--
proxmox-sys/src/fs/file.rs | 4 ++--
proxmox-sys/src/fs/mod.rs | 9 ++++-----
3 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/proxmox-shared-memory/src/lib.rs b/
Under the hood, this function calls `mkdtemp` from libc. Unfortunately,
the nix crate did not provide bindings for this function, so we have
to call into libc directly.
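To illustrate, here is a minimal sketch of such a direct libc call (this is not
the actual proxmox-sys code; it assumes the `libc` crate as a dependency, and
the `make_tmp_dir` helper name is made up):

use std::ffi::CString;
use std::path::PathBuf;

/// Create a unique temporary directory below `base` by calling libc::mkdtemp directly.
fn make_tmp_dir(base: &str) -> std::io::Result<PathBuf> {
    // mkdtemp() wants a writable template ending in "XXXXXX" and fills in the Xs.
    let template = CString::new(format!("{base}/tmp_XXXXXX"))
        .map_err(|_| std::io::Error::new(std::io::ErrorKind::InvalidInput, "path contains NUL"))?;
    let raw = template.into_raw();
    // SAFETY: raw points to a valid, NUL-terminated, writable C string buffer.
    let ret = unsafe { libc::mkdtemp(raw) };
    // Retake ownership of the buffer either way so it is not leaked.
    let template = unsafe { CString::from_raw(raw) };
    if ret.is_null() {
        return Err(std::io::Error::last_os_error());
    }
    Ok(PathBuf::from(template.into_string().map_err(|_| {
        std::io::Error::new(std::io::ErrorKind::InvalidData, "non-UTF-8 path")
    })?))
}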
Signed-off-by: Lukas Wagner
---
Notes:
Changes from v1 -> v2:
- Use remove_dir instead of unlink
- Log error if c
This patch series introduces a caching mechanism for expensive status
update calls made from pvestatd.
As a first step, I introduced the cache to the arguably
most expensive call, namely `storage_info` from pve-storage. Instead
of caching the results of the `storage_info` function as a whole, we
o
Signed-off-by: Lukas Wagner
---
proxmox-sys/src/fs/mod.rs | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/proxmox-sys/src/fs/mod.rs b/proxmox-sys/src/fs/mod.rs
index 8d790a4..f54aaf6 100644
--- a/proxmox-sys/src/fs/mod.rs
+++ b/proxmox-sys/src/fs/mod.rs
@@ -71,12 +71,12 @@
Cache storage plugin status so that pvestatd and API calls can use the
cached results, without having to query all storage plugins again.
Introduces the `ignore-cache` parameter on some storage status API
calls. By default it is 0, but when set to 1 the values from the cache
will be ignored.
Signed-off-by:
These bindings are contained in the `SharedCacheBase` class, which is
subclassed by `SharedCache` in Perl. The subclass was needed to
implement the `get_or_update` method, since that requires calling a
closure passed as a parameter.
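For context, a get_or_update-style helper boils down to the pattern below.
This is only a rough Rust sketch of the idea; the actual method lives on the
Perl SharedCache subclass, and the signature here is invented:

use std::collections::HashMap;

/// Return the cached value for `key`, or compute it with the passed closure,
/// store it, and return it.
fn get_or_update<V, F>(cache: &mut HashMap<String, V>, key: &str, update: F) -> V
where
    V: Clone,
    F: FnOnce() -> V,
{
    // On a cache miss, the closure is invoked exactly once to produce the value.
    cache.entry(key.to_string()).or_insert_with(update).clone()
}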
Signed-off-by: Lukas Wagner
---
Notes:
Changes v1 -> v2:
This crate contains a file-backed cache with expiration logic.
The cache should be safe to access from multiple processes at
once.
The cache stores values in a directory, based on the key.
E.g. key "foo" results in a file 'foo.json' in the given base
directory. If a new value is set, the file
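The snippet is cut off above, but the described key-to-file scheme roughly
amounts to the following. This is illustrative only, not the real crate API:
the `FileCache` name is made up, serde_json is assumed for JSON handling, and
the expiration logic and the locking/atomic writes needed for multi-process
safety are omitted:

use std::fs;
use std::path::{Path, PathBuf};

/// Illustrative file-backed cache: key "foo" lives in "<base>/foo.json".
struct FileCache {
    base: PathBuf,
}

impl FileCache {
    fn new(base: impl AsRef<Path>) -> std::io::Result<Self> {
        fs::create_dir_all(base.as_ref())?;
        Ok(Self { base: base.as_ref().to_path_buf() })
    }

    fn path_for(&self, key: &str) -> PathBuf {
        self.base.join(format!("{key}.json"))
    }

    /// Setting a new value simply rewrites the per-key file.
    fn set(&self, key: &str, value: &serde_json::Value) -> std::io::Result<()> {
        fs::write(self.path_for(key), value.to_string())
    }

    /// Missing or unreadable files are treated as a cache miss.
    fn get(&self, key: &str) -> Option<serde_json::Value> {
        let raw = fs::read_to_string(self.path_for(key)).ok()?;
        serde_json::from_str(&raw).ok()
    }
}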
See the following QEMU commits for reference:
0c5f3dcbb2 ("configure: add --enable-pypi and --disable-pypi")
ac4ccac740 ("configure: rename --enable-pypi to --enable-download, control
subprojects too")
6f3ae23b29 ("configure: remove --with-git-submodules=") removed
The last one removed the option
There are still some issues with graph locking, e.g. deadlocks during
backup canceling [0] and initial attempts to fix it didn't work [1].
Because the AioContext locks still exist, it should still be safe to
disable graph locking.
[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-09/msg00
Taking a snapshot became prohibitively slow because of the
migration_transferred_bytes() call in migration_rate_exceeded() [0].
This also applied to the async snapshot taking in Proxmox VE, so
work around the issue until it is fixed upstream.
[0]: https://gitlab.com/qemu-project/qemu/-/issues/182
Upstream QEMU commit 4271f40383 ("virtio-net: correctly report maximum
tx_queue_size value") made setting an invalid tx_queue_size for a
non-vDPA/vhost-user net device a hard error. Now, qemu-server before
commit 089aed81 ("cfg2cmd: netdev: fix value for tx_queue_size") did
just that, so the newer
Patch changes:
For backup, opening the backup dump block driver needed to be adapted
because of coroutine context changes.
Block graph locking was disabled because of deadlocks.
Snapshot code has a huge performance regression which required a
workaround.
Meta-changes:
Use --disable-download
It's not enough to initialize the submodules anymore, as some got
replaced by wrap files, see QEMU commit 2019cabfee ("meson:
subprojects: replace submodules with wrap files").
Download the subprojects during initialization of the QEMU submodule,
so building (without the automagical --enable-downl
ping? Patch still applies
previous patch versions with discussion are:
https://lists.proxmox.com/pipermail/pve-devel/2023-August/058794.html
https://lists.proxmox.com/pipermail/pve-devel/2023-August/058803.html
On 8/23/23 11:44, Aaron Lauterer wrote:
Allows automatically creating multiple OSDs
ping? patches still apply cleanly
On 8/22/23 11:04, Aaron Lauterer wrote:
It is possible to have multiple OSD daemons on a single disk. This is
useful if fast NVMe drives are used to utilize their full potential.
For these situations we want to list all OSD daemons that are located on
the disk
When there is no comment for a backup group, the comment of the last
snapshot in this group will be shown slightly grayed out as long as
the group is collapsed.
Signed-off-by: Philipp Hufnagl
---
www/css/ext6-pbs.css | 3 +++
www/datastore/Content.js | 17 ++---
2 files changed,
grub packages in Debian are split between:
* meta-packages, which handle (among other things) reinstalling
grub to the actual device/ESP in case of a version upgrade (grub-pc,
grub-efi-amd64)
* bin-packages, which contain the actual boot-loaders
The bin-packages can coexist on a system, but th
just realized while talking with Friedrich off-list - if this gets applied
it probably would make sense to include it in the pve7to8 (same for pbs
and pmg) checks (and also in the upgrade guides)
(mostly meant as a note to myself)
On Thu, 28 Sep 2023 16:05:33 +0200
Stoiko Ivanov wrote:
> grub p
On 28/09/2023 at 16:29, Stoiko Ivanov wrote:
> just realized while talking with Friedrich off-list - if this gets applied
> it probably would make sense to include it in the pve7to8 (same for pbs
> and pmg) checks (and also in the upgrade guides)
> (mostly meant as a note to myself)
Potentially
This patch adds support for remote migration when the target
cpu model is different.
The target-reboot param needs to be defined to allow migration
when the source vm is online.
When defined, only the live storage migration is done,
and instead of transferring memory, we cleanly shut down the source vm
and restart th
This patch series allows remote migration between clusters with different cpu
models.
2 new params are introduced: "target-cpu" and "target-reboot".
If target-cpu is defined, this will replace the cpu model of the target vm.
If the vm is online/running, an extra "target-reboot" safeguard option is neede
---
PVE/QemuMigrate.pm | 420 +++--
1 file changed, 214 insertions(+), 206 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index f41c61f..5ea78a7 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -726,6 +726,219 @@ sub cleanup_bi
On Wednesday, April 26, 2023 at 15:14 +0200, Fabian Grünbichler wrote:
> On April 25, 2023 6:52 pm, Alexandre Derumier wrote:
> > This patch add support for remote migration when target
> > cpu model is different.
> >
> > The target vm is restart after the migration
>
> so this effectively introdu
On 28/09/2023 at 12:37, Stoiko Ivanov wrote:
> Sugested-by: Thomas Lamprecht
> Signed-off-by: Stoiko Ivanov
> ---
> did some minimal testing (ztest for a while, containers with replication
> and a migration between 2 nodes) - looked ok
> The changelog also seems harmless from a quick glance.
>