Suggested-by: Dietmar Maurer
Signed-off-by: Fiona Ebner
---
proxmox-apt/src/repositories/standard.rs | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/proxmox-apt/src/repositories/standard.rs
b/proxmox-apt/src/repositories/standard.rs
index 4bb57d4..29cc788 100644
--- a/prox
It was already done in tunnel v1.
Avoid aborting the migration (and keeping both source/target VM locked) if an
nbdstop error occurs:
2023-09-28 16:20:39 ERROR: error - tunnel command '{"cmd":"nbdstop"}' failed -
failed to handle 'nbdstop' command - VM 140 qmp command 'nbd-server-stop'
failed - got timeout
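The actual change is in Perl (PVE/QemuServer.pm); as a minimal sketch of the error-handling pattern the patch describes, here it is in Python. All names (`qmp_command`, `QmpTimeout`) are hypothetical stand-ins, not the real PVE API:

```python
# Sketch of the pattern: a failing 'nbd-server-stop' should produce a
# warning instead of aborting the whole migration. Names are made up;
# the real code lives in Perl in PVE/QemuServer.pm.

class QmpTimeout(Exception):
    """Raised when a QMP command does not answer in time."""

def qmp_command(vmid, cmd):
    # Stand-in for the real QMP client; simulate the reported timeout.
    raise QmpTimeout(f"VM {vmid} qmp command '{cmd}' failed - got timeout")

def nbd_stop(vmid, log=print):
    """Stop the NBD server, but never let a failure abort the caller."""
    try:
        qmp_command(vmid, "nbd-server-stop")
    except QmpTimeout as err:
        # Keep migrating: skipping the NBD export cleanup is preferable
        # to keeping both source and target VM locked.
        log(f"warning: {err}")

nbd_stop(140)  # logs a warning instead of raising
```

The point of the design is simply that the cleanup step is best-effort: its failure is downgraded from fatal to a warning.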
---
PVE/QemuServer.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 1b1ccf4..0259c0f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -8267,7 +8267,7 @@ sub generate_smbios1_uuid {
sub nbd_stop {
my ($vmid) = @_;
-
Hi,
We had some sporadic nbd-stop errors when trying to migrate a VM with RBD
storage + writeback between 2 different clusters:
(This is without my other targetcpu patch)
2023-09-28 16:20:39 ERROR: error - tunnel command '{"cmd":"nbdstop"}' failed -
failed to handle 'nbdstop' command - VM 140 qmp
Upstream report of the issue [0]. I ran into it too by chance by
filling up my NFS storage with the VM's qcow2 disk.
[0]: https://bugzilla.redhat.com/show_bug.cgi?id=2234374
Signed-off-by: Fiona Ebner
---
...ile-posix-Clear-bs-bl.zoned-on-error.patch | 87 +++
...osix-Check-bs-b
Am 29.09.23 um 10:28 schrieb Alexandre Derumier:
> I'm not sure, maybe it's related to writeback, because it never happened with
> a freshly started VM, but VMs running for some time can trigger this.
> (I'm not sure, maybe nbd needs to flush pending data in cache?)
>
It does drain the export's
On 9/28/23 15:33, Philipp Hufnagl wrote:
> When there is no comment for a backup group, the comment of the last
> snapshot in this group will be shown slightly grayed out as long as
> the group is collapsed.
>
> Signed-off-by: Philipp Hufnagl
> ---
> www/css/ext6-pbs.css | 3 +++
> www/d
This new endpoint allows getting the values of config keys that are
either set in the config DB or the ceph.conf file.
Values that are set in the ceph.conf file have priority over values set
in the config DB via 'ceph config set'.
Expects the --config-keys parameter as a semicolon-separated list o
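A minimal sketch (in Python, all data and names hypothetical) of the lookup order the endpoint description implies, including parsing the semicolon-separated key list:

```python
# Sketch of the described precedence: values from ceph.conf take
# priority over values set in the config DB via 'ceph config set'.
# All data below is made up for illustration.

def get_config_values(config_keys, ceph_conf, config_db):
    """config_keys: semicolon-separated list, e.g. 'key_a;key_b'."""
    result = {}
    for key in config_keys.split(";"):
        if key in ceph_conf:          # ceph.conf has priority
            result[key] = ceph_conf[key]
        elif key in config_db:        # fall back to the config DB
            result[key] = config_db[key]
    return result

ceph_conf = {"osd_pool_default_size": "4"}
config_db = {"osd_pool_default_size": "3",
             "osd_pool_default_min_size": "2"}

print(get_config_values(
    "osd_pool_default_size;osd_pool_default_min_size",
    ceph_conf, config_db))
# ceph.conf wins for size; min_size only exists in the config DB
```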
Instead of relying purely on listeners that then manually change other
components, we can use binds, formulas and a basic controller.
This makes it quite a bit easier to let multiple components react to
changes.
A cbind is used for the size component to set the initial start value.
Other options,
Instead of hard-coded defaults for the size and min_size parameters,
check if we have defaults configured in the ceph.conf or config db and
use those.
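The fallback logic can be sketched as follows (Python for illustration; the real code is Perl, and the helper name is made up). The hard-coded 3/2 used as last resort are Ceph's stock defaults:

```python
# 'configured' holds whatever defaults were found in ceph.conf or the
# config DB; the hard-coded values only apply when nothing is configured.

HARDCODED = {"size": 3, "min_size": 2}  # Ceph's stock defaults

def pool_create_defaults(configured):
    size = int(configured.get("osd_pool_default_size",
                              HARDCODED["size"]))
    min_size = int(configured.get("osd_pool_default_min_size",
                                  HARDCODED["min_size"]))
    return {"size": size, "min_size": min_size}

# Cluster with its own configured defaults:
print(pool_create_defaults({"osd_pool_default_size": "4",
                            "osd_pool_default_min_size": "2"}))
# Nothing configured -> fall back to the hard-coded defaults:
print(pool_create_defaults({}))
```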
There are clusters where different defaults are needed. For example if
the cluster spans two rooms and needs to survive the loss of one. A
size/min_
The main goal of this series is to improve the handling of configured
default size & min_size values when creating a new Ceph Pool in the GUI.
A new Ceph API endpoint, 'cfg/value', is added. It allows us to fetch
values for config keys that are set either in the config DB of Ceph or
in the ceph.co
If there is a pending DMA operation during ide_bus_reset(), the fact
that the IDEState is already reset before the operation is canceled
can be problematic. In particular, ide_dma_cb() might be called and
then use the reset IDEState which contains the signature after the
reset. When used to constru
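The ordering problem can be shown with a toy model (Python, every name made up, no relation to the actual QEMU code): if device state is reset before a pending operation is canceled, the operation's callback can observe the already-reset state.

```python
# Toy model of the ordering bug described above.

class ToyBus:
    def __init__(self):
        self.state = "idle"
        self.pending_cb = None

    def start_dma(self):
        self.state = "dma-active"
        self.pending_cb = self.dma_cb

    def dma_cb(self):
        # Uses whatever state it finds - the bug, if reset already ran.
        return f"dma_cb saw state '{self.state}'"

    def reset_buggy(self):
        self.state = "reset"          # state reset first ...
        seen = None
        if self.pending_cb:
            seen = self.pending_cb()  # ... then the pending op fires
            self.pending_cb = None
        return seen

    def reset_fixed(self):
        self.pending_cb = None        # cancel the pending op first
        self.state = "reset"

bus = ToyBus()
bus.start_dma()
print(bus.reset_buggy())  # the callback observes the reset state
```

With the fixed ordering the pending operation is dropped before the state is touched, so no callback can ever see the post-reset signature.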
ping? patches still apply cleanly
On 6/14/23 11:30, Aaron Lauterer wrote:
For that we need to add a new format option that checks against valid
VLAN tags and ranges, for example: 2 4 100-200
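A sketch of such a validator (Python for illustration; the real implementation would be a Perl format option, and the function name is hypothetical). Valid 802.1Q VLAN IDs are 1..4094:

```python
import re

# Validate a space-separated list of VLAN tags and tag ranges,
# e.g. "2 4 100-200".
VLAN_RE = re.compile(r"^(\d+)(?:-(\d+))?$")

def check_vlan_list(value):
    for part in value.split():
        m = VLAN_RE.match(part)
        if not m:
            raise ValueError(f"invalid VLAN entry '{part}'")
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) else start
        if not (1 <= start <= end <= 4094):
            raise ValueError(f"VLAN range '{part}' out of bounds (1-4094)")
    return True

check_vlan_list("2 4 100-200")  # accepted
```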
The check whether the default value should be used needs to fail not just
when not defined, but also in cas