On 09.07.20 14:45, Fabian Ebner wrote:
> Signed-off-by: Fabian Ebner
> ---
>
> New in v4
>
> PVE/VZDump/Common.pm | 4
> 1 file changed, 4 insertions(+)
>
>
applied, thanks!
On 30.07.20 12:18, Dominic Jäger wrote:
> Signed-off-by: Dominic Jäger
> ---
> v1->v2: unchanged
>
> src/window/Edit.js | 4
> 1 file changed, 4 insertions(+)
>
> diff --git a/src/window/Edit.js b/src/window/Edit.js
> index d7972b6..2dfab19 100644
> --- a/src/window/Edit.js
> +++ b/src/win
On 14.08.20 10:33, Dominik Csapak wrote:
> fixes 4 issues:
> * use correct /api2/ext url to get the 'success' parameter
> * check 'used' property for 'unused' (pbs vs pve)
> * use 'name' instead of 'devpath' for id
> (name always contains the correct id for the product,
> e.g. /dev/sdd for pve
On 20.08.20 13:50, Fabian Ebner wrote:
> For prune selections, it doesn't matter what the current time is,
> only the timestamps of the backups matter.
>
> Signed-off-by: Fabian Ebner
> ---
>
> Sorry for missing this when I sent the series.
>
> PVE/API2/Storage/PruneBackups.pm | 5 ++---
> PVE
On 20.08.20 15:32, Stefan Reiter wrote:
> This still works even if all drives were clean. It then shows the very
> magical line:
>
> INFO: backup was done incrementally, reused 34.00 GiB (100%)
>
> Signed-off-by: Stefan Reiter
> ---
> PVE/VZDump/QemuServer.pm | 10 +-
> 1 file changed
On 20.08.20 15:32, Stefan Reiter wrote:
> QEMU handles it just as well as with VMA, so this was probably simply
> forgotten when implementing PBS support.
>
> Signed-off-by: Stefan Reiter
> ---
> PVE/VZDump/QemuServer.pm | 1 +
> 1 file changed, 1 insertion(+)
>
>
applied, thanks!
By using a JobTxn, we can sync dirty bitmaps only when *all* jobs were
successful - meaning we don't need to remove them when the backup fails,
since QEMU's BITMAP_SYNC_MODE_ON_SUCCESS will now handle that for us.
To keep the rate-limiting and IO impact from before, we use a sequential
transaction
Signed-off-by: Stefan Reiter
---
include/qemu/job.h | 12
job.c | 24
2 files changed, 36 insertions(+)
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 32aabb1c60..f7a6a0926a 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@
RFC since very experimental and only lightly tested.
Our backup starts one QEMU block job per drive that is included in the final
archive. Currently, we start them all in 'paused' state, manually calling
job_start for the next one whenever one calls its completion callback.
By using a transact
QEMU handles it just as well as with VMA, so this was probably simply
forgotten when implementing PBS support.
Signed-off-by: Stefan Reiter
---
PVE/VZDump/QemuServer.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/VZDump/QemuServer.pm b/PVE/VZDump/QemuServer.pm
index e222463..4640009 100644
---
On 22.07.20 12:20, Dominic Jäger wrote:
> The output of "pvecm delnode someNode" is "Killing node X". Even though this
> only says something about an attempt and not about success, it is not "no
> output is returned".
>
> Signed-off-by: Dominic Jäger
> ---
> pvecm.adoc | 6 +++---
> 1 file chang
This still works even if all drives were clean. It then shows the very
magical line:
INFO: backup was done incrementally, reused 34.00 GiB (100%)
Signed-off-by: Stefan Reiter
---
PVE/VZDump/QemuServer.pm | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/PVE/VZDump
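For illustration, a minimal sketch of how such a summary line could be produced
(the helper and variable names are made up here, not taken from the actual
patch); with all drives clean the reused amount equals the total, hence 100%:

#!/usr/bin/perl
use strict;
use warnings;

# Illustrative only: build a human-readable "reused" summary line like the
# one quoted above. $total and $reused are hypothetical byte counts; the
# real values come from QEMU's backup job statistics.
sub log_reuse_summary {
    my ($total, $reused) = @_;

    return if $reused <= 0; # nothing was reused, no summary needed

    my $gib = sub { sprintf("%.2f GiB", $_[0] / (1024**3)) };
    my $percent = $total > 0 ? int(100 * $reused / $total) : 100;

    # With all drives clean, $reused == $total and this prints "(100%)".
    print "INFO: backup was done incrementally, reused "
        . $gib->($reused) . " ($percent%)\n";
}

log_reuse_summary(34 * 1024**3, 34 * 1024**3);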
On 22.07.20 12:20, Dominic Jäger wrote:
> /etc/corosync/* includes the directory uidgid.d.
> Consequently, a correct rm call requires -r.
>
> Signed-off-by: Dominic Jäger
> ---
> pvecm.adoc | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
applied, thanks!
On 03.08.20 13:07, Dominic Jäger wrote:
> Dashes --- put the "Install PVE on Debian Buster" link into a code segment on
> pve.proxmox.com/wiki/Installation.
>
> Additionally, both links are not ZFS performance tips => Move them further
> below.
>
> Signed-off-by: Dominic Jäger
> ---
> This i
On 06.08.20 09:53, Fabian Ebner wrote:
> Using load-key alone is not enough to be able to use the storage.
>
> Signed-off-by: Fabian Ebner
> ---
> local-zfs.adoc | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>
applied, albeit not by me; someone just forgot to send out this mail
On 11.08.20 14:30, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler
> ---
> PVE/API2/ReplicationConfig.pm | 10 --
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
>
applied series, thanks!
On 06.08.20 13:13, Fabian Grünbichler wrote:
> this is needed for template backups with PBS until we have the backup
> equivalent of 'pbs-restore'.
>
> Signed-off-by: Fabian Grünbichler
> ---
> did some quick tests and didn't run into any issues - @Dietmar/@Stefan
> is that check needed for some
On 20.08.20 10:42, Stefan Reiter wrote:
> No major semantic changes, mostly just deprecations and changed function
> signatures. Drop the extra/ patches, as they have been applied upstream.
>
> The added extra/ patch was accepted upstream[0] but has not been picked
> up for 5.1. It is required for
hi,
i've tested:
- live migration
- backup & restore (also with pbs)
- snapshot & rollback
- pending changes
- disk hotplugging
unfortunately couldn't test pci passthrough because of conflicts in
iommu groups on my machine.
Tested-by: Oguz Bektas
On Thu, Aug 20, 2020 at 11:48:36AM +0200, Ogu
Signed-off-by: Fabian Ebner
---
Otherwise it looks strange IMHO
PVE/CLI/pvesm.pm | 5 +
1 file changed, 5 insertions(+)
diff --git a/PVE/CLI/pvesm.pm b/PVE/CLI/pvesm.pm
index 93ef977..caac51b 100755
--- a/PVE/CLI/pvesm.pm
+++ b/PVE/CLI/pvesm.pm
@@ -932,6 +932,11 @@ our $cmddef = {
For prune selections, it doesn't matter what the current time is,
only the timestamps of the backups matter.
Signed-off-by: Fabian Ebner
---
Sorry for missing this when I sent the series.
PVE/API2/Storage/PruneBackups.pm | 5 ++---
PVE/CLI/pvesm.pm | 4 ++--
2 files changed, 4
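A minimal sketch of the point being made: a keep-last style selection only
compares the backups' own timestamps with each other, never with the current
time (the data structure below is illustrative, not the real PVE::Storage
prune-backups interface):

#!/usr/bin/perl
use strict;
use warnings;

# Mark the $keep_last newest backups as 'keep', the rest as 'remove'.
# Only the backups' own 'ctime' values matter; time() is never consulted.
sub mark_keep_last {
    my ($backups, $keep_last) = @_;

    my @sorted = sort { $b->{ctime} <=> $a->{ctime} } @$backups;
    for my $i (0 .. $#sorted) {
        $sorted[$i]->{mark} = $i < $keep_last ? 'keep' : 'remove';
    }
    return $backups;
}

my $backups = [
    { volid => 'backup-a', ctime => 1597600000 },
    { volid => 'backup-b', ctime => 1597686400 },
    { volid => 'backup-c', ctime => 1597772800 },
];

mark_keep_last($backups, 2);
print "$_->{volid}: $_->{mark}\n" for @$backups;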
i'm testing this on my cluster, will update once i'm done
On 19.08.20 12:30, Fabian Ebner wrote:
> Since it was necessary to switch to 'Type=Simple' in the systemd
> service (see 545d6f0a13ac2bf3a8d3f224c19c0e0def12116d ),
> 'systemctl start pve-container@ID' would not wait for the 'lxc-start'
> command anymore. Thus every container start was reported as
On 20.08.20 10:53, Thomas Lamprecht wrote:
On 12.08.20 12:01, Fabian Ebner wrote:
If there is no serious problem, it shouldn't be possible to run into
this timeout anyways. It's just (extracting and) reading the header of
the (compressed) vma file. And if there is a serious problem, then the
On 20.08.20 10:56, Thomas Lamprecht wrote:
On 12.08.20 12:01, Fabian Ebner wrote:
qcow2 images are allocated with --preallocation=metadata,
which can take a while for large images.
A 5 second timeout is set before reading the device map, so it's
s/seconds/minutes/ ?
No, $timeout = 5; i
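The underlying pattern being discussed, as a generic sketch in plain Perl (not
the actual qemu-server code): run one step under its own short timeout and
restore whatever alarm was pending before, so a later long-running step such
as qcow2 allocation with --preallocation=metadata is not cut short:

#!/usr/bin/perl
use strict;
use warnings;

# Run $code under a dedicated timeout, then re-arm any previously pending
# alarm so an outer, longer timeout is not lost.
sub with_timeout {
    my ($timeout, $code) = @_;

    my $prev = alarm(0); # save and clear any previously pending alarm
    my $res = eval {
        local $SIG{ALRM} = sub { die "got timeout\n" };
        alarm($timeout);
        $code->();
    };
    my $err = $@;
    alarm(0);              # disarm our own timer in any case
    alarm($prev) if $prev; # re-arm the outer timeout, if there was one
    die $err if $err;
    return $res;
}

my $devmap = with_timeout(5, sub { return "parsed device map" });
print "$devmap\n";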
and the associated parts for 'qm start'.
Each test will first populate the MigrationTest/run directory
with the relevant configuration files and files keeping track of the
state of everything necessary. Second, the mock-script for migration
is executed, which in turn will execute the 'qm start' mo
so it can be mocked when testing.
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 119 -
1 file changed, 64 insertions(+), 55 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 8ce265a..1b06640 100644
--- a/PVE/QemuMigrate.pm
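To illustrate why factoring the code out into its own sub helps: once the
logic lives behind a named function, a test can swap just that function for a
mock. The package and sub names below are stand-ins, not the actual
qemu-server helpers:

#!/usr/bin/perl
use strict;
use warnings;

use Test::MockModule;
use Test::More tests => 1;

# A stand-in package with a sub that would normally hit the storage layer.
package My::Migrate;

sub scan_local_volumes {
    die "would talk to the real storage layer here\n";
}

package main;

# Replace only that sub for the duration of the test.
my $mock = Test::MockModule->new('My::Migrate', no_auto => 1);
$mock->mock('scan_local_volumes', sub {
    # pretend the scan found exactly one local disk
    return { 'local:vm-100-disk-0' => { size => 4 * 1024**3 } };
});

my $volumes = My::Migrate::scan_local_volumes();
is(scalar keys %$volumes, 1, 'mocked volume scan returns one disk');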
allows mocking it when testing, and means a few lines less duplication
between the migration modules.
Signed-off-by: Fabian Ebner
---
Dependency bumps
{qemu-server,container} -> guest-common
are needed.
Changes from v1:
* collect patches into one series
* many new tests and improvements to the
Signed-off-by: Fabian Ebner
---
src/PVE/LXC/Migrate.pm | 12 ++--
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index 90d74b4..d2938f6 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -302,9 +302,6 @@ sub
Signed-off-by: Fabian Ebner
---
PVE/QemuMigrate.pm | 9 +
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 11fec4b..8ce265a 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -1181,14 +1181,7 @@ sub phase3_cleanup {
On 12.08.20 12:01, Fabian Ebner wrote:
> qcow2 images are allocated with --preallocation=metadata,
> which can take a while for large images.
>
> A 5 second timeout is set before reading the device map, so it's
s/seconds/minutes/ ?
> necessary to restore the old timeout before calling print_devm
On 12.08.20 12:01, Fabian Ebner wrote:
> Assume that the function is called within a worker not restricted by
> any timeout. This is true currently, because the only path leading to
> restore_vma_archive is via restore_file_archive being called within a
> worker by the create_vm API call.
you coul
On 12.08.20 12:01, Fabian Ebner wrote:
> If there is no serious problem, it shouldn't be possible to run into
> this timeout anyways. It's just (extracting and) reading the header of
> the (compressed) vma file. And if there is a serious problem, then the
> commands will most likely fail for a diff
No major semantic changes, mostly just deprecations and changed function
signatures. Drop the extra/ patches, as they have been applied upstream.
The added extra/ patch was accepted upstream[0] but has not been picked
up for 5.1. It is required for non-4M aligned backups to work with PBS.
[0] htt
On 8/20/20 10:17 AM, Thomas Lamprecht wrote:
On 12.02.20 14:32, Stefan Reiter wrote:
We already keep hugepages if they are created with the kernel
commandline (hugepagesz=x hugepages=y), but some setups (specifically
hugepages across multiple NUMA nodes) cannot be configured that way.
Since we a
On 12.02.20 14:32, Stefan Reiter wrote:
> We already keep hugepages if they are created with the kernel
> commandline (hugepagesz=x hugepages=y), but some setups (specifically
> hugepages across multiple NUMA nodes) cannot be configured that way.
> Since we always clear these hugepages at VM shutdo
Ping? This is an old one, but the bug report is still active.
Would need a rebase if the approach is deemed ok.
On 2/12/20 2:32 PM, Stefan Reiter wrote:
We already keep hugepages if they are created with the kernel
commandline (hugepagesz=x hugepages=y), but some setups (specifically
hugepages