On 28.05.21 14:09, Fabian Grünbichler wrote:
> running outdated VMs without master key support will generate a warning,
> but the backup will proceed without the encrypted key upload.
>
> Signed-off-by: Fabian Grünbichler
> ---
> context in second hunk changed..
>
> PVE/VZDump/QemuServer.pm | 13 +
On 28.05.21 14:14, Dominik Csapak wrote:
> build-depends naturally on the new proxmox-widget-toolkit-dev package
>
> Signed-off-by: Dominik Csapak
> ---
> Makefile | 10 +-
> api-viewer/PVEAPI.js | 489 +--
> debian/control | 1 +
> ex
On 23.04.21 12:14, Fabian Ebner wrote:
> widget-toolkit:
>
> Fabian Ebner (4):
> disk list: fix minor usage renderer issue
> disk list: factor out renderer for type and usage
> disk list: move title bar initialization to initComponent
> disk list: add wipe disk button
>
> src/panel/DiskL
Provides a fast cache read implementation with full async and
concurrency support.
Signed-off-by: Stefan Reiter
---
This is technically all that's needed for proxmox-backup-qemu to build and
function as intended, but I decided to also use this IMHO cleaner implementation
to replace the AsyncInde
Use the new CachedChunkReader with the shared_cache implementation to
provide a concurrency-safe async way of accessing data. This provides
two benefits:
* uses a shared LRU cache, which is very helpful for random-access
patterns such as during a live-restore
* does away with the global Mutex in read_image_
Implemented as a separate struct SeekableCachedChunkReader that contains
the original as an Arc, since the read_at future captures the
CachedChunkReader, which would otherwise not work with the lifetimes
required by AsyncRead. This is also the reason we cannot use a shared
read buffer and have to a
Setting this to 0 is not just useless, but breaks the logic horribly
enough to cause random segfaults - better forbid this, to avoid someone
else having to debug it again ;)
Signed-off-by: Stefan Reiter
---
src/tools/lru_cache.rs | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/tools/lru_
superseded by CachedChunkReader, with less code and more speed
Signed-off-by: Stefan Reiter
---
src/backup.rs | 3 -
src/backup/async_index_reader.rs | 215 ---
2 files changed, 218 deletions(-)
delete mode 100644 src/backup/async_index_reader.rs
This series is the third attempt[0] at substantially improving live-restore
performance. This time, a fully async- and concurrency safe LRU cache is
implemented, and a new CachedChunkReader is used to provide lock-free reading
from a remote chunk source. The big performance improvements come from r
admin/datastore only reads linearly, so there is no need for a cache (a
capacity of 1 basically means no cache except for the currently active
chunk).
mount can do random access too, so cache the last 8 chunks for a possible
mild performance improvement.
Signed-off-by: Stefan Reiter
---
src/api2/admin/datastore.
Provides a shared AsyncLruCache of 256MB (w/ 4MB chunks) that can be
used by multiple readers at the same time. It is dropped once no more
readers exist, so the memory gets freed if all QEMU block/pbs instances
disappear.
Signed-off-by: Stefan Reiter
---
src/lib.rs | 7 ++-
src/sha
Supports concurrent 'access' calls to the same key via a
BroadcastFuture. These are stored in a separate HashMap, the LruCache
underneath is only modified once a valid value has been retrieved.
Signed-off-by: Stefan Reiter
---
src/tools.rs | 1 +
src/tools/async_lru_cache.rs |
Explicitly test that data will stay available and can be retrieved
immediately via listen(), even if the future producing the data and
notifying the consumers was already run in the past.
Signed-off-by: Stefan Reiter
---
Wasn't broken or anything, but helps with understanding IMO.
src/tools/br
On 28.05.21 14:13, Dominik Csapak wrote:
> intended as build-dependency
> contains the apiviewer (will be used for all -docs packages),
> and Toolkit.js which will be used for the prune-simulator and
> lto-barcode-generator
>
> Dominik Csapak (3):
> Toolkit: move defaultDownloadServerUrl overrid
On 23.04.21 12:14, Fabian Ebner wrote:
> so admins can wipe disks that are not actually used, but contain left-overs.
>
> The last patch needs dependency bumps for pve-storage and
> proxmox-widget-toolkit.
>
> storage:
>
> Fabian Ebner (5):
> diskmanage: add wipe_blockdev method
> diskmanage: fa
Since I just ran into it: It also breaks (at least container) backups
when there is a volume on a misconfigured storage.
On 02.06.21 at 09:29, Fabian Ebner wrote:
There's an edge case with 'restart' migration for containers that breaks
because of the new content type on startup checks:
If ther
On Wednesday, 2 June 2021 at 08:58 +0200, Thomas Lamprecht wrote:
> Hi,
>
> On 02.06.21 08:39, aderum...@odiso.com wrote:
> > I was looking at qemu 6.0's new features,
> > and it seems that they have implemented parallel async chunk backups
> > (and I think for other block operations, through a n
> On 06/02/2021 12:16 PM wb wrote:
>
>
> > I also wonder why SAML? Would it be an option to use OpenId connect instead?
> As I have already used SAML, I know its functional side, so choosing
> SAML was mainly a matter of convenience.
>
> Switching to OpenID, why not; just give me the time to set up a functional POC.
> I also wonder why SAML? Would it be an option to use OpenId connect instead?
As I have already used SAML, I know its functional side, so choosing SAML
was mainly a matter of convenience.
Switching to OpenID, why not; just give me the time to set up a functional POC.
On the other hand, I would like to know your con
> > I wonder why you want to store temporary data in /etc/pve/tmp/saml.
> > Wouldn't it be good enough
> > to store that on the local file system?
> On the one hand, I enjoyed reusing your work.
> On the other hand, I think it is more secure to put this kind of data in
> /etc/pve/tmp/saml than in
On 01.06.21 at 18:10, Aaron Lauterer wrote:
The goal of this is to expand the move-disk API endpoint to make it
possible to move a disk to another VM. Previously this was only possible
with manual intervention, either by renaming the VM disk or by manually
adding the disk's volid to the config of
On 01.06.21 at 18:10, Aaron Lauterer wrote:
Functionality has been added for the following storage types:
* dir based ones
  * directory
  * NFS
  * CIFS
* gluster
* ZFS
* (thin) LVM
* Ceph
A new feature `rename` has been introduced to mark which storage
plugin supports the featu
On 01.06.21 at 18:10, Aaron Lauterer wrote:
Signed-off-by: Aaron Lauterer
---
PVE/Storage.pm | 8
1 file changed, 8 insertions(+)
diff --git a/PVE/Storage.pm b/PVE/Storage.pm
index aa36bad..93d09ce 100755
--- a/PVE/Storage.pm
+++ b/PVE/Storage.pm
@@ -201,6 +201,14 @@ sub storage_ca
There's an edge case with 'restart' migration for containers that breaks
because of the new content type on startup checks:
If there is an already running container with a volume on storage A, and
now storage A is reconfigured to not support 'rootdir' anymore, then
migration itself does work, bu