> >>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> >>> index f401baf..e579cdf 100644
> >>> --- a/PVE/QemuServer.pm
> >>> +++ b/PVE/QemuServer.pm
> >>> @@ -6991,7 +6991,15 @@ sub clone_disk {
> >>> # that is given by the OVMF_VARS.fd
> >>> my $src_path = PVE::Storage::pa
This code is quite strange. Could you please use a normal
if .. then .. else .. instead?
> +push @$cmd, '-H' if $healthonly;
> +push @$cmd, '-a', '-A', '-f', 'brief' if !$healthonly;
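For example, something like this (untested, but should behave the same):

    if ($healthonly) {
        push @$cmd, '-H';
    } else {
        push @$cmd, '-a', '-A', '-f', 'brief';
    }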
What about using memory-mapped files as a cache? That way, you do not
need to care about available memory.
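In Perl that could be as simple as the :mmap PerlIO layer (a sketch; the
file name is just a placeholder):

    # the kernel page cache decides how much of the mapping stays
    # resident, so there is no need to size the cache manually
    open(my $fh, '<:mmap', '/var/cache/example-cache.bin')
        or die "open failed: $!\n";
    read($fh, my $buf, 4096);   # served from the mapping
    close($fh);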
> >> Maybe we could get the available memory and use that as a hint. I mean,
> >> as memory usage can be highly dynamic it will never be perfect, but
> >> better than just ignoring it
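On Linux, such a hint could be read from /proc/meminfo (a minimal sketch;
the helper name is made up):

    sub available_memory_bytes {
        open(my $fh, '<', '/proc/meminfo') or return undef;
        while (my $line = <$fh>) {
            return $1 * 1024 if $line =~ /^MemAvailable:\s+(\d+)\s*kB/;
        }
        return undef;
    }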
> Is there a reason why we assume that users without subscription do not want
> such notifications?
>
> As far as I see it, if we change it to
> > $dccfg->{notify_updates} // 1
> Then (until they change something)
> - users with active subscription should _continue_ to get notifications
> - enterp
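To illustrate the proposed default (just the defined-or semantics, not
the full patch):

    # falls back to 1 (enabled) only when the option is absent;
    # an explicit 0 from the user still disables notifications
    my $notify = $dccfg->{notify_updates} // 1;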
FYI, I do it without any regex in Rust:
https://git.proxmox.com/?p=proxmox-backup.git;a=blob;f=src/config/acl.rs;h=61e507ec42bf5a30f64f56564a1fb107d148fb7b;hb=HEAD#l272
I guess this is faster (at least in Rust).
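The same regex-free approach in Perl might look like this (a sketch, not
the actual PVE code): scan with index()/substr() instead of a regex split:

    sub split_acl_path {
        my ($path) = @_;
        my (@parts, $next);
        my $pos = 0;
        while (($next = index($path, '/', $pos)) != -1) {
            push @parts, substr($path, $pos, $next - $pos);
            $pos = $next + 1;
        }
        push @parts, substr($path, $pos);
        return grep { length } @parts;   # drop empty components
    }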
> On 04/19/2021 9:16 AM Lorenz Stechauner wrote:
>
>
> Syntax for permission pa
> I am the maintainer of StorPool’s external storage plugin for PVE[0]
> which integrates our storage solution as a backend for VM disks. Our
> software has the ability to create atomic (crash-consistent) snapshots
> of a group of storage volumes.
We already make sure that snapshots of a group
> This approach would also use more storage as you now have the overhead
> of FS metadata for every single ID you have marked as used.
>
> Dietmar, what do you think is the best option here? I'm personally
> leaning towards using the list with your run-length encoding suggestion,
> but I'm open to
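For reference, reading such a run-length encoded list back is cheap as
well (a sketch, assuming a hypothetical "100-102,105" format):

    sub expand_vmid_ranges {
        my ($encoded) = @_;
        my %used;
        for my $part (split(',', $encoded)) {
            my ($lo, $hi) = split('-', $part);
            $hi //= $lo;
            $used{$_} = 1 for $lo .. $hi;
        }
        return \%used;   # hash set of used IDs
    }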
> > [...]
> > so a factor of 32 less calls to cfs_fuse_write (including memdb_pwrite)
>
> That can be huge or not so big at all, i.e. as mentioned above, it would
> be good to measure the impact through some other metrics.
>
> And FWIW, I used bpftrace to count [0] with an unpatched pmxcfs, t
The format of the used_vmids.list is simple, but can lead to
a very large file over time (we want to avoid large files on /etc/pve/).
>PVE::Cluster::cfs_write_file('used_vmids.list', join("\n", @$vmid_list));
A future version could compress that list by using integer ranges,
for example:
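(untested sketch; the helper name is made up)

    # collapse sorted IDs into ranges,
    # e.g. (100, 101, 102, 105) -> "100-102,105"
    sub compress_vmid_list {
        my ($ids) = @_;
        my @ranges;
        for my $id (sort { $a <=> $b } @$ids) {
            if (@ranges && $id == $ranges[-1][1] + 1) {
                $ranges[-1][1] = $id;
            } else {
                push @ranges, [$id, $id];
            }
        }
        return join(',',
            map { $_->[0] == $_->[1] ? $_->[0] : "$_->[0]-$_->[1]" } @ranges);
    }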
--
Could you please add a column showing write amplification when using dd
instead of file_set_contents, so that we can also see the minimal write
amplification from sqlite.
> The table below illustrates the drastic reduction in write
> amplification when writing files of different sizes to `/etc/pve/` using
>
19 patches for this seems to be too much. Maybe we can try to clean up the
qemu code for those checks and send those patches upstream (in order to make
this series shorter)?
Sorry, please ignore me. I now see that most patches are for qemu-server (not
qemu) ...
> On 4.1.2025 08:58 CET Dietmar Maurer wrote:
>
>
> 19 patches for this seems to be too much. Maybe we can try to cleanup the
> qemu code for those checks and send those patches upstre
> 1. Do we want to allow spaces in groups and/or usernames, or should we
> prefer replacement characters (e.g. mapping space(s) to _ or some other
> character)?
My feeling is that we need to allow all characters - else this will be an
endless issue ...
> 2. In case we want to allow spaces in