--- Begin Message ---
About discard: maybe we should add a note about enabling
issue_discards=1 in lvm.conf. (I don't know if it could be enabled by
default for new installs?)
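For reference, a minimal sketch of what such a note could show (assuming
the stock lvm.conf layout, where this option lives in the devices
section):

devices {
    # pass discard/TRIM requests from removed LVs down to the underlying PVs
    issue_discards = 1
}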
Original message
From: Fabian Grünbichler
Reply-To: Proxmox VE development discussion
To: pve-devel@list
--- Begin Message ---
Thanks, Fiona!
>>While 'snapshot-as-volume-chain' is not the perfect proxy, as that's
>>not only for LVM, it's an experimental feature that covers the LVM
>>case and it seems like a nice fit to try out the new option on
>>file-based storages too.
I'll try to run tests with qco
--- Begin Message ---
>>Currently, the
>>option does not seem to make a difference with 'qemu-img measure', so
>>that needs to be further investigated.
It seems normal that the preallocation option doesn't make a difference
here: qemu-img measure always computes the size of all metadata. (only
cluste
--- Begin Message ---
Hi Fiona, I'm on holiday and can't verify, but before using qemu-img
measure I had implemented a metadata size computation myself (it was
not 100% perfect).
This is strange, because I thought that "qemu-img measure" was working
correctly (we need to pass it the blocksize && l2_extended optio
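A sketch of the kind of invocation meant here (assuming
PVE::Tools::run_command and qemu-img's standard measure options; the
size is illustrative):

use PVE::Tools qw(run_command);

# measure the worst-case size of a 100G qcow2 created with the same
# options we use for backed images
my $cmd = [
    '/usr/bin/qemu-img', 'measure',
    '--size', '100G',
    '-O', 'qcow2',
    '-o', 'cluster_size=128k,extended_l2=on',
];
run_command($cmd);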
--- Begin Message ---
Also thinking about that:
if the local daemon is down/dead and doesn't unqueue,
maybe some kind of TTL for the object in the queue should be used.
If we go with a separate queue for each server, it's not a problem,
but they need to compete on the resize lock. (so we can'
--- Begin Message ---
>>Ok, are you thinking of having a local queue for each node as well?
>>since if there was a single queue for all nodes how would you manage
>>the
>>concurrency of writes of each node?
I really don't know, to be honest.
What could happen is a live migration, where an even
--- Begin Message ---
>>
>>However I have some questions:
>>
>>1. When qmeventd receives the BLOCK_WRITE_THRESHOLD event, should the
>>extend request (writing the nodename to the extend queue) be handled
>>directly in C, or would it be preferable to do it via an API call
>>such
>>as PUT /nodes/{n
--- Begin Message ---
>>
>>
>>Wolfgang now applied all patches and some follow-ups, we test this a
>>bit
>>more internal, but if nothing grave comes up this should be included
>>in
>>upcoming PVE 9 - great work!
Fantastic! I would like to thank you guys for your support! So
thanks Fiona, Fabian
--- Begin Message ---
>>Yeah I tried some quick tests and it seems to be a bit tricky. Or
>>maybe
>>I just missed something.
I just did some quick tests, and I think I have found a way.
(I'll do more tests tomorrow to see if everything is OK in the guest)
sub qemu_volume_snapshot {
my ($vmid, $device
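Since the sub above is cut off in the archive, here is a minimal sketch
of one way the switch to a new overlay can be driven — not necessarily
what the actual patch does — assuming QMP's blockdev-snapshot-sync plus
the existing mon_cmd and PVE::Storage::path helpers ($deviceid, $volid
and $snap are illustrative):

# the overlay must already exist on the storage when mode is 'existing'
my $snappath = PVE::Storage::path($storecfg, $volid, $snap);
mon_cmd($vmid, 'blockdev-snapshot-sync',
    device => $deviceid,
    'snapshot-file' => $snappath,
    format => 'qcow2',
    mode => 'existing',
);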
--- Begin Message ---
> + rename($src_file_blockdev->{filename},
> $target_file_blockdev->{filename});
>>
>>^ This seems out of place. This does not make sense for all storages
>>and
>>should already have happened via PVE::Storage::rename_snapshot(), no?
>>You don't want to literally rename
--- Begin Message ---
> +* Introduce volume_support_qemu_snapshot() plugin method
> + This method is used to know if a snapshot needs to be done
> by qemu
> + or by the storage API.
> + returned values are:
> + 'internal': supports snapshot with qemu internal snapshot
> + 'exte
--- Begin Message ---
> > > 4. all snapshot volumes on extsnap dir storages will print
> > > warnings
> > > like
> > >
> > > `this volume filename is not supported anymore`
> > >
> > > when hitting `parse_namedir` - those can likely be avoided by
> > > skipping the warning if the name matches the
--- Begin Message ---
>>6. it's fairly easy to accidentally create qcow2-formatted LVM
>>volumes, as opposed to the requirement to enable a non-UI storage
>>option at storage creation time for dir storages, we might want to
>>add some warning to the UI at least? or we could also guard usage of
>>th
--- Begin Message ---
>>4. all snapshot volumes on extsnap dir storages will print warnings
>>like
>>
>>`this volume filename is not supported anymore`
>>
>>when hitting `parse_namedir` - those can likely be avoided by
>>skipping the warning if the name matches the snapshot format and
>>external-sn
--- Begin Message ---
>> sub qemu_img_resize {
>>- my ($scfg, $path, $format, $size, $timeout) = @_;
>>+ my ($scfg, $path, $format, $size, $preallocation, $timeout) =
@_;
you forgot to remove the $scfg param, so
it's breaking resize for both Plugin && LVMPlugin
Plugin.pm:PVE::Sto
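For reference, a sketch of the corrected signature being asked for
(assuming run_command from PVE::Tools and qemu-img's standard
--preallocation flag; this is a sketch, not the actual patch):

sub qemu_img_resize {
    my ($path, $format, $size, $preallocation, $timeout) = @_;

    my $cmd = ['/usr/bin/qemu-img', 'resize'];
    push @$cmd, '--preallocation', $preallocation if $preallocation;
    push @$cmd, '-f', $format if $format;
    push @$cmd, $path, $size;
    run_command($cmd, timeout => $timeout);
}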
--- Begin Message ---
>>1. missing activation when snapshotting an LVM volume if the VM is
>>not running
Ah yes, I didn't see it: on volume create, the volume is auto-
activated, but if you start/stop the VM, it's deactivated.
I'm seeing a new patch to disable auto-activation too
https://git.proxmo
--- Begin Message ---
Hi Thomas
On 10.07.25 at 17:46, DERUMIER, Alexandre wrote:
> I'll try to fix all your comments by next week.
>
> I'm going on holiday at the end of next week, the 18th of July to around 10
> August, so I think it'll be the last time I can work on it before
> next
> month. But f
--- Begin Message ---
Hi Fabian,
I'll try to fix all your comments by next week.
I'm going on holiday at the end of next week, the 18th of July to around 10
August, so I think it'll be the last time I can work on it before next
month. But feel free to improve my patches during this time.
--- Begin Message ---
> +sub get_snap_name {
>>should this be public?
I'll make it private
> +sub get_snap_volname {
>>should this be public?
> +
> +sub parse_snapname {
>>should this be public?
These two methods are used in volume_snapshot_info(), defined in Plugin.pm,
and used by LVMPlugin too
--- Begin Message ---
> +
> + # we can simply reformat the current lvm volume to avoid
> + # a long safe remove. (not needed here, as the allocated space
> + # still has the same owner)
> + eval { lvm_qcow2_format($class, $storeid, $scfg, $volname,
> $format, $snap) };
>>what if the volu
--- Begin Message ---
> + my $backing_path = $class->path($scfg, $name, $storeid,
> $backing_snap) if $backing_snap;
>>also, this should probably encode a relative path so that renaming
>>the VG and
>>adapting the storage.cfg entry works without breaking the back
>>reference?
About relative
--- Begin Message ---
>>okay, that means we instead need to become more strict with 'snapext'
>>storages and restrict the volnames there.. maybe to (vm-|base-)-XXX-
>>*.fmt?
$plugin->parse_volname($volname) doesn't have a $scfg param currently.
Do you want to extend it? (and make the change in every plug
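For reference, a sketch of the kind of stricter match being discussed
(the exact name pattern and format list are illustrative, not the final
rule):

# only accept names like vm-100-disk-0.qcow2 / base-100-disk-0.raw
if ($volname !~ m/^(vm|base)-\d+-[A-Za-z0-9._\-]+\.(raw|qcow2)$/) {
    die "illegal volname '$volname' on external-snapshot storage\n";
}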
--- Begin Message ---
>>Can we please use an actually telling name though? As "ext" is quite
>>often used as term for "extension", and we really win nothing with
>>doing this.
>>
>>Strongly preferring words to be spelled out in full and separated
>>with
>>hyphens, instead of something else or being
--- Begin Message ---
>
> I think it'll break parsing of already configured storage without
> snapext option ?
>>I don't think it does?
Ah, I had tried with
{ fixed => 1 } only,
but it's OK with
{ optional => 1, fixed => 1 }
pvesm set teststorage --snapext 1
update storage failed: can't cha
--- Begin Message ---
>>okay, that means we instead need to become more strict with 'snapext'
>>storages and restrict the volnames there.. maybe to (vm-|base-)-XXX-
>>*.fmt?
>>that means only allowing such names when allocating volumes, and
>>filtering
>>when listing images..
>>
>>since we want to
--- Begin Message ---
> +my sub alloc_backed_image {
> + my ($class, $storeid, $scfg, $volname, $backing_snap) = @_;
> +
> + my $path = $class->path($scfg, $volname, $storeid);
> + my $backing_path = $class->path($scfg, $volname, $storeid,
> $backing_snap);
>>should we use a relative path
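On the relative-path question, a sketch of how the backing reference can
be kept relative (qemu-img stores the -b string as given in the qcow2
header and resolves it against the overlay's directory; $backing_name
and $path are illustrative):

my $cmd = [
    '/usr/bin/qemu-img', 'create', '-f', 'qcow2',
    '-b', $backing_name, # bare file name: stored as-is in the header,
                         # resolved relative to the overlay's directory
    '-F', 'qcow2',
    $path,
];
run_command($cmd);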
--- Begin Message ---
> preallocation => { optional => 1 },
> + snapext => { optional => 1 },
>>needs to be "fixed", as the code doesn't handle mixing internal
>>and external snapshots on a single storage..
I think it'll break parsing of already configured storages without the
snapext
--- Begin Message ---
>>we could consider adding a new API method `rename_snapshot` instead:
>>
>>my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) =
>>@_;
>>
>>for the two plugins here it could easily share most of the
>>implementation
>>with rename_volume, without blowing up the
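A rough sketch of how the shared directory-plugin part might look
(get_snap_path is a hypothetical helper standing in for whatever
snapshot-name mapping the plugins already have; this is not the proposed
implementation):

sub rename_snapshot {
    my ($class, $scfg, $storeid, $volname, $source_snap, $target_snap) = @_;

    # hypothetical helper mapping (volname, snap) to the on-disk file
    my $source_path = get_snap_path($scfg, $volname, $source_snap);
    my $target_path = get_snap_path($scfg, $volname, $target_snap);

    rename($source_path, $target_path)
        or die "rename '$source_path' -> '$target_path' failed: $!\n";

    return undef;
}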
--- Begin Message ---
>
> or maybe use something else than volume_snapshot_info here, simply
> glob
> all the vm disk && snap files and delete them in random order, as we
> want to delete it anyway.
yes, this is exactly what I meant with tricky ;)
>>if we start deleting snapshots from the "first
--- Begin Message ---
Ah, sorry, I just noticed that I rebased the wrong patch series
version (I had already done some fixes for Fiona's last review).
Fiona wanted a dedicated sub to create backed images, like:
our $QCOW2_CLUSTERS = {
    backed => ['extended_l2=on', 'cluster_size=128k'],
};
=pod
=hea
--- Begin Message ---
> + snapext => { optional => 1 },
>>needs to be "fixed", as the code doesn't handle mixing internal
>>and external snapshots on a single storage..
indeed, I'll fix it
>
>
> +my sub alloc_backed_image {
> + my ($class, $storeid, $scfg, $volname, $backing_snap) =
--- Begin Message ---
> + #we skip snapshot for tpmstate
> + return if $deviceid && $deviceid =~ m/tpmstate0/;
>>I think this is wrong.. this should return 'storage' as well?
Ah yes, indeed, I don't know why I was confused and thought we
couldn't take a storage snapshot of tpmstate when
--- Begin Message ---
>>these should probably stay in Plugin.pm
Ok, will do, no problem (Fiona asked me to move it out of Plugin)
--- End Message ---
--- Begin Message ---
>
> >>cluster_size is set to 128k, as it reduces qcow2 overhead (reduces
> >>disk usage,
> >>but also the memory needed to cache metadata)
>>
>>should we make this configurable?
I'm not sure yet; I chose the best balance between memory <->
performance (too big a block reduces perfor
--- Begin Message ---
Original message
From: Fabian Grünbichler
To: Proxmox VE development discussion
Cc: Alexandre Derumier, Thomas
Lamprecht
Subject: Re: [pve-devel] [PATCH-SERIES v7 pve-storage/qemu-server] add
external qcow2 snapshot support
Date: 04/07/2025 13:58:38
> Alexand
--- Begin Message ---
Hi,
I just noticed this when rebasing my patches ^_^
(sorry, my last patch series was still tabs+spaces)
I'm trying to install proxmox-perltidy but it depends on perltidy
20250311.05
dpkg: dependency problems prevent configuration of proxmox-perltidy:
proxmox-perltidy depends
--- Begin Message ---
>>
>>Do you have capacity for rebasing your external qcow2 patches on
>>top until early next week? Else I can also do it.. I'd like to
>>get those into shape for more widespread internal testing over
>>the next week, if possible.
Yes, I'm currently working on it! (I'm full t
--- Begin Message ---
Hi Fiona,
I did a lot of tests yesterday and haven't found a bug.
Tested:
- hotplug/unplug
- cdrom swap, cdrom none->iso, iso->none (no more error)
- cloudinit refresh
- backup to file/pbs, fleecing device
- restore from file/pbs, live restore
- move disk between di
--- Begin Message ---
Patch 46 blocked in the mailing list?
--- End Message ---
ange_medium() helper
Date: 01/07/2025 12:25:59
On 01.07.25 at 12:20, DERUMIER, Alexandre via pve-devel wrote:
> > > I cannot reproduce this here, could you share the exact commands
> > > and
> > > machine configuration?
>
> ah sorry, I didn't receive patch 29,
--- Begin Message ---
>>I cannot reproduce this here, could you share the exact commands and
>>machine configuration?
Ah sorry, I didn't receive patch 29, and I applied the patch from
the previous series, and I think it's not calling the correct sub.
I'll retest with the correct patch, sorry
--- Begin Message ---
>>if you start a VM with cdrom=none, throttle group object is
>>generated:
Sorry, I meant: the throttle top node (which is using the
throttle group).
I think it's just related to blockdev-medium-change; as we can't
specify the node chain, qemu is simply autocreating
--- Begin Message ---
Another thing:
if the VM is started with cdrom=none, then you switch to an ISO,
the throttle group is not generated (+ the autogenerated nodenames)
--- End Message ---
--- Begin Message ---
>>After a cloudinit regenerate, or if I swap a cdrom image to a new
>>cdrom
>>image,
>>
>>the old format && file blockdev are not removed, and the new
>>blockdevs
>>have autogenerated nodenames
Not sure if it's a qemu bug, but I think this is why I used open-
tray, rem
--- Begin Message ---
After a cloudinit regenerate, or if I swap a cdrom image to a new cdrom
image,
the old format && file blockdevs are not removed, and the new blockdevs
have autogenerated nodenames
info blockdev -n
#block143: /var/lib/vz/images/107/vm-107-cloudinit.qcow2 (qcow2)
Cache m
--- Begin Message ---
Original message
From: "DERUMIER, Alexandre"
To: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH qemu-server 05/31] blockdev: add helpers
for attaching and detaching block devices
Date: 30/06/2025 12:35:22
> + # node and also implicit backing chil
--- Begin Message ---
> + # node and also implicit backing children referenced by a qcow2
> image.
> + eval { mon_cmd($vmid, 'blockdev-del', 'node-name' =>
> "$node_name"); };
> + if (my $err = $@) {
> + return if $err =~ m/Failed to find node with node-name/; #
> already gone
>>do
--- Begin Message ---
Patch 29/31 seems to be missing. (I don't see it on lore.proxmox.com
either)
--- End Message ---
--- Begin Message ---
Hi Fiona,
from my tests, I needed to use the top throttle node to have the new
resize correctly reported to the guest
https://lore.proxmox.com/all/mailman.947.1741688963.293.pve-de...@lists.proxmox.com/
(I'm going to test your patch serie today)
Original message ---
--- Begin Message ---
Hi Dominik,
I'm going to send a patch to use json format by default for -device
options; it should help here.
I have a patch for multiple iothreads too on my side (for both virtio
&& virtio-scsi); maybe we could compare implementations.
On my side, I'm using the same implemen
--- Begin Message ---
>>Yes, but note that v2 of part one was already applied on master
>>except
>>those patches.
Ah OK, sorry, I didn't see it!
>>I.e. this series here applies on current master. But
>>you'll need Debian Trixie going forward, as there now is a dependency
>>on
>>libpve-common-per
--- Begin Message ---
>>
>>I think I'll send the next version of the storage patches relatively
>>shortly after part three (still quite a bit of splitting up
>>QemuServer.pm on the way, e.g for introducing a BlockJob module).
>>I'll
>>try to send part three in the following days, I hope I get aroun
--- Begin Message ---
>>diff --git a/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
>>b/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
>>index 5c55c01b..474e8038 100644
>>--- a/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
>>+++ b/src/test/cfg2cmd/efidisk-on-rbd.conf.cmd
>>@@ -9,8 +9,9 @@
>> -pidfile /var/run/qem
--- Begin Message ---
>>Would be a good catch ;) But after the recent discussion upstream
>>[0],
>>the plan is to not patch QEMU, but set the option via RBD in the
>>storage
>>plugin itself for EFI disks, that's why the hint is still passed
>>along.
>>I added a comment that it's set via the storage
--- Begin Message ---
Hi Fiona,
Does this apply on top of the part one series v2? Some commits seem to be
the same
(I see patches 32, 31, 28).
Do I need to apply part one excluding these patches (I hope that
patches 29, 30 will apply without 28), then apply part two?
--- End Message ---
--- Begin Message ---
Hi,
Nice work!
Could it be possible to have an option to configure
CONCURRENT_REQUESTS?
(to avoid putting too much load on slow spinning storage)
Original message
From: Adam Kalisz
To: pve-devel@lists.proxmox.com
Subject: Discussion of major PBS rest
--- Begin Message ---
>>But again, I would do that on top of this series, it should not
>>really change anything for the implementation here.
Yes sure, no rush; we still have a workaround by manually setting host
colocation rules.
I asked to be sure that this case could be possible to impleme
--- Begin Message ---
Hi Daniel,
Thanks for your hard work on this.
I don't know if it's the best place, but one thing missing currently
is resource affinity. For example,
if a VM uses a specific storage, it needs to run on a node where the
storage is present.
Same for the number of cores of
3.06.25 at 11:12, DERUMIER, Alexandre via pve-devel wrote:
> > > + my $blockdev =
> > > PVE::QemuServer::Blockdev::generate_drive_blockdev($storecfg,
> > > $device, {});
> > > + mon_cmd($vmid, 'blockdev-add', %$blockdev, timeout =>
--- Begin Message ---
>>+ my $blockdev =
>>PVE::QemuServer::Blockdev::generate_drive_blockdev($storecfg,
>>$device, {});
>>+ mon_cmd($vmid, 'blockdev-add', %$blockdev, timeout => 60);
>>+
>>+ return 1;
Should we handle errors here? (I don't know if a blockdev-add can
fail,
--- Begin Message ---
>>1) Having "location" and "colocation" rules is, I think, going to be
>>unnecessarily confusing for people. While it isn't too complicated to
>>glean
>>the distinction once having read the descriptions of them (and I had
>>to go
>>read the descriptions), they don't convey imm
--- Begin Message ---
>>If you need further functionality for the external snapshot support,
>>you
>>can add it later :)
Yes sure! If I remember correctly, it's only after taking a new snapshot, so
maybe it's blockdev-reopen related.
If I start a VM with an existing backing file (like a linked clone),
i
--- Begin Message ---
> > With '-blockdev', it is necessary to activate the volumes to
> > generate
> > the command line, because it can be necessary to check whether the
> > volume is a block device or a regular file.
>>I was thinking about that, but do we have storage with
>>activate_volume
>>n
--- Begin Message ---
>>With '-blockdev', it is necessary to activate the volumes to generate
>>the command line, because it can be necessary to check whether the
>>volume is a block device or a regular file.
I was thinking about that, but do we have storages where activate_volume
needs to be done fo
--- Begin Message ---
>>The 'snapshot' option, for QEMU's snapshot mode, i.e. writes are only
>>temporary, is not yet supported.
from qemu manpage:
"
-snapshot
Write to temporary files instead of disk image files. In
this case, the raw disk image you use is not written back.
--- Begin Message ---
I think I wrongly dropped nbd with a unix socket in my last patch
series, but previously it was:
+} elsif ($path =~ m/^nbd:(\S+):(\d+):exportname=(\S+)$/) {
+ my $server = { type => 'inet', host => $1, port => $2 };
+ $blockdev = { driver => 'nbd', serve
--- Begin Message ---
>>+ # QEMU recursively auto-removes the file children, i.e. file
>>and format node below the top
From my tests, it's not removing backing nodes when snapshots are used,
at least when they are defined with a nodename. I haven't tested with
autogenerated backing nodes, I'll
--- Begin Message ---
Hi Fiona,
I'm going to test your patch series this week
>>While the last patch actually does the switch, many operations are
>>not
>>yet supported. It is included to show what changes I made there. It
>>should not yet be applied and supporting everything is the goal for a
>>foll
--- Begin Message ---
>>I was thinking the same way (probably influenced by oVirt's way of
>>achieving this)
Yes, I haven't reinvented the wheel. I have customers with oVirt in
production with this setup, and I know that it's working
--- Begin Message ---
> > Later on, I'd like to contribute to a version which
> > enables thin provisioned snapshots.
>>For thin provisioning, I did preliminary work in September 2024
>>(check
>>the pve-devel mailing list),
>>but I was waiting for the snapshots to be finished first.
here is the patch series
--- Begin Message ---
>> Later on, I'd like to contribute to a version which
>>enables thin provisioned snapshots.
For thin provisioning, I did preliminary work in September 2024 (check
the pve-devel mailing list),
but I was waiting for the snapshots to be finished first.
I can help you if you want abou
--- Begin Message ---
On 11.06.25 at 16:02, DERUMIER, Alexandre wrote:
> (I'll be busy with proxmox training tomorrow && friday, so I'll
> rework
> on it next week)
>>I did mention in an earlier mail that I'm already working on a series
>>for the switch to blockdev based on your patches myself.
--- Begin Message ---
> >
> >>
> >I'm still unsure how to handle the same volume multiple times (if we really
> >want it). I was thinking of using the deviceid in the name
> >(virtio0,..),
> >but it doesn't work when you unplug/replug to a deviceid.
>>Why wouldn't it work?
Ah sorry, I forgot that we delete
--- Begin Message ---
>>Also, while the use case here shouldn't be cryptographically
>>sensitive,
>>you never know, so I'll just use a different hash function than sha1.
>>I'll cut off the result from that hash to 30 hex digits. Then we
>>still
>>have one letter for the prefix of the node name.
so
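As a sketch of that scheme (Digest::SHA ships with core Perl; the prefix
letter and separator are illustrative):

use Digest::SHA qw(sha256_hex);

sub encode_nodename_sketch {
    my ($prefix, $volid, $snap) = @_;
    # 30 hex digits keep us within QEMU's 31-character node-name limit
    # while leaving one character for a letter prefix (node names must
    # not start with a digit); the separator avoids volid/snap ambiguity
    my $hash = substr(sha256_hex("$volid/" . ($snap // '')), 0, 30);
    return "$prefix$hash";
}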
--- Begin Message ---
>>
>>ah, indeed. I think I'll use the ide.conf test to verify (this is the
>>only test with cdrom media attached, and it's using cifs-store,
>>where
>>io_uring is disabled).
>>
>>should be io_uring, or threads if the storage doesn't support io_uring.
>>
>>I'll fix it, and improve te
--- Begin Message ---
On 03.06.25 at 09:55, Alexandre Derumier via pve-devel wrote:
> +sub generate_blockdev_drive_aio {
> + my ($drive, $scfg) = @_;
> +
> + my $cache_direct = drive_uses_cache_direct($drive, $scfg);
> + $drive->{aio} = 'threads' if drive_is_cdrom($drive);
>>We didn't fo
--- Begin Message ---
>
> No manual editing needed, just use "qm set" twice with the same
> volume
> ;) Sure, those are most likely quite exotic use cases. If we want to,
> we
> could go ahead an prohibit this for PVE 9. There always is the -args
> escape hatch for people that really need it. Woul
--- Begin Message ---
On 03.06.25 at 09:55, Alexandre Derumier via pve-devel wrote:
> +sub encode_nodename {
> + my ($type, $volid, $snap) = @_;
> +
> + my $nodename = "$volid";
> + $nodename .= "-$snap" if $snap;
This will lead to clashes in some cases:
>>1. Currently, we allow attachin
--- Begin Message ---
On 03.06.25 at 09:55, Alexandre Derumier via pve-devel wrote:
> This new patch series enable blockdev for qemu machine > 10.0 to
> avoid breaking efidisk and maybe
> potential migrations bug
>>Did you see any actual issues with migration or mirroring the EFI
>>disk
>>now or
--- Begin Message ---
Hi Fiona,
I'm currently finishing my new patch series
>>
>>* Drop $snapshot parameter, currently there is no need to attach
>> snapshots via -blockdev. They would need to be attached read-only
>> too to not fail and this can always be added later.
I need it in different
--- Begin Message ---
> >
> > * Drop $snapshot parameter, currently there is no need to attach
> > snapshots via -blockdev. They would need to be attached read-only
> > too to not fail and this can always be added later.
>>I need it in different places in the external snapshot code (to
>>ge
5.25 at 11:08, DERUMIER, Alexandre via pve-devel wrote:
> Perl question: how to call a recursive private sub? (it doesn't seem
> to
> allow it)
>>AFAIK, you can do it by declaring it up-front:
Works, thanks!
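For the archives, a minimal self-contained example of the up-front
declaration trick (lexical subs are stable since Perl 5.26):

use v5.26;

my sub countdown;          # predeclare so the body can see its own name
sub countdown {            # this defines the lexical sub declared above
    my ($n) = @_;
    return if $n < 0;
    say $n;
    countdown($n - 1);     # the recursive call now resolves
}

countdown(3);              # prints 3, 2, 1, 0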
--- End Message ---
--- Begin Message ---
>
>
> +sub generate_backing_blockdev {
>>make this private?
Perl question: how to call a recursive private sub? (it doesn't seem to
allow it)
> + my ($storecfg, $snapshots, $deviceid, $drive, $snap_id) = @_;
> +
> + my $snapshot = $snapshots->{$snap_id};
> + my
--- Begin Message ---
>>
>>has_features->{snapshot}->{raw} = 1 : it's always a storage snapshot
>>(assuming we don't want to implement external qcow2 snapshot with raw
>>backing)
>>
>>has_features->{snapshot}->{qcow2|other_format_than_raw)} = 1 : qemu
>>internal snapshot
>>has_features->{snapshot
--- Begin Message ---
Hi,
>
> my $cmd = [];
> push @$cmd, '/usr/bin/qemu-img', 'convert', '-p', '-n';
> - push @$cmd, '-l', "snapshot.name=$snapname"
> - if $snapname && $src_format && $src_format eq "qcow2";
> + push @$cmd, '-l', "snapshot.name=$snapname" if $snapname &&
> $snaps
--- Begin Message ---
>>I'd prefer having a second dedicated function
>>qemu_img_create_qcow2_with_backing() rather than an interface where
>>some
>>parameters need to be undef depending on how you want to use it. It
>>also
>>doesn't require a format parameter, because we only allow it for
>>qcow2.
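A sketch of what such a dedicated function could look like, reusing the
backed-image options quoted elsewhere in this thread (again a sketch,
not the actual patch):

sub qemu_img_create_qcow2_with_backing {
    my ($path, $backing_path, $size) = @_;

    my $cmd = [
        '/usr/bin/qemu-img', 'create', '-f', 'qcow2',
        '-b', $backing_path, '-F', 'qcow2',
        '-o', 'extended_l2=on,cluster_size=128k',
        $path,
    ];
    # without an explicit size, qemu-img inherits it from the backing image
    push @$cmd, $size if defined($size);
    run_command($cmd);
}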
--- Begin Message ---
>
>
> So no difference in needed memory, with or without extended_l2.
>
> but the l2-cache-size tuning is really something we should add in
> another patch I think, for general performance with qcow2.
>>If we want to enable extended_l2=on, cluster_size=128k by default for
>>a
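To make the "no difference" observation concrete (presumably comparing
the default 64k clusters against the proposed extended_l2=on with 128k
clusters): the L2 cache needs one entry per cluster to cover the whole
disk, and per the qcow2 spec an extended L2 entry is 16 bytes instead of
8, so doubling the cluster size cancels out the doubled entry size. A
back-of-the-envelope check:

l2_cache_bytes = disk_size * l2_entry_size / cluster_size
64k clusters,   8-byte entries:                  1 TiB *  8 / 64k  = 128 MiB
128k clusters, 16-byte entries (extended_l2=on): 1 TiB * 16 / 128k = 128 MiB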
--- Begin Message ---
Thanks Aaron for this work,
pressure is really something useful (more than the classic load
average); it could be used to evict/migrate a VM from a node when pressure
is too high.
I was the original author of read_pressure/parse_pressure, but I never
finished the rrd integr
--- Begin Message ---
>>
>>I intentionally do not handle CD-ROMs, qemu-server should be
>>concerned
>>with doing that. There is a comment about this ;)
OK. I think this is in general when no "storeid:volid" is defined,
right? It could be a device or a file directly configured in the VM
configurat
--- Begin Message ---
>>Good point! I guess it's better to also do the activation in qemu-
>>server
>>then, so that we can match it up nicely without going over package
>>boundaries.
OK, I'll add the activation in qemu-server too
Just found another bug:
PVE/Storage.pm
sub qemu_blockdev_options
--- Begin Message ---
>>Yes, it would be possible, and it is a close call. But I briefly
>>chatted
>>with Fabian off-list and we think it's better to do this in qemu-
>>server,
>>together with the CD-ROM handling. Since the whole use-case is
>>related
>>to a QEMU-specific interface already.
Ok, th
--- Begin Message ---
>>I intentionally do not handle CD-ROMs, qemu-server should be
>>concerned
>>with doing that. There is a comment about this ;)
I mean, could it be better to have something like this?:
sub qemu_blockdev_options {
my ($cfg, $volid, $snapname) = @_;
my ($storeid, $voln
--- Begin Message ---
>
> Also, LVM volumes are not currently activated at VM command
> line generation. (but anyway, I'll need it to retrieve the backing chain,
> so maybe it's not a problem)
>>Thanks for testing! I'll add an activate_volume() call in the
>>PVE::Storage::qemu_blockdev_option
--- Begin Message ---
>>no, the plan is to drop the storage plugin for it with Proxmox VE 9.
>>We'll warn about it in the pve8to9 upgrade script. QEMU is expected
>>to
>>drop support with 10.1 in a few months. People that want to continue
>>using it, can still configure it as a shared directory sto
--- Begin Message ---
>>
>> if ($path =~ m|^/|) {
>> # The 'file' driver only works for regular files. The check
>> # below is taken from
>> # block/file-posix.c:hdev_probe_device() in QEMU. Do not
>> # bother with detecting 'host_cdrom'
>> # devices here, those are not managed by the s
--- Begin Message ---
Hi Fiona,
do we still support glusterfs for PVE 9 (as it's deprecated)?
Original message
From: Fiona Ebner
Reply-To: Proxmox VE development discussion
To: pve-devel@lists.proxmox.com
Subject: [pve-devel] [RFC storage 1/3] plugin: add method to get qemu
blockde
--- Begin Message ---
I have done some tests with subcluster allocation and a base image without
a backing_file; indeed, I'm seeing a small performance degradation on a big
1TB image.
With a 30GB image, I'm around 22000 IOPS 4k randwrite/randread (with
or without l2_extended=on)
with a 1TB image, the
--- Begin Message ---
>
> Do you see a use case? I don't think we have any user CLI
> command
> to list snapshots in different plugins currently.
> If later we add replication on external snapshots, we shouldn't
> return
> the internal snapshot list in this case.
>>a bit torn on this - while i
--- Begin Message ---
> > sub free_image {
> my ($class, $storeid, $scfg, $volname, $isBase, $format) = @_;
>
> @@ -980,6 +994,51 @@ sub free_image {
> # TODO taken from PVE/QemuServer/Drive.pm, avoiding duplication
> would be nice
> my @checked_qemu_img_formats = qw(raw qcow qcow2 qed vmd
--- Begin Message ---
> + # if first snapshot, as it should be bigger, we merge the child and
> rename the snapshot to the child
> + if (!$parentsnap) {
> + print "commit: merge content of $childpath into $snappath\n";
> + $cmd = ['/usr/bin/qemu-img', 'commit', $childpath];
> + eval { run_comman