--- Begin Message ---
Hi,
First, thanks to the whole Proxmox team for the help during FOSDEM!
It was 2 long days for only 2 people, so it gave us some
time to rest a little bit && eat.
And thanks for the dinner, it was great to meet you again.
Here are my notes:
users feedback:
- a
--- Begin Message ---
Original message
From: Fiona Ebner
To: Proxmox VE development discussion
Cc: Alexandre Derumier
Subject: Re: [pve-devel] [PATCH pve-storage] qcow2: resize: add
preallocation support
Date: 03/02/2025 15:39:41
On 19.12.24 at 17:18, Alexandre Derumier via p
--- Begin Message ---
>>out of curiosity, besides the obvious cases where external snapshots
>>would be useful, i.e. on raw files and on lvm (not lvm-thin), what
>>other
>>cases would be useful given that they already have snapshot support
>>(qcow2 with the internal one, zfs, btrfs, lvm-thin etc
--- Begin Message ---
Hi Fabio !
>>
>>In this implementation I don't see the possibility of using them on
>>raw
>>disks (on files) from a fast look, or am I wrong? If so, why? I think
>>the main use would be in cases like that where you don't have
>>snapshot
--- End Message ---
--- Begin Message ---
>>the path referenced in the running VM is stable. the path you are
>>looking for in the graph is not. e.g., the path might be something
>>some storage software returns. or udev. or .. and that can change
>>with any software upgrade or not be 100% deterministic in the first
>>
--- Begin Message ---
>>Yes, we don't need much to get enough collision-resistance. Just
>>wanted
>>to make sure and check it explicitly.
I have done some tests with sha1, with base62 encoding ('0..9', 'A..Z',
'a..z');
the node-name requires an alpha character prefix:
encodebase62(sha
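To make the idea concrete, here is a minimal Perl sketch of that
derivation (helper names are mine, not from the patch; QEMU node-names
are limited to 31 characters and must start with a letter, hence the
fixed prefix and truncation):

use Digest::SHA qw(sha1_hex);
use Math::BigInt;

my @b62 = ('0'..'9', 'A'..'Z', 'a'..'z');

sub encode_base62 {
    my ($hex) = @_;
    my $num = Math::BigInt->from_hex($hex);
    my $out = '';
    while ($num->is_pos()) {
        # bdiv in list context returns (quotient, remainder)
        my ($q, $r) = $num->copy()->bdiv(62);
        $out = $b62[$r->numify()] . $out;
        $num = $q;
    }
    return $out eq '' ? '0' : $out;
}

sub snap_node_name {
    my ($volid, $snapname) = @_;
    # 'e' prefix guarantees the name starts with a letter;
    # truncate to 31 characters total
    return 'e' . substr(encode_base62(sha1_hex("$volid-$snapname")), 0, 30);
}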
--- Begin Message ---
> Fiona Ebner wrote on 15.01.2025 11:06 CET:
>
>
> On 15.01.25 at 10:51, Fabian Grünbichler wrote:
> >
> > basically what we have is the following situation:
> >
> > - we have some input data (volid+snapname)
> > - we have a key derived from the input data
--- Begin Message ---
IMHO this isn't really a cryptographic use case, so I'd not worry too
much about any of that.
basically what we have is the following situation:
- we have some input data (volid+snapname)
- we have a key derived from the input data (block node name)
- we have a value (block
--- Begin Message ---
>
>>Upgrading libpve-storage-perl or an external storage plugin while the
>>VM
>>is running could lead to a different result for path() and thus
>>breakage, right?
mmm, yes, you are right
>>If we do need lookup, an idea to get around the character limit is
>>using
>>a hash
--- Begin Message ---
>
> Feel free to clean up this patch and submit it to the qemu devs, you
> are a better C developer than me ^_^
>>I can try to look into it, but could you give some more details how
>>exactly the issue manifests? What parameters are you using for
>>block-commit, how does the gra
--- Begin Message ---
>
> > > should this maybe have been vdisk_alloc and it just works by
> accident?
> It doesn't work with vdisk_alloc, because the volume needs to be
> created without the size specified but with the backing file param
> instead.
> (if I remember, qemu-img is looking at the ba
--- Begin Message ---
>>For almost all QMP commands, we only need to care about the node
>>that's
>>inserted for the drive.
(yes, that's the throttle group in my implementation, and I have a fixed
name; I'm reusing the "drive-(ide|scsi|virtio)X" naming)
>>And for your use-case, checking that the to
--- Begin Message ---
> Hmm, sounds like it might be a bug, I can look into it. If really
> required
> to make it work, we can still set fixed node-names on the
> commandline,
> but also query them before usage to be sure we have the correct, i.e.
> currently inserted node.
>>AFAICT, this is because
--- Begin Message ---
>>something like this was what I was afraid of ;) this basically means
>>we need to have some way to lookup the nodes based on the structure
>>of the graph, which probably also means verifying that the structure
>>matches the expected one (e.g., if we have X snapshots, we expe
--- Begin Message ---
> Alexandre Derumier via pve-devel wrote on
> 16.12.2024 10:12 CET:
>>it would be great if there'd be a summary of the design choices and a
>>high level summary of what happens to the files and block-node-graph
>>here. it's a bit hard to judge from the code below
--- Begin Message ---
>
> +sub generate_backing_blockdev {
> +    my ($storecfg, $snapshots, $deviceid, $drive, $id) = @_;
> +
> +    my $snapshot = $snapshots->{$id};
> +    my $order = $snapshot->{order};
> +    my $parentid = $snapshot->{parent};
> +    my $snap_fmt_nodename = "fmt-$deviceid-$
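For readers following along, here is a rough sketch (mine, not the
patch itself; keys simplified) of how such a helper can turn a snapshot
chain into nested blockdev definitions: each snapshot becomes a qcow2
format node on top of a file node, with its parent chained in as
backing:

sub backing_chain_sketch {
    my ($snapshots, $deviceid, $id) = @_;
    return undef if !defined($id);
    my $snap = $snapshots->{$id};
    return {
        'node-name' => "fmt-$deviceid-$id",
        driver => 'qcow2',
        file => {
            'node-name' => "file-$deviceid-$id",
            driver => 'file',
            filename => $snap->{file},
        },
        # the base image has no parent; undef becomes JSON null (no backing)
        backing => backing_chain_sketch($snapshots, $deviceid, $snap->{parent}),
    };
}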
--- Begin Message ---
> + my $path = PVE::Storage::path($storecfg, $volid);
>>is this guaranteed to be stable? also across versions? and including
>>external storage plugins?
it can't be different from the value we used for command line
generation. But I think that I should use $path dir
--- Begin Message ---
> + #change aio if io_uring is not supported on target
> + if ($dst_drive->{aio} && $dst_drive->{aio} eq 'io_uring') {
> +     my ($dst_storeid) = PVE::Storage::parse_volume_id($dst_drive->{file});
> +     my $dst_scfg = PVE::Storage::storage_config($storecfg, $dst_storeid)
--- Begin Message ---
Original message
From: Fabian Grünbichler
To: Proxmox VE development discussion
Cc: Alexandre Derumier
Subject: Re: [pve-devel] [PATCH v3 qemu-server 04/11] blockdev:
vm_devices_list : fix block-query
Date: 08/01/2025 15:31:36
> Alexandre Derumier via pve-deve
--- Begin Message ---
Original message
From: Fabian Grünbichler
To: Proxmox VE development discussion
Cc: Alexandre Derumier
Subject: Re: [pve-devel] [PATCH v3 qemu-server 03/11] blockdev : convert
qemu_driveadd && qemu_drivedel
Date: 08/01/2025 15:26:37
> Alexandre Derumier via pv
--- Begin Message ---
> -    $device .= ",drive=drive-$drive_id,id=$drive_id";
> +    $device .= ",id=$drive_id";
> +    $device .= ",drive=drive-$drive_id" if $device_type ne 'cd' || $drive->{file} ne 'none';
>>is this just because you remove the whole drive when ejecting? not
>>sure whether that is re
--- Begin Message ---
>>but you don't know up front that you want to collapse all the
>>snapshots. for each single removal, you have to merge the delta
>>towards the overlay, not the base, else the base contents is no
>>longer matching its name.
>>
>>think about it this way:
>>
>>you take a snapsho
01/2025 10:55:14
On 10.01.25 at 08:44, DERUMIER, Alexandre via pve-devel wrote:
> Original message
> From: Fabian Grünbichler
> To: Proxmox VE development discussion
> Cc: Alexandre Derumier
> Subject: Re: [pve-devel] [PATCH-SERIES v3 pve-storage/qemu-server/pve-
&
--- Begin Message ---
>>yes, for the "first" snapshot that is true (since that one is
>>basically the baseline data, which will often be huge compared to the
>>snapshot delta). but streaming (rebasing) saves us the rename, which
>>makes the error handling a lot easier/less risky. maybe we could
>sp
--- Begin Message ---
>>one downside with this part in particular - we have to always
>>allocate full-size LVs (+qcow2 overhead), even if most of them will
>>end up storing just a single snapshot delta which might be a tiny
>>part of that full-size.. hopefully if discard is working across the
>>who
--- Begin Message ---
>>Maybe it could even be a bug then?
Yes, it's a bug. I just think that libvirt currently only implements
block-commit with the disk blockdev as top node.
Throttle groups are not currently implemented in libvirt (but I have
seen some commits to add support recently); they still use
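For reference, a block-commit that names the involved nodes explicitly
(rather than implicitly committing from the device's top node) looks
like this in QMP; the node and job names here are hypothetical:

{ "execute": "block-commit",
  "arguments": { "job-id": "commit-scsi0", "device": "drive-scsi0",
                 "top-node": "fmt-scsi0-snap1", "base-node": "fmt-scsi0-base" } }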
--- Begin Message ---
> @@ -710,11 +715,15 @@ sub filesystem_path {
> # Note: qcow2/qed has internal snapshot, so path is always
> # the same (with or without snapshot => same file).
> die "can't snapshot this image format\n"
> - if defined($snapname) && $format !~ m/^(qcow2|qed)$/;
--- Begin Message ---
Original message
From: Fabian Grünbichler
To: Proxmox VE development discussion
Cc: Alexandre Derumier , Fiona
Ebner
Subject: Re: [pve-devel] [PATCH v1 pve-qemu 1/1] add block-commit-
replaces option patch
Date: 08/01/2025 14:27:02
> Alexandre Derumier via pve
--- Begin Message ---
Original message
From: Fabian Grünbichler
To: Proxmox VE development discussion
Cc: Alexandre Derumier
Subject: Re: [pve-devel] [PATCH-SERIES v3 pve-storage/qemu-server/pve-
qemu] add external qcow2 snapshot support
Date: 09/01/2025 15:13:14
> Alexandre Derumie
--- Begin Message ---
> Not related, but could it be possible to implement it, for simple
> vm/template full cloning when source+target are both rbd ? It's
> really
> faster with 'qemu-img convert'
>>Hmm, we could shift offline copy of images to the storage layer (at
>>least in some cases). We just
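For reference, the kind of direct copy meant here uses qemu-img's rbd
driver on both ends (pool and image names are only examples):

qemu-img convert -p -f raw -O raw rbd:mypool/base-100-disk-0 rbd:mypool/vm-101-disk-0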
--- Begin Message ---
>>On 18.12.24 at 15:20, Daniel Kral wrote:
> >>- When exporting with "pvesm export ...", the volume has the same
> checksum as with "rbd export ..." with the size header prepended
>>Well, I totally missed the existence of "rbd export" in my hurry to
>>get
>>this working. See
--- Begin Message ---
Hi,
I don't remember exactly when I wrote that code (because dhcp ranges
were added after my initial implementation, where I was only looking
at the full prefix),
but shouldn't these ranges be added when the dhcp ranges are submitted
on the subnet create/update api call ? (I
--- Begin Message ---
Hi Fiona,
Thanks for your tests ! Indeed, it doesn't seem to be a silver bullet
for the classic usecase.
>>We could also think about only using it for linked clones by default
>>initially.
>>
>>Independently, you can make it the default for LvmQcow2Plugin of
>>course,
>>since you
--- Begin Message ---
Hi Fiona,
I'm really sorry, I didn't see your response, lost in the flood of
emails :(
>>How does read performance compare for you (with 128 KiB cluster
>>size)?
>>I don't see any noticeable difference in my testing with an ext4
>>directory storage on an SSD, attaching the qco
--- Begin Message ---
Original message
From: Fabian Grünbichler
To: Proxmox VE development discussion ,
"DERUMIER, Alexandre"
Cc: Giotta Simon RUAGH
Subject: Re: [pve-devel] [PATCH v2 pve-storage 1/2] add external snasphot
support
Date: 24/10/2024 11:48:03
> Giotta Simon RUAGH via
--- Begin Message ---
>
>
> But even with that, you can still have a performance impact.
> So yes, I think there are real usecases for workloads where you only
> need
> a snapshot from time to time (before an upgrade for example), but want
> max performance when no snapshot exists.
>>my main point here is - all o
--- Begin Message ---
Hi,
any news about this patch series ?
I think it's still not applied ? (I see a lot of requests about it on
the forum and on the bugzilla)
Regards,
Alexandre
Original message
From: "DERUMIER, Alexandre"
To: pve-devel@lists.proxmox.com ,
s.hanre...@proxmo
--- Begin Message ---
Thanks Fabian for your time !
I have tried to respond as much as possible. (I'm going on holiday for
1 week tomorrow, so sorry if I don't reply to all your comments)
Original message
From: Fabian Grünbichler
To: "DERUMIER, Alexandre" , pve-
de...@lists.prox
--- Begin Message ---
ok, I think it could be possible to use blockdev-reopen to rename the
current filename
https://lists.gnu.org/archive/html/qemu-devel/2021-05/msg04455.html
example: take snapshot : snap1 on vm-disk-100-disk-0.qcow2
- create a hardlink: ln vm-disk-100-disk-0.qcow2 vm-disk-100-disk
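Spelled out (file names hypothetical, and assuming blockdev-reopen
accepts pointing the file node at the new path), the full dance would
look roughly like:

ln vm-disk-100-disk-0.qcow2 snap1-vm-disk-100-disk-0.qcow2    # second name, same inode
# QMP: blockdev-reopen the file node with "filename" set to the new name
rm vm-disk-100-disk-0.qcow2                                   # drop the old name
# recreate the "current" volume as a fresh overlay backed by the snapshot file
qemu-img create -f qcow2 -F qcow2 -b snap1-vm-disk-100-disk-0.qcow2 vm-disk-100-disk-0.qcow2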
--- Begin Message ---
>>if we want the current volume to keep its name, and the snapshot
>>volume to actually contain *that* snapshot's data, we need some sort
>>of rename dance here as well.. i.e., rename the current volume to
>>have the snapshot volume name, then snapshot it back into the
>>"curr
--- Begin Message ---
>>I am not yet convinced this is somehow a good idea, but maybe you can
>>convince me otherwise ;)
>>
>>variant A: this is just useful for very short-lived snapshots
>>variant B: these snapshots are supposed to be long-lived
Can you define "short" / "long" ? And the differe
--- Begin Message ---
Hi Fabian,
thanks for the review !
>>Original message
>>From: Fabian Grünbichler
>>To: Proxmox VE development discussion
>>Cc: Alexandre Derumier
>>Subject: Re: [pve-devel] [PATCH v2 pve-storage 1/2] add external
>>snasphot support
>>Date: 23/10/2024 12:12:46
--- Begin Message ---
Original message
From: Esi Y via pve-devel
Reply-To: Proxmox VE development discussion
To: Proxmox VE development discussion
Cc: Esi Y
Subject: Re: [pve-devel] [PATCH SERIES v2 pve-storage/qemu-server] add
external qcow2 snapshot support
Date: 22/10/2024 11:5
--- Begin Message ---
Hi,
Any comments about this patch series ?
I really think that external snapshots could be a great feature (as I
still see reports on the forum about freezes on snapshot deletion),
and support for lvm and shared san is really a feature that enterprise
users are waiting for. (To be
--- Begin Message ---
Personally, I'm ok with your patch
>>Ultimately I disagreed with the solution to use a separate parameter
>>for IPv6, for the following reasons:
>>- We can only have one local tunnel IP, so having two parameters
>>means we need to check if the other one has been set (since s
--- Begin Message ---
Have a look at the ifupdown2 github; there are 2 old pull requests
about this (never merged / never completed)
https://github.com/CumulusNetworks/ifupdown2/pull/172
"
For this we would need a new attribute vxlan-local-tunnelip6, we don't
want to reuse the same attribute for ipv6
--- Begin Message ---
The patch logic seems ok to me. (I haven't tested it)
>>
>> for my $address (@peers) {
>>-     next if $address eq $ifaceip;
>>-     push @iface_config, "vxlan_remoteip $address";
>>+     push @iface_config, "vxlan_local_tunnelip $address" if $address eq $ifaceip;
>>+
--- Begin Message ---
Hi,
Thanks for the patch !
Have you also submitted it on the ifupdown2 github ?
Original message
From: apalrd via pve-devel
Reply-To: Proxmox VE development discussion
To: pve-devel@lists.proxmox.com
Cc: apalrd
Subject: [pve-devel] [PATCH ifupdown2 0/1
--- Begin Message ---
I'm also interested. I have already seen the time drift when taking
snapshots, not cool on a database where transaction time is important.
ceph rbd supports group snapshots too.
Original message
From: Ivaylo Markov via pve-devel
Reply-To: Proxmox VE developme
--- Begin Message ---
Looking for this feature too for my production :)
Original message
From: Thomas Skinner
Reply-To: Proxmox VE development discussion
To: pve-devel@lists.proxmox.com
Subject: Re: [pve-devel] [PATCH SERIES openid/access-
control/docs/manager] fix #4411: add suppo
--- Begin Message ---
>>The pmxcfs filesystem has limits, and I do not really want to waste
>>space for such things. I would prefer the run-length encoded list.
>>@Alexandre: Why do you want to keep a backup of old config files?
I don't need the content of the old config. (I have backups anyway.)
I was mo
--- Begin Message ---
Hi,
I'm very interested in this patch series too.
My 2 cents:
Couldn't we simply move the deleted vm config file
to a trash/tombstone directory ?
/etc/pve/.deleted/.conf ?
(It could be great to be able to mass delete vms in parallel without
having a big lock on a file)
I'm no
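A minimal Perl sketch of the tombstone idea above (the directory name
and helper are hypothetical, not an existing API):

use File::Basename qw(basename);

sub tombstone_vm_config {
    my ($conffile) = @_;
    my $trashdir = '/etc/pve/.deleted';
    mkdir $trashdir if !-d $trashdir;
    # rename instead of unlink, so the config can still be recovered
    my $target = "$trashdir/" . basename($conffile);
    rename($conffile, $target)
        or die "failed to move $conffile to $target: $!\n";
}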
--- Begin Message ---
>>Hi,
Hi Fiona,
sorry I didn't see your response.
>>Is the performance drop also this big with the local storage?
yes. (The result is even worse on nfs, or gfs/ocfs2)
>>A performance drop is of course expected, because AFAIU it needs to
>>do
>>COW for the sectors that
--- Begin Message ---
Hi,
I was doing tests with gfs2 && ocfs2,
and I noticed 4k randwrite iops going from 2 iops to
100~200 iops when a snapshot is present.
I thought it was related to gfs2 && ocfs2 allocation,
but I can reproduce it too with a simple qcow2 file on
a local ssd drive.
is i
--- Begin Message ---
>
Hi,
I have done more tests
> > * there is no cluster locking?
> > you only mention
> >
> > ---8<---
> >
> > #don't use global cluster lock here, use on native local lvm lock
> > --->8---
> >
> > but don't configure any lock? (AFAIR lvm cluster locking need
--- Begin Message ---
>>just my personal opinion, maybe you also want to wait for more
>>feedback from somebody else...
>>(also i just glanced over the patches, so correct me if I'm wrong)
Hi Dominik !
>>i see some problems with this approach (some are maybe fixable, some
>>probably not?)
>>* as
--- Begin Message ---
>>I'm talking about virtio-scsi. Our virtio-network device is working
>>fine
Yes, sorry, I meant virtio-scsi.
All pci devices excluding passthrough devices (with the pcie=on flag)
are currently plugged into a pci bridge
sub get_pci_addr_map {
    $pci_addr_map = {
--- Begin Message ---
>>Alexandre,
>>
>>the statement below is not true for our case. The OpenVMS guest OS is
>>using a PCIE bus, so the virtio-scsi device should be exposed as
>>"modern", but is not. Not sure why not at this point
See Fiona's response;
the pci express bridge is present, but the vi
--- Begin Message ---
Hi,
I didn't see Fiona's response, but indeed:
https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg01567.html
"virtio devices can be exposed in upto three ways
- Legacy - follows virtio 0.9 specification. always uses PCI
ID range 0x1000-0x103F
- Tran
--- Begin Message ---
Hi,
Currently there is no way to add custom options to the virtio-devices
command line from the vm config, so it should be patched to add support
for the openvms os and special tuning.
For example:
https://git.proxmox.com/?p=qemu-server.git;a=blob_plain;f=PVE/QemuServer.pm;hb=HEAD
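As a purely illustrative example of such tuning (disable-legacy and
disable-modern are standard QEMU virtio-pci properties, not existing
qemu-server options), forcing a device into "modern" mode could look
like:

-device virtio-scsi-pci,id=scsihw0,bus=pcie.0,disable-legacy=on,disable-modern=off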
--- Begin Message ---
>
> and for virtio-mem, dimmsize can be replaced by chunk size
>>Sorry about the very late response!
>>
>>When calculating from the DIMM size with 1 GiB step size, we can only
>>get to max values N * 64 GiB. We could still have a separate max
>>field
>>with smaller step size
--- Begin Message ---
Hi!
>>Hi! I gave this a quick test on my machine and everything worked
>>well.
>>Would we maybe want to expose this setting on the NIC level as well?
I don't think it can work, because a non-isolated port has access to
all other ports, including isolated ports.
"
isolated
--- Begin Message ---
Hi,
A lot of users are complaining about dist-upgrade trying to remove
pve-manager
https://forum.proxmox.com/threads/upgrading-pve-tries-to-remove-proxmox-ve-package.149101/page-3
It seems that's because of a missing proxmox-backup-client_3.2.5-1
version dependency. (It
--- Begin Message ---
Hi,
Could it be possible to apply this patch series ? (or a review if it
needs cleanup)
(I see a lot of users requesting it)
Thanks !
Alexandre
BTW: I'm a little bit off currently, I'm working on vm luks encryption;
I'll send a patch series soon.
Message
/04/2024 14:25:40
On 4/3/24 14:03, DERUMIER, Alexandre via pve-devel wrote:
> maybe revert the kernel patch ? ^_^
--- Begin Message ---
> Maybe it is time to disable dynamic mac-learning by default ?
> The code is already here and works fine.
>
> AFAIK, other hypervisors like vmware disable port flooding by default
> with static mac registration too.
>>Might be a good idea, although it still wouldn't solve
--- Begin Message ---
>>## Known Issues
>>There is currently one major issue that we still need to solve:
>>REJECTing
>>packets from the guest firewalls is currently not possible for
>>incoming traffic
>>(it will instead be dropped).
That reminds me of this old Hetzner bug (Hetzner flooding bad p
--- Begin Message ---
Hi Stefan,
I'll really take the time to test it (I was super busy these last
months with a datacenter migration), as I have been waiting for
nftables for a while.
I can't help too much with rust, but I really appreciate it, as I have
some servers with a lot of vms && rules, taking more than 10s
--- Begin Message ---
Hi,
could it be possible to merge this patch ?
I have seen another report about it on the forum:
https://forum.proxmox.com/threads/bugfix-for-evpn-sdn-multiple-exit-nodes.137784/post-649071
Original message
From: Stefan Hanreich
Reply-To: Proxmox VE devel
--- Begin Message ---
Hi,
a critical bug in evpn with multiple nodes is fixed in git,
https://git.proxmox.com/?p=pve-network.git;a=commit;h=e614da43f13e3c61f9b78ee9984364495eff91b6
but the package is still not released.
I have seen a lot of user bug reports about this over the last 4
months, like this recent one:
h
--- Begin Message ---
you can have a look at this old storage plugin I wrote for netapp
10 years ago
https://github.com/odiso/proxmox-pve-storage-netapp/blob/master/PVE/Storage/NetappPlugin.pm
I don't think it still works, but the concepts should be the same:
create volume, list volume, a
--- Begin Message ---
Hi,
In general, the qemu process has access to the filepath
(/mnt/.../*.raw|.qcow) or the block dev (/dev/).
in:
/usr/share/perl5/PVE/QemuServer.pm
sub print_drivedevice_full {
    my ($storecfg, $conf, $vmid, $drive, $bridges, $arch,
        $machine_type) = @_;
    $path = PVE::Sto
--- Begin Message ---
Hi, I think you should limit it to 8 characters like for the sdn vnet,
as we need to leave space for a vlan tag for example (vmbrY.), or other
sdn constructs.
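A tiny illustrative check of that constraint (helper name mine; Linux
limits interface names to 15 characters, and a vlan sub-interface
appends up to 5 more, e.g. ".4094"):

sub assert_room_for_vlan_tag {
    my ($name) = @_;
    # 15 = IFNAMSIZ - 1; '.4094' is the longest possible vlan suffix
    die "'$name' is too long to append a vlan tag\n"
        if length($name) + length('.4094') > 15;
}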
Original message
From: Stefan Hanreich
Reply-To: Proxmox VE development discussion
To: pve-devel@lists.proxmo
--- Begin Message ---
Original message
From: Stefan Hanreich
>>This was broken when adding a new EVPN zone and there's an easier way
>>built-in to our widget toolkit. I've taken the liberty of sending a
>>v2 and mentioning you [1].
Oh, great, thanks ! I was banging my head to fi
--- Begin Message ---
> what is the output of "ifreload -a -d" ?
>>nothing mentioning ip-forward sadly, I had already looked at
>>/var/log/ifupdown2 to get an idea of what's going wrong but I
>>couldn't
>>find anything mentioned there as well (I think the output is the
>>same..). I think it might
--- Begin Message ---
Hi Stefan !
I don't know the roadmap for dhcp, but I'll have time to help in
March. I haven't looked at qinq yet.
>>I've had another look at this patch series and I think I found the
>>reason for the issue(s) I encountered during my testing.
>>
>>One issue is relat
--- Begin Message ---
>>It might make sense to check for any possible conflicts with the SDN
>>config (running & staged).
Technically, ifupdown2 will try to merge config options if the
interface is defined in both /etc/network/interfaces &&
/etc/network/interfaces.d/.
I have seen users doing som