Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel
Am 31.05.23 um 16:34 schrieb DERUMIER, Alexandre:
> Le mercredi 31 mai 2023 à 13:36 +0200, Fiona Ebner a écrit :
>> Am 22.05.23 um 12:25 schrieb Alexandre Derumier:
>>> In addition to these models, I have enabled aes too.
>>> I think it's really important, because a lot of users use default
>>> values and have bad performance with ssl and other crypto stuff.
>>
>> So there is the answer to my aes question :) But shouldn't we rather
>> set it via the UI as a default than change the CPU definition itself?
>> That feels cleaner, as we'd not diverge from how they defined the ABI.
>
> I haven't looked at the pve-manager code yet, but do you think it's
> easy to auto-enable/disable the aes flag in the grid when we choose
> these models?

I also haven't looked at the code, but yeah, it is an issue that it's in
the advanced part and we shouldn't hide it from the user that it's on.

> Maybe it could be better to have 2 different models, with/without aes
> (like some qemu model versions, e.g. -IBRS), here we could have
>
> x86-64-v2
> x86-64-v2-aes (default)
> x86-64-v3
> x86-64-v3-aes

That might work, but if we do that, please only in the UI. Also not
ideal, because how would the interaction with the flag in the grid work?
E.g. don't show it, force it on if an -aes model is selected? Maybe the
easiest would be to extract the aes flag out of the grid into the
non-advanced part?
Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel
Le jeudi 01 juin 2023 à 10:34 +0200, Fiona Ebner a écrit :
> Am 31.05.23 um 16:34 schrieb DERUMIER, Alexandre:
>> Le mercredi 31 mai 2023 à 13:36 +0200, Fiona Ebner a écrit :
>>> Am 22.05.23 um 12:25 schrieb Alexandre Derumier:
>>>> In addition to these models, I have enabled aes too.
>>>> I think it's really important, because a lot of users use default
>>>> values and have bad performance with ssl and other crypto stuff.
>>>
>>> So there is the answer to my aes question :) But shouldn't we rather
>>> set it via the UI as a default than change the CPU definition
>>> itself? That feels cleaner, as we'd not diverge from how they
>>> defined the ABI.
>>
>> I haven't looked at the pve-manager code yet, but do you think it's
>> easy to auto-enable/disable the aes flag in the grid when we choose
>> these models?
>
> I also haven't looked at the code, but yeah, it is an issue that it's
> in the advanced part and we shouldn't hide it from the user that it's
> on.
>
>> Maybe it could be better to have 2 different models, with/without aes
>> (like some qemu model versions, e.g. -IBRS), here we could have
>>
>> x86-64-v2
>> x86-64-v2-aes (default)
>> x86-64-v3
>> x86-64-v3-aes
>
> That might work, but if we do that, please only in the UI. Also not
> ideal, because how would the interaction with the flag in the grid
> work? E.g. don't show it, force it on if an -aes model is selected?

mmm, yes, maybe it'll be confusing. (But note that, for example, we
don't hide the -IBRS models if the user disables the spectre flag.)

> Maybe the easiest would be to extract the aes flag out of the grid
> into the non-advanced part?

Couldn't it be easier to keep aes enabled by default in a single model
(even if it doesn't match the x86-64 spec) and allow the user to opt in
to disabling it?

The only servers where you need to disable aes are Nehalem ones, and I
don't think that a lot of users still have this cpu in production. (So
keeping the aes flag in the advanced section makes sense.)

Also, users with really old servers could keep using the kvm64 model,
where aes is not enabled.
Re: [pve-devel] [PATCH v3 qemu-server 1/7] cpuconfig: add new x86-64-vX models
Am 31.05.23 um 17:08 schrieb DERUMIER, Alexandre:
>>>
>>> +my $builtin_models = {
>>> +    'x86-64-v1' => {
>>> +        'reported-model' => 'Opteron_G1',
>>
>> It's unfortunate that we'll report this model and hence also AMD as
>> vendor even on Intel hosts and vice versa for the other models. We
>> could set the vendor to the host's vendor (in get_cpu_options()
>> handle getting the vendor for the built-in models differently),
>
> I think it'll break if you migrate between intel/amd hosts anyway?

That's true :)

>> but that's also strange, because then it would be Opteron_G1 with
>> vendor GenuineIntel :/ So maybe better to just leave it?
>
> Well, a kvm64 guest has vendor AuthenticAMD (even on an Intel host ;),
> with model name "Common KVM processor"):
>
> cat /proc/cpuinfo
> vendor_id : AuthenticAMD
> model name : "Common KVM processor"

Are you sure? Or was this a migrated machine? We have this comment

> # generic types, use vendor from host node
> host => 'default',
> kvm32 => 'default',
> kvm64 => 'default',

and for a colleague, it is GenuineIntel with kvm64 on an Intel host.

> If we don't want to expose the original model name from which we
> derive, afaik, the only way is to patch qemu directly (like in my v1).

We can actually just use the model-id option for -cpu and I think we
should for these built-in models. I.e. set the vendor to the one from
the host and the model-id to something generic too. Maybe "Common
x86_64-abi1-compatible processor", but that feels involved, or maybe
just "Common KVM processor" again?

>>
>>> +        flags => "-vme;-svm;-vmx",
>>
>> Why remove the svm and vmx flags? They are not exposed by us, so a
>> user cannot even enable them back if needed, but needs to switch to a
>> different CPU type.
>
> Yes, the idea was to forbid the user to enable them, as it's breaking
> live migration, so it doesn't make any sense to use this model instead
> of the host model.
>
> But I can remove them, no problem.

Oh, I missed the following in the referenced mail:

> None of the CPU models declare any VMX/SVM capability features.
> IOW, even if a "vmx"/"svm" flag is added, it will still be unsafe
> to attempt to live migrate the L1 guest if there are any L2
> guests running with hardware virtualization.

Please keep them off then.

>>> @@ -96,6 +115,9 @@ my $cpu_vendor_list = {
>>>     kvm64 => 'default',
>>>     qemu32 => 'default',
>>>     qemu64 => 'default',
>>> +   'x86-64-v1' => 'default',
>>> +   'x86-64-v2' => 'default',
>>> +   'x86-64-v3' => 'default',
>>
>> Currently all of the others are actual models we can pass directly to
>> QEMU/KVM. I'd rather not add these custom built-in ones here. You'll
>> need to adapt validate_vm_cpu_conf() of course, to also accept the
>> built-in ones.
>>
>> Because of adding them here, I can also set them as the
>> 'reported-model' for a custom CPU in
>> /etc/pve/virtual-guest/cpu-models.conf and parsing the file will
>> work, but then starting a VM with that custom CPU will fail with
>> kvm: unable to find CPU model 'x86-64-v1'.
>>
>> If we'd like to enable using the built-in ones as base for custom CPU
>> models, we'll need to handle them differently, but I'm not sure we
>> should until there is enough user demand.
>
> Maybe it could be simpler to really add true built-in models in qemu?
> (The qemu patch is pretty small and shouldn't be difficult to
> maintain.)
>
> I'm not sure, but maybe users will think that it's strange that
> x86-64-v2 will display Nehalem in the guest && in the qemu command
> line?

Yes, for this it would be easier, but I also don't think we need to
allow these as a base for custom models (at least not until there is
enough user demand). And we can still switch later to make them true
QEMU models if we really need to.
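For illustration, a minimal sketch of what the model-id approach
discussed above could look like on the QEMU command line. The model name
and vendor values are only the candidates floated in this thread, not a
final decision:

    # hypothetical: built-in x86-64-v2 derived from Nehalem, but reporting
    # a generic model name and a chosen vendor instead of the Nehalem defaults
    -cpu Nehalem,enforce,+aes,model-id="Common KVM processor",vendor=GenuineIntel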
Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel
Am 22.05.23 um 12:25 schrieb Alexandre Derumier:
> Nehalem is a 2008 cpu, so I think it's ok, we are in 2023 ;)
> (and users can disable the aes flag in the gui too)
>
> That means that the minimum compatible cpu for v2 is Intel Westmere
> (2010) and AMD Bulldozer (2011).

Unfortunately, it doesn't seem to be rock stable on AMD hosts :( When
using Nehalem on my AMD Ryzen Threadripper 2920X, I get hangs during
installation of Debian 11 and, for an already existing one, crashes
during boot. This is with or without the AES flag. Colleagues with AMD
CPUs also ran into similar issues when using CPU type Nehalem.

Note that these were not server CPUs; Aaron said he'll test those later
(please reply here with the results :)).

It would be bad if too many AMD CPUs aren't happy with the new default
:/ In the worst case, we'd only be able to switch to the new default for
Intel hosts.

What AMD CPUs did you try it on?
Re: [pve-devel] [PATCH v3 qemu-server 1/7] cpuconfig: add new x86-64-vX models
Le jeudi 01 juin 2023 à 11:17 +0200, Fiona Ebner a écrit :
> Am 31.05.23 um 17:08 schrieb DERUMIER, Alexandre:
>>>>
>>>> +my $builtin_models = {
>>>> +    'x86-64-v1' => {
>>>> +        'reported-model' => 'Opteron_G1',
>>>
>>> It's unfortunate that we'll report this model and hence also AMD as
>>> vendor even on Intel hosts and vice versa for the other models. We
>>> could set the vendor to the host's vendor (in get_cpu_options()
>>> handle getting the vendor for the built-in models differently),
>>
>> I think it'll break if you migrate between intel/amd hosts anyway?
>
> That's true :)
>
>>> but that's also strange, because then it would be Opteron_G1 with
>>> vendor GenuineIntel :/ So maybe better to just leave it?
>>
>> Well, a kvm64 guest has vendor AuthenticAMD (even on an Intel
>> host ;), with model name "Common KVM processor"):
>>
>> cat /proc/cpuinfo
>> vendor_id : AuthenticAMD
>> model name : "Common KVM processor"
>
> Are you sure? Or was this a migrated machine?
>
> We have this comment
>
>> # generic types, use vendor from host node
>> host => 'default',
>> kvm32 => 'default',
>> kvm64 => 'default',
>
> and for a colleague, it is GenuineIntel with kvm64 on an Intel host.

oh, you are right, it indeed inherits the vendor_id from the host
(tested with kvm64 && qemu64).

Maybe there is some specific trick for these models in qemu (because in
the cpu definition, the vendor really is Intel for kvm64 && AMD for
qemu64; maybe there is some other part of the code to inherit from the
host vendor):

https://github.com/qemu/qemu/blob/master/target/i386/cpu.c

>> If we don't want to expose the original model name from which we
>> derive, afaik, the only way is to patch qemu directly (like in my
>> v1).
>
> We can actually just use the model-id option for -cpu and I think we
> should for these built-in models. I.e. set the vendor to the one from
> the host and the model-id to something generic too. Maybe "Common
> x86_64-abi1-compatible processor", but that feels involved, or maybe
> just "Common KVM processor" again?

ah ok, I wasn't aware of model-id. I don't have a preference, it can be
"Common KVM processor" or a specific version.

I just tested it, the vendor can also be specified:

,model-id="Common KVM processor",vendor=GenuineIntel

(I think it shouldn't break live migration if it's working with kvm64;
I think the vendor is not changing until the guest is restarted.)

>>>
>>>> +        flags => "-vme;-svm;-vmx",
>>>
>>> Why remove the svm and vmx flags? They are not exposed by us, so a
>>> user cannot even enable them back if needed, but needs to switch to
>>> a different CPU type.
>>
>> Yes, the idea was to forbid the user to enable them, as it's breaking
>> live migration, so it doesn't make any sense to use this model
>> instead of the host model.
>>
>> But I can remove them, no problem.
>
> Oh, I missed the following in the referenced mail:
>
>> None of the CPU models declare any VMX/SVM capability features.
>> IOW, even if a "vmx"/"svm" flag is added, it will still be unsafe
>> to attempt to live migrate the L1 guest if there are any L2
>> guests running with hardware virtualization.
>
> Please keep them off then.

ok, no problem

>>>> @@ -96,6 +115,9 @@ my $cpu_vendor_list = {
>>>>     kvm64 => 'default',
>>>>     qemu32 => 'default',
>>>>     qemu64 => 'default',
>>>> +   'x86-64-v1' => 'default',
>>>> +   'x86-64-v2' => 'default',
>>>> +   'x86-64-v3' => 'default',
>>>
>>> Currently all of the others are actual models we can pass directly
>>> to QEMU/KVM. I'd rather not add these custom built-in ones here.
>>> You'll need to adapt validate_vm_cpu_conf() of course, to also
>>> accept the built-in ones.
>>>
>>> Because of adding them here, I can also set them as the
>>> 'reported-model' for a custom CPU in
>>> /etc/pve/virtual-guest/cpu-models.conf and parsing the file will
>>> work, but then starting a VM with that custom CPU will fail with
>>> kvm: unable to find CPU model 'x86-64-v1'.
>>>
>>> If we'd like to enable using the built-in ones as base for custom
>>> CPU models, we'll need to handle them differently, but I'm not sure
>>> we should until there is enough user demand.
>>
>> Maybe it could be simpler to really add true built-in models in qemu?
>> (The qemu patch is pretty small and shouldn't be difficult to
>> maintain.)
>>
>> I'm not sure, but maybe users will think that it's strange that
>> x86-64-v2 will display Nehalem in the guest && in the qemu command
>> line?
>
> Yes, for this it would be easier, but I also don't think we need to
> allow these as a base for custom models (at least not until there is
> enough user demand). And we can still switch later to make them true
> QEMU models if we really need to.

ok, no problem, I'll re
Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel
Le jeudi 01 juin 2023 à 11:34 +0200, Fiona Ebner a écrit :
> Am 22.05.23 um 12:25 schrieb Alexandre Derumier:
>> Nehalem is a 2008 cpu, so I think it's ok, we are in 2023 ;)
>> (and users can disable the aes flag in the gui too)
>>
>> That means that the minimum compatible cpu for v2 is Intel Westmere
>> (2010) and AMD Bulldozer (2011).
>
> Unfortunately, it doesn't seem to be rock stable on AMD hosts :( When
> using Nehalem on my AMD Ryzen Threadripper 2920X, I get hangs during
> installation of Debian 11 and, for an already existing one, crashes
> during boot. This is with or without the AES flag. Colleagues with AMD
> CPUs also ran into similar issues when using CPU type Nehalem.

damned :( . Do you know which true qemu amd cpu model is working without
any problem? I only have epyc v2/v3 to test on my side, and I don't have
my older opteron g4/g5 anymore.

> Note that these were not server CPUs; Aaron said he'll test those
> later (please reply here with the results :)).
>
> It would be bad if too many AMD CPUs aren't happy with the new default
> :/ In the worst case, we'd only be able to switch to the new default
> for Intel hosts.
>
> What AMD CPUs did you try it on?

epyc v2/v3 with the latest microcodes

does it work with x86-64-v3? (because the other patch of the series
could autofind the best new model if it's working)
Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel
Hi,

We have Ryzen 1700, 2600X, 3700 and 5950X machines here, I can test on
them if that helps (please detail tests to perform).

Thanks

El 1/6/23 a las 13:37, DERUMIER, Alexandre escribió:
> Le jeudi 01 juin 2023 à 11:34 +0200, Fiona Ebner a écrit :
>> Am 22.05.23 um 12:25 schrieb Alexandre Derumier:
>>> Nehalem is a 2008 cpu, so I think it's ok, we are in 2023 ;)
>>> (and users can disable the aes flag in the gui too)
>>>
>>> That means that the minimum compatible cpu for v2 is Intel Westmere
>>> (2010) and AMD Bulldozer (2011).
>>
>> Unfortunately, it doesn't seem to be rock stable on AMD hosts :( When
>> using Nehalem on my AMD Ryzen Threadripper 2920X, I get hangs during
>> installation of Debian 11 and, for an already existing one, crashes
>> during boot. This is with or without the AES flag. Colleagues with
>> AMD CPUs also ran into similar issues when using CPU type Nehalem.
>
> damned :( . Do you know which true qemu amd cpu model is working
> without any problem? I only have epyc v2/v3 to test on my side, and I
> don't have my older opteron g4/g5 anymore.
>
>> Note that these were not server CPUs; Aaron said he'll test those
>> later (please reply here with the results :)).
>>
>> It would be bad if too many AMD CPUs aren't happy with the new
>> default :/ In the worst case, we'd only be able to switch to the new
>> default for Intel hosts.
>>
>> What AMD CPUs did you try it on?
>
> epyc v2/v3 with the latest microcodes
>
> does it work with x86-64-v3? (because the other patch of the series
> could autofind the best new model if it's working)

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
[pve-devel] [PATCH v3 qemu-server, container, docs 0/7] migration: don't scan all storages, fail on aliases
This patch series changes the behavior during guest migrations:

Don't scan all storages for potential images belonging to the guest.
Only migrate images referenced in the config.

This made it necessary to handle pending changes explicitly, which had
been covered by the storage scan.

We also added checks for any aliased volids and fail the migration if
detected.

The patches in qemu-server and pve-container are split in two. The first
ones handle the change from scanning the storages to handling only
referenced images. The second one adds the alias check.

There is also a small patch for the documentation to add a hint that
aliased storages should be avoided.

changes since v2:
- style fixes
- qemu-server: mainly change the handling of pending changes
- container: use NEW_DISK_RE to check if the volume is a not yet created
  pending volume

More in each patch.

qemu-server: Aaron Lauterer (4):
  migration: only migrate disks used by the guest
  tests: add migration test for pending disk
  migration: fail when aliased volume is detected
  tests: add migration alias check

 PVE/QemuMigrate.pm                    |  98 ++---
 PVE/QemuServer.pm                     |   9 +-
 test/MigrationTest/QemuMigrateMock.pm |   9 ++
 test/run_qemu_migrate_tests.pl        | 151 +-
 4 files changed, 198 insertions(+), 69 deletions(-)

container: Aaron Lauterer (2):
  migration: only migrate volumes used by the guest
  migration: fail when aliased volume is detected

 src/PVE/LXC/Migrate.pm | 51 +-
 1 file changed, 21 insertions(+), 30 deletions(-)

docs: Aaron Lauterer (1):
  storage: add hint to avoid storage aliasing

 pvesm.adoc | 3 +++
 1 file changed, 3 insertions(+)

-- 
2.30.2
[pve-devel] [PATCH v3 qemu-server 3/7] migration: fail when aliased volume is detected
Aliased volids can lead to unexpected behavior in a migration.

An aliased volid can happen if we have two storage configurations
pointing to the same place. The resulting 'path' for a disk image will
be the same. Therefore, stop the migration in such a case.

The check works by comparing the path returned by the storage plugin.

We decided against checking the storages themselves being aliased. It is
not possible to infer that reliably from just the storage configuration
options alone.

Signed-off-by: Aaron Lauterer
---
We did discuss moving the check into its own function so that we can
call it in other places as well, for example during the VM start, to be
able to show a warning. I haven't gotten around to it yet, but wanted to
get this version of the series out so we can review the other parts of
the series. Therefore, we could not apply this and the next patch
(matching tests) for now.

 PVE/QemuMigrate.pm | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 163a721..6148cf5 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -299,12 +299,12 @@ sub scan_local_volumes {

     my $storecfg = $self->{storecfg};

     eval {
-	# found local volumes and their origin
	my $local_volumes = $self->{local_volumes};
	my $local_volumes_errors = {};
	my $other_errors = [];
	my $abort = 0;
+	my $path_to_volid = {};

	my $log_error = sub {
	    my ($msg, $volid) = @_;
@@ -391,6 +391,8 @@ sub scan_local_volumes {
	    die "owned by other VM (owner = VM $owner)\n"
		if !$owner || ($owner != $vmid);

+	    $path_to_volid->{$path}->{$volid} = 1;
+
	    return if $attr->{is_vmstate};

	    if (defined($snaprefs)) {
@@ -424,6 +426,12 @@ sub scan_local_volumes {
	    }
	});

+	for my $path (keys %$path_to_volid) {
+	    my @volids = keys $path_to_volid->{$path}->%*;
+	    die "detected not supported aliased volumes: '" . join("', '", @volids) . "'"
+		if (scalar(@volids) > 1);
+	}
+
	foreach my $vol (sort keys %$local_volumes) {
	    my $type = $replicatable_volumes->{$vol} ? 'local, replicated' : 'local';
	    my $ref = $local_volumes->{$vol}->{ref};
-- 
2.30.2
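To make the core idea easy to see outside of the diff context, here is a
self-contained sketch of the alias detection (simplified from the patch
above; it assumes @volids and $storecfg are already populated and is not
the exact patched code):

    # Group volids by the path the storage layer resolves them to; two
    # different volids resolving to the same path means aliasing.
    my $path_to_volid = {};
    for my $volid (@volids) {
        my ($path) = PVE::Storage::path($storecfg, $volid);
        $path_to_volid->{$path}->{$volid} = 1;
    }
    for my $path (sort keys %$path_to_volid) {
        my @aliases = sort keys %{ $path_to_volid->{$path} };
        die "detected not supported aliased volumes: '" . join("', '", @aliases) . "'\n"
            if scalar(@aliases) > 1;
    }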
Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel
>>> What AMD CPUs did you try it on?
>>
>> epyc v2/v3 with the latest microcodes
>>
>> does it work with x86-64-v3? (because the other patch of the series
>> could autofind the best new model if it's working)

Looking at the linux kernel code, they have some quirks based on cpu
model number && vendor:

nehalem is: .family = 6,  .model = 26,
kvm64 is:   .family = 15, .model = 6,
qemu64 is:  .family = 15, .model = 107,

maybe it could be interesting to test with adding flags from
qemu64/kvm64 to see if it's the same behaviour?

from kvm64, x86-64-v2 is

'+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3' (and optionally +aes)

(and we already enable lahf_lm by default in QemuServer/CPUConfig.pm)
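Put together, a quick way to try that proposed flag set on an existing
VM could look like this (hypothetical VM ID; same override style as used
elsewhere in this thread):

    qm set <vmid> -args '-cpu kvm64,enforce,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3,+aes'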
[pve-devel] [PATCH v3 docs 7/7] storage: add hint to avoid storage aliasing
Signed-off-by: Aaron Lauterer
---
I am happy for suggestions on how to improve the phrasing if it is not
clear enough.

 pvesm.adoc | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/pvesm.adoc b/pvesm.adoc
index 6ade1a4..7e91c50 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -174,6 +174,9 @@ zfspool: local-zfs
 	content images,rootdir

+CAUTION: It is not advisable to have multiple storage configurations pointing
+to the exact same underlying storage. Such an _aliased_ storage configuration
+can lead to unexpected behavior.

 Common Storage Properties
 ~~~~~~~~~~~~~~~~~~~~~~~~~
-- 
2.30.2
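For context, a minimal example of the kind of aliased configuration the
hint warns about (storage and pool names are made up, mirroring the mock
config used by the tests later in this series):

    # /etc/pve/storage.cfg - two storage entries pointing at the same ZFS pool
    zfspool: alias-zfs
        pool aliaspool
        content images,rootdir

    zfspool: alias-zfs-2
        pool aliaspool
        content images,rootdir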
[pve-devel] [PATCH v3 qemu-server 2/7] tests: add migration test for pending disk
Signed-off-by: Aaron Lauterer
---
changes since v2: removed autovivified 'snapshots => {}'

 test/run_qemu_migrate_tests.pl | 64 ++
 1 file changed, 64 insertions(+)

diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
index fedbc32..3d5eb8d 100755
--- a/test/run_qemu_migrate_tests.pl
+++ b/test/run_qemu_migrate_tests.pl
@@ -130,6 +130,25 @@ my $vm_configs = {
	'startup' => 'order=2',
	'vmgenid' => '4eb1d535-9381-4ddc-a8aa-af50c4d9177b',
     },
+    111 => {
+	'bootdisk' => 'scsi0',
+	'cores' => 1,
+	'ide0' => 'local-lvm:vm-111-disk-0,size=4096M',
+	'ide2' => 'none,media=cdrom',
+	'memory' => 512,
+	'name' => 'pending-test',
+	'net0' => 'virtio=4A:A3:E4:4C:CF:F0,bridge=vmbr0,firewall=1',
+	'numa' => 0,
+	'ostype' => 'l26',
+	'pending' => {
+	    'scsi0' => 'local-zfs:vm-111-disk-0,size=103M',
+	},
+	'scsihw' => 'virtio-scsi-pci',
+	'snapshots' => {},
+	'smbios1' => 'uuid=5ad71d4d-8f73-4377-853e-2d22c10c96a5',
+	'sockets' => 1,
+	'vmgenid' => '2c00c030-0b5b-4988-a371-6ab259893f22',
+    },
     149 => {
	'agent' => '0',
	'bootdisk' => 'scsi0',
@@ -306,6 +325,13 @@ my $source_vdisks = {
	    'vmid' => '341',
	    'volid' => 'local-lvm:vm-341-disk-0',
	},
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 4294967296,
+	    'vmid' => '111',
+	    'volid' => 'local-lvm:vm-111-disk-0',
+	},
     ],
     'local-zfs' => [
	{
	    'ctime' => '1589277334',
	    'format' => 'raw',
	    'size' => 10737418240,
	    'vmid' => '105',
	    'volid' => 'local-zfs:vm-105-disk-1',
	},
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 108003328,
+	    'vmid' => '111',
+	    'volid' => 'local-zfs:vm-111-disk-0',
+	},
	{
	    'format' => 'raw',
	    'name' => 'vm-4567-disk-0',
	    'vmid' => '4567',
	    'volid' => 'local-zfs:vm-4567-disk-0',
	},
     ],
@@ -1528,6 +1561,37 @@ my $tests = [
	    },
	},
     },
+    {
+	name => '111_running_pending',
+	target => 'pve1',
+	vmid => 111,
+	vm_status => {
+	    running => 1,
+	    runningmachine => 'pc-q35-5.0+pve0',
+	},
+	opts => {
+	    online => 1,
+	    'with-local-disks' => 1,
+	},
+	expected_calls => $default_expected_calls_online,
+	expected => {
+	    source_volids => {},
+	    target_volids => {
+		'local-zfs:vm-111-disk-0' => 1,
+		'local-lvm:vm-111-disk-10' => 1,
+	    },
+	    vm_config => get_patched_config(111, {
+		ide0 => 'local-lvm:vm-111-disk-10,format=raw,size=4G',
+		pending => {
+		    scsi0 => 'local-zfs:vm-111-disk-0,size=103M',
+		},
+	    }),
+	    vm_status => {
+		running => 1,
+		runningmachine => 'pc-q35-5.0+pve0',
+	    },
+	},
+    },
 ];

 my $single_test_name = shift;
-- 
2.30.2
[pve-devel] [PATCH v3 qemu-server 4/7] tests: add migration alias check
Signed-off-by: Aaron Lauterer
---
changes since v2:
- changed mock storages and disk images, now there is 'alias-zfs' and
  'alias-zfs-2' with the same disk image present to mimic an aliased
  storage config.

 test/run_qemu_migrate_tests.pl | 81 --
 1 file changed, 78 insertions(+), 3 deletions(-)

diff --git a/test/run_qemu_migrate_tests.pl b/test/run_qemu_migrate_tests.pl
index 3d5eb8d..ace557f 100755
--- a/test/run_qemu_migrate_tests.pl
+++ b/test/run_qemu_migrate_tests.pl
@@ -63,6 +63,24 @@ my $storage_config = {
	sparse => 1,
	type => "zfspool",
     },
+    "alias-zfs" => {
+	content => {
+	    images => 1,
+	    rootdir => 1,
+	},
+	pool => "aliaspool",
+	sparse => 1,
+	type => "zfspool",
+    },
+    "alias-zfs-2" => {
+	content => {
+	    images => 1,
+	    rootdir => 1,
+	},
+	pool => "aliaspool",
+	sparse => 1,
+	type => "zfspool",
+    },
     "rbd-store" => {
	monhost => "127.0.0.42,127.0.0.21,::1",
	fsid => 'fc4181a6-56eb-4f68-b452-8ba1f381ca2a',
@@ -149,6 +167,24 @@ my $vm_configs = {
	'sockets' => 1,
	'vmgenid' => '2c00c030-0b5b-4988-a371-6ab259893f22',
     },
+    123 => {
+	'bootdisk' => 'scsi0',
+	'cores' => 1,
+	'scsi0' => 'alias-zfs:vm-123-disk-0,size=4096M',
+	'scsi1' => 'alias-zfs-2:vm-123-disk-0,size=4096M',
+	'ide2' => 'none,media=cdrom',
+	'memory' => 512,
+	'name' => 'alias-test',
+	'net0' => 'virtio=4A:A3:E4:4C:CF:F0,bridge=vmbr0,firewall=1',
+	'numa' => 0,
+	'ostype' => 'l26',
+	'pending' => {},
+	'scsihw' => 'virtio-scsi-pci',
+	'snapshots' => {},
+	'smbios1' => 'uuid=5ad71d4d-8f73-4377-853e-2d22c10c96a5',
+	'sockets' => 1,
+	'vmgenid' => '2c00c030-0b5b-4988-a371-6ab259893f22',
+    },
     149 => {
	'agent' => '0',
	'bootdisk' => 'scsi0',
@@ -351,9 +387,9 @@ my $source_vdisks = {
	{
	    'ctime' => '1589277334',
	    'format' => 'raw',
-	    'size' => 108003328,
-	    'vmid' => '111',
-	    'volid' => 'local-zfs:vm-111-disk-0',
+	    'size' => 4294967296,
+	    'vmid' => '123',
+	    'volid' => 'local-zfs:vm-123-disk-0',
	},
	{
	    'format' => 'raw',
	    'name' => 'vm-4567-disk-0',
	    'vmid' => '4567',
	    'volid' => 'local-zfs:vm-4567-disk-0',
	},
     ],
+    'alias-zfs' => [
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 108003328,
+	    'vmid' => '111',
+	    'volid' => 'local-zfs:vm-111-disk-0',
+	},
+    ],
+    'alias-zfs-2' => [
+	{
+	    'ctime' => '1589277334',
+	    'format' => 'raw',
+	    'size' => 108003328,
+	    'vmid' => '111',
+	    'volid' => 'local-zfs:vm-111-disk-0',
+	},
+    ],
     'rbd-store' => [
	{
	    'ctime' => '1589277334',
@@ -1592,6 +1646,27 @@ my $tests = [
	    },
	},
     },
+    {
+	name => '123_alias_fail',
+	target => 'pve1',
+	vmid => 123,
+	vm_status => {
+	    running => 0,
+	},
+	opts => {
+	    'with-local-disks' => 1,
+	},
+	expected_calls => {},
+	expect_die => "detected not supported aliased volumes",
+	expected => {
+	    source_volids => local_volids_for_vm(123),
+	    target_volids => {},
+	    vm_config => $vm_configs->{123},
+	    vm_status => {
+		running => 0,
+	    },
+	},
+    },
 ];

 my $single_test_name = shift;
-- 
2.30.2
[pve-devel] [PATCH v3 qemu-server 1/7] migration: only migrate disks used by the guest
When scanning all configured storages for disk images belonging to the
VM, the migration could easily fail if a storage is not available, but
enabled. That storage might not even be used by the VM at all.

By not doing that and only looking at the disk images referenced in the
VM config, we can avoid that.

Extra handling is needed for disk images currently in the 'pending'
section of the VM config. These disk images used to be detected by
scanning all storages before. It is also necessary to fetch some
information (size, format) about the disk images explicitly that used to
be provided by the initial scan of all storages.

The big change regarding behavior is that disk images not referenced in
the VM config file will be ignored. They are already orphans that used
to be migrated as well, but are now left where they are. The tests have
been adapted to that changed behavior.

Signed-off-by: Aaron Lauterer
---
changes since v2:
- move handling of pending changes into QemuServer::foreach_volid;
  seems to not have any bad side-effects
- style fixes
- use 'volume_size_info()' to get format and size of the image

 PVE/QemuMigrate.pm                    | 88 ---
 PVE/QemuServer.pm                     |  9 ++-
 test/MigrationTest/QemuMigrateMock.pm |  9 +++
 test/run_qemu_migrate_tests.pl        | 12 ++--
 4 files changed, 50 insertions(+), 68 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 09cc1d8..163a721 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -149,6 +149,22 @@ sub lock_vm {
     return PVE::QemuConfig->lock_config($vmid, $code, @param);
 }

+sub target_storage_check_available {
+    my ($self, $storecfg, $targetsid, $volid) = @_;
+
+    if (!$self->{opts}->{remote}) {
+	# check if storage is available on target node
+	my $target_scfg = PVE::Storage::storage_check_enabled(
+	    $storecfg,
+	    $targetsid,
+	    $self->{node},
+	);
+	my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
+	die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
+	    if !$target_scfg->{content}->{$vtype};
+    }
+}
+
 sub prepare {
     my ($self, $vmid) = @_;

@@ -236,18 +252,7 @@ sub prepare {

	$storages->{$targetsid} = 1;

-	if (!$self->{opts}->{remote}) {
-	    # check if storage is available on target node
-	    my $target_scfg = PVE::Storage::storage_check_enabled(
-		$storecfg,
-		$targetsid,
-		$self->{node},
-	    );
-	    my ($vtype) = PVE::Storage::parse_volname($storecfg, $volid);
-
-	    die "$volid: content type '$vtype' is not available on storage '$targetsid'\n"
-		if !$target_scfg->{content}->{$vtype};
-	}
+	$self->target_storage_check_available($storecfg, $targetsid, $volid);

	if ($scfg->{shared}) {
	    # PVE::Storage::activate_storage checks this for non-shared storages
@@ -312,49 +317,6 @@ sub scan_local_volumes {
	    $abort = 1;
	};

-	my @sids = PVE::Storage::storage_ids($storecfg);
-	foreach my $storeid (@sids) {
-	    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
-	    next if $scfg->{shared} && !$self->{opts}->{remote};
-	    next if !PVE::Storage::storage_check_enabled($storecfg, $storeid, undef, 1);
-
-	    # get list from PVE::Storage (for unused volumes)
-	    my $dl = PVE::Storage::vdisk_list($storecfg, $storeid, $vmid, undef, 'images');
-
-	    next if @{$dl->{$storeid}} == 0;
-
-	    my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
-	    if (!$self->{opts}->{remote}) {
-		# check if storage is available on target node
-		my $target_scfg = PVE::Storage::storage_check_enabled(
-		    $storecfg,
-		    $targetsid,
-		    $self->{node},
-		);
-
-		die "content type 'images' is not available on storage '$targetsid'\n"
-		    if !$target_scfg->{content}->{images};
-
-	    }
-
-	    my $bwlimit = $self->get_bwlimit($storeid, $targetsid);
-
-	    PVE::Storage::foreach_volid($dl, sub {
-		my ($volid, $sid, $volinfo) = @_;
-
-		$local_volumes->{$volid}->{ref} = 'storage';
-		$local_volumes->{$volid}->{size} = $volinfo->{size};
-		$local_volumes->{$volid}->{targetsid} = $targetsid;
-		$local_volumes->{$volid}->{bwlimit} = $bwlimit;
-
-		# If with_snapshots is not set for storage migrate, it tries to use
-		# a raw+size stream, but on-the-fly conversion from qcow2 to raw+size
-		# back to qcow2 is currently not possible.
-		$local_volumes->{$volid}->{snapshots} = ($volinfo->{format} =~ /^(?:qcow2|vmdk)$/);
-		$local_volumes->{$volid}->{format} = $volinfo
[pve-devel] [PATCH v3 container 6/7] migration: fail when aliased volume is detected
Aliased volumes (referencing the same volume multiple times) can lead to
unexpected behavior in a migration. Therefore, stop the migration in
such a case.

The check works by comparing the path returned by the storage plugin.
This means that we should be able to catch the common situations where
it can happen:

* by referencing the same volid multiple times
* having a different volid due to an aliased storage: different storage
  name but pointing to the same location.

We decided against checking the storages themselves being aliased. It is
not possible to infer that reliably from just the storage configuration
options alone.

Signed-off-by: Aaron Lauterer
---
 src/PVE/LXC/Migrate.pm | 9 +
 1 file changed, 9 insertions(+)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index c5bca7a..7405600 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -182,6 +182,7 @@ sub phase1 {
     my $volhash = {}; # 'config', 'snapshot' or 'storage' for local volumes
     my $volhash_errors = {};
     my $abort = 0;
+    my $path_to_volid = {};

     my $log_error = sub {
	my ($msg, $volid) = @_;
@@ -231,6 +232,8 @@ sub phase1 {
	die "owned by other guest (owner = $owner)\n"
	    if !$owner || ($owner != $self->{vmid});

+	$path_to_volid->{$path}->{$volid} = 1;
+
	if (defined($snapname)) {
	    # we cannot migrate shapshots on local storage
	    # exceptions: 'zfspool', 'btrfs'
@@ -277,6 +280,12 @@ sub phase1 {
     # finally all current volumes
     PVE::LXC::Config->foreach_volume_full($conf, { include_unused => 1 }, $test_mp);

+    for my $path (keys %$path_to_volid) {
+	my @volids = keys $path_to_volid->{$path}->%*;
+	die "detected not supported aliased volumes: '" . join("', '", @volids) . "'"
+	    if (scalar @volids > 1);
+    }
+
     # additional checks for local storage
     foreach my $volid (keys %$volhash) {
	eval {
-- 
2.30.2
[pve-devel] [PATCH v3 container 5/7] migration: only migrate volumes used by the guest
When scanning all configured storages for volumes belonging to the
container, the migration could easily fail if a storage is not
available, but enabled. That storage might not even be used by the
container at all.

By not doing that and only looking at the disk images referenced in the
config, we can avoid that. We need to add additional steps for pending
volumes with checks if they actually exist. Changing an existing
mountpoint to a new volume will only create the volume on the next start
of the container.

The big change regarding behavior is that volumes not referenced in the
container config will be ignored. They are already orphans that used to
be migrated as well, but are now left where they are.

Signed-off-by: Aaron Lauterer
---
changes since v2:
- use PVE::LXC::NEW_DISK_RE to check for a not yet created pending
  volume
- style fixes

 src/PVE/LXC/Migrate.pm | 42 --
 1 file changed, 12 insertions(+), 30 deletions(-)

diff --git a/src/PVE/LXC/Migrate.pm b/src/PVE/LXC/Migrate.pm
index ccf5157..c5bca7a 100644
--- a/src/PVE/LXC/Migrate.pm
+++ b/src/PVE/LXC/Migrate.pm
@@ -195,6 +195,12 @@ sub phase1 {

	return if !$volid;

+	# check if volume exists, might be a pending new one
+	if ($volid =~ $PVE::LXC::NEW_DISK_RE) {
+	    $self->log('info', "volume '$volid' does not exist (pending change?)");
+	    return;
+	}
+
	my ($sid, $volname) = PVE::Storage::parse_volume_id($volid);

	# check if storage is available on source node
@@ -256,42 +262,18 @@ sub phase1 {
	&$log_error($@, $volid) if $@;
     };

-    # first unused / lost volumes owned by this container
-    my @sids = PVE::Storage::storage_ids($self->{storecfg});
-    foreach my $storeid (@sids) {
-	my $scfg = PVE::Storage::storage_config($self->{storecfg}, $storeid);
-	next if $scfg->{shared} && !$remote;
-	next if !PVE::Storage::storage_check_enabled($self->{storecfg}, $storeid, undef, 1);
-
-	# get list from PVE::Storage (for unreferenced volumes)
-	my $dl = PVE::Storage::vdisk_list($self->{storecfg}, $storeid, $vmid, undef, 'rootdir');
-
-	next if @{$dl->{$storeid}} == 0;
-
-	# check if storage is available on target node
-	my $targetsid = PVE::JSONSchema::map_id($self->{opts}->{storagemap}, $storeid);
-	if (!$remote) {
-	    my $target_scfg = PVE::Storage::storage_check_enabled($self->{storecfg}, $targetsid, $self->{node});
-
-	    die "content type 'rootdir' is not available on storage '$targetsid'\n"
-		if !$target_scfg->{content}->{rootdir};
-	}
-
-	PVE::Storage::foreach_volid($dl, sub {
-	    my ($volid, $sid, $volname) = @_;
-
-	    $volhash->{$volid}->{ref} = 'storage';
-	    $volhash->{$volid}->{targetsid} = $targetsid;
-	});
-    }
-
-    # then all volumes referenced in snapshots
+    # first all volumes referenced in snapshots
     foreach my $snapname (keys %{$conf->{snapshots}}) {
	&$test_volid($conf->{snapshots}->{$snapname}->{'vmstate'}, 0, undef)
	    if defined($conf->{snapshots}->{$snapname}->{'vmstate'});
	PVE::LXC::Config->foreach_volume($conf->{snapshots}->{$snapname}, $test_mp, $snapname);
     }

+    # then all pending volumes
+    if (defined $conf->{pending} && %{$conf->{pending}}) {
+	PVE::LXC::Config->foreach_volume($conf->{pending}, $test_mp);
+    }
+
     # finally all current volumes
     PVE::LXC::Config->foreach_volume_full($conf, { include_unused => 1 }, $test_mp);
-- 
2.30.2
Re: [pve-devel] [PATCH proxmox-widget-toolkit v2 1/1] fix #4551: ui: use gettext on hardcoded byte units
Used wrong tag, this is for manager, not widget-toolkit...

Am 06/04/2023 um 13:38 schrieb Noel Ullreich:
> Since some languages translate byte units like 'GiB' or write them in
> their own script, this patch wraps units in the `gettext` function.
>
> While most occurrences of byte strings can be translated within the
> `format_size` function in `proxmox-widget-toolkit/src/Utils.js`, this
> patch catches those instances that are not translated.
>
> Signed-off-by: Noel Ullreich
> ---
>  www/manager6/ceph/OSD.js                 | 4 ++--
>  www/manager6/form/DiskStorageSelector.js | 2 +-
>  www/manager6/lxc/MPResize.js             | 2 +-
>  www/manager6/qemu/HDResize.js            | 2 +-
>  www/manager6/qemu/HardwareView.js        | 2 +-
>  5 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/www/manager6/ceph/OSD.js b/www/manager6/ceph/OSD.js
> index 2f12f94d..61fe5b06 100644
> --- a/www/manager6/ceph/OSD.js
> +++ b/www/manager6/ceph/OSD.js
> @@ -83,7 +83,7 @@ Ext.define('PVE.CephCreateOsd', {
>  		{
>  		    xtype: 'numberfield',
>  		    name: 'db_dev_size',
> -		    fieldLabel: gettext('DB size') + ' (GiB)',
> +		    fieldLabel: `${gettext('DB size')} (${gettext('GiB')})`,
>  		    minValue: 1,
>  		    maxValue: 128*1024,
>  		    decimalPrecision: 2,
> @@ -137,7 +137,7 @@ Ext.define('PVE.CephCreateOsd', {
>  		{
>  		    xtype: 'numberfield',
>  		    name: 'wal_dev_size',
> -		    fieldLabel: gettext('WAL size') + ' (GiB)',
> +		    fieldLabel: `${gettext('WAL size')} (${gettext('GiB')})`,
>  		    minValue: 0.5,
>  		    maxValue: 128*1024,
>  		    decimalPrecision: 2,
> diff --git a/www/manager6/form/DiskStorageSelector.js b/www/manager6/form/DiskStorageSelector.js
> index abd46deb..d408b815 100644
> --- a/www/manager6/form/DiskStorageSelector.js
> +++ b/www/manager6/form/DiskStorageSelector.js
> @@ -148,7 +148,7 @@ Ext.define('PVE.form.DiskStorageSelector', {
>  	    itemId: 'disksize',
>  	    reference: 'disksize',
>  	    name: 'disksize',
> -	    fieldLabel: gettext('Disk size') + ' (GiB)',
> +	    fieldLabel: `${gettext('Disk size')} (${gettext('GiB')})`,
>  	    hidden: me.hideSize,
>  	    disabled: me.hideSize,
>  	    minValue: 0.001,
> diff --git a/www/manager6/lxc/MPResize.js b/www/manager6/lxc/MPResize.js
> index 881c037b..d560b788 100644
> --- a/www/manager6/lxc/MPResize.js
> +++ b/www/manager6/lxc/MPResize.js
> @@ -52,7 +52,7 @@ Ext.define('PVE.window.MPResize', {
>  	maxValue: 128*1024,
>  	decimalPrecision: 3,
>  	value: '0',
> -	fieldLabel: gettext('Size Increment') + ' (GiB)',
> +	fieldLabel: `${gettext('Size Increment')} (${gettext('GiB')})`,
>  	allowBlank: false,
>      });
>
> diff --git a/www/manager6/qemu/HDResize.js b/www/manager6/qemu/HDResize.js
> index f9c7290d..29ff253b 100644
> --- a/www/manager6/qemu/HDResize.js
> +++ b/www/manager6/qemu/HDResize.js
> @@ -49,7 +49,7 @@ Ext.define('PVE.window.HDResize', {
>  	maxValue: 128*1024,
>  	decimalPrecision: 3,
>  	value: '0',
> -	fieldLabel: gettext('Size Increment') + ' (GiB)',
> +	fieldLabel: `${gettext('Size Increment')} (${gettext('GiB')})`,
>  	allowBlank: false,
>      });

this seems to be cut off here, missing the full context, thus I get:

Applying: fix #4551: ui: use gettext on hardcoded byte units
error: corrupt patch at line 70
[pve-devel] applied: [PATCH proxmox-widget-toolkit v2 1/1] fix 4551: ui: translate byte unit in `format_size`
Am 06/04/2023 um 13:38 schrieb Noel Ullreich:
> Some languages translate byte units like 'GiB' or write them in their
> own script.
>
> By `gettext`ing the units in the `format_size` function, we can
> translate the units for (almost) all of the web interface.
>
> Signed-off-by: Noel Ullreich
> ---

-> add the v1 -> v2 changelog (filtered for the current patch) here too
please

>  src/Utils.js | 14 --
>  1 file changed, 8 insertions(+), 6 deletions(-)

applied, thanks!
Re: [pve-devel] [PATCH widget-toolkit 2/2] utils: format_size: show negative size as NA
Am 19/04/2023 um 12:34 schrieb Aaron Lauterer:
> Signed-off-by: Aaron Lauterer
> ---
>
> AFAIK we do not have negative sizes anywhere, and if, it is an
> indication that something is wrong.

the above belongs in the commit message, additionally some background
for why you're doing this now (i.e., did you run into this or what made
you make this change?)

>
>  src/Utils.js | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/src/Utils.js b/src/Utils.js
> index ef72630..8cdbe86 100644
> --- a/src/Utils.js
> +++ b/src/Utils.js
> @@ -688,6 +688,9 @@ utilities: {
>      },
>
>      format_size: function(size, useSI) {
> +	if (size < 0) {
> +	    return gettext("N/A");

catching this seems OK, but I'd rather just return the value then, as
"N/A" (Not Applicable) doesn't really make sense here and just hides a
potential underlying problem.

> +	}
>  	let units = ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y'];
>  	let order = 0;
>  	const baseValue = useSI ? 1000 : 1024;
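A minimal sketch of the reviewer's suggestion, assuming the surrounding
Utils.js structure; the exact return formatting is an assumption, not
the final implementation:

    // sketch: pass negative input through instead of masking it as "N/A",
    // so an underlying bug stays visible in the UI
    format_size: function(size, useSI) {
	if (size < 0) {
	    return size.toString(); // assumption: raw value, no unit scaling
	}
	let units = ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y'];
	// ... rest unchanged
    },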
Re: [pve-devel] [PATCH widget-toolkit 3/3] fix #3892: NetworkEdit: add bridge vids field for bridge_vids
Don't we reuse that on PBS/PMG too, and if so, is it working there? The
commit message isn't exactly telling... ;-)

Am 13/04/2023 um 17:10 schrieb Aaron Lauterer:
> Signed-off-by: Aaron Lauterer
> ---
>  src/node/NetworkEdit.js | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/src/node/NetworkEdit.js b/src/node/NetworkEdit.js
> index bb9add3..e4ab158 100644
> --- a/src/node/NetworkEdit.js
> +++ b/src/node/NetworkEdit.js
> @@ -57,11 +57,26 @@ Ext.define('Proxmox.node.NetworkEdit', {
>  	}
>
>  	if (me.iftype === 'bridge') {
> +	    let vids = Ext.create('Ext.form.field.Text', {
> +		fieldLabel: gettext('Bridge VIDS'),
> +		name: 'bridge_vids',
> +		emptyText: '2-4094',
> +		disabled: true,
> +		autoEl: {
> +		    tag: 'div',
> +		    'data-qtip': gettext('Space-separated list of VLANs and ranges, for example: 2 4 100-200'),
> +		},
> +	    });
>  	    column2.push({
>  		xtype: 'proxmoxcheckbox',
>  		fieldLabel: gettext('VLAN aware'),
>  		name: 'bridge_vlan_aware',
>  		deleteEmpty: !me.isCreate,
> +		listeners: {
> +		    change: function(f, newVal) {
> +			vids.setDisabled(!newVal);
> +		    },
> +		},
>  	    });
>  	    column2.push({
>  		xtype: 'textfield',
> @@ -72,6 +87,7 @@ Ext.define('Proxmox.node.NetworkEdit', {
>  		    'data-qtip': gettext('Space-separated list of interfaces, for example: enp0s0 enp1s0'),
>  		},
>  	    });
> +	    advancedColumn2.push(vids);
>  	} else if (me.iftype === 'OVSBridge') {
>  	    column2.push({
>  		xtype: 'textfield',
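For reference, what the resulting setting would look like in
/etc/network/interfaces (a sketch; the interface names and the VID list
are placeholders taken from the field's example text):

    auto vmbr0
    iface vmbr0 inet manual
	bridge-ports enp0s0
	bridge-vlan-aware yes
	# written from the new bridge_vids field
	bridge-vids 2 4 100-200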
Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel
Am 01.06.23 um 15:53 schrieb DERUMIER, Alexandre:
>>>> What AMD CPUs did you try it on?
>>>
>>> epyc v2/v3 with the latest microcodes
>>>
>>> does it work with x86-64-v3? (because the other patch of the series
>>> could autofind the best new model if it's working)
>
> Looking at the linux kernel code, they have some quirks based on cpu
> model number && vendor:
>
> nehalem is: .family = 6,  .model = 26,
> kvm64 is:   .family = 15, .model = 6,
> qemu64 is:  .family = 15, .model = 107,

Yeah, could be. I did try today with

qm set 102 -args '-cpu Nehalem,+aes,-svm,-vmx,model-id="foo bar",vendor="AuthenticAMD"'

and couldn't trigger the issue getting stuck during installation
anymore. Switching back to manually selected Nehalem in the UI with the
generated

-cpu Nehalem,+aes,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,vendor=GenuineIntel

I got the hang during my first installation attempt again. Then I tried
around a bit more and now I'm suspecting it is something in combination
with the +kvm_pv_unhalt flag. I got a hang with just

qm set 102 -args '-cpu Nehalem,+kvm_pv_unhalt,vendor=GenuineIntel'

And I didn't ever get a hang without that flag yet. Aaron also only had
hangs with that flag. But it's not like it hangs every time with the
flag either.

> maybe it could be interesting to test with adding flags from
> qemu64/kvm64 to see if it's the same behaviour?
>
> from kvm64, x86-64-v2 is
>
> '+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3' (and optionally +aes)

Yes, let's do that! It does sound cleaner than to base it off Nehalem.
FWIW,

qm set 102 -args '-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3'

worked just now, but I will need to test more tomorrow.

> does it work with x86-64-v3 ? (because the other patch of the series
> could autofind the best new model if it's working)

I didn't test, but I'd be careful with bumping it even more. I'd like to
focus on getting the new default in for the next major release. The best
model selection needs more consideration and can always be added after
the release.
Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel
Am 01.06.23 um 14:02 schrieb Eneko Lacunza:
> Hi,
>
> We have Ryzen 1700, 2600X, 3700 and 5950X machines here, I can test on
> them if that helps (please detail tests to perform).
>
> Thanks

Hi,

thank you for the offer. It would be interesting to see if you have any
issues with the following:

> qm set <vmid> -args '-cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+sep,+lahf_lm,+popcnt,+sse4.1,+sse4.2,+ssse3'

If you like, you can also test

> qm set <vmid> -args '-cpu Nehalem,enforce,+aes,-svm,-vmx,+kvm_pv_eoi,+kvm_pv_unhalt,vendor="GenuineIntel"'

After testing, use

> qm set <vmid> --delete args

to get rid of the modification again. Make sure to stop/start the VM
fresh after each modification.

As for what to test, installing Debian 11 would be nice just for
comparison, but other than that, just do what you like, it shouldn't
really matter too much :)

Best Regards,
Fiona
[pve-devel] [PATCH proxmox 2/4] apt: split Ceph main repository into no-subscription and enterprise
The old 'main' component stays valid, pointing to no-subscription, which
means the is_referenced_repository() check needs a special case for it.
It will eventually go away, together with the handles for Quincy.

Alternatively, the standard repository's info() could've been changed to
return multiple possible components, similar to URLs. But as opposed to
URLs, there could be a standard repository that wants to have multiple
components, and it feels a bit unnatural, because multiple components
are usually not aliases of the same thing. And adapting
is_referenced_repository() would be needed here too. So overall, the
above alternative just felt better.

Signed-off-by: Fiona Ebner
---
Hope the URLs are correct like this.

 proxmox-apt/src/repositories/mod.rs        |  3 +-
 proxmox-apt/src/repositories/repository.rs | 10 ++-
 proxmox-apt/src/repositories/standard.rs   | 35 +++---
 proxmox-apt/tests/repositories.rs          |  3 +-
 4 files changed, 38 insertions(+), 13 deletions(-)

diff --git a/proxmox-apt/src/repositories/mod.rs b/proxmox-apt/src/repositories/mod.rs
index 11f52dd..b2e83a0 100644
--- a/proxmox-apt/src/repositories/mod.rs
+++ b/proxmox-apt/src/repositories/mod.rs
@@ -90,7 +90,8 @@ pub fn standard_repositories(

     if product == "pve" {
         result.append(&mut vec![
-            APTStandardRepository::from(APTRepositoryHandle::CephQuincy),
+            APTStandardRepository::from(APTRepositoryHandle::CephQuincyEnterprise),
+            APTStandardRepository::from(APTRepositoryHandle::CephQuincyNoSubscription),
             APTStandardRepository::from(APTRepositoryHandle::CephQuincyTest),
         ]);
     }
diff --git a/proxmox-apt/src/repositories/repository.rs b/proxmox-apt/src/repositories/repository.rs
index a5e3015..ba4e8b1 100644
--- a/proxmox-apt/src/repositories/repository.rs
+++ b/proxmox-apt/src/repositories/repository.rs
@@ -285,11 +285,19 @@ impl APTRepository {
             found_uri = found_uri || handle_uris.iter().any(|handle_uri| handle_uri == uri);
         }

+        // In the past it was main instead of enterprise/no-subscription, and main now maps to
+        // no-subscription. Note this only applies for Quincy.
+        let found_component = if handle == APTRepositoryHandle::CephQuincyNoSubscription {
+            self.components.contains(&component) || self.components.contains(&"main".to_string())
+        } else {
+            self.components.contains(&component)
+        };
+
         self.types.contains(&package_type)
             && found_uri
             // using contains would require a &String
             && self.suites.iter().any(|self_suite| self_suite == suite)
-            && self.components.contains(&component)
+            && found_component
     }

     /// Guess the origin from the repository's URIs.
diff --git a/proxmox-apt/src/repositories/standard.rs b/proxmox-apt/src/repositories/standard.rs
index 33c4842..838a42b 100644
--- a/proxmox-apt/src/repositories/standard.rs
+++ b/proxmox-apt/src/repositories/standard.rs
@@ -46,8 +46,10 @@ pub enum APTRepositoryHandle {
     NoSubscription,
     /// The test repository.
     Test,
-    /// Ceph Quincy repository.
-    CephQuincy,
+    /// Ceph Quincy enterprise repository.
+    CephQuincyEnterprise,
+    /// Ceph Quincy no-subscription repository.
+    CephQuincyNoSubscription,
     /// Ceph Quincy test repository.
     CephQuincyTest,
 }
@@ -71,7 +73,8 @@ impl TryFrom<&str> for APTRepositoryHandle {
             "enterprise" => Ok(APTRepositoryHandle::Enterprise),
             "no-subscription" => Ok(APTRepositoryHandle::NoSubscription),
             "test" => Ok(APTRepositoryHandle::Test),
-            "ceph-quincy" => Ok(APTRepositoryHandle::CephQuincy),
+            "ceph-quincy-enterprise" => Ok(APTRepositoryHandle::CephQuincyEnterprise),
+            "ceph-quincy-no-subscription" => Ok(APTRepositoryHandle::CephQuincyNoSubscription),
             "ceph-quincy-test" => Ok(APTRepositoryHandle::CephQuincyTest),
             _ => bail!("unknown repository handle '{}'", string),
         }
@@ -84,7 +87,8 @@ impl Display for APTRepositoryHandle {
             APTRepositoryHandle::Enterprise => write!(f, "enterprise"),
             APTRepositoryHandle::NoSubscription => write!(f, "no-subscription"),
             APTRepositoryHandle::Test => write!(f, "test"),
-            APTRepositoryHandle::CephQuincy => write!(f, "ceph-quincy"),
+            APTRepositoryHandle::CephQuincyEnterprise => write!(f, "ceph-quincy-enterprise"),
+            APTRepositoryHandle::CephQuincyNoSubscription => write!(f, "ceph-quincy-no-subscription"),
             APTRepositoryHandle::CephQuincyTest => write!(f, "ceph-quincy-test"),
         }
     }
@@ -107,8 +111,12 @@ impl APTRepositoryHandle {
                 "This repository contains the latest packages and is primarily used for test labs \
                 and by developers to test new features."
             }
-            APTRepository
[pve-devel] [PATCH proxmox 3/4] apt: tests: code cleanup to avoid useless vector
Signed-off-by: Fiona Ebner
---
 proxmox-apt/tests/repositories.rs | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/proxmox-apt/tests/repositories.rs b/proxmox-apt/tests/repositories.rs
index 4b3c9de..710e2f7 100644
--- a/proxmox-apt/tests/repositories.rs
+++ b/proxmox-apt/tests/repositories.rs
@@ -396,13 +396,11 @@ fn test_standard_repositories() -> Result<(), Error> {
     let mut file = APTRepositoryFile::new(&pve_alt_list)?.unwrap();
     file.parse()?;

-    let file_vec = vec![file];
-
     expected[0].status = Some(true);
     expected[1].status = Some(true);
     expected[2].status = Some(false);

-    let std_repos = standard_repositories(&file_vec, "pve", DebianCodename::Bullseye);
+    let std_repos = standard_repositories(&[file], "pve", DebianCodename::Bullseye);

     assert_eq!(std_repos, expected);

-- 
2.39.2
[pve-devel] [PATCH proxmox 1/4] apt: drop older Ceph standard repositories
On Proxmox VE 8, only Quincy and newer will be supported.

Signed-off-by: Fiona Ebner
---
 proxmox-apt/src/repositories/mod.rs      |  4 --
 proxmox-apt/src/repositories/standard.rs | 58 -----------------------------
 proxmox-apt/tests/repositories.rs        |  4 --
 3 files changed, 66 deletions(-)

diff --git a/proxmox-apt/src/repositories/mod.rs b/proxmox-apt/src/repositories/mod.rs
index d8848b8..11f52dd 100644
--- a/proxmox-apt/src/repositories/mod.rs
+++ b/proxmox-apt/src/repositories/mod.rs
@@ -92,10 +92,6 @@ pub fn standard_repositories(
         result.append(&mut vec![
             APTStandardRepository::from(APTRepositoryHandle::CephQuincy),
             APTStandardRepository::from(APTRepositoryHandle::CephQuincyTest),
-            APTStandardRepository::from(APTRepositoryHandle::CephPacific),
-            APTStandardRepository::from(APTRepositoryHandle::CephPacificTest),
-            APTStandardRepository::from(APTRepositoryHandle::CephOctopus),
-            APTStandardRepository::from(APTRepositoryHandle::CephOctopusTest),
         ]);
     }

diff --git a/proxmox-apt/src/repositories/standard.rs b/proxmox-apt/src/repositories/standard.rs
index 5af3117..33c4842 100644
--- a/proxmox-apt/src/repositories/standard.rs
+++ b/proxmox-apt/src/repositories/standard.rs
@@ -50,14 +50,6 @@ pub enum APTRepositoryHandle {
     CephQuincy,
     /// Ceph Quincy test repository.
     CephQuincyTest,
-    /// Ceph Pacific repository.
-    CephPacific,
-    /// Ceph Pacific test repository.
-    CephPacificTest,
-    /// Ceph Octopus repository.
-    CephOctopus,
-    /// Ceph Octopus test repository.
-    CephOctopusTest,
 }

 impl From<APTRepositoryHandle> for APTStandardRepository {
@@ -81,10 +73,6 @@ impl TryFrom<&str> for APTRepositoryHandle {
             "test" => Ok(APTRepositoryHandle::Test),
             "ceph-quincy" => Ok(APTRepositoryHandle::CephQuincy),
             "ceph-quincy-test" => Ok(APTRepositoryHandle::CephQuincyTest),
-            "ceph-pacific" => Ok(APTRepositoryHandle::CephPacific),
-            "ceph-pacific-test" => Ok(APTRepositoryHandle::CephPacificTest),
-            "ceph-octopus" => Ok(APTRepositoryHandle::CephOctopus),
-            "ceph-octopus-test" => Ok(APTRepositoryHandle::CephOctopusTest),
             _ => bail!("unknown repository handle '{}'", string),
         }
     }
@@ -98,10 +86,6 @@ impl Display for APTRepositoryHandle {
             APTRepositoryHandle::Test => write!(f, "test"),
             APTRepositoryHandle::CephQuincy => write!(f, "ceph-quincy"),
             APTRepositoryHandle::CephQuincyTest => write!(f, "ceph-quincy-test"),
-            APTRepositoryHandle::CephPacific => write!(f, "ceph-pacific"),
-            APTRepositoryHandle::CephPacificTest => write!(f, "ceph-pacific-test"),
-            APTRepositoryHandle::CephOctopus => write!(f, "ceph-octopus"),
-            APTRepositoryHandle::CephOctopusTest => write!(f, "ceph-octopus-test"),
         }
     }
 }
@@ -130,20 +114,6 @@ impl APTRepositoryHandle {
                 "This repository contains the Ceph Quincy packages before they are moved to the \
                 main repository."
             }
-            APTRepositoryHandle::CephPacific => {
-                "This repository holds the main Proxmox Ceph Pacific packages."
-            }
-            APTRepositoryHandle::CephPacificTest => {
-                "This repository contains the Ceph Pacific packages before they are moved to the \
-                main repository."
-            }
-            APTRepositoryHandle::CephOctopus => {
-                "This repository holds the main Proxmox Ceph Octopus packages."
-            }
-            APTRepositoryHandle::CephOctopusTest => {
-                "This repository contains the Ceph Octopus packages before they are moved to the \
-                main repository."
-            }
         }
         .to_string()
     }
@@ -156,10 +126,6 @@ impl APTRepositoryHandle {
             APTRepositoryHandle::Test => "Test",
             APTRepositoryHandle::CephQuincy => "Ceph Quincy",
             APTRepositoryHandle::CephQuincyTest => "Ceph Quincy Test",
-            APTRepositoryHandle::CephPacific => "Ceph Pacific",
-            APTRepositoryHandle::CephPacificTest => "Ceph Pacific Test",
-            APTRepositoryHandle::CephOctopus => "Ceph Octopus",
-            APTRepositoryHandle::CephOctopusTest => "Ceph Octopus Test",
         }
         .to_string()
     }
@@ -174,10 +140,6 @@ impl APTRepositoryHandle {
             APTRepositoryHandle::Test => "/etc/apt/sources.list".to_string(),
             APTRepositoryHandle::CephQuincy => "/etc/apt/sources.list.d/ceph.list".to_string(),
             APTRepositoryHandle::CephQuincyTest => "/etc/apt/sources.list.d/ceph.list".to_string(),
-            APTRepositoryHandle::CephPacific => "/etc/apt/sources.list.d/ceph.list".to_string(),
-            APTRepositoryHandle::CephPacificTest => "/etc/apt/sources.list.d/ceph.list".to_string(),
-            APTRepositoryHandle::CephOctopus => "/etc/apt/sources.list.d/ceph.list".to_string(),
-            APTRepositoryHandle::CephOctopusTest => "/etc/apt/sources.list.d/ceph.list".to_string(),
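With the Pacific and Octopus variants gone, parsing one of their handle strings falls through to the bail! arm. A simplified, self-contained sketch of the round-trip that remains (a hypothetical `Handle` enum standing in for APTRepositoryHandle, not the actual proxmox-apt code):

    use std::fmt;

    #[derive(Debug, PartialEq)]
    enum Handle {
        CephQuincy,
        CephQuincyTest,
    }

    impl TryFrom<&str> for Handle {
        type Error = String;
        fn try_from(s: &str) -> Result<Self, Self::Error> {
            match s {
                "ceph-quincy" => Ok(Handle::CephQuincy),
                "ceph-quincy-test" => Ok(Handle::CephQuincyTest),
                // "ceph-pacific" and "ceph-octopus" now take this path:
                other => Err(format!("unknown repository handle '{}'", other)),
            }
        }
    }

    impl fmt::Display for Handle {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                Handle::CephQuincy => write!(f, "ceph-quincy"),
                Handle::CephQuincyTest => write!(f, "ceph-quincy-test"),
            }
        }
    }

    fn main() {
        assert_eq!(Handle::try_from("ceph-quincy"), Ok(Handle::CephQuincy));
        assert!(Handle::try_from("ceph-pacific").is_err());
        assert_eq!(Handle::CephQuincy.to_string(), "ceph-quincy");
    }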
[pve-devel] [PATCH proxmox 4/4] apt: tests: add tests for Ceph Quincy repository detection on Bookworm
Signed-off-by: Fiona Ebner
---
 proxmox-apt/tests/repositories.rs             | 30 ++++++++++++++++++++++++++
 .../ceph-quincy-bookworm.list                 |  6 ++++++
 .../ceph-quincy-nosub-bookworm.list           |  2 ++
 .../sources.list.d/ceph-quincy-bookworm.list  |  4 ++++
 .../ceph-quincy-nosub-bookworm.list           |  2 ++
 5 files changed, 44 insertions(+)
 create mode 100644 proxmox-apt/tests/sources.list.d.expected/ceph-quincy-bookworm.list
 create mode 100644 proxmox-apt/tests/sources.list.d.expected/ceph-quincy-nosub-bookworm.list
 create mode 100644 proxmox-apt/tests/sources.list.d/ceph-quincy-bookworm.list
 create mode 100644 proxmox-apt/tests/sources.list.d/ceph-quincy-nosub-bookworm.list

diff --git a/proxmox-apt/tests/repositories.rs b/proxmox-apt/tests/repositories.rs
index 710e2f7..4cfb264 100644
--- a/proxmox-apt/tests/repositories.rs
+++ b/proxmox-apt/tests/repositories.rs
@@ -404,6 +404,36 @@ fn test_standard_repositories() -> Result<(), Error> {
 
     assert_eq!(std_repos, expected);
 
+    let pve_alt_list = read_dir.join("ceph-quincy-bookworm.list");
+    let mut file = APTRepositoryFile::new(&pve_alt_list)?.unwrap();
+    file.parse()?;
+
+    expected[0].status = None;
+    expected[1].status = None;
+    expected[2].status = None;
+    expected[3].status = Some(true);
+    expected[4].status = Some(true);
+    expected[5].status = Some(true);
+
+    let std_repos = standard_repositories(&[file], "pve", DebianCodename::Bookworm);
+
+    assert_eq!(std_repos, expected);
+
+    let pve_alt_list = read_dir.join("ceph-quincy-nosub-bookworm.list");
+    let mut file = APTRepositoryFile::new(&pve_alt_list)?.unwrap();
+    file.parse()?;
+
+    expected[0].status = None;
+    expected[1].status = None;
+    expected[2].status = None;
+    expected[3].status = None;
+    expected[4].status = Some(true);
+    expected[5].status = None;
+
+    let std_repos = standard_repositories(&[file], "pve", DebianCodename::Bookworm);
+
+    assert_eq!(std_repos, expected);
+
     Ok(())
 }
diff --git a/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-bookworm.list b/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-bookworm.list
new file mode 100644
index 0000000..9095e27
--- /dev/null
+++ b/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-bookworm.list
@@ -0,0 +1,6 @@
+deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise
+
+deb http://download.proxmox.com/debian/ceph-quincy bookworm main
+
+deb http://download.proxmox.com/debian/ceph-quincy bookworm test
+
diff --git a/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-nosub-bookworm.list b/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-nosub-bookworm.list
new file mode 100644
index 0000000..b60fa98
--- /dev/null
+++ b/proxmox-apt/tests/sources.list.d.expected/ceph-quincy-nosub-bookworm.list
@@ -0,0 +1,2 @@
+deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
+
diff --git a/proxmox-apt/tests/sources.list.d/ceph-quincy-bookworm.list b/proxmox-apt/tests/sources.list.d/ceph-quincy-bookworm.list
new file mode 100644
index 0000000..fde8eba
--- /dev/null
+++ b/proxmox-apt/tests/sources.list.d/ceph-quincy-bookworm.list
@@ -0,0 +1,4 @@
+deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise
+deb http://download.proxmox.com/debian/ceph-quincy bookworm main
+deb http://download.proxmox.com/debian/ceph-quincy bookworm test
+
diff --git a/proxmox-apt/tests/sources.list.d/ceph-quincy-nosub-bookworm.list b/proxmox-apt/tests/sources.list.d/ceph-quincy-nosub-bookworm.list
new file mode 100644
index 0000000..b60fa98
--- /dev/null
+++ b/proxmox-apt/tests/sources.list.d/ceph-quincy-nosub-bookworm.list
@@ -0,0 +1,2 @@
+deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
+
-- 
2.39.2

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
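The expected[...] assignments in the test above follow a simple convention: a standard repository that does not appear in the parsed file keeps status None, while a configured one reports whether it is enabled. A rough, self-contained model of that rule (the handle names and the `detect` helper are illustrative, not the real proxmox-apt API):

    #[derive(Debug, PartialEq)]
    struct StandardRepo {
        handle: &'static str,
        status: Option<bool>,
    }

    // None = not configured at all; Some(enabled) = present in the file.
    fn detect(configured: &[(&str, bool)], known: &[&'static str]) -> Vec<StandardRepo> {
        known
            .iter()
            .map(|&handle| StandardRepo {
                handle,
                status: configured
                    .iter()
                    .find(|&&(h, _)| h == handle)
                    .map(|&(_, enabled)| enabled),
            })
            .collect()
    }

    fn main() {
        // Mirrors the nosub case: only one of three known repos is configured.
        let repos = detect(
            &[("ceph-quincy-no-subscription", true)],
            &[
                "ceph-quincy-enterprise",
                "ceph-quincy-no-subscription",
                "ceph-quincy-test",
            ],
        );
        assert_eq!(repos[0].status, None);
        assert_eq!(repos[1].status, Some(true));
        assert_eq!(repos[2].status, None);
    }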
Re: [pve-devel] [PATCH-SERIES v3 qemu-server/manager/common] add and set x86-64-v2 as default model for new vms and detect best cpumodel
Hi,

I found an interesting thread on the forum about kvm_pv_unhalt:
https://forum.proxmox.com/threads/live-migration-between-intel-xeon-and-amd-epyc2-linux-guests.68663/

> Sounds good. Please also take a look at the default flag
> "kvm_pv_unhalt". As I mentioned, it would cause a kernel crash in
> paravirtualized unhalt code sooner or later in a migrated VM (started
> on Intel, migrated to AMD). Please note that according to our tests,
> simply leaving the CPU type empty in the GUI (leading to the qemu
> command line argument of -cpu
> kvm64,+sep,+lahf_lm,+kvm_pv_unhalt,+kvm_pv_eoi,enforce), while
> seemingly working at first, will after some load and idle time in the
> VM result in a crash involving the kvm_kick_cpu function somewhere
> inside of the paravirtualized halt/unhalt code. Linux kernels tested
> ranged from Debian's 4.9.210-1 to Ubuntu's 5.3.0-46 (and some in
> between). Therefore the Proxmox default seems to be unsafe, and
> apparently the very minimum working command line would probably be
> args: -cpu kvm64,+sep,+lahf_lm,+kvm_pv_eoi.

So it sounds like it crashes if the VM is defined with a CPU vendor
that does not match the real hardware? As that breaks migration
between Intel and AMD, maybe we shouldn't add the flag to the new
x86-64-vx models?

A discussion on the qemu-devel mailing list covers performance with
and without it:
https://lists.nongnu.org/archive/html/qemu-devel/2017-10/msg01816.html
It seems to help when the guest has a lot of cores/NUMA nodes, but can
slow down small VMs.

___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel