On 8/12/19 at 10:37 AM, Dominik Csapak wrote:
> we want to notify the api that there is a new qemu-binary, since
> the version will be cached in qemu-server and instead of
> checking the version every time, just restart pveproxy/pvedaemon
> whenever there is a qemu update
>
> this fixes a rare theoretical issue when only updating qemu, that
> the pvedaemon starts a vm with a new version but with the
> defaults of an old version because of the version cache,
> breaking live migration
>
> Signed-off-by: Dominik Csapak <d.csa...@proxmox.com>
> ---
> i do not know if that issue was ever triggered, but it seems
> very unlikely, so this is just to be safe
>
> the other alternative, either dont cache the version, or caching
> and checking the file timestamp would also work, but this is the 'cheaper'
> solution overall, since we do not update pve-qemu-kvm that often
This also triggers a full HA LRM/CRM restart, so it is not really cheap either.

Is the LRM, with its direct access to the API module (in an always newly
forked worker), also affected by the caching "issue" from a call to the
QemuServer::kvm_user_version method? If not, I'd rather just:

* restart pveproxy/pvedaemon "manually" in the configure step
* improve the cache invalidation by doing your proposed "/usr/sbin/kvm"
  stat call
* remove the caching entirely

Maybe you could benchmark whether removing the caching is really a big
performance hit; I'd guess so, as fork+exec is not too cheap. If it's
more than just a few percentage points, I'd do the stat thing. I really
want to avoid LRM/CRM restarts if possible.

>
>  debian/triggers | 1 +
>  1 file changed, 1 insertion(+)
>  create mode 100644 debian/triggers
>
> diff --git a/debian/triggers b/debian/triggers
> new file mode 100644
> index 0000000..59dd688
> --- /dev/null
> +++ b/debian/triggers
> @@ -0,0 +1 @@
> +activate-noawait pve-api-updates

_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
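For context on the quoted diff: the `activate-noawait` line only fires
the trigger; some other package has to declare interest in it and act on
it in its postinst. Roughly (a sketch of the dpkg triggers mechanism;
which package actually declares the interest is not shown in this patch):

```
# debian/triggers in the interested package
interest-noawait pve-api-updates

# debian/postinst of the interested package (sketch)
case "$1" in
    triggered)
        # a qemu update activated the trigger: restart the API
        # daemons so any cached version information is dropped
        systemctl try-reload-or-restart pvedaemon pveproxy
        exit 0
        ;;
esac
```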
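P.S.: the stat-based variant suggested above (keep the cached version,
but drop it when the binary's mtime changes) could look roughly like
this; note this is a sketch in Python rather than qemu-server's actual
Perl, and the class and parameter names are invented for illustration:

```python
import os


class VersionCache:
    """Cache a binary's version string, invalidated when the file's
    mtime changes.

    Sketch of the proposed stat-based invalidation; the real
    kvm_user_version lives in qemu-server's Perl code.
    """

    def __init__(self, path, query):
        self.path = path      # e.g. "/usr/sbin/kvm"
        self.query = query    # callable that actually execs `kvm --version`
        self._mtime = None
        self._version = None

    def version(self):
        mtime = os.stat(self.path).st_mtime
        if self._version is None or mtime != self._mtime:
            # binary changed on disk (package upgrade): fork+exec once,
            # then serve from the cache again until the next change
            self._version = self.query(self.path)
            self._mtime = mtime
        return self._version
```

The per-call cost then shrinks to a single stat() instead of a
fork+exec, while an upgraded binary is still picked up immediately.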