On 10/28/19 12:20 PM, Wolfgang Bumiller wrote:
> Signed-off-by: Wolfgang Bumiller
> ---
> Introduces a pve-common dependency bump.
>
> PVE/API2/AccessControl.pm | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
applied, thanks!
On 10/28/19 12:20 PM, Wolfgang Bumiller wrote:
> Signed-off-by: Wolfgang Bumiller
> ---
> Introduces a pve-access-control dependency bump.
yeah, we can't have cluster-wide dependencies (yet), so another node
can still have the old pve-access-control, but fortunately we only
support setups where
On 10/28/19 12:20 PM, Wolfgang Bumiller wrote:
> Signed-off-by: Wolfgang Bumiller
> ---
> www/manager6/dc/TFAEdit.js | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/www/manager6/dc/TFAEdit.js b/www/manager6/dc/TFAEdit.js
> index 7d19127d..8f3017f6 100644
> --- a/www/manager6/dc/TFAEdit.js
On 10/29/19 7:10 AM, Thomas Lamprecht wrote:
On 10/28/19 12:59 PM, Stefan Reiter wrote:
The current version had only one user in LXC, so move the LXC-specific
code there to reuse this in QemuServer.
Also cache, since the host's architecture can't change during runtime.
Signed-off-by: Stefan Reiter
On 10/29/19 7:27 AM, Thomas Lamprecht wrote:
On 10/28/19 12:59 PM, Stefan Reiter wrote:
The current version had only one user in LXC, so move the LXC-specific
code there to reuse this in QemuServer.
Also cache, since the host's architecture can't change during runtime.
Signed-off-by: Stefan Reiter
On 10/29/19 9:54 AM, Stefan Reiter wrote:
> On 10/29/19 7:27 AM, Thomas Lamprecht wrote:
>> On 10/28/19 12:59 PM, Stefan Reiter wrote:
>>> The current version had only one user in LXC, so move the LXC-specific
>>> code there to reuse this in QemuServer.
>>>
>>> Also cache, since the host's architecture can't change during runtime.
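For illustration, a minimal sketch of the cached helper that commit message describes, assuming it moves into a shared module such as PVE::Tools (the actual patch body is not quoted here):

    use POSIX ();

    my $host_arch;    # cached: the host architecture cannot change at runtime
    sub get_host_arch {
        $host_arch //= (POSIX::uname())[4];    # uname machine field, e.g. 'x86_64'
        return $host_arch;
    }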
We currently have a 5 seconds timeout for zfs_request for non-workers and
that is too low for some use cases of pvesr. If we can set the WORKER_FLAG
manually, we can work around the issue.
Signed-off-by: Fabian Ebner
---
User report where zfs destroy and zfs snapshot time out [0].
Previous discu
We currently have a 5 seconds timeout for zfs_request for non-workers and
that is too low for some use cases of pvesr. As a workaround we create
fake workers doing the storage operations and use our own timeouts.
Signed-off-by: Fabian Ebner
---
Is 60 a good value for the timeout?
Should we make
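As a rough sketch of the approach discussed in these two cover letters (is_worker() and zfs_request() are existing PVE helpers; the concrete timeout values and the surrounding code are assumptions, not the actual patch):

    use PVE::RPCEnvironment;

    # pick a longer timeout for the replication (pvesr) code path instead of
    # relying on the short non-worker default
    my $timeout = PVE::RPCEnvironment->is_worker() ? 60 * 60 : 60;    # seconds

    # e.g. $class->zfs_request($scfg, $timeout, 'destroy', "$pool/$volname");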
Signed-off-by: Dominik Csapak
---
www/manager6/node/Config.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager6/node/Config.js b/www/manager6/node/Config.js
index 054ced64..91a999e1 100644
--- a/www/manager6/node/Config.js
+++ b/www/manager6/node/Config.js
@@ -169,6 +169,7 @@ Ext.d
when the host has ifupdown2 installed, we can hot apply the config.
add a button to do this.
if the user does not meet the requirements, the api call
will show why and throw an error (without changing anything).
the button has to be enabled via 'showApplyBtn', because for now,
we do not want it for
add a button to the network view to allow hot applying of the network
config. this requires ifupdown2 to be installed.
show the button always but let the api call error out, so that the
user knows what needs to be done to get it to work.
we could also somehow hide the button if we want, but this wil
instead of writing the config after every change, we can do it once for
all the changes at the end to avoid redundant i/o.
we also don't need to load_config after writing fastplug changes.
Signed-off-by: Oguz Bektas
---
PVE/QemuServer.pm | 9 ++---
1 file changed, 2 insertions(+), 7 deletions(-)
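The idea in the cover letter above, as a sketch (the example changes and surrounding handling are assumed; only load_config/write_config are real PVE::QemuConfig calls):

    use PVE::QemuConfig;

    my $vmid = 100;                                      # example VMID
    my $conf = PVE::QemuConfig->load_config($vmid);

    # apply all changes to the in-memory config first ...
    my $changes = { onboot => 1, name => 'newname' };    # example changes
    foreach my $opt (sort keys %$changes) {
        $conf->{$opt} = $changes->{$opt};
    }

    # ... then persist them with a single write, instead of one write per option
    PVE::QemuConfig->write_config($vmid, $conf);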
the global variable is now called QEMU_FASTPLUG_OPTIONS.
we can also check them earlier during the pending delete loop to speed
up the change.
Signed-off-by: Oguz Bektas
---
PVE/QemuServer.pm | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
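A sketch of what such an early fastplug check can look like (the option set and the surrounding loop are assumptions; only write_config is a real call):

    my $QEMU_FASTPLUG_OPTIONS = {
        name => 1, onboot => 1, description => 1, protection => 1,   # assumed subset
    };

    sub apply_fastplug_pending {
        my ($vmid, $conf) = @_;
        foreach my $opt (sort keys %{$conf->{pending}}) {
            next if !$QEMU_FASTPLUG_OPTIONS->{$opt};
            # a plain config write is enough for these, no hotplug work needed
            $conf->{$opt} = delete $conf->{pending}->{$opt};
        }
        PVE::QemuConfig->write_config($vmid, $conf);
    }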
also rename $add_error to $add_hotplug_error to differentiate between
apply_error (for vmconfig_apply_pending) and hotplug_error
Signed-off-by: Oguz Bektas
---
PVE/QemuServer.pm | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
inde
* add $errors parameter and error handling code to vmconfig_apply_pending.
* replace redundant write/load config calls with a single write_config
at the end
Signed-off-by: Oguz Bektas
---
PVE/API2/Qemu.pm | 6 ++---
PVE/QemuServer.pm | 60 +--
2 fil
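Roughly what the two bullet points amount to, as a sketch (apply_pending_option() is a hypothetical placeholder for the real per-option handling, not an actual function):

    sub vmconfig_apply_pending {
        my ($vmid, $conf, $storecfg, $errors) = @_;

        foreach my $opt (sort keys %{$conf->{pending}}) {
            eval {
                # placeholder for the real per-option handling (delete, resize, ...)
                apply_pending_option($vmid, $conf, $storecfg, $opt);
            };
            if (my $err = $@) {
                warn "$opt: $err";
                $errors->{$opt} = "unable to apply pending change '$opt': $err";
                next;
            }
            $conf->{$opt} = delete $conf->{pending}->{$opt};
        }

        # a single write at the end replaces the per-change write/load cycle
        PVE::QemuConfig->write_config($vmid, $conf);
    }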
looks mostly good to me. after a bit of searching
i found that the reason for the delayed update
is that we/you queried the wrong store:
the rstore in the context is the 'real store', which
triggered the update of the 'diffstore' that showed the
changes.
the sequence was:
* api call
* update of rstore
adds the pending button for Resources, Options and DNS screens.
Co-developed-by: Dominik Csapak
Signed-off-by: Oguz Bektas
---
v2->v3:
* use getStore() instead of rstore while checking for datachanged, in
light of Dominik's debugging (thanks!)
* add missing startUpdate to DNS.js
* remove FIXME
as we noticed on the lxc side, we should use diffStore in order to
update the button status without delay.
Co-developed-by: Dominik Csapak
Signed-off-by: Oguz Bektas
---
www/manager6/qemu/HardwareView.js | 4 ++--
www/manager6/qemu/Options.js | 2 +-
2 files changed, 3 insertions(+), 3 del
On 10/28/19 12:59 PM, Stefan Reiter wrote:
> ...now that it no longer does LXC-specific stuff. Removes a FIXME.
>
> Signed-off-by: Stefan Reiter
> ---
> PVE/QemuServer.pm | 8 +---
> 1 file changed, 1 insertion(+), 7 deletions(-)
>
applied, with dependency version bump in d/control for lib
On 10/28/19 2:30 PM, Stefan Reiter wrote:
> The codepath for "any" hugepages did not check if memory size was even,
> leading to the code below trying to allocate half a hugepage (e.g. VM
> with 2049MiB RAM would lead to 1024.5 2MB hugepages).
>
> Also improve error message for systems with only 1
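The arithmetic behind that report, as a small worked check (values taken from the commit message; the check itself is a sketch, not the actual patch):

    my $memory = 2049;          # VM memory in MiB
    my $hugepage_size = 2;      # smallest x86_64 hugepage size, in MiB

    # 2049 / 2 = 1024.5, i.e. half a hugepage - so the size must be a multiple
    die "memory size ($memory MiB) must be a multiple of $hugepage_size MiB\n"
        if $memory % $hugepage_size;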
On 10/28/19 12:47 PM, Dominic Jäger wrote:
> This function has been used in only one place, into which we inlined its
> functionality. Removing it avoids confusion between vm_destroy and destroy_vm.
>
> The whole $importfn is executed in a lock_config_full.
> As a consequence, for the inlined code:
On 10/28/19 12:47 PM, Dominic Jäger wrote:
> Previously a VMID conflict was possible when creating a VM on another node
> between locking the config with lock_config_full and writing to it for the
> first time with write_config.
>
> Using create_and_lock_config eliminates this possibility. This me
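A rough sketch of the flow described here (create_and_lock_config is the real PVE::QemuConfig/AbstractConfig method; everything around it is assumed):

    use PVE::QemuConfig;

    my $vmid = 105;    # example VMID

    # reserve the VMID right away with a locked, minimal config file; this dies
    # if the ID is already taken, closing the race window before the first
    # real write_config
    PVE::QemuConfig->create_and_lock_config($vmid);

    # ... fill in the imported settings and write_config afterwards ...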
On 10/28/19 12:47 PM, Dominic Jäger wrote:
> Functions like qm importovf can now set the "lock" property in a config file
> before calling do_import.
>
> Signed-off-by: Dominic Jäger
> ---
> v1->v2: Edited only the commit message ("parameter lock" -> "lock property")
>
> PVE/CLI/qm.pm
On 10/28/19 10:57 AM, Fabian Ebner wrote:
> When doing an online migration with --targetstorage, unused disks get migrated
> to the specified target storage as well.
> With this patch we keep track of those volumes and update the VM config with
> their new locations. Unused volumes of the VM previou
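Conceptually (a sketch with assumed names, not the patch itself), the tracking could look like:

    # map each migrated local volume to its new volume id on the target storage
    my $volume_map = {
        'local-lvm:vm-100-disk-1' => 'targetstorage:vm-100-disk-1',   # example
    };

    sub update_unused_volumes {
        my ($conf, $volume_map) = @_;
        foreach my $opt (keys %$conf) {
            next if $opt !~ /^unused\d+$/;
            my $new_volid = $volume_map->{$conf->{$opt}};
            $conf->{$opt} = $new_volid if defined($new_volid);
        }
    }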
On 10/22/19 2:48 PM, Dominik Csapak wrote:
> otherwise, having multiple ipconfigX entries can lead to different
> instance-ids on different startups, which is not desired
>
> Signed-off-by: Dominik Csapak
> ---
> 2 issues i have with this:
> * we have a cyclic dependency between PVE::QemuServer
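The ordering problem can be illustrated with a sketch like the one below (the helper name and the option list are assumptions; the point is only that the digest input has to be built in a deterministic, sorted order):

    use Digest::SHA qw(sha1_hex);

    sub cloudinit_instance_id {
        my ($conf) = @_;
        my $data = '';
        # iterate in sorted order so multiple ipconfigX entries always hash the same
        foreach my $opt (sort keys %$conf) {
            next if $opt !~ /^(?:ipconfig\d+|nameserver|searchdomain|sshkeys|hostname)$/;
            $data .= "$opt=$conf->{$opt}\n";
        }
        return sha1_hex($data);
    }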