llooning sections of the docs.
Patch #2 implements a warning on VM start if PCI(e) passthrough and
ballooning are both enabled.
[0] https://forum.proxmox.com/threads/134202/
docs:
Friedrich Weber (1):
pci passthrough: mention incompatibility with ballooning
qm-pci-passthrough.adoc | 10
When using PCI(e) passthrough, setting a minimum amount of memory does
not have any effect, which may be surprising to users [0]. Add a note
to the PCI(e) passthrough section, and reference it in the ballooning
section.
[0] https://forum.proxmox.com/threads/134202/
Signed-off-by: Friedrich Weber
: Friedrich Weber
---
Notes:
I did not test this on a "real" PCI passthrough setup as I don't have
one at hand, but Markus tested an earlier version of this patch on
his machine.
PVE/QemuServer.pm | 10 ++
1 file changed, 10 insertions(+)
diff --git a/PVE/Qemu
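The check that patch #2 implements (warn on VM start if PCI(e) passthrough and ballooning are both enabled) can be sketched roughly as follows. This is illustrative Python, not the actual PVE::QemuServer Perl; the function name and the exact predicate are assumptions, only the config keys mirror the qm config format:

```python
# Hypothetical sketch of the patch-#2 warning: with a hostpci device
# configured, ballooning (balloon set, non-zero, and below memory) has
# no effect, so warn the user at VM start.
def check_balloon_with_passthrough(conf):
    has_passthrough = any(k.startswith("hostpci") for k in conf)
    balloon = conf.get("balloon")
    memory = conf.get("memory", 512)  # 512 MiB assumed as default
    ballooning_enabled = balloon is not None and 0 < balloon < memory
    if has_passthrough and ballooning_enabled:
        return ("ballooning is not possible when using PCI(e) passthrough: "
                "the VM will use the maximum configured memory")
    return None
```

Setting `balloon: 0` disables the balloon device entirely, so the sketch treats it as "no ballooning" and stays silent in that case.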
On 14/11/2023 09:30, Fiona Ebner wrote:
> Am 13.11.23 um 18:09 schrieb Friedrich Weber:
>>
>> +xref:qm_ballooning[Automatic memory allocation (ballooning)] is not possible
>> +when using PCI(e) passthrough. As the PCI device may use DMA (Direct Memory
>> +Access), QEM
Thanks for the review! I'll send a v2.
On 14/11/2023 10:13, Fiona Ebner wrote:
> Am 13.11.23 um 18:09 schrieb Friedrich Weber:
>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
>> index dbcd568..70983a4 100644
>> --- a/PVE/QemuServer.pm
>> +++ b/PVE/QemuServer.
looning is not possible", "VM will use maximum configured
memory", "QEMU needs to map" ...). I'll have to take another look at
this to see how we can phrase this correctly (and hopefully somewhat
precisely) for v2.
On 14/11/2023 11:20, Friedrich Weber wrote:
> On
Tested with an OPNsense VM: With pve-qemu-kvm 8.1.2-2, it did not boot
from SATA ("Root mount waiting for: CAM"). virtio worked though.
With the patched pve-qemu-kvm package I got from Fiona, the VM booted
from SATA again. virtio still works too.
Tested-by: Friedrich Weber
On 20/11/
0.0-2. Without the patch, the editor froze for
a few seconds and nothing was pasted. With the patch, pasting works again.
Would be great if we could get this in, as the VNC clipboard is
half-broken without it.
Tested-by: Friedrich Weber
On 22/11/2023 13:41, Fiona Ebner wrote:
> This fixes th
On 17/11/2023 13:53, Wolfgang Bumiller wrote:
> Patch itself LGTM, just a note on sending patch series in general:
>
> If you number patches throughout a whole series rather than the
> individual repositories (as in, this one is labeled 4/4 instead of 1/1),
> it would be nice if the order also hel
Thanks for the review! I'll prepare a v2 that incorporates the UI
changes I suggested earlier. I do have some questions regarding the
concurrent tasks scenario in patch #2, see my separate mail.
On 17/11/2023 13:31, Wolfgang Bumiller wrote:
[...]
>> On 26/01/2023 09:32, Friedrich
Thanks for looking into this!
On 17/11/2023 14:09, Wolfgang Bumiller wrote:
[...]
>> return PVE::LXC::Config->lock_config($vmid, $lockcmd);
>
> ^ Here we lock first, then fork the worker, then do `vm_stop` with the
> config lock inherited.
>
> This means that creating multiple shutdown
Already discussed with Stefan off-list yesterday, posting here for the
record:
There is one problem when upgrading from < 8.1.4 with a custom LVM
config where global_filter spans multiple lines, e.g.:
devices {
# added by pve-manager to avoid scanning ZFS zvols
global_filter=["r|/dev/zd
On 14/12/2023 10:56, Stefan Hanreich wrote:
> On 12/14/23 10:55, Stefan Hanreich wrote:
>> Yes, at this point I'm also not sure there is a sane way to handle this.
>
> doing it for new installations should be possible though
Yeah, I'd agree that it's probably the safest to not rewrite existing
glo
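For the record, the shape of the problem: a hand-edited lvm.conf may spread the global_filter entry over several lines, while `lvmconfig` emits it on a single line, so a naive line-based comparison of the two fails. A minimal illustration (the filter value is the one pve-manager adds; the multi-line layout is just one possible example):

```
# Multi-line form, as a user might have written it:
devices {
    # added by pve-manager to avoid scanning ZFS zvols
    global_filter=[
        "r|/dev/zd.*|"]
}

# Normalized single-line form, as emitted by `lvmconfig`:
devices {
    global_filter=["r|/dev/zd.*|"]
}
```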
/pull/274
Signed-off-by: Friedrich Weber
---
Notes:
Tested on PVE 8:
* installed ntpsec and ntpsec-ntpdate
  reboot without `quiet` -> boot hangs at networking
* installed patched ifupdown2, ntpsec, ntpsec-ntpdate
  reboot -> boot ok
debian/patches/
Tested-by: Friedrich Weber
Tried a couple of upgrades from PVE 7 to PVE 8 (including pve-manager
with this patch). When upgrading, dpkg asks (in most cases) whether to
keep local /etc/lvm/lvm.conf or install package maintainer version, so I
tried both answers. Results were as I'd expect
I started testing this and will send a complete mail later, just wanted
to mention one thing I've stumbled upon.
Consider this pre-upgrade lvm.conf:
devices {
# added by pve-manager to avoid scanning ZFS zvols
global_filter=[
"r|/dev/zd.*|"]
}
As `lvmconfig` normalizes the linebreak,
On 01/12/2023 10:57, Friedrich Weber wrote:
> On 17/11/2023 14:09, Wolfgang Bumiller wrote:
> [...]
>>> return PVE::LXC::Config->lock_config($vmid, $lockcmd);
>>
>> ^ Here we lock first, then fork the worker, then do `vm_stop` with the
>> config l
always provide it.
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
Suggested-by: Fabian Grünbichler
Signed-off-by: Friedrich Weber
---
src/PVE/Storage/LVMPlugin.pm | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
Suggested-by: Fabian Grünbichler
Signed-off-by: Friedrich Weber
---
Notes:
Should only be applied close to the next major release, see cover
letter.
src/PVE/Storage/LVMPlugin.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/P
automatically sets the flag for all
existing (PVE-owned) LVs.
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
storage:
Friedrich Weber (2):
lvm: ignore "activation skip" LV flag during LV activation
fix #4997: lvm: set "activation skip" flag for newly created LVs
sr
=5acf0c04a;hb=38e0c7a1#l222
Signed-off-by: Friedrich Weber
---
src/PVE/Storage/LVMPlugin.pm | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
index 4b951e7..557d602 100644
--- a/src/PVE/Storage/LVMPlugin.pm
+++ b/src/PVE/Storage
On 12/01/2024 09:57, Fiona Ebner wrote:
> Am 11.01.24 um 17:58 schrieb Friedrich Weber:
>>
>> [1]
>> https://sourceware.org/git/?p=lvm2.git;a=blob;f=lib/format_text/archive.c;h=5acf0c04a;hb=38e0c7a1#l222
>>
>
>> 222 log_print_unless_si
On 12/01/2024 10:22, Fabian Grünbichler wrote:
>> --- a/src/PVE/Storage/LVMPlugin.pm
>> +++ b/src/PVE/Storage/LVMPlugin.pm
>> @@ -130,6 +130,9 @@ sub lvm_vgs {
>>
>> my ($name, $size, $free, $lvcount, $pvname, $pvsize, $pvfree) =
>> split (':', $line);
>>
>> +# skip human-read
On 12/01/2024 11:28, Fabian Grünbichler wrote:
>> The vgs message is printed to stdout, so we could do something like
>>
>> warn $line if !defined($size);
>>
>> ?
>
> yep, that would be an option (warn+return ;))
Right, thanks. Thinking about this some more, printing a user-visible
warning sounds
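The parsing discussed above can be sketched like this — illustrative Python, not the actual LVMPlugin.pm Perl. `vgs` output is split on ':' into seven fields; a line that does not yield the expected fields (e.g. a human-readable message that `vgs` prints to stdout) is warned about and skipped instead of being parsed:

```python
# Sketch of tolerant vgs-output parsing: keep well-formed records,
# warn about (and skip) anything else. Field layout follows the
# split(':', $line) in lvm_vgs.
def parse_vgs_output(lines):
    vgs, warnings = {}, []
    for line in lines:
        fields = line.strip().split(":")
        if len(fields) != 7:
            warnings.append(f"unexpected vgs output line: {line.strip()}")
            continue
        name, size, free, lvcount, pvname, pvsize, pvfree = fields
        vgs[name] = {"size": int(size), "free": int(free),
                     "lvcount": int(lvcount)}
    return vgs, warnings
```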
...@google.com/
[8] https://lore.kernel.org/all/20240110012045.505046-1-sea...@google.com/
[9] https://lore.kernel.org/kvm/zaa654hwfkba_...@google.com/
[10] https://lore.kernel.org/all/20240110214723.695930-1-sea...@google.com/
Signed-off-by: Friedrich Weber
---
Notes:
This RFC is not meant to be
about the unexpected line.
[1]
https://sourceware.org/git/?p=lvm2.git;a=blob;f=lib/format_text/archive.c;h=5acf0c04a;hb=38e0c7a1#l222
Signed-off-by: Friedrich Weber
---
Notes:
changes from v1 [2]:
* warn about the unexpected line instead of simply ignoring it
[2] https
On 19/01/2024 12:31, Fiona Ebner wrote:
> Am 19.01.24 um 11:59 schrieb Fiona Ebner:
>> Am 18.01.24 um 12:11 schrieb Friedrich Weber:
>>> diff --git a/src/PVE/Storage/LVMPlugin.pm b/src/PVE/Storage/LVMPlugin.pm
>>> index 4b951e7..5377823 100644
>>> --- a/src/PV
se Net::IP::prefix to prepare a debug message,
but this returns undef if a range was specified. To avoid the warning,
use Net::IP::print to obtain a string representation instead.
Signed-off-by: Friedrich Weber
---
src/PVE/APIServer/AnyEvent.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletion
acquired).
To avoid blocking the API handler, immediately fork off a worker
process and try to acquire the config lock in that worker.
Patch best viewed with `git show -w`.
Suggested-by: Wolfgang Bumiller
Signed-off-by: Friedrich Weber
---
Notes:
The diff is somewhat messy without `-w
The new `overrule-shutdown` parameter is boolean and defaults to 0. If
it is 1, all active `vzshutdown` tasks by the current user for the same
CT are aborted before attempting to stop the CT.
Passing `overrule-shutdown=1` is forbidden for HA resources.
Signed-off-by: Friedrich Weber
---
Notes
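The overrule step described above can be sketched as follows — plain Python for illustration, not the actual PVE helper; the task field names are assumptions:

```python
# Hypothetical sketch of overrule-shutdown: collect the active shutdown
# tasks for the same guest that were started by the requesting user, so
# they can be aborted before the stop is attempted.
def tasks_to_overrule(active_tasks, task_type, vmid, user):
    return [
        t for t in active_tasks
        if t["type"] == task_type
        and t["id"] == str(vmid)
        and t["user"] == user
    ]
```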
ature, both outcomes
seem bearable.
The confirmation message box is now always marked as dangerous (with a
warning sign icon), whereas previously it was only marked dangerous if
the stop was issued from the guest panel, but not when it was issued
from the resource tree command menu.
Signed-off-by: Fr
This helper is used to abort any active qmshutdown/vzshutdown tasks
before attempting to stop a VM/CT (if requested).
Signed-off-by: Friedrich Weber
---
Notes:
no changes v1 -> v2
src/PVE/GuestHelpers.pm | 18 ++
1 file changed, 18 insertions(+)
diff --git a/src/
This way, it can be used to retrieve the current list of tasks.
Signed-off-by: Friedrich Weber
---
Notes:
new in v2:
* moved fix for pve-cluster-tasks store into its own patch
www/manager6/dc/Tasks.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/dc
: Friedrich Weber
---
Notes:
no changes v1 -> v2
PVE/API2/Qemu.pm | 16 +++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index cdc8f7a..e6a7657 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2964,7 +2964
; message box that has an optional checkbox offering to overrule shutdown
tasks
* split pve-manager patch in two
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=4474
guest-common:
Friedrich Weber (1):
guest helpers: add helper to overrule active tasks of a specific type
src/PVE/GuestHe
On 23/01/2024 11:01, Friedrich Weber wrote:
> On 19/01/2024 12:31, Fiona Ebner wrote:
>> Am 19.01.24 um 11:59 schrieb Fiona Ebner:
[...]
>>> Please use log_warn() from PVE::RESTEnvironment for new warnings, so
>>> they also show up in task logs.
>>
>> Sorry,
Thanks a lot for tackling this issue!
Gave this a quick spin on a pre-existing 3-node Quincy cluster on which
I provoked a few crashes with `kill -n11 $(pidof ceph-osd)`.
ceph-base with patch 2 applied (provided by Max off-list) correctly
changed the /var/lib/ceph/crash/posted permissions to ceph
Thanks for the review!
On 26/01/2024 12:14, Fiona Ebner wrote:
>> Some points to discuss:
>>
>> * Fabian and I discussed whether it may be better to pass `-K` and set the
>> "activation skip" flag only for LVs on a *shared* LVM storage. But this may
>> cause issues for users that incorrectly m
early if the checksums do not match.
Signed-off-by: Friedrich Weber
---
Notes:
- failing the build might be a bit drastic, but a simple warning seems
too easy to overlook
- strictly speaking we'll miss updates of `sysctl_pid_max` [1], but to
catch this, we'd need to
ctl settings
do not set `net.bridge.bridge-nf-call-iptables`, this will avoid the
temporary flip to 0 when installing/upgrading ceph-osd.
Signed-off-by: Friedrich Weber
---
...t-avoid-reloading-all-sysctl-setting.patch | 47 +++
patches/series | 1 +
2 fi
based on stable-quincy-8) adds the same patch our quincy build
- patch #3 (based on master) extends the Makefile with a reminder to adjust
the ceph-osd postinst patch if needed. This patch is optional.
ceph master:
Friedrich Weber (1):
fix #5213: ceph-osd postinst: add patch to avoid conne
ctl settings
do not set `net.bridge.bridge-nf-call-iptables`, this will avoid the
temporary flip to 0 when installing/upgrading ceph-osd.
Signed-off-by: Friedrich Weber
---
...t-avoid-reloading-all-sysctl-setting.patch | 47 +++
patches/series | 1 +
2 fi
On 15/02/2024 14:16, Thomas Lamprecht wrote:
[...]
>
> applied, thanks!
>
> as talked off-list, ceph is really not trying to reduce confusion potential
> doing things like:
>
> install -D -m 644 etc/sysctl/90-ceph-osd.conf
> $(DESTDIR)/etc/sysctl.d/30-ceph-osd.conf
>
> I.e., having it checked
Quickly tested the patch series on my existing Ceph Quincy cluster, did
not encounter major issues -- the keyring was created and the Ceph
config was rewritten accordingly. After a restart of `ceph-crash`, it
correctly posts crashes (produced with `kill -n11 $(pidof ceph-osd)`)
again and does not w
On 21/02/2024 12:07, Aaron Lauterer wrote:
> This patch series adds the possibility to do an automated / unattended
> installation of Proxmox VE.
Gave this a quick spin installing some virtual PVE hosts with a simple
static IP + ext4 setup. Generated an ISO from my `answer.toml` with:
$ mkisofs -
On 21/02/2024 14:15, Max Carrara wrote:
> On 2/21/24 12:55, Friedrich Weber wrote:
>> [...]
>>
>> - the `ceph-crash` service does not restart after installing the patched
>> ceph-base package, so the reordering done by patches 02+04 does not take
>> effect immedia
Thanks for tackling this! Can confirm this patch demotes the error to a
warning and lets the qmclone task succeed (with a warning). GUI shows
"Warnings: 1" and task log contains:
can't deactivate LV '/dev/foobar/vm-100-disk-0': Logical volume
foobar/vm-100-disk-0 in use.
WARN: volume deactivatio
On 06/03/2024 13:40, Fiona Ebner wrote:
> Am 06.03.24 um 11:47 schrieb Hannes Duerr:
>> @@ -3820,7 +3821,13 @@ __PACKAGE__->register_method({
>>
>> if ($target) {
>> # always deactivate volumes - avoid lvm LVs to be active on
>> several nodes
>> -PVE
Tested-by: Friedrich Weber
Can confirm the patch fixes the issue of parallel qmclones failing
occasionally due to a LVM deactivation error, and the extra \n in the
task log from v1 is gone.
One tiny comment inline:
On 06/03/2024 15:08, Hannes Duerr wrote:
> When a template with disks on LVM
Tested setting up a fresh Reef with patched packages, and tested
updating an existing Reef with the new packages. In both cases, crashes
are posted without noise in the journal and without having to manually
restart ceph-crash. Nice!
Also tested the case where [client.crash] already has a `key` (s
.
Fixes: cd731902b7a724b1ab747276f9c6343734f1d8cb
Signed-off-by: Friedrich Weber
---
Notes:
To check if we have this problem at other places, I did a quick search
for `extraRequestParams` in PVE+PBS: Seems like for all other usages,
the object is created fresh already.
www/manager6/grid
On 13/03/2024 09:44, Friedrich Weber wrote:
> Currently, after adding a storage to a pool, opening any edit window
> will send a GET request with a superfluous `poolid` parameter and
> cause a parameter verification error in the GUI. This breaks all edit
> windows of the curren
On 14/03/2024 15:43, Stefan Sterz wrote:
> On Wed Mar 13, 2024 at 9:44 AM CET, Friedrich Weber wrote:
>> Currently, after adding a storage to a pool, opening any edit window
>> will send a GET request with a superfluous `poolid` parameter and
>> cause a parameter verifica
ping -- still applies.
On 30/01/2024 18:10, Friedrich Weber wrote:
> As reported in #4474 [0], a user may attempt to shutdown a VM/CT,
> realize that it is unresponsive, and decide to stop it instead. If the
> shutdown task has not timed out yet, the stop task will fail. The user
&
.
Fixes: cd731902b7a724b1ab747276f9c6343734f1d8cb
Signed-off-by: Friedrich Weber
---
Notes:
changes since v1:
- remove unnecessary quotes (thx Stefan)
www/manager6/grid/PoolMembers.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/grid/PoolMembers.js b/www
less likely in the future.
Changes from v1:
- Patch 1/3: avoid unnecessary quotes
- Patch 2/3 + 3/3 are new
[1] https://lists.proxmox.com/pipermail/pve-devel/2024-March/062179.html
manager:
Friedrich Weber (2):
ui: pool members: avoid setting request parameter for all edit windows
ui: pool
shared object and
inadvertently modifies it, but at least they will be limited to that
particular subclass.
[1] https://lists.proxmox.com/pipermail/pve-devel/2024-March/062179.html
Signed-off-by: Friedrich Weber
---
Notes:
With patch 2/3 applied, I think all occurrences of
shared object
in the future, create a new `extraRequestParams` object for each
instance of `PVE.pool.AddVM`.
Signed-off-by: Friedrich Weber
---
Notes:
new in v2
www/manager6/grid/PoolMembers.js | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/www/manager6/grid
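The class of bug being fixed here has a direct Python analogue, which may make it easier to see (this is not the actual ExtJS code; it only mirrors the shared-default pitfall):

```python
# A mutable default defined at class level is one object shared by every
# instance, so one instance's mutation leaks into all others -- the same
# problem as a shared `extraRequestParams` in a class config.
class EditWindow:
    extra_request_params = {}  # one dict, shared across all instances

class FixedEditWindow:
    # Fix, analogous to the one suggested in the thread: create a fresh
    # object per instance in the constructor.
    def __init__(self):
        self.extra_request_params = {}
```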
On 04/04/2024 10:22, Stefan Sterz wrote:
> On Wed Apr 3, 2024 at 11:10 AM CEST, Friedrich Weber wrote:
>> Currently, `Proxmox.window.Edit` initializes `extraRequestParams` to
>> an object that, if not overwritten, is shared between all instances of
>> subclasses. This bears th
On 04/04/2024 11:23, Stefan Sterz wrote:
> -- >8 snip 8< --
>>>
>>> i did a quick and dirty test and using a constructor like this seems to
>>> rule out this class of bug completely:
>>>
>>> ```js
>>> constructor: function(conf) {
>>> let me = this;
>>> me.extraRequestParams = {
On 04/04/2024 12:59, Thomas Lamprecht wrote:
> Am 04/04/2024 um 12:10 schrieb Friedrich Weber:
>> Maybe we could do:
>>
>> ```js
>> extraRequestParams: {},
>>
>> constructor: function(conf) {
>> let me = this;
>> me.e
Thanks for the review!
On 04/04/2024 17:20, Thomas Lamprecht wrote:
> Am 30/01/2024 um 18:10 schrieb Friedrich Weber:
>
> Maybe start of with "Add a helper to abort all tasks from a specific
> (type, user, vmid) tuple. It will be used ...
Will do.
>> This helper is u
On 04/04/2024 17:26, Thomas Lamprecht wrote:
> Oh, and it might be worth mentioning explicitly in the next release notes,
> as it's a change in behavior that could theoretically throw up some
> tooling that depends on the $action not failing due to locking if the
> adapted endpoints returned – albe
On 06/04/2024 10:37, Thomas Lamprecht wrote:
>> Still, right now I think the primary motivation for this overruling
>> feature is to save GUI users some frustration and/or clicks. In this
>> scenario, a user will overrule only their own tasks, which is possible
>> with the current check. What do yo
On 06/04/2024 17:07, Thomas Lamprecht wrote:
> Am 30/01/2024 um 18:10 schrieb Friedrich Weber:
> [...]
>> +raise_param_exc({ 'overrule-shutdown' => "Not applicable for HA
>> resources." })
>> +if $overrule_shutdown;
>
> Thi
ations do not leak to other subclass instances.
Suggested-by: Stefan Sterz
Suggested-by: Thomas Lamprecht
Signed-off-by: Friedrich Weber
---
Notes:
@Thomas, I've added a Suggested-by, feel free to remove/keep as you
prefer.
Changes from v1+v2:
- As suggested by sterzy
On 08/04/2024 14:36, Thomas Lamprecht wrote:
> Am 08/04/2024 um 12:36 schrieb Stefan Sterz:
>> [...]
>> so, this seems like a fix bug a) creates bug b) type of situation...
>> this patch means that editing a pool allows changing the name suddenly,
>> but since we don't support that in the backend,
ations do not leak to other subclass instances.
Suggested-by: Stefan Sterz
Suggested-by: Thomas Lamprecht
Signed-off-by: Friedrich Weber
---
Notes:
@Thomas, I've added a Suggested-by, feel free to remove/keep as you
prefer.
Changes from v3:
- Fix broken pool edit wind
, but this seems preferable to the misery mode going
unnoticed.
[1]
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/x86/kernel/cpu/intel.c?id=727209376f49
Signed-off-by: Friedrich Weber
---
Notes:
With this patch applied, I see a risk that some users will
On 02/04/2024 16:55, Max Carrara wrote:
> Fix #4759: Configure Permissions for ceph-crash.service - Version 5
> ===
Thanks for the v4! Consider this
Tested-by: Friedrich Weber
Details:
- like Maximiliano, removed the v
The new `overrule-shutdown` parameter is boolean and defaults to 0. If
it is 1, all active `qmshutdown` tasks for the same VM (which are
visible to the user/token) are aborted before attempting to stop the
VM.
Passing `overrule-shutdown=1` is forbidden for HA resources.
Signed-off-by: Friedrich
The new `overrule-shutdown` parameter is boolean and defaults to 0. If
it is 1, all active `vzshutdown` tasks for the same CT (which are
visible to the user/token) are aborted before attempting to stop the
CT.
Passing `overrule-shutdown=1` is forbidden for HA resources.
Signed-off-by: Friedrich
can abort any task started by themselves
or one of their API tokens.
The helper is used to overrule any active qmshutdown/vzshutdown tasks
when attempting to stop a VM/CT (if requested).
Signed-off-by: Friedrich Weber
---
Notes:
As the computation of `$can_abort_task` essentially
This way, it can be used to retrieve the current list of tasks.
Signed-off-by: Friedrich Weber
---
Notes:
changes v2 -> v3:
* no changes
new in v2:
* moved fix for pve-cluster-tasks store into its own patch
www/manager6/dc/Tasks.js | 2 +-
1 file changed, 1 insert
patch in two
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=4474
guest-common:
Friedrich Weber (1):
guest helpers: add helper to abort active guest tasks of a certain
type
src/PVE/GuestHelpers.pm | 35 +++
1 file changed, 35 insertions(+)
container:
Fr
convenience feature, both outcomes
seem bearable.
The confirmation message box is now always marked as dangerous (with a
warning sign icon), whereas previously it was only marked dangerous if
the stop was issued from the guest panel, but not when it was issued
from the resource tree command menu.
Signe
On 17/04/2024 13:31, Maximiliano Sandoval wrote:
> [...]
> - $headers->{'Accept-Encoding'} = 'gzip' if ($reqstate->{accept_gzip} &&
> $self->{compression});
> + if ($self->{compression}) {
> + if ($reqstate->{accept_deflate} && $reqstate->{accept_gzip}) {
> + $headers->
ll visible tasks may be
aborted.
Also, add a full-stop that was previously missing.
Signed-off-by: Friedrich Weber
---
PVE/API2/Qemu.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 00cd907..2a349c8 100644
--- a/PVE/API2/Qemu.pm
+++
Add missing spaces and full-stops and wrap strings according to Perl
style guide.
Signed-off-by: Friedrich Weber
---
PVE/API2/Qemu.pm | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 6c9e883..00cd907 100644
--- a/PVE/API2
All patches are optional:
- 1/4 fixes spacing and punctuation in the qmshutdown/qmstop descriptions
- 2/4 rewords the overrule-shutdown description for VMs
- 3/4 is the same change for containers
- 4/4 adds a usage example for qm stop -overrule-shutdown to the docs
qemu-server:
Friedrich Weber
ll visible tasks may be
aborted.
Also, add a full-stop that was previously missing.
Signed-off-by: Friedrich Weber
---
src/PVE/API2/LXC/Status.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/API2/LXC/Status.pm b/src/PVE/API2/LXC/Status.pm
index 08e23b6..2eeecdf 10
Signed-off-by: Friedrich Weber
---
qm.adoc | 7 +++
1 file changed, 7 insertions(+)
diff --git a/qm.adoc b/qm.adoc
index 45e3a57..42c26db 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1839,6 +1839,13 @@ Same as above, but only wait for 40 seconds.
# qm shutdown 300 && qm wait 300 -ti
Linux 8 (almalinux-8-default_20210928_amd64.tar.xz)
- CentOS 7 (centos-7-default_20190926_amd64.tar.xz)
- CentOS 8 Stream (centos-8-stream-default_20220327_amd64.tar.xz)
- Rocky Linux 8 (rockylinux-8-default_20210929_amd64.tar.xz)
Signed-off-by: Friedrich Weber
---
Question: This will cause Setup/CentOS.pm to
When trying to shutdown a hung container with `forceStop=0` (e.g. via
the Web UI), the shutdown task may run indefinitely while holding a
lock on the container config. The reason is that the shutdown
subroutine waits for the LXC command socket to close, even if the
`lxc-stop` command has failed due
On 25/01/2023 09:25, Wolfgang Bumiller wrote:
The general approach is fine, but `run_with_timeout` uses SIGALRM and
messes with signal handlers which is rather inelegant for such a thing,
we should limit its use to when we have no other option (mainly
file-locking).
For this case we can just use
When trying to shutdown a hung container with `forceStop=0` (e.g. via
the Web UI), the shutdown task may run indefinitely while holding a lock
on the container config. The reason is that the shutdown subroutine
waits for the LXC command socket to close, even if the `lxc-stop`
command has failed due
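The direction of the fix can be sketched abstractly — this is not the actual pve-container code; the two callbacks stand in for the real operations:

```python
# Sketch: check whether `lxc-stop` succeeded before waiting on the LXC
# command socket, instead of waiting unconditionally while holding the
# container config lock (which is what made the task hang indefinitely).
def stop_container(run_lxc_stop, wait_for_socket_close):
    if run_lxc_stop() != 0:
        raise RuntimeError("lxc-stop failed; not waiting for command socket")
    wait_for_socket_close()
    return "stopped"
```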
The new `overrule-shutdown` parameter is boolean and defaults to 0. If
it is 1, all active `qmshutdown` tasks by the current user for the same
VM are aborted before attempting to stop the VM.
Passing `overrule-shutdown=1` is forbidden for HA resources.
Signed-off-by: Friedrich Weber
---
PVE
The new `overrule-shutdown` parameter is boolean and defaults to 0. If
it is 1, all active `vzshutdown` tasks by the current user for the same
CT are aborted before attempting to stop the CT.
Passing `overrule-shutdown=1` is forbidden for HA resources.
Signed-off-by: Friedrich Weber
---
src
t in a surviving shutdown task that the user
still needs to abort manually, or a superfluous `override-shutdown=1`
parameter that does not actually abort any tasks. Since "stop
overrules shutdown" is merely a convenience feature, both outcomes
seem bearable.
Signed-off-by: Friedric
e any suggestions?
Since this is my first patch with more than a few lines, I'm especially
happy about feedback regarding coding style, naming, anything. :)
[0]: https://bugzilla.proxmox.com/show_bug.cgi?id=4474
pve-manager:
Friedrich Weber (1):
fix #4474: ui: vm stop: ask if active sh
This helper is used to abort any active qmshutdown/vzshutdown tasks
before attempting to stop a VM/CT (if requested).
Signed-off-by: Friedrich Weber
---
src/PVE/GuestHelpers.pm | 18 ++
1 file changed, 18 insertions(+)
diff --git a/src/PVE/GuestHelpers.pm b/src/PVE
Can confirm this patch fixes the issue, so
Tested-by: Friedrich Weber
Steps to reproduce:
1) Create LDAP realm with default sync settings
2) Edit LDAP realm, go to sync settings tab, enter values in all text
boxes except one (e.g. all except "Group filter")
3) Click OK
On curr
be carried out on a read-only mountpoint.
Hence, exclude bind mountpoints and read-only mountpoints
from trimming.
Signed-off-by: Friedrich Weber
---
src/PVE/CLI/pct.pm | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/src/PVE/CLI/pct.pm b/src/PVE/CLI/pct.pm
index 3ade2ba
As I also missed that feature, I applied the patches to my PVE instance
with pre-existing containers -- all interfaces stayed up as expected,
and disconnecting/reconnecting interfaces for running and stopped
containers via the Web UI worked nicely.
Tested-by: Friedrich Weber
On 22/02/2023
ould be added in the future.
Signed-off-by: Friedrich Weber
---
The warning could of course be even more detailed, e.g., "container uid range
[1000...1009] is already mapped to [101000...101009] by entry 'u 0 10
65536'". But this would require a more sophisticated algorithm, an
logged to the browser console.
Note that this patch only concerns components that use `setProxy` for
changing API endpoints. Other components (e.g. those using
`proxy.setURL` for the same purpose) may be open to similar race
conditions.
Signed-off-by: Friedrich Weber
---
The original report only
lid /etc/pve,
potentially leading to confusing "transport endpoint not connected"
messages in future interactions.
To avoid this, require the user to chdir out of /etc/pve before
running `pvecm add`.
Signed-off-by: Friedrich Weber
---
data/PVE/CLI/pvecm.pm | 6 ++
1 file changed, 6 inser
Tested-by: Friedrich Weber
I think that would be nice to have, e.g. to set noserverino [1] or
actimeo [2] without having to mount manually.
[1]
https://forum.proxmox.com/threads/proxmox-backup-problem.123560/#post-537586
[2]
https://forum.proxmox.com/threads/pve-cifs-connection-timed-out
-by-put-api-return-500.124099/
Signed-off-by: Friedrich Weber
---
To see if we have the same problem for other API endpoints, I ran:
grep -r "['\"]perm['\"][^[]*]" .
in my locally checked-out repos, but found only this single occurrence.
PVE/API2/Qemu.pm | 2 +
Thanks for the review!
On 16/03/2023 14:59, Wolfgang Bumiller wrote:
Both seem a bit excessive to me.
Let's look at the data:
We have a set of ranges consisting of a type, 2 starts and a count.
The types are uids and gids, so we can view those as 2 separate
instances of sets of [ct_start, host_
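Treating uid and gid entries as two independent sets of half-open [start, start+count) intervals, the overlap check sketched above could look like this — illustrative Python, not the actual pve-container Perl, and the example numbers are made up:

```python
# Each idmap entry is (type, ct_start, host_start, count); two entries of
# the same type conflict if their container ranges or their host ranges
# overlap.
def ranges_overlap(a_start, a_count, b_start, b_count):
    return a_start < b_start + b_count and b_start < a_start + a_count

def find_conflicts(entries):
    conflicts = []
    for i, (t1, c1, h1, n1) in enumerate(entries):
        for t2, c2, h2, n2 in entries[i + 1:]:
            if t1 != t2:
                continue  # uid and gid ranges live in separate namespaces
            if (ranges_overlap(c1, n1, c2, n2)
                    or ranges_overlap(h1, n1, h2, n2)):
                conflicts.append(((t1, c1, h1, n1), (t2, c2, h2, n2)))
    return conflicts
```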