On 16/03/2023 17:09, Wolfgang Bumiller wrote:
> Since it's about helping out users, even better would be to collect all
> the errors together and then die() with a message containing all of
> them.
> And then the order doesn't matter again ;-)
Agree. :) So I'll prepare a v2 that
* is based on the
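For illustration, the collect-then-die approach could look roughly like
this in Perl (hypothetical helper and config shape, not the actual
patch):
```
use strict;
use warnings;

# Hypothetical per-directory check that dies on a bad entry.
sub check_dir {
    my ($vtype, $dir) = @_;
    die "content type '$vtype': directory '$dir' must be relative\n"
        if $dir =~ m!^/!;
}

my $content_dirs = { iso => '/absolute/iso', backup => 'dump' };

# Collect all errors first, then fail once with the full list.
my @errors;
for my $vtype (sort keys %$content_dirs) {
    eval { check_dir($vtype, $content_dirs->{$vtype}) };
    push @errors, $@ if $@;
}
die "misconfigured content-dirs:\n" . join('', @errors) if @errors;
```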
This prevents strange interactions in case the same content directory
is used for multiple content types.
Signed-off-by: Friedrich Weber
---
I guess technically this is a breaking change, as users may have an
iso+vztmpl storage that symlinks 'templates/iso' to 'templates/cache'
The config option is currently not exposed in the GUI.
Signed-off-by: Friedrich Weber
---
qm.adoc | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/qm.adoc b/qm.adoc
index 15ac2fc..0726b29 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -1089,9 +1089,8 @@ Writer VSS module in a
Tested-by: Friedrich Weber
Tested the following:
* PVE 7.3: setup LDAP realms
realm #1 with `base_dn ou=Foo- und Bar,dc=example,dc=com`
realm #2 with `base_dn ou=Users,dc=example,dc=com`
both work, i.e., sync is possible and users can log in
* Update to 7.4:
realm #1: users cannot log in
modifies the override such that it has no effect in the
mobile UI.
Fixes: 51083ee54aa98af5a711622e4ed240840dcbbabe
Suggested-by: Dominik Csapak
Signed-off-by: Friedrich Weber
---
src/Utils.js | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/Utils.js b/src/Utils.js
index
Tested-by: Friedrich Weber
Checked the following:
* PVE 7.3: create VM with a passed-through disk, backup works
* Upgrade to PVE 7.4: backup fails with "no storage ID specified" error
* Applied this patch: backup works again
Tested with backup to local directory as w
Fixes: 3e3faddb4a3792557351f1a6e9f2685e4713eb24
Link: https://forum.proxmox.com/threads/125411/
Signed-off-by: Friedrich Weber
---
src/PVE/APIServer/AnyEvent.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/APIServer/AnyEvent.pm b/src/PVE/APIServer/AnyEvent.pm
index 1f9952f
I tested this (also discussed off-list, posting here for the record):
Creating a new VM with the default display (no `vga` config entry),
then enabling the "Use noVNC clipboard" option via the Web UI results in
a "vga type is not compatible with clipboard (500)" error. It works if I
explicitly set
Can confirm the "vga type is not compatible ..." issue from v5 is gone,
and I do like the new `vnc_clipboard` name -- thanks!
I noticed two small glitches, though I'm not sure if they are that
important.
Glitch #1:
1) Create VM with default VGA options: Hardware->Display shows "Default"
2) Enable
On 20/04/2023 18:12, Friedrich Weber wrote:
> Glitch #2:
> 1) Debian VM with XFCE and spice-vdagent 0.20.0, also install xclip
> 2) Type "x" in noVNC clipboard window
> 3) Inside the VM, run:
> ```
> $ xclip -o -selection clipboard | xxd
> 00000000: 7800
Can confirm the UI glitch from v6 (the empty "Display" option) is gone
and the noVNC clipboard works (browser->VM and vice-versa) -- modulo the
additional null byte issue which cannot be addressed in this patch
series (see my v6 review for more details).
Tested-by: Friedrich Weber
if misconfigured, could still prevent the container from
starting with an error like
"newuidmap: uid range [1000-1010) -> [1000-1010) not allowed"
If needed, validating /etc/sub{uid,gid} could be added in the future.
Signed-off-by: Friedrich Weber
---
Notes:
Changes from v1:
26398/post-552807
[2]: https://bugzilla.proxmox.com/show_bug.cgi?id=3502
Signed-off-by: Friedrich Weber
---
Notes:
An alternative workaround is offered by an unapplied patch series [3]
of bug #3502 [2] that makes it possible to set VM-specific timeouts
(also in the GUI). Users could use t
Thanks for the review!
On 12/05/2023 11:19, Wolfgang Bumiller wrote:
>> So to report *all* conflicts, we'd need an algorithm that keeps track
>> of all currently active intervals while iterating. I'm open for
>
> ^ Only the end really.
I think you're right, we only need to track the endpoint.
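For the record, a sketch of that sweep (tracking only the largest
endpoint over [start, end) ranges sorted by start; not the actual
patch):
```
use strict;
use warnings;

my @intervals = ([1000, 1010], [1005, 1020], [2000, 2005]);

# Any range that starts before the largest end seen so far overlaps
# some earlier range, so every conflict gets reported, not just the
# first one.
my $max_end = -1;
for my $iv (sort { $a->[0] <=> $b->[0] } @intervals) {
    my ($start, $end) = @$iv;
    warn "conflict: [$start, $end) overlaps an earlier range\n"
        if $start < $max_end;
    $max_end = $end if $end > $max_end;
}
```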
if misconfigured, could still prevent the container from
starting with an error like
"newuidmap: uid range [1000-1010) -> [1000-1010) not allowed"
If needed, validating /etc/sub{uid,gid} could be added in the future.
Signed-off-by: Friedrich Weber
---
Notes:
Changes from v2:
Ping -- I think this would be quite useful.
On 13/03/2023 10:38, Friedrich Weber wrote:
> Tested-by: Friedrich Weber
>
> I think that would be nice to have, e.g. to set noserverino [1] or
> actimeo [2] without having to mount manually.
>
> [1]
> https://forum.proxmox.com/t
On 06/06/2023 17:28, Thomas Lamprecht wrote:
> Well, then lets apply this for upcoming 8, maybe you can add a note to
> our breaking changes list so that users are aware.
Done!
> Also, checking this upfront and erroring in pve7to8 checker script should
> not be that hard and def. help some to avo
Using a directory for multiple content types will throw an error in
PVE 8 (see 5f4b5bd1 in pve-storage). Hence, detect this in pve7to8 for
active storages and warn if needed.
Signed-off-by: Friedrich Weber
---
PVE/CLI/pve7to8.pm | 39 +++
1 file changed, 39
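The detection boils down to finding directories claimed by more than
one content type. A minimal sketch (assumed config shape, not the
actual pve7to8 code):
```
use strict;
use warnings;

sub check_content_dirs {
    my ($storeid, $content_dirs) = @_;
    my %first;    # directory => first content type that claimed it
    for my $vtype (sort keys %$content_dirs) {
        my $dir = $content_dirs->{$vtype};
        if (my $other = $first{$dir}) {
            warn "storage '$storeid': content types '$other' and '$vtype'"
                . " share directory '$dir' - not allowed in PVE 8\n";
        } else {
            $first{$dir} = $vtype;
        }
    }
}

check_content_dirs('foo', { backup => 'bar', iso => 'bar' });
```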
On 07/06/2023 12:01, Fiona Ebner wrote:
> Should we also check in the create/update API calls for syntactic
> duplicates and fail the call? E.g. I can successfully issue:
> pvesh set /storage/foo --content-dirs backup=bar,iso=bar
> and only get the error later during activation.
Not allowing users
the directory during the inequality check.
Signed-off-by: Friedrich Weber
---
src/PVE/Storage/Plugin.pm | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
index ab6b675..3f9f1ec 100644
--- a/src/PVE/Storage/Plugin.pm
the behavior of the actual content-dirs check in PVE 8 [0].
[0]: https://git.proxmox.com/?p=pve-storage.git;a=commit;h=09f1f847a
Fixes: ea0a4f1943ffafe94282afc800d5720db45df198
Signed-off-by: Friedrich Weber
---
PVE/CLI/pve7to8.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/PVE/CLI
Started testing, I'll continue on Monday.
Noticed so far:
* Small UI glitch:
On node 1, click PCI->Add Devices
Choose a PCI device that also exists on node 2
Select node 2 in the upper-right corner
Click "Create"
-> Mapping entry is added for node 1, where I would have expected node 2
*
behavior is kept for now to avoid issues in cluster
upgrade scenarios, where some nodes that still rely on the "x" marker
could allow logins without a second factor.
[1] https://forum.proxmox.com/threads/130440/
Suggested-by: Wolfgang Bumiller
Signed-off-by: Friedrich Weber
---
src/PVE/
The 7->8 upgrade guide [1] mentions that cgroup v1 will be deprecated
starting from PVE 9.0, so also mention this in the docs.
[1] https://pve.proxmox.com/wiki/Upgrade_from_7_to_8#cgroup_V1_Deprecation
Signed-off-by: Friedrich Weber
---
pct.adoc | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
Tested the following:
* created a mapping on a 3-node cluster, added mapping to PVE8 VM,
offline-migrated VM between cluster nodes, checked that `mount` inside
the VM mounts the correct host directory
* checked that `xattr=1` makes xattrs available in the guest, and
`acl=1` makes acls available i
Suggested-by: Wolfgang Bumiller
Signed-off-by: Friedrich Weber
---
Notes:
The challenge generation step could still be improved by making sure
that the generated challenge matches the realm's TFA type (TOTP or
Yubico), while also taking recovery keys into account. However, th
Tested against slapd 2.4.47+dfsg-3+deb10u6. I quite like the connection
check when creating/updating the realm, and also, it seems sensible to
delegate DN validation to Net::LDAP.
I noticed one bug: Weirdly, updating the realm via CLI or manually via
API now errors out for me (the connection detai
cloudinit pending`,
as it also relies on $cloudinitoptions.
This issue was originally reported in the forum [0].
Also add a comment to avoid similar issues when adding new options in
the future.
[0]: https://forum.proxmox.com/threads/131043/
Signed-off-by: Friedrich Weber
---
Notes:
Not sure if
ping (patch still applies)
On 13/03/2023 13:56, Friedrich Weber wrote:
> Trying to regenerate a cloudinit drive as a non-root user via the API
> currently throws a Perl error, as reported in the forum [1]. This is
> due to a type mismatch in the permission check, where a string is
> p
Tested against slapd 2.4.47+dfsg-3+deb10u6 again, also with a base DN
with escaped UTF-8 -- connection check and authentication worked fine.
Also tested that updating the realm via API/pveum works now, thanks for
fixing this!
Tested-by: Friedrich Weber
On 24/07/2023 11:03, Christoph Heiss wrote:
Signed-off-by: Friedrich Weber
---
www/manager6/Utils.js | 9 ++---
www/manager6/dc/BackupJobDetail.js | 1 +
www/manager6/dc/PCIMapView.js | 2 +-
www/manager6/dc/USBMapView.js | 2 +-
www/manager6/form/PCIMapSelector.js | 1 +
www/manager6/form/USBMapSelector.js
These two patches add some `htmlEncode` calls/renderers that
had been missing to proxmox-widget-toolkit and pve-manager.
Each patch can be individually applied.
widget-toolkit:
Friedrich Weber (1):
ui: add some missing `htmlEncode`s
src/form/NetworkSelector.js | 1 +
src/node
Signed-off-by: Friedrich Weber
---
src/form/NetworkSelector.js | 1 +
src/node/APTRepositories.js | 1 +
2 files changed, 2 insertions(+)
diff --git a/src/form/NetworkSelector.js b/src/form/NetworkSelector.js
index 86d394d..ed3a02b 100644
--- a/src/form/NetworkSelector.js
+++ b/src/form
On 25/07/2023 11:54, Thomas Lamprecht wrote:
> How about a get_vm_user_cloudinit_options helper located directly
> below the format definition, filtering out those keys that do not
> make sense, or are off limits, and use that?
The helper sounds good! AFAICT, $cloudinitoptions contains exactly all
happens for quite some users during the
upgrade from PVE 7 to 8.
Further, describe how to pin a specific naming scheme version and how
to override interface names using systemd.link files.
Also, make some formatting fixes to the existing text.
Signed-off-by: Friedrich Weber
---
pve-network.adoc
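For reference, such an override via a systemd.link file typically looks
like this (example values, see `man systemd.link`):
```
# /etc/systemd/network/10-custom-name.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=net0
```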
for "custom" names.
So to maybe start a discussion on this idea, this series comes with two
additional RFC patches: Patch #2 changes the documentation from patch #1
accordingly, and patch #3 adds the `c*` pattern.
docs:
Friedrich Weber (2):
fix #4847: network: extend section on interf
w users to choose meaningful interface names and still
configure these NICs via the GUI, recognize interfaces matching `c*`
(for "custom") as physical NICs.
Suggested-by: Aaron Lauterer
Signed-off-by: Friedrich Weber
---
src/PVE/Network.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Signed-off-by: Friedrich Weber
---
pve-network.adoc | 20 +---
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/pve-network.adoc b/pve-network.adoc
index b2202cc..bd6a15d 100644
--- a/pve-network.adoc
+++ b/pve-network.adoc
@@ -71,6 +71,9 @@ We currently use the
As described, I made a PBS backup of an existing VM and then took a
snapshot. Rolling back to the snapshot failed with "qemu:
qemu_mutex_unlock_impl: Operation not permitted". With this patch
applied, rolling back worked.
Tested-by: Friedrich Weber
On 28/07/2023 11:44, Fiona Ebner wrote:
On 21/08/2023 10:33, Fiona Ebner wrote:
> Would it make sense to instead add a constant multiplier to the memory
> timeout heuristic in presence of PCI passthrough? The user says 65 GiB
> takes about 3 min 30 s, so assuming it's more or less linear, the 5 min
> from this patch would not be enough f
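Rough arithmetic, assuming it scales linearly as suggested: 65 GiB in
about 210 s is roughly 0.31 GiB/s, so a flat 300 s timeout would only
cover about 93 GiB of passed-through memory.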
Lost track of this a bit, reviving due to user interest [1].
As the series does not apply anymore, I'll send a new version in any
case, but wanted to ask for feedback before I do.
My questions from the cover letter still apply:
On 26/01/2023 09:32, Friedrich Weber wrote:
> * Does it ma
Is it possible that this pve-docs patch got lost and was not actually
applied? At least I don't see it here:
https://git.proxmox.com/?p=pve-docs.git;a=history;f=pve-storage-cifs.adoc;h=df63b58d6eefc5c7e2ee302e4ac57fa52c8c372e;hb=0aa61e04787ca6ac791fe6bce28686c9a9fc9ade
... whereas the pve-storage
Tested-by: Friedrich Weber
Tested the patched ISO provided by Stoiko:
* installed in legacy VM
** checked that `grub-pc` is installed
** re-installing it prints "Installing for i386-pc platform"
* installed in UEFI VM
** checked that `grub-efi-amd64` is installed
** re-installing
Ping due to user interest [1]. The patch still applies (though the repo
in the subject should be `pve-docs`), and AFAICT the feedback for v8 has
been addressed.
[1] https://forum.proxmox.com/threads/134202/#post-592682
On 20/07/2023 11:32, Noel Ullreich wrote:
> A little update to the PCI(e) docs
[0]: https://forum.proxmox.com/threads/83765/post-552071
[1]: https://forum.proxmox.com/threads/126398/post-592826
[2]: https://bugzilla.proxmox.com/show_bug.cgi?id=3502
Suggested-by: Fiona Ebner
Signed-off-by: Friedrich Weber
---
Notes:
changes since v1 (was called "vm start: set minimum timeout of 300s if
On 04/10/2023 14:05, Stoiko Ivanov wrote:
> diff --git a/src/proxmox-boot/zz-proxmox-boot
> b/src/proxmox-boot/zz-proxmox-boot
> index 1adc1b1..0d08dbf 100755
> --- a/src/proxmox-boot/zz-proxmox-boot
> +++ b/src/proxmox-boot/zz-proxmox-boot
> @@ -215,6 +215,23 @@ disable_systemd_boot_hook() {
>
On 04/10/2023 14:05, Stoiko Ivanov wrote:
> +} elsif ( ! -f "/usr/share/doc/grub-efi-amd64/changelog.Debian.gz" ) {
> + log_warn(
> + "System booted in uefi mode but grub-efi-amd64 meta-package not
> installed"
> + . " new grub versions will not be installed to /boot/efi -"
potential for confusion as much as possible.
The exact phrasing aside, consider this:
Tested-by: Friedrich Weber
Can confirm that with this patch,
* pve7to8 prints the warning on UEFI-booted system with root on LVM and
grub-pc installed
* pve7to8 does *not* print the warning on
** the same syst
Tested-by: Friedrich Weber
Can confirm that with this patch,
* the warning appears after installing a new kernel on a UEFI-booted
system with root on LVM
* the warning does *not* appear after installing a new kernel on
** a UEFI-booted system with root on ZFS (using systemd-boot)
** a legacy
On 11/10/2023 11:39, Friedrich Weber wrote:
> Can confirm that with this patch,
>
> * the warning appears after installing a new kernel on a UEFI-booted
> system with root on LVM
Just to be clear: This is if grub-pc is installed. If I install
grub-efi-amd64 instead, the warning doe
Looks good to me, thanks a lot!
Tested-by: Friedrich Weber
Reviewed-by: Friedrich Weber
Fixes: 96c0261 ("fix #4847: network: extend section on interface naming scheme")
Signed-off-by: Friedrich Weber
---
pve-network.adoc | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/pve-network.adoc b/pve-network.adoc
index ef586ec..8e5fa1c 100644
--- a/pve-network.adoc
forum [0] and in #5429 [1].
[0] https://forum.proxmox.com/threads/144557/post-656188
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=5429
Fixes: 96c0261 ("fix #4847: network: extend section on interface naming scheme")
Signed-off-by: Friedrich Weber
---
Notes:
Change
interface B to the same name X (it will fail with "File
exists").
To avoid this confusion, mention the link files are copied to the
initramfs, and suggest updating the initramfs after making changes to
the link files.
Suggested-by: Hannes Laimer
Signed-off-by: Friedrich Weber
[2]
https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/qemu/{vmid}/sendkey
Suggested-by: Fabian Grünbichler
Signed-off-by: Friedrich Weber
---
src/PVE/APIServer/AnyEvent.pm | 19 ++-
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/src/PVE/APIServer/A
patch as RFC already.
On 17/06/2024 18:03, Friedrich Weber wrote:
> The API server proxies HTTP requests in two cases:
>
> - between cluster nodes (pveproxy->pveproxy)
> - between daemons on one node for protected API endpoints
> (pveproxy->pvedaemon)
>
> The AP
issues with the networking of upgraded hosts.
Thanks for tackling this! Consider this
Tested-by: Friedrich Weber
I tested the following:
- Set up a PVE8 VM with an active-backup bond, from /etc/network/interfaces:
> auto bond0
> iface bond0 inet manual
> bond-slaves ens18 ens19
>
On 09/07/2024 17:12, Mira Limbeck wrote:
> cloudbase-init, a cloud-init reimplementation for Windows, supports only
> a subset of the configuration options of cloud-init. Some features
> depend on support by the Metadata Service (ConfigDrive2 here) and have
> further limitations [0].
>
> To suppor
Igor sent a v2:
https://lore.proxmox.com/pve-devel/mailman.73.1722428094.302.pve-de...@lists.proxmox.com/T/#u
VM.Config.Disk in the frontend
instead of Sys.Console.
Reported in enterprise support.
Signed-off-by: Friedrich Weber
---
www/manager6/qemu/HardwareView.js | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/www/manager6/qemu/HardwareView.js b/www/manager6/qemu/HardwareView.js
discovery.
With this setting, one login attempt should take at most 15 seconds.
This is still higher than pvestatd's iteration time of 10 seconds, but
more tolerable. Logins will still be continuously retried by pvestatd
in every iteration until there is a session to each discovered portal.
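A sketch of the mechanism (the parameter and value are my assumption,
based on open-iscsi's `node.session.initial_login_retry_max` default of
8, which together with the 15 s `login_timeout` lets a login block for
roughly 2 minutes):
```
use PVE::Tools qw(run_command);

# Disable per-login retries so one attempt fails after a single
# login_timeout (15s). Example target/portal values.
my ($target, $portal) = ('iqn.2024-01.com.example:t1', '192.0.2.1:3260');
run_command([
    'iscsiadm', '--mode', 'node',
    '--targetname', $target, '--portal', $portal,
    '--op', 'update',
    '--name', 'node.session.initial_login_retry_max', '--value', '0',
]);
```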
Thanks for the test and review!
On 11/10/2024 13:20, Mira Limbeck wrote:
> [...]
>> --- a/src/PVE/Storage/ISCSIPlugin.pm
>> +++ b/src/PVE/Storage/ISCSIPlugin.pm
>> @@ -132,6 +132,14 @@ sub iscsi_login {
>> eval { iscsi_discovery($portals); };
>> warn $@ if $@;
>>
>> +# Disable retr
On 20/06/2024 09:45, Thomas Lamprecht wrote:
> Nice work and write up!
>
> Acked-by: Thomas Lamprecht
>
> But yeah, seeing some benchmarking for before/after this patch would still be
> great, that's also the main reason for me not applying this now already.
sent a v2 with some benchmarking:
h
world workloads I'd expect the response time
for non-idempotent requests to be dominated by other factors.
[1] https://metacpan.org/pod/AnyEvent::HTTP#persistent-=%3E-$boolean
[2]
https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/qemu/{vmid}/sendkey
[3] https://github.com
I think having recent boot timestamps and kernel versions in the report
would be nice, I can think of some situations where having this info
available upfront would have sped things up.
I just checked, the patch still applies cleanly.
On 19/04/2024 10:56, Mira Limbeck wrote:
> [...]
> The kernel
Hi Ivan,
On 03/10/2024 10:52, Ivan Dimitrov wrote:
> - It should be possible to override not only shutdown but also restart tasks
> - The same option should be available for the Reset dialog as well.
> - The Reset should also override shutdown and restart tasks
The primary place to keep track of
On 29/10/2024 14:58, Aaron Lauterer wrote:
> Does what it claims to do, setting the parameter `rxbounce` when mapping
> the RBD disk.
>
> Therefore:
>
> Tested-By: Aaron Lauterer
Thanks for testing!
>> diff --git a/src/PVE/Storage/Plugin.pm b/src/PVE/Storage/Plugin.pm
>> index 8cc693c..02be257
e, `iscsi_portals` returns an empty array of portals, so the
connectivity check fails and node 2 never performs discovery for B.
To fix this, let `iscsi_portals` also return the portal from B's
storage config if iscsiadm exited cleanly but its output contained no
matching portal.
Signed-off-by: Friedrich Weber
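In other words, roughly (assumed data shapes, not the actual hunk):
```
# $scfg: storage config; $discovered: parsed iscsiadm discovery output
my $scfg = { portal => '192.0.2.1', target => 'iqn.2024-01.com.example:t1' };
my $discovered = [];    # discovery ran cleanly but listed nothing

my @portals = map { $_->{portal} }
    grep { $_->{target} eq $scfg->{target} } @$discovered;
# Fall back to the configured portal instead of returning an empty
# list, so the connectivity check can still probe it.
@portals = ($scfg->{portal}) if !@portals;
```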
On 30/10/2024 09:41, Thomas Lamprecht wrote:
> Am 25/10/2024 um 13:13 schrieb Friedrich Weber:
>> When KRBD is enabled for an RBD storage, the storage plugin calls out
>> to `rbd map` to map an RBD image as a block device on the host.
>> Sometimes it might be necessary to p
support (yet), they can alternatively set the RBD config option
`rbd_default_map_options` [2].
[1] https://forum.proxmox.com/threads/155741/
[2]
https://github.com/ceph/ceph/blob/b2a4bd840/src/common/options/rbd.yaml.in#L507
Signed-off-by: Friedrich Weber
---
src/PVE/Storage/Plugin.pm | 10
in the GUI.
See patch #1 for more details. Patch #2 adds a section to the docs.
storage:
Friedrich Weber (1):
fix #5779: rbd: allow to pass custom krbd map options
src/PVE/Storage/Plugin.pm | 10 ++
src/PVE/Storage/RBDPlugin.pm | 14 +-
2 files changed, 23 insertions
Describe the new `krbd-map-options` property, and mention under which
circumstances the `rxbounce` option may be necessary.
Signed-off-by: Friedrich Weber
---
pve-storage-rbd.adoc | 10 ++
1 file changed, 10 insertions(+)
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index
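Illustrative storage.cfg entry using the new property (storage and pool
names made up, exact syntax per the patch):
```
rbd: my-rbd-storage
    pool rbd
    krbd 1
    krbd-map-options rxbounce
```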
On 23/09/2024 11:17, Dominik Csapak wrote:
> [...]
> so i did some benchmarks (mostly disk writes) and wrote the short script
> below
> (maybe we can reuse that?)
>
> 8<
> use strict;
> use warnings;
>
> use PVE::Tools;
>
> my $size = shift;
>
> sub get_bytes_written {
> my $fh
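The quoted script is cut off here; one way to implement such a helper
is reading the per-process I/O counters (a guess at the idea, not
Dominik's actual code):
```
use strict;
use warnings;

# Bytes this process has caused to be written to the storage layer.
sub get_bytes_written {
    open(my $fh, '<', '/proc/self/io') or die "open /proc/self/io: $!\n";
    while (my $line = <$fh>) {
        return $1 if $line =~ /^write_bytes:\s*(\d+)/;
    }
    die "write_bytes not found in /proc/self/io\n";
}

print get_bytes_written(), "\n";
```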
On 16/10/2024 18:47, Daniel Kral wrote:
> Reported in the community forum [0].
>
> This fixes an issue with read/write operations done on ocfs2 with
> io_uring. This has caused QEMU guests to be unable to determine the file
> format at [1] because of an unsuccessful read and therefore could not
>
The two lists were missing the initial empty line and were
consequently rendered as inline text, which made them hard to read.
Signed-off-by: Friedrich Weber
---
pveceph.adoc | 2 ++
1 file changed, 2 insertions(+)
diff --git a/pveceph.adoc b/pveceph.adoc
index a828834..da39e7f 100644
--- a
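For illustration, asciidoc only renders the `*` lines as a list if a
blank line separates them from the preceding paragraph:
```
Some introductory paragraph:

* first item
* second item
```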
On 01/02/2024 09:26, Fiona Ebner wrote:
> Am 31.01.24 um 16:07 schrieb Friedrich Weber:
>> Thanks for the review!
>>
>> On 26/01/2024 12:14, Fiona Ebner wrote:
>>>> Some points to discuss:
>>>>
>>>> * Fabian and I discussed whether it may b
On 11/01/2024 16:03, Friedrich Weber wrote:
> By default, LVM autoactivates LVs after boot. In a cluster with VM disks on a
> shared LVM VG (e.g. on top of iSCSI), this can indirectly cause guest creation
> or VM live-migration to fail. See bug #4997 [1] and patch #2 for details.
>
>
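For context, one plausible mechanism (my assumption, not necessarily
what the series implements) is LVM's per-LV autoactivation flag:
```
use PVE::Tools qw(run_command);

# Mark the LV so boot-time autoactivation skips it. Requires
# lvm2 >= 2.03.12; VG/LV names are made up.
run_command(['/sbin/lvchange', '--setautoactivation', 'n',
    'sharedvg/vm-100-disk-0']);
```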
On 30/10/2024 17:49, Friedrich Weber wrote:
> [...]
>
> Yeah, I see the point.
>
> Of course, another alternative is enabling `rxbounce` unconditionally,
> as initially requested in [1]. I'm a bit hesitant to do that because from
> reading its description I'd exp
Hi, I have two small things that I noticed skimming the series (inline).
On 13/01/2025 09:56, Daniel Herzig wrote:
> Eject by setting file to none.
>
> Signed-off-by: Daniel Herzig
> ---
> www/manager6/qemu/HardwareView.js | 43 +++
> 1 file changed, 43 insertions(+)
OVS
developer submitted a kernel patch which is now included in 6.13 and some
stable kernels. With this patch, the reproducer does not seem to
trigger the issue anymore. Hence, backport the patch.
[1] https://mail.openvswitch.org/pipermail/ovs-discuss/2025-January/053423.html
Signed-off-by: Friedrich Weber
on B
- navigated to GUI of A
- verified I can access the console of C via xterm.js and noVNC.
Without this patch, both show an SSH key verification prompt.
Consider this:
Tested-by: Friedrich Weber
Patch looks good to me too, ran a few tests and didn't see anything
unexpected, hence:
Tested-by: Friedrich Weber
Reviewed-by: Friedrich Weber
Thank you!
on-right" and
also its positioning on the right side, as it's unobtrusive and also
signifies that clicking it will reveal "more" information.
Tested-by: Friedrich Weber
On 25/02/2025 16:47, Aaron Lauterer wrote:
> Some users configure their VMs to use serial as their display. The big
> benefit is that in combination with the xtermjs remote console, copy &
> paste works a lot better than via novnc.
I agree that defaulting to xterm.js in the serial terminal case ma
Hi, thanks for the new version! I think this is shaping up nicely. Some
comments inline below, but only minor ones. So it may make sense to wait
a couple of days for additional comments from others before sending a
new version. I'll also run a few more tests this week and report back.
> don't chec
On 20/03/2025 11:15, Mira Limbeck wrote:
> [...]
>>
>>> + # check session state instead if available
>>> + my $sessions = iscsi_session($cache, $target);
>>> + for my $session ($sessions->@*) {
>>> + next if $session->{portal} ne $portal;
>>> + return iscsi_test_session($session->
On 03/04/2025 16:03, Stefan Hanreich wrote:
>
>
> On 4/3/25 15:44, Friedrich Weber wrote:
>>>> - when removing a fabric, the IP addresses defined on the interfaces
>>>> remain until the next reboot. I guess the reason is that ifupdown2
>>>> doesn
//git.proxmox.com/?p=proxmox.git;a=blob;f=proxmox-notify/src/endpoints/webhook.rs;h=34dbac5488;hb=7abd2da759d#l266
[8]
https://lore.proxmox.com/pve-devel/20240308123535.1500-1-h.lai...@proxmox.com/
Co-authored-by: Maximiliano Sandoval
storage:
Friedrich Weber (1):
fix #3716: api: download from u
respected for
https:// URLs. For example, setups that have a proxy for external
connections, but download e.g. ISO files (only) via https from an
internal repository that the proxy doesn't serve.
Signed-off-by: Friedrich Weber
---
PVE/API2/Nodes.pm | 2 +-
1 file changed, 1 insertion(+), 1 del
rely on http_proxy not being respected for
https:// URLs. For example, setups that have a proxy for external
connections, but download e.g. ISO files (only) via https from an
internal repository that the proxy doesn't serve.
Signed-off-by: Friedrich Weber
---
src/PVE/API2/Storage/Status.p
Sorry, of course I only noticed I messed up the references after hitting
Send:
On 26/03/2025 11:51, Friedrich Weber wrote:
> [...]
>
> Other places in our stack also use the `http_proxy` datacenter option for
> https
> connections, e.g. the ones that use proxmox_http::
On 08/04/2025 18:38, Friedrich Weber wrote:
> Currently, as an unprivileged user with role PVEVMUser the GUI breaks
> with an error after navigating to a VM's hardware tab. The reason is
> that the frontend checks the GUI capabilities via `caps.mapping.hwrng`,
> but `caps.mapping`
On 04/04/2025 18:28, Gabriel Goller wrote:
> This series allows the user to add fabrics such as OpenFabric and OSPF over
> their clusters.
>
> This series relies on:
> https://lore.proxmox.com/pve-devel/20250404135522.2603272-1-s.hanre...@proxmox.com/T/#mf4cf46c066d856cea819ac3e79d115a290f47466
Looks good to me. I was wondering whether enabling frr may have
any side effects on existing setups, but since we only enable it if we
restart anyway, I don't think this should cause any issues.
Consider this
Tested-by: Friedrich Weber
firmware did not change.
To avoid that, quote metacharacters in the directory name.
Signed-off-by: Friedrich Weber
---
debian/scripts/find-firmware.pl | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/debian/scripts/find-firmware.pl b/debian/scripts/find-firmware.pl
index a53223c
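In Perl this is the classic `\Q...\E`/quotemeta idiom; the fix is
presumably along these lines (made-up names):
```
use strict;
use warnings;

# Without \Q...\E, the parentheses in $dir would act as a regex group
# and the anchored match would silently fail.
my $dir  = 'firmware(test)';
my $path = 'firmware(test)/iwlwifi-1000-5.ucode';
print "matched\n" if $path =~ m!^\Q$dir\E/!;
```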
Signed-off-by: Friedrich Weber
---
Notes:
I wasn't actually sure whether `caps` may have such a 2-level structure
in some cases, but it doesn't seem like it. After applying this patch
to pve-manager:
% ag 'caps\.[^\[.]+\.' | wc -l
0
www/manager6/qemu
Makes the definition more amenable to future additions.
No functional change intended.
Signed-off-by: Friedrich Weber
---
Notes:
changes since v2: none
new in v2
src/PVE/Storage/LVMPlugin.pm | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/src/PVE
el/2024050332.733635-1-f.we...@proxmox.com/
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=4997
[2]
https://pve.proxmox.com/mediawiki/index.php?title=Multipath&oldid=12039#%22Device_mismatch_detected%22_warnings
[3]
https://lore.proxmox.com/pve-devel/ad4c806c-234a-4949-885d-8bb369860...@proxm