The check is rather straightforward - and might help users who
pass devices through to their containers.
Reported in our community forum:
https://forum.proxmox.com/threads/pve-7-0-lxc-intel-quick-sync-passtrough-not-working-anymore.92025/
Signed-off-by: Stoiko Ivanov
---
Tested quickly by pastin
we incorrectly used 'total' as 100% of the to-be-recovered objects here,
but that contains the total number of *bytes*.
rename 'toRecover' to better reflect its meaning and use that as
100% of the objects.
reported by a user:
https://forum.proxmox.com/threads/bug-ceph-recovery-bar-not-showing-percen
so the frontend has the information readily available.
Suggested-by: Thomas Lamprecht
Signed-off-by: Fabian Ebner
---
PVE/API2/Cluster.pm | 12
PVE/Service/pvestatd.pm | 11 +++
2 files changed, 23 insertions(+)
diff --git a/PVE/API2/Cluster.pm b/PVE/API2/Cluster.pm
in
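The arithmetic of the fix above, as a minimal sketch (shell is used here for brevity; the actual patch lives in the Perl backend and ExtJS frontend, and the parameter names below are illustrative, not the real API fields): 100% of the progress bar must be the number of *objects* to recover, never the byte total.

```shell
#!/bin/sh
# Sketch of the corrected progress computation. Names are illustrative:
#   toRecover - objects that needed recovery when recovery started
#   remaining - objects still unrecovered
recovery_percent() {
    toRecover=$1
    remaining=$2
    # nothing to recover means we are already at 100%
    if [ "$toRecover" -eq 0 ]; then
        echo 100
        return
    fi
    echo $(( (toRecover - remaining) * 100 / toRecover ))
}
```

For example, with 200 objects to recover and 50 still outstanding, the bar should show 75% - regardless of how many bytes those objects amount to.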
Hi,
On 07.07.21 11:44, Victor Hooi wrote:
> I recently upgraded from the Proxmox 7.0 beta, to the latest 7.0-8 release.
>
> However, when I try to start a Windows VM that I created before, I now get
> the following error:
>
> Argument "cgroup v1: 1024, cgroup v2: 100" isn't numeric in numeric ge
On 07.07.21 10:47, Dominik Csapak wrote:
> we incorrectly used 'total' as 100% of the to-be-recovered objects here,
> but that contains the total number of *bytes*.
>
> rename 'toRecover' to better reflect its meaning and use that as
> 100% of the objects.
>
> reported by a user:
> https://forum.pro
Shared storages are not scanned for migration either, so they cannot
be problematic in this context. Scanning them could lead to false
positives where the setup actually is completely unproblematic:
https://forum.proxmox.com/threads/proxmox-ve-7-0-released.92007/post-401165
Signed-off-by: Fabian Ebner
---
PVE/
If the same local storage is configured twice with content type
separation, migration in PVE 6 would lead to the volumes being
duplicated. As that would happen for every migration, such an issue
would likely have been noticed already, and in PVE 7 such a configuration
is not problematic for migration anymore.
On 07.07.21 10:44, Stoiko Ivanov wrote:
> The check is rather straightforward - and might help users who
> pass devices through to their containers.
>
> Reported in our community forum:
> https://forum.proxmox.com/threads/pve-7-0-lxc-intel-quick-sync-passtrough-not-working-anymore.92025/
>
> Sign
On 7/7/21 12:19 PM, Thomas Lamprecht wrote:
On 07.07.21 10:47, Dominik Csapak wrote:
we incorrectly used 'total' as 100% of the to-be-recovered objects here,
but that contains the total number of *bytes*.
rename 'toRecover' to better reflect its meaning and use that as
100% of the objects.
report
Reported in the community forum[0]. Also tried with LVM-thin, but it
doesn't seem to be affected.
See also 628937f53acde52f7257ca79f574c87a45f392e7 for the same fix for
krbd.
[0]:
https://forum.proxmox.com/threads/after-upgrade-to-7-0-all-vms-dont-boot.92019/post-401017
Signed-off-by: Fabian Eb
It needs to be a 'proxmoxButton' to get activated when selecting a HA
resource. This was lost during the last code cleanup, commit a69e943.
Signed-off-by: Aaron Lauterer
---
www/manager6/ha/Resources.js | 1 +
1 file changed, 1 insertion(+)
diff --git a/www/manager6/ha/Resources.js b/www/manag
we filtered out devices which belong to the 'Generic System Peripheral'
category, but this category can contain actually useful PCI devices
users want to pass through, so simply do not filter it out by default.
Signed-off-by: Dominik Csapak
---
PVE/API2/Hardware/PCI.pm | 5 ++---
1 file changed, 2 insertions
we do not have a 'verify' field here, so the onGetValues override
falsely sent 'delete: verify' on every edit.
While our API is OK with that, it's better to remove it.
Signed-off-by: Dominik Csapak
---
www/manager6/dc/AuthEditOpenId.js | 13 -
1 file changed, 13 deletions(-)
diff --g
On 07.07.21 13:36, Aaron Lauterer wrote:
> It needs to be a 'proxmoxButton' to get activated when selecting a HA
> resource. This was lost during the last code cleanup, commit a69e943.
>
> Signed-off-by: Aaron Lauterer
> ---
> www/manager6/ha/Resources.js | 1 +
> 1 file changed, 1 insertion(+)
On 07.07.21 13:23, Dominik Csapak wrote:
> On 7/7/21 12:19 PM, Thomas Lamprecht wrote:
>> On 07.07.21 10:47, Dominik Csapak wrote:
>>> diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
>>> index e92c698b..52563605 100644
>>> --- a/www/manager6/ceph/Status.js
>>> +++ b/www/manag
On 7/7/21 2:24 PM, Thomas Lamprecht wrote:
On 07.07.21 13:23, Dominik Csapak wrote:
On 7/7/21 12:19 PM, Thomas Lamprecht wrote:
On 07.07.21 10:47, Dominik Csapak wrote:
diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
index e92c698b..52563605 100644
--- a/www/manager6/cep
On 07.07.21 14:30, Dominik Csapak wrote:
> On 7/7/21 2:24 PM, Thomas Lamprecht wrote:
>> On 07.07.21 13:23, Dominik Csapak wrote:
>>> On 7/7/21 12:19 PM, Thomas Lamprecht wrote:
On 07.07.21 10:47, Dominik Csapak wrote:
> diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.j
On 07.07.21 13:34, Victor Hooi wrote:
> Do you know roughly how long that will take to hit the repositories?
it should have already been available by the time I wrote my reply.
off-topic: We switched all mailing lists over from pve.proxmox.com to their own
host at lists.proxmox.com a bit ago, while mail
we incorrectly used 'total' as 100% of the to-be-recovered objects here,
but that contains the total number of *bytes*.
rename 'toRecover' to better reflect that the unit is 'objects' and
use that as the total
reported by a user:
https://forum.proxmox.com/threads/bug-ceph-recovery-bar-not-showing-perce
On 07.07.21 13:28, Fabian Ebner wrote:
> Reported in the community forum[0]. Also tried with LVM-thin, but it
> doesn't seem to be affected.
>
> See also 628937f53acde52f7257ca79f574c87a45f392e7 for the same fix for
> krbd.
>
> [0]:
> https://forum.proxmox.com/threads/after-upgrade-to-7-0-all-vm
Hi,
I recently upgraded from the Proxmox 7.0 beta, to the latest 7.0-8 release.
However, when I try to start a Windows VM that I created before, I now get
the following error:
Argument "cgroup v1: 1024, cgroup v2: 100" isn't numeric in numeric ge (>=)
> at /usr/share/perl5/PVE/QemuServer.pm line
Gotcha - thanks for the quick fix!
I am using the pvetest repository.
Do you know roughly how long that will take to hit the repositories?
(I just did an apt update, and it doesn't seem to have picked up a new
qemu-server version yet).
On Wed, Jul 7, 2021 at 8:10 PM Thomas Lamprecht
wrote:
>
On 07.07.21 14:49, Dominik Csapak wrote:
> we incorrectly used 'total' as 100% of the to-be-recovered objects here,
> but that contains the total number of *bytes*.
>
> rename 'toRecover' to better reflect that the unit is 'objects' and
> use that as total
>
> reported by a user:
> https://forum.pr
On 06.07.21 14:31, Fabian Ebner wrote:
> since the pattern for the suite changed.
>
> Signed-off-by: Fabian Ebner
> ---
> PVE/CLI/pve6to7.pm | 71 ++
> 1 file changed, 71 insertions(+)
>
>
applied, thanks!
On 07.07.21 13:41, Dominik Csapak wrote:
> we filtered out devices which belong to the 'Generic System Peripheral'
> category, but this category can contain actually useful PCI devices
> users want to pass through, so simply do not filter it out by default.
>
> Signed-off-by: Dominik Csapak
> ---
> PVE/API2/
On 06.07.21 14:04, Fabian Grünbichler wrote:
> Signed-off-by: Fabian Grünbichler
> ---
> debian/changelog | 6 ++
> debian/proxmox-archive-keyring.install | 1 -
> debian/proxmox-archive-keyring.maintscript | 1 +
> debian/proxmox-release-stretch.gpg
in certain cases the postinst script of grub-pc runs grub-install on
the disks it gets from debconf. Simply warn and exit with 0 if
grub-install is called by dpkg and from a grub-related package.
Signed-off-by: Stoiko Ivanov
---
bin/grub-install-wrapper | 6 ++
1 file changed, 6 insertions(+)
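A minimal sketch of the guard described above, under the assumption that the wrapper relies on `DPKG_MAINTSCRIPT_PACKAGE` (which dpkg exports to maintainer scripts) to detect the situation - the real wrapper may well check differently:

```shell
#!/bin/sh
# Return success when grub-install was (transitively) invoked by dpkg
# while processing a grub-* package's maintainer script.
called_by_grub_pkg() {
    case "${DPKG_MAINTSCRIPT_PACKAGE:-}" in
        grub*) return 0 ;;
        *)     return 1 ;;
    esac
}

# Warn and exit 0 instead of writing a boot loader to the disks.
if called_by_grub_pkg; then
    echo "warning: grub-install called by dpkg, skipping" >&2
    exit 0
fi
# ... the wrapper's real grub-install logic would follow here ...
```

Exiting 0 (rather than failing) matters here: a non-zero exit would abort the package's postinst and thereby the whole distribution upgrade.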
The following patchset addresses a few small issues reported during the PVE
7.0 beta and after the 7.0 stable release.
* patches 1+2 deal with grub-install being called during a distribution
upgrade on some systems (I did not manage to get a VM installed with PVE
6.4 to run into the issue)
* p
most support questions w.r.t. proxmox-boot-tool have us
asking for `stat /sys/firmware/efi` output anyway
Signed-off-by: Stoiko Ivanov
---
bin/proxmox-boot-tool | 5 +
1 file changed, 5 insertions(+)
diff --git a/bin/proxmox-boot-tool b/bin/proxmox-boot-tool
index 079fa26..1e984d6 10075
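The check behind this is small: on Linux, the presence of `/sys/firmware/efi` distinguishes a UEFI boot from a legacy BIOS boot. A sketch (the message wording is assumed, not the actual proxmox-boot-tool output; the path is parameterized only so the helper is testable):

```shell
#!/bin/sh
# Report the current boot mode based on the efi sysfs directory.
# $1: path to check (normally /sys/firmware/efi)
boot_mode() {
    if [ -d "$1" ]; then
        echo "uefi"
    else
        echo "legacy bios"
    fi
}

boot_mode /sys/firmware/efi
```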
gives a better overview in case the system was switched at one time
from uefi to legacy (or the other way around).
Signed-off-by: Stoiko Ivanov
---
bin/proxmox-boot-tool | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/bin/proxmox-boot-tool b/bin/proxmox-boot-tool
inde
This way all ESPs (in case of a legacy booted system) get an
updated grub installation.
running only once between reboots (the marker file is in /tmp) should
be enough. Sadly the environment does not provide a hint as to which
grub version is being installed.
Signed-off-by: Stoiko Ivanov
---
bin/grub-ins
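The run-once mechanism described above can be sketched as follows (marker path, messages, and the helper name are illustrative; the real work of reinstalling grub on each ESP is elided):

```shell
#!/bin/sh
# Run the grub sync at most once between reboots: /tmp is cleared on
# boot, so an existing marker file means we already ran since then.
sync_grub_once() {
    marker="$1"
    if [ -e "$marker" ]; then
        echo "already synced since last boot"
        return 0
    fi
    echo "installing grub on all ESPs"   # placeholder for the real work
    touch "$marker"
}
```

Using /tmp as the marker location is what gives the "once per boot" semantics for free - no explicit cleanup hook on reboot is needed.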
Deciding whether or not to add the diversion based on the version
alone fails quite hard in case pve-kernel-helper is in dpkg state 'rc'
(removed, not purged), as reported in our community forum[0]:
* removing pve-kernel-helper removes the diversion of grub-install
* if config-files are still present