mostly - drop all patches we had queued up to get kernel 6.8
supported.
Signed-off-by: Stoiko Ivanov
---
...md-unit-for-importing-specific-pools.patch | 4 +-
...-move-manpage-arcstat-1-to-arcstat-8.patch | 2 +-
...-guard-access-to-l2arc-MFU-MRU-stats.patch | 12 +-
...hten-bounds-for-noal
ZFS 2.2.4 added new kstats for speculative prefetch in:
026fe796465e3da7b27d06ef5338634ee6dd30d8
Adapt our patch introduced with ZFS 2.1 (for the MFU/MRU stats added back
then) so that it also handles the newly introduced values being absent
(because an old kernel module does not offer them).
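The actual adaptation is to the Python arc_summary/arcstat scripts; purely as
an illustration of the guarding pattern described above (look the stat up and
fall back to a default instead of failing when an old kernel module does not
export it), here is a minimal Rust sketch with made-up kstat names:

use std::collections::HashMap;

// Hypothetical helper, not the actual patch: return a kstat value, or 0 if
// the loaded (older) kernel module does not export it.
fn kstat_or_default(kstats: &HashMap<String, u64>, name: &str) -> u64 {
    kstats.get(name).copied().unwrap_or(0)
}

fn main() {
    let mut kstats = HashMap::new();
    kstats.insert("prefetch_hits".to_string(), 42u64);
    // "prefetch_mfu_hits" is deliberately absent, as with an old module.
    println!("prefetch hits: {}", kstat_or_default(&kstats, "prefetch_hits"));
    println!("prefetch MFU hits: {}", kstat_or_default(&kstats, "prefetch_mfu_hits"));
}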
Signed-off
v1->v2:
Patch 2/2 (adaptation of arc_summary/arcstat patch) modified:
* right after sending the v1 I saw a report where pinning kernel 6.2 (thus
ZFS 2.1) leads to a similar traceback - which I seem to have overlooked
when packaging 2.2.0 ...
adapted the patch by booting a VM with kernel 6.2 a
10 minutes after sending this I saw a report about pvereport ending in a
Python stacktrace - it took me a while to see that a similar issue is present
between 2.1 and 2.2 - I will send the series again with those changes also
added (this time the method was changing the source until no more
stacktrace
This patchset updates ZFS to the recently released 2.2.4.
We already had about half of the patches in 2.2.3-2, due to the needed
support for kernel 6.8.
Compared to the last 2.2 point releases, this one contains quite a few
potential performance improvements:
* for ZVOL workloads (relevant for qemu
udev properties are very easy to parse; a line-based scan that matches the
prefix and splits each property line once is enough.
This avoids the use of regexes and significantly reduces the binary size,
by about 38%(!).
Tested by comparing the output of `proxmox-auto-install-assistant
device-info`,
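To make the approach concrete, here is a minimal Rust sketch (not the actual
installer code; the "E: " prefix, the sample input and the function name are
assumptions): scan the udev output line by line, keep only property lines,
and split each one once on '=':

fn parse_udev_properties(output: &str) -> Vec<(String, String)> {
    output
        .lines()
        // keep only property lines; the "E: " prefix is assumed here
        .filter_map(|line| line.strip_prefix("E: "))
        .filter_map(|prop| {
            // split only once, so '=' characters inside the value stay intact
            let mut parts = prop.splitn(2, '=');
            Some((parts.next()?.to_string(), parts.next()?.to_string()))
        })
        .collect()
}

fn main() {
    let sample = "P: /devices/pci0000:00/0000:00:1f.2/ata1/host0\n\
                  E: ID_MODEL=SAMPLE_DISK\n\
                  E: ID_SERIAL=SAMPLE_DISK_123\n";
    for (key, value) in parse_udev_properties(sample) {
        println!("{key} = {value}");
    }
}

Compared to pulling in a regex engine for this, the matching stays in a few
lines of std-only code, which is where the binary-size saving comes from.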
The proxmox-auto-install-assistant uses
- glob patterns for disk matching, which can be pre-compiled for
  efficiency (a sketch follows below)
- regexes for udev property matching, which can be replaced by simple
  prefix matching & splitting on '='
The latter also significantly reduces binary size due to the
No functional changes.
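A minimal sketch of the glob pre-compilation mentioned above, assuming the
glob crate as a dependency; the pattern list and function name are
illustrative, not the installer's actual code:

use glob::Pattern;

// Compile the disk-matching globs once instead of re-parsing the pattern
// strings for every device that gets checked.
fn compile_filters(patterns: &[&str]) -> Vec<Pattern> {
    patterns
        .iter()
        .map(|p| Pattern::new(p).expect("invalid glob pattern"))
        .collect()
}

fn main() {
    let filters = compile_filters(&["sd*", "nvme*"]);
    for disk in ["sda", "nvme0n1", "loop0"] {
        let matched = filters.iter().any(|f| f.matches(disk));
        println!("{disk}: {}", if matched { "matched" } else { "skipped" });
    }
}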
Signed-off-by: Christoph Heiss
---
proxmox-auto-install-assistant/Cargo.toml | 1 -
1 file changed, 1 deletion(-)
diff --git a/proxmox-auto-install-assistant/Cargo.toml
b/proxmox-auto-install-assistant/Cargo.toml
index eaca7f8..0286c80 100644
--- a/proxmox-auto-install-a
No functional changes.
Signed-off-by: Christoph Heiss
---
proxmox-auto-installer/tests/parse-answer.rs | 14 +++---
.../src/fetch_plugins/partition.rs | 10 +-
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/proxmox-auto-installer/tests/pars
No functional changes.
Signed-off-by: Christoph Heiss
---
proxmox-auto-install-assistant/src/main.rs | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/proxmox-auto-install-assistant/src/main.rs
b/proxmox-auto-install-assistant/src/main.rs
index 0debd29..906f
Ran this on an Intel(R) Core(TM) i7-7700K CPU at Markus' request to see
how this behaves on an Intel processor. This results in the following
being written to /run/qemu-server/host-hw-capabilities.json:
{ "amd-sev": { "cbitpos": 0, "reduced-phys-bits": 0, "sev-support":
false, "sev-support-es":