The current shutdown logic in smp_send_stop() will disable the APs while
having interrupts enabled on the BSP or possibly other APs. On AMD systems
this can lead to local APIC errors:
APIC error on CPU0: 00(08), Receive accept error
Such an error message can be printed in a loop, thus blocking the s
also make kexec more reliable.
Thanks, Roger.
Roger Pau Monne (5):
x86/shutdown: offline APs with interrupts disabled on all CPUs
x86/irq: drop fixup_irqs() parameters
x86/smp: perform disabling of interrupts ahead of AP shutdown
x86/pci: disable MSI(-X) on all devices at shutdown
x86/iomm
Attempt to disable MSI(-X) capabilities on all PCI devices known to Xen at
shutdown. Doing so should help a kexec chained kernel boot more
reliably, as device MSI(-X) interrupt generation should be quiesced.
It would also prevent "Receive accept error" being raised as a res
The sole remaining caller always passes the same globally available
parameters. Drop the parameters and modify fixup_irqs() to use
cpu_online_map in place of the input mask parameter, and always be verbose
in its output printing.
While there remove some of the checks given the single context wh
Add a new hook to inhibit interrupt generation by the IOMMU(s). Note the
hook is currently only implemented for x86 IOMMUs. The purpose is to
disable interrupt generation at shutdown so any kexec chained image finds
the IOMMU(s) in a quiesced state.
It would also prevent "Receive accept error" b
Move the disabling of interrupt sources so it's done ahead of the offlining
of APs. This is to prevent AMD systems triggering "Receive accept error"
when interrupts target CPUs that are no longer online.
Signed-off-by: Roger Pau Monné
---
Changes since v1:
- New in this version.
---
xen/arch/x
considered for 4.20, as it fixes a
real issue on AMD boxes that prevents rebooting them.
Thanks, Roger.
Roger Pau Monne (2):
x86/shutdown: quiesce devices ahead of AP shutdown
x86/irq: drop fixup_irqs() parameters
xen/arch/x86/crash.c | 1 +
xen/arch/x86/include/asm/irq.h | 4 ++--
The current shutdown logic in smp_send_stop() will first disable the APs,
and then attempt to disable some of the interrupt sources.
There are two issues with this approach: the first is that MSI
interrupt sources are not disabled; the second is that the APs are stopped
before interrupts a
From: Teddy Astie
As CX16 support is mandatory for IOMMU usage, the checks for CX16 in the
interrupt remapping code are stale. Remove them together with the
associated code introduced in case CX16 was not available.
Note that AMD-Vi support for atomically updating a 128bit IRTE entry is
still n
From: Teddy Astie
This flag was only used when cx16 was not available; as those code paths no
longer exist, this flag now does basically nothing.
Signed-off-by: Teddy Astie
Signed-off-by: Roger Pau Monné
---
xen/drivers/passthrough/vtd/iommu.c | 12 +++-
xen/drivers/passthrough/vtd/
Thanks, Roger.
Roger Pau Monne (1):
iommu/amd: atomically update IRTE
Teddy Astie (4):
x86/iommu: check for CMPXCHG16B when enabling IOMMU
iommu/vtd: remove non-CX16 logic from interrupt remapping
x86/iommu: remove non-CX16 logic from DMA remapping
iommu/vtd: cleanup MAP_SINGLE_DEVICE
Whether using a 32bit Interrupt Remapping Entry or a 128bit one, update
the entry atomically, using cmpxchg unconditionally as the IOMMU depends on
it. No longer disable the entry by setting RemapEn = 0 ahead of updating
it. As a consequence of not toggling RemapEn ahead of the update the
Inter
From: Teddy Astie
As CX16 support is mandatory for IOMMU usage, the checks for CX16 in the
DMA remapping code are stale. Remove them together with the associated
code introduced in case CX16 was not available.
Suggested-by: Andrew Cooper
Signed-off-by: Teddy Astie
Signed-off-by: Roger Pau Mon
From: Teddy Astie
All hardware with VT-d/AMD-Vi has CMPXCHG16B support. Check this at
initialisation time, and remove the effectively-dead logic for the
non-cx16 case.
If the local APICs support x2APIC mode the IOMMU support for interrupt
remapping will be checked earlier using a specific helper
If using a 32bit Interrupt Remapping Entry, or a 128bit one and the CPU
supports 128bit cmpxchg, don't disable the entry by setting RemapEn = 0
ahead of updating it. As a consequence of not toggling RemapEn ahead of
the update the Interrupt Remapping Table needs to be flushed after the
entry update.
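The cmpxchg-based update can be sketched for the 32bit-entry case as below; `update_irte32` and `demo` are illustrative names, not the Xen implementation, and the 128bit case would use cmpxchg16b analogously:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of an atomic IRTE update for the 32bit entry case: a single
 * compare-and-swap replaces the old entry, so there is no window where
 * RemapEn has to be cleared while the entry is half-written. */
static bool update_irte32(uint64_t *entry, uint64_t old, uint64_t new)
{
    return __atomic_compare_exchange_n(entry, &old, new, false,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

/* Tiny driver: a stale comparand makes the update fail, leaving the
 * previously written value in place. */
static uint64_t demo(void)
{
    uint64_t e = 1;

    update_irte32(&e, 1, 2); /* succeeds: e becomes 2 */
    update_irte32(&e, 1, 3); /* fails: comparand is stale */
    return e;
}
```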
From: Artem Savkov
According to gcc8's man pages gcc can put functions into .text.unlikely
or .text.hot subsections during optimization. Add ".text.hot" to the
list of bundleable functions in is_bundleable().
Signed-off-by: Artem Savkov
Signed-off-by: Roger Pau Monné
---
common.c | 5 +
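The section-naming scheme involved can be sketched with a small checker in the spirit of is_bundleable(); this is illustrative only, not the kpatch code:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* With -ffunction-sections plus gcc8's hot/cold partitioning, a function
 * foo may land in .text.foo, .text.unlikely.foo or .text.hot.foo. */
static bool in_bundled_text_section(const char *sec, const char *func)
{
    static const char *const prefixes[] = {
        ".text.unlikely.", ".text.hot.", ".text.",
    };

    for (size_t i = 0; i < sizeof(prefixes) / sizeof(prefixes[0]); i++)
    {
        size_t len = strlen(prefixes[i]);

        /* Longer prefixes first, so ".text.hot.foo" is not misparsed as
         * ".text." followed by a function named "hot.foo". */
        if (!strncmp(sec, prefixes[i], len) && !strcmp(sec + len, func))
            return true;
    }
    return false;
}
```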
From: Artem Savkov
Propagate child symbol changes to its parent.
Signed-off-by: Artem Savkov
Signed-off-by: Roger Pau Monné
---
create-diff-object.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/create-diff-object.c b/create-diff-object.c
index b041d94d9723..
From: Artem Savkov
While building a gcc-consprop patch from integration tests gcc8 would place a
__timekeeping_inject_sleeptime.constprop.18.cold.27 symbol into
.text.unlikely.__timekeeping_inject_sleeptime.constprop.18 section. Because
section name doesn't have the '.cold.27' suffix this symbol
From: Artem Savkov
create-diff-object expects .cold functions to be suffixed by an id, which
is not always the case. Drop the trailing '.' when searching for cold
functions.
Fixes: #1160
Signed-off-by: Artem Savkov
Signed-off-by: Roger Pau Monné
---
common.c | 2 +-
create-diff-ob
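The relaxed matching can be sketched as follows; this is illustrative, not the exact kpatch code:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Accept ".cold" both with and without a trailing numeric id
 * (".cold" and ".cold.27"), rather than searching for ".cold." only. */
static bool is_cold_symbol(const char *name)
{
    const char *p = strstr(name, ".cold");

    if (!p)
        return false;
    p += strlen(".cold");
    /* The suffix must end the name or be followed by an id component. */
    return *p == '\0' || *p == '.';
}
```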
From: Artem Savkov
Add child symbols to .kpatch.ignore.functions in case their parents are
added to the list.
Signed-off-by: Artem Savkov
Signed-off-by: Roger Pau Monné
---
create-diff-object.c | 4
1 file changed, 4 insertions(+)
diff --git a/create-diff-object.c b/create-diff-object.c
From: Artem Savkov
Add a function that would detect parent/child symbol relations. So far
it only supports .cold.* symbols as children.
Signed-off-by: Artem Savkov
Signed-off-by: Roger Pau Monné
---
common.h | 2 ++
create-diff-object.c | 35 +++
2
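A sketch of the name-based relation detection described above — a child like "fn.cold.27" maps back to its parent "fn"; the function is illustrative, not the create-diff-object implementation:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Return a newly allocated parent name for a .cold.* child symbol, or
 * NULL if the symbol has no parent. Caller frees the result. */
static char *child_parent_name(const char *child)
{
    const char *p = strstr(child, ".cold");
    char *parent;

    if (!p)
        return NULL; /* not a child symbol */

    parent = malloc(p - child + 1);
    memcpy(parent, child, p - child);
    parent[p - child] = '\0';
    return parent;
}
```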
Hello,
Fixes picked from kpatch to deal with .cold and .hot sub-functions
sections generated by gcc.
Thanks, Roger.
Artem Savkov (7):
create-diff-object: ignore .cold.* suffixes in is_bundleable()
create-diff-object: add symbol relations
create-diff-object: propagate child symbol changes
From: Artem Savkov
gcc8 can place functions to .text.unlikely and .text.hot subsections
during optimizations. Allow symbols to change subsections instead of
failing.
Signed-off-by: Artem Savkov
Signed-off-by: Roger Pau Monné
---
create-diff-object.c | 29 +++--
1 file
Add a new randconfig job for each FreeBSD version. This requires some
rework of the template so common parts can be shared between the full and
the randconfig builds. Such randconfig builds are relevant because FreeBSD
is the only tested system that has a full non-GNU toolchain.
While there repl
Add a new randconfig job for each FreeBSD version. This requires some
rework of the template so common parts can be shared between the full and
the randconfig builds. Such randconfig builds are relevant because FreeBSD
is the only tested system that has a full non-GNU toolchain.
While there remo
Signed-off-by: Roger Pau Monné
---
.cirrus.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.cirrus.yml b/.cirrus.yml
index 4a120fad41b2..ee80152890f2 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -18,7 +18,7 @@ freebsd_template: &FREEBSD_TEMPLATE
task:
name: 'FreeBSD 1
If randconfig enables coverage support the build times out due to GNU LD
taking too long. For the time being prevent coverage from being enabled in
the clang randconfig job.
Signed-off-by: Roger Pau Monné
---
Cc: Oleksii Kurochko
---
I will fix the orphaned section stuff separately, as I'm consider
MSI remapping bypass (directly configuring MSI entries for devices on the
VMD bus) won't work under Xen, as Xen is not aware of devices in such bus,
and hence cannot configure the entries using the pIRQ interface in the PV
case, and in the PVH case traps won't be setup for MSI entries for such
devi
two patches to be problematic, the last patch
is likely to be more controversial. I've tested it internally and
didn't see any issues, but my testing of PV mode is mostly limited to
dom0.
Thanks, Roger.
Roger Pau Monne (3):
xen/pci: do not register devices with segments >= 0x1
Setting pci_msi_ignore_mask inhibits the toggling of the mask bit for both
MSI and MSI-X entries globally, regardless of the IRQ chip they are using.
Only Xen sets the pci_msi_ignore_mask when routing physical interrupts over
event channels, to prevent PCI code from attempting to toggle the maskbit
The current hypercall interface for doing PCI device operations always uses
a segment field that has a 16 bit width. However on Linux there are buses
like VMD that hook up devices into the PCI hierarchy at segment >= 0x1,
after the maximum possible segment enumerated in ACPI.
Attempting to re
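The width limitation can be sketched as a trivial predicate; the helper name is an assumption, and the 16-bit bound comes from the hypercall segment field width mentioned above:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The hypercall ABI carries the PCI segment in a 16-bit field, so
 * synthetic segments above that range (such as the ones VMD fabricates
 * past the ACPI-enumerated ones) cannot be expressed to Xen. */
static bool segment_representable(uint32_t segment)
{
    return segment <= UINT16_MAX;
}
```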
Setting pci_msi_ignore_mask inhibits the toggling of the mask bit for both MSI
and MSI-X entries globally, regardless of the IRQ chip they are using. Only
Xen sets the pci_msi_ignore_mask when routing physical interrupts over event
channels, to prevent PCI code from attempting to toggle the maskbi
The PCI segment value is limited to 16 bits, however there are buses like VMD
that fake being part of the PCI topology by adding segment with a number
outside the scope of the PCI firmware specification range (>= 0x1). The
MCFG ACPI Table "PCI Segment Group Number" field is defined as having a
Thanks, Roger.
Roger Pau Monne (3):
xen/pci: do not register devices outside of PCI segment
MSI remapping bypass (directly configuring MSI entries for devices on the VMD
bus) won't work under Xen, as Xen is not aware of devices in such bus, and
hence cannot configure the entries using the pIRQ interface in the PV case, and
in the PVH case traps won't be setup for MSI entries for such devi
Hello,
First patch from David introduces a new helper to fetch xenstore nodes,
while second patch removes the usage of scanf related functions with the
"%ms" format specifier, as it's not supported by the FreeBSD scanf libc
implementation.
Thanks, Roger.
David Woodhouse (1):
hw/xen: Add xs_nod
The 'm' parameter used to request auto-allocation of the destination variable
is not supported on FreeBSD, and as such leads to failures to parse.
What's more, the current usage of '%ms' with xs_node_scanf() is pointless, as
it just leads to a double allocation of the same string. Instead use
xs_
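A portable pattern for the FreeBSD case can be sketched as below; the function and buffer size are illustrative, not the QEMU code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Scan into a fixed-size buffer with an explicit maximum field width,
 * instead of the GNU/glibc-only "%ms" auto-allocation modifier. */
static const char *first_word(const char *input)
{
    static char buf[64];

    /* "%63s" caps the conversion one short of the buffer, leaving room
     * for the terminating NUL. */
    if (sscanf(input, "%63s", buf) != 1)
        return "";
    return buf;
}
```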
From: David Woodhouse
This returns the full contents of the node, having created the node path
from the printf-style format string provided in its arguments.
This will save various callers from having to do so for themselves (and
from using xs_node_scanf() with the non-portable %ms format string
The 'm' parameter used to request auto-allocation of the destination variable
is not supported on FreeBSD, and as such leads to failures to parse.
What's more, the current usage of '%ms' with xs_node_scanf() is pointless, as
it just leads to a double allocation of the same string. Instead introdu
The pv_{set,destroy}_gdt() functions rely on the L1 table(s) that contain such
mappings being stashed in the domain structure, and thus such mappings being
modified by merely updating the L1 entries.
Switch both pv_{set,destroy}_gdt() to instead use
{populate,destroy}_perdomain_mapping().
Signed-
Move the handling of FLUSH_ROOT_PGTBL in flush_area_local() ahead of the logic
that does the TLB flushing, in preparation for further changes requiring the
TLB flush to be strictly done after having handled FLUSH_ROOT_PGTBL.
No functional change intended.
Signed-off-by: Roger Pau Monné
---
xen/
When running PV guests it's possible for the guest to use the same root page
table (L4) for all vCPUs, which in turn will result in Xen also using the same
root page table on all pCPUs that are running any domain vCPU.
When using XPTI Xen switches to a per-CPU shadow L4 when running in guest
conte
No functional change, as the option is not used.
Introduce it now so newly added functionality is keyed on the option being
enabled, even if the feature is non-functional.
When ASI is enabled for PV domains, printing the usage of XPTI might be
omitted if it must be uniformly disabled given the usag
With the stack mapped on a per-CPU basis there's no risk of other CPUs being
able to read the stack contents, but vCPUs running on the current pCPU could
read stack rubble from operations of previous vCPUs.
The #DF stack is not zeroed because handling of #DF results in a panic.
The contents of th
When using ASI the CPU stack is mapped using a range of fixmap entries in the
per-CPU region. This ensures the stack is only accessible by the current CPU.
Note however there's further work required in order to allocate the stack from
domheap instead of xenheap, and ensure the stack is not part o
When using a unique per-vCPU root page table the per-domain region becomes
per-vCPU, and hence the mapcache is no longer shared between all vCPUs of a
domain. Introduce per-vCPU mapcache structures, and modify map_domain_page()
to create per-vCPU mappings when possible. Note the lock is also not
Such table is to be used in the per-domain slot when running with Address Space
Isolation enabled for the domain.
Signed-off-by: Roger Pau Monné
---
xen/arch/x86/include/asm/domain.h | 3 +++
xen/arch/x86/include/asm/mm.h | 2 +-
xen/arch/x86/mm.c | 45 +
There are no remaining callers of pv_gdt_ptes() or pv_ldt_ptes() that use the
stashed L1 page-tables in the domain structure. As such, the helpers and the
fields can now be removed.
No functional change intended, as the removed logic is not used.
Signed-off-by: Roger Pau Monné
---
xen/arch/x86
The pv_{set,destroy}_gdt() functions rely on the L1 table(s) that contain such
mappings being stashed in the domain structure, and thus such mappings being
modified by merely updating the L1 entries.
Switch both pv_{set,destroy}_gdt() to instead use
{populate,destroy}_perdomain_mapping().
Note th
There are no longer any callers of create_perdomain_mapping() that request a
reference to the used L1 tables, and hence the only difference between them is
whether the caller wants the region to be populated, or just the paging
structures to be allocated.
Simplify the arguments to create_perdomain
n XenRT, but that doesn't cover
all possible use-cases, so it's likely to still have some rough edges,
handle with care.
Thanks, Roger.
Roger Pau Monne (18):
x86/mm: purge unneeded destroy_perdomain_mapping()
x86/domain: limit window where curr_vcpu != current on context switch
x86
In preparation for the per-domain area being populated with per-vCPU mappings
change the parameter of destroy_perdomain_mapping() to be a vCPU instead of a
domain, and also update the function logic to allow manipulation of per-domain
mappings using the linear page table mappings.
Signed-off-by: R
In preparation for the per-domain area being per-vCPU. This requires moving
some of the {create,destroy}_perdomain_mapping() calls to the domain
initialization and tear down paths into vCPU initialization and tear down.
Signed-off-by: Roger Pau Monné
---
xen/arch/x86/domain.c | 12 +
The destroy_perdomain_mapping() call in the hvm_domain_initialise() fail path
is useless. destroy_perdomain_mapping() called with nr == 0 is effectively a
no-op, as no entries are torn down. Remove the call, as
arch_domain_create() already calls free_perdomain_mappings() on failure.
There
The current code to update the Xen part of the GDT when running a PV guest
relies on caching the direct map address of all the L1 tables used to map the
GDT and LDT, so that entries can be modified.
Introduce a new function that populates the per-domain region, either using the
recursive linear ma
The current logic gates issuing flush TLB requests with the FLUSH_ROOT_PGTBL
flag to XPTI being enabled.
In preparation for FLUSH_ROOT_PGTBL also being needed when not using XPTI,
untie it from the xpti domain boolean and instead introduce a new flush_root_pt
field.
No functional change intended,
The pv_map_ldt_shadow_page() and pv_destroy_ldt() functions rely on the L1
table(s) that contain such mappings being stashed in the domain structure, and
thus such mappings being modified by merely updating the required L1 entries.
Switch pv_map_ldt_shadow_page() to unconditionally use the linear r
L1 present entries that require the underlying page to be freed have the
_PAGE_AVAIL0 bit set, introduce a helper to unify the checking logic into a
single place.
No functional change intended.
Signed-off-by: Roger Pau Monné
---
xen/arch/x86/mm.c | 14 --
1 file changed, 8 insertion
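The unified check described above can be sketched as a small predicate; the bit values and helper name are illustrative placeholders, not the asm/page.h definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative page-table entry flags. */
#define _PAGE_PRESENT (1u << 0)
#define _PAGE_AVAIL0  (1u << 9)

/* An L1 entry whose backing page must be freed on teardown is a present
 * entry with the _PAGE_AVAIL0 software bit set. */
static bool pv_l1e_to_free(uint64_t l1e)
{
    return (l1e & _PAGE_PRESENT) && (l1e & _PAGE_AVAIL0);
}
```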
On x86 Xen will perform lazy context switches to the idle vCPU, where the
previously running vCPU context is not overwritten, and only current is updated
to point to the idle vCPU. The state is then disjunct between current and
curr_vcpu: current points to the idle vCPU, while curr_vcpu points to
The 'm' parameter used to request auto-allocation of the destination variable
is not supported on FreeBSD, and as such leads to failures to parse.
What's more, the current usage of '%ms' with xs_node_scanf() is pointless, as
it just leads to a double allocation of the same string. Instead use
qem
Hello,
First patch fixes some error handling paths that incorrectly used
error_prepend() in the Xen console driver. Second patch removes usage
of the 'm' character in scanf directives, as it's not supported on
FreeBSD (see usages of "%ms").
Thanks, Roger.
Roger Pau Monné (2):
xen/console: fix
The usage of error_prepend() in some of the error contexts of
xen_console_device_create() is incorrect, as `errp` hasn't been initialized.
This leads to the following segmentation fault on error paths resulting from
xenstore reads:
Program terminated with signal SIGSEGV, Segmentation fault.
Addres
Avoid exiting early from the loop when a pin that could be connected to the
i8259 is found, as such early exit would leave the EOI handler translation
array only partially allocated and/or initialized.
Otherwise on systems with multiple IO-APICs and an unmasked ExtINT pin on
any IO-APIC that's no
The current guards to select whether user accesses should be speculative
hardened violate Misra rule 20.7, as the UA_KEEP() macro doesn't (and can't)
parenthesize the 'args' argument.
Change the logic so the guard is implemented inside the assembly block using
the .if assembly directive. This res
There are no violations left, make the rule globally blocking for both x86 and
ARM.
Signed-off-by: Roger Pau Monné
Reviewed-by: Andrew Cooper
---
automation/eclair_analysis/ECLAIR/tagging.ecl | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/automation/eclair_analysis/ECLAIR
Hello,
This series fixes the remaining violation of rule 20.7, and marks the
rule a blocking for x86 also on the Eclair scan.
An example gitlab job with the rule enabled can be seen at:
https://gitlab.com/xen-project/people/royger/xen/-/jobs/8470641011
Thanks, Roger.
Roger Pau Monne (2
The current code in pv_domain_initialise() populates the L3 slot used for the
GDT/LDT, however that's not needed, since the create_perdomain_mapping() in
pv_create_gdt_ldt_l1tab() will already take care of allocating an L2 and
populating the L3 entry if not present.
No functional change intended.
The allocation of the paging structures in the per-domain area for mapping the
guest GDT and LDT can be limited to the maximum number of vCPUs the guest can
have. The maximum number of vCPUs is available at domain creation since commit
4737fa52ce86.
Limiting to the actual number of vCPUs avoids w
The current calculation of PV dom0 pIRQs uses:
n = min(fls(num_present_cpus()), dom0_max_vcpus());
The usage of fls() is wrong, as num_present_cpus() already returns the number
of present CPUs, not the bitmap mask of CPUs.
Fix by removing the usage of fls().
Fixes: 7e73a6e7f12a ('have architect
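The bug can be demonstrated with a toy fls(); the reimplementation and the dom0_pirqs() stand-in for the fixed expression are illustrative, not the Xen code:

```c
#include <assert.h>

/* fls() returns the index of the highest set bit, i.e. a bit position,
 * not a count: feeding it a CPU count collapses the value. */
static int fls(unsigned int x)
{
    int r = 0;

    while (x)
    {
        r++;
        x >>= 1;
    }
    return r;
}

/* Hypothetical stand-in for the fixed calculation: num_present_cpus()
 * already returns a count, so it is used directly in the min(). */
static unsigned int dom0_pirqs(unsigned int present_cpus,
                               unsigned int max_vcpus)
{
    return present_cpus < max_vcpus ? present_cpus : max_vcpus;
}
```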
Do not return early in the PVH/HVM case, so that the number of pIRQs is also
printed.
Fixes: 17f6d398f765 ('cmdline: document and enforce "extra_guest_irqs" upper
bounds')
Signed-off-by: Roger Pau Monné
---
xen/arch/x86/io_apic.c | 12 +++-
1 file changed, 7 insertions(+), 5 deletions(-
Hello,
First patch is a fix for the calculation of the max number of pIRQs
allowed to dom0, second patch fixes the print of dom0 pIRQ limits so
it's also printed for a PVH dom0.
Thanks, Roger.
Roger Pau Monne (2):
x86/irq: fix calculation of max PV dom0 pIRQs
x86/pvh: also print har
The current guards to select whether user accesses should be speculative
hardened violate Misra rule 20.7, as the UA_KEEP() macro doesn't (and can't)
parenthesize the 'args' argument.
Change the logic so the guard is implemented inside the assembly block using
the .if assembly directive.
No funct
Hello,
Following series attempts to fix the remaining violation for rules 20.7,
and as a result make it blocking on x86 also (as it's already the case
for ARM).
Thanks, Roger.
Roger Pau Monne (4):
x86/mm: fix IS_LnE_ALIGNED() to comply with Misra Rule 20.7
x86/msi: fix Misra Rul
While not strictly needed to guarantee operator precedence is as expected, add
the parentheses to comply with Misra Rule 20.7.
No functional change intended.
Reported-by: Andrew Cooper
Fixes: 5b52e1b0436f ('x86/mm: skip super-page alignment checks for non-present
entries')
Signed-off-by: Roger
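A minimal illustration of why Rule 20.7 requires the parentheses; the macros here are toy examples, not the ones from msi.h:

```c
#include <assert.h>

/* Without parentheses, operator precedence at the expansion site can
 * change the result of the macro. */
#define TWICE_BAD(x)  (x * 2)    /* violates Rule 20.7 */
#define TWICE_GOOD(x) ((x) * 2)  /* compliant */
```

TWICE_BAD(1 + 2) expands to (1 + 2 * 2), yielding 5 rather than the intended 6.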
There are no violations left, make the rule globally blocking for both x86 and
ARM.
Signed-off-by: Roger Pau Monné
---
automation/eclair_analysis/ECLAIR/tagging.ecl | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl
b/automation/e
Prune unused macros and adjust the remaining ones to parenthesize macro
arguments.
No functional change intended.
Signed-off-by: Roger Pau Monné
---
xen/arch/x86/include/asm/msi.h | 35 ++
1 file changed, 14 insertions(+), 21 deletions(-)
diff --git a/xen/arch/x
While the alignment of the mfn is not relevant for non-present entries, the
alignment of the linear address is. Commit 5b52e1b0436f introduced a
regression by not checking the alignment of the linear address when the new
entry was a non-present one.
Fix by always checking the alignment of the lin
INVALID_MFN is ~0, so with all its bits set it doesn't fulfill the
super-page address alignment checks for L3 and L2 entries. Skip the alignment
checks if the new entry is a non-present one.
This fixes a regression introduced by 0b6b51a69f4d, where the switch from 0 to
INVALID_MFN caused al
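A minimal model of the fixed check, with illustrative constants (2MiB L2 granularity) and a hypothetical helper name:

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_MFN (~0UL)

/* Sketch of the fix: non-present entries carry INVALID_MFN (~0), which
 * can never pass the frame alignment check, so only the linear address
 * alignment is required for them. */
static bool l2_slot_ok(unsigned long mfn, unsigned long vaddr, bool present)
{
    bool vaddr_ok = (vaddr & ((1UL << 21) - 1)) == 0; /* 2MiB aligned */

    if (!present)
        return vaddr_ok;
    /* Present entries additionally need a 512-frame-aligned mfn. */
    return vaddr_ok && (mfn & 0x1ffUL) == 0;
}
```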
Split the code that detects whether the physical and linear address of a
mapping request are suitable to be used in an L3 or L2 slot.
No functional change intended.
Signed-off-by: Roger Pau Monné
Reviewed-by: Jan Beulich
---
Changes since v2:
- Fix parenthesization of macro parameter.
- Add a
bootstrap_map_addr() needs to be careful to not remove existing page-table
structures when tearing down mappings, as such pagetable structures might be
needed to fulfill subsequent mappings requests. The comment ahead of the
function already notes that pagetable memory shouldn't be allocated.
Fix
in 4/4.
Thanks, Roger.
Roger Pau Monne (4):
x86/mm: introduce helpers to detect super page alignment
x86/mm: skip super-page alignment checks for non-present entries
x86/setup: remove bootstrap_map_addr() usage of destroy_xen_mappings()
x86/mm: ensure L2 is always freed if empty
xen/arc
The current logic in modify_xen_mappings() allows for fully empty L2 tables to
not be freed and unhooked from the parent L3 if the last L2 slot is not
populated.
Ensure that even when an L2 slot is empty the logic to check whether the whole
L2 can be removed is not skipped.
Fixes: 4376c05c3113 ('
Split the code that detects whether the physical and linear address of a
mapping request are suitable to be used in an L3 or L2 slot.
No functional change intended.
Signed-off-by: Roger Pau Monné
---
Changes since v1:
- Make the macros local to map_pages_to_xen().
- Some adjustments to macro l
The tools infrastructure used to build livepatches for Xen
(livepatch-build-tools) consumes some DWARF debug information present in
xen-syms to generate a livepatch (see livepatch-build script usage of readelf
-wi).
The current Kconfig defaults however will enable LIVEPATCH without DEBUG_INFO
on r
XenServer uses quite long Xen version names, and encode such in the livepatch
filename, and it's currently running out of space in the file name.
Bump max filename size to 127, so it also matches the patch name length in the
hypervisor interface. Note the size of the buffer is 128 characters, and
GNU assembly that supports such a feature will unconditionally add a
.note.gnu.property section to object files. The content of that section can
change depending on the generated instructions. The current logic in
livepatch-build-tools doesn't know how to deal with such section changing
as a result
Not all toolchains generate symbols for the .livepatch.hooks.* sections,
neither those symbols are required by the livepatch loading logic in Xen to
find and process the hooks. Hooks in livepatch payloads are found and
processed based exclusively on section data.
The unconditional attempt to expe
create-diff-object has a special handling for some specific sections, like
.altinstructions or .livepatch.hooks.*. The contents of those sections are in
the form of array elements, where each element can be processed independently
of the rest. For example an element in .altinstructions is a set o
The size of the alt_instr structure in Xen is 14 instead of 12 bytes, adjust
it.
Signed-off-by: Roger Pau Monné
---
create-diff-object.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/create-diff-object.c b/create-diff-object.c
index fed360a9aa68..d8a2afbf2774 100644
--- a/c
Hello,
First two patches in the series are misc (IMO trivial) fixes. Last two
patches fix the usage of hooks.
Thanks, Roger.
Roger Pau Monne (4):
livepatch-build: allow patch file name sizes up to 127 characters
create-diff-object: update default alt_instr size
create-diff-object: don
Split the code that detects whether the physical and linear address of a
mapping request are suitable to be used in an L3 or L2 slot.
No functional change intended.
Signed-off-by: Roger Pau Monné
---
xen/arch/x86/include/asm/page.h | 6 ++
xen/arch/x86/mm.c | 11 +++
INVALID_MFN is ~0, so with all its bits set it doesn't fulfill the
super-page address alignment checks for L3 and L2 entries. Special case
INVALID_MFN so it's considered to be aligned for all slots.
This fixes a regression introduced by 0b6b51a69f4d, where the switch from 0 to
INVALID_MFN c
in 4/4.
Thanks, Roger.
Roger Pau Monne (4):
x86/mm: introduce helpers to detect super page alignment
x86/mm: special case super page alignment detection for INVALID_MFN
x86/setup: remove bootstrap_map_addr() usage of destroy_xen_mappings()
x86/mm: ensure L2 is always freed if empty
xen