Hi Konrad,
On 21/09/2016 10:32, Konrad Rzeszutek Wilk wrote:
The patch piggybacks on: livepatch: Initial ARM64 support, which
brings in all of the necessary livepatch infrastructure pieces.
This patch adds three major pieces:
1) ELF relocations. ARM32 uses SHT_REL instead of SHT_RELA, which
means the addendum has to be extracted from within the
instruction. This required parsing BL/BLX, B/BL<cond>,
MOVT, and MOVW instructions (a rough sketch of the idea
follows this list).
The code was written from scratch using the ARM ELF manual
(and the ARM Architecture Reference Manual).
2) Inserting a trampoline. We use the B (branch to address)
instruction, which takes an offset relative to the PC value: PC + imm32.
Because we insert the branch at the start of the old function,
we have to account for the instruction already being fetched
and subtract 8 from the delta (new_addr - old_addr). See
ARM DDI 0406C.c, A2.3 (pg 45) and A8.8.18 (pg 334, 335).
3) Allows the test-cases to be built under ARM32.
The patch "livepatch: tests: Make them compile under ARM64"
put the right infrastructure in place and we piggyback on it.
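For illustration of 1) and 2) above, here are rough sketches (hypothetical
helpers, not the code in this patch) of extracting the addend hidden in an
imm24 branch encoding and of building the B trampoline with the -8 PC
adjustment:

    #include <stdint.h>

    /*
     * Sketch only: recover the addend embedded in a B/BL<cond> (imm24)
     * instruction for a SHT_REL relocation.  The low 24 bits hold a signed
     * word offset, so shift left by two and sign-extend from bit 25.
     * (The BLX H bit is ignored here; MOVW/MOVT instead scatter a 16-bit
     * immediate across the imm4:imm12 fields.)
     */
    static int32_t get_addend_imm24(uint32_t insn)
    {
        int32_t addend = (insn & 0x00ffffff) << 2;

        if ( addend & 0x02000000 )  /* sign bit of the 26-bit offset */
            addend -= 0x04000000;   /* sign-extend to 32 bits */

        return addend;
    }

    /*
     * Sketch only: build the "B <new_addr>" opcode that overwrites the
     * first instruction of the old function.  The PC seen by the branch is
     * the address of the branch itself + 8 (two instructions ahead), hence
     * the -8 in the displacement.  0xEA000000 is B with the AL condition;
     * the word offset keeps all 24 bits (mask 0x00FFFFFF).
     */
    static uint32_t make_branch(uint32_t old_addr, uint32_t new_addr)
    {
        int32_t delta = (int32_t)(new_addr - old_addr - 8);

        return 0xea000000 | (((uint32_t)delta >> 2) & 0x00ffffff);
    }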
Acked-by: Julien Grall <julien.gr...@arm.com>
Acked-by: Jan Beulich <jbeul...@suse.com> [for non-ARM parts]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
---
Cc: Julien Grall <julien.gr...@arm.com>
Cc: Stefano Stabellini <sstabell...@kernel.org>
v2: First submission.
v3: Use LIVEPATCH_ARCH_RANGE instead of NEGATIVE_32MB macro
-Use PATCH_INSN_SIZE instead of the value 4.
-Ditch the old_ptr local variable.
-Use 8 for evaluating the branch instead of 4. Based on ARM docs.
-NOP patch up to sizeof(opaque) / PATCH_INSN_SIZE (so 7 instructions).
-Don't mask with 0x00FFFFF_E_ after shifting; instead mask with 0x00FFFFF_F_.
The reason is that the offset is constructed by shifting the insn
(except the first two bytes) left by two, which meant we would have
cleared offset[2] - and jumped to a location that was -4 bytes off.
-Update commit description to have -8 instead of -4 delta and also
include reference to spec.
v4: Added Jan's Ack.
s/PATCH_INSN_SIZE/ARCH_PATCH_INSN_SIZE/
s/arch_livepatch_insn_len/livepatch_insn_len/
s/LIVEPATCH_ARCH_RANGE/ARCH_LIVEPATCH_RANGE/
v5: Added Julien's Ack.
IMHO my ack should not have been retained given that ...
- Rebased on "livepatch: Drop _jmp from arch_livepatch_[apply,revert]_jmp"
- Added explanation for the usage of data cache and why we need to sync it.
... you also replaced the clean_and_invalidate on the old_ptr with a
clean_and_invalidate on the new_ptr.
diff --git a/xen/arch/arm/arm32/livepatch.c b/xen/arch/arm/arm32/livepatch.c
index 80f9646..3f47326 100644
--- a/xen/arch/arm/arm32/livepatch.c
+++ b/xen/arch/arm/arm32/livepatch.c
@@ -3,28 +3,303 @@
*/
#include <xen/errno.h>
+#include <xen/kernel.h>
#include <xen/lib.h>
#include <xen/livepatch_elf.h>
#include <xen/livepatch.h>
+#include <asm/page.h>
+#include <asm/livepatch.h>
+
void arch_livepatch_apply(struct livepatch_func *func)
{
[...]
+    /*
+     * When we upload the payload, it will go through the data cache
+     * (the region is cacheable). Until the data cache is cleaned, the data
+     * may not reach memory. And in the case where the data and instruction
+     * caches are separate, we may read a stale instruction from memory
+     * because the data cache has not yet been synced with memory. Hence
+     * sync it.
+     */
+    if ( func->new_addr )
+        clean_and_invalidate_dcache_va_range(func->new_addr, func->new_size);
The clean_and_invalidate on the new_ptr needs to be added back here and ...
}
void arch_livepatch_revert(const struct livepatch_func *func)
{
+    uint32_t *new_ptr;
+    unsigned int i, len;
+
+    new_ptr = func->old_addr - (void *)_start + vmap_of_xen_text;
+    len = livepatch_insn_len(func) / sizeof(uint32_t);
+    for ( i = 0; i < len; i++ )
+    {
+        uint32_t insn;
+
+        memcpy(&insn, func->opaque + (i * sizeof(uint32_t)),
+               ARCH_PATCH_INSN_SIZE);
+        /* PATCH! */
+        *(new_ptr + i) = insn;
+    }
here. See the comments on the ARM64 part (i.e. patch #5).
}
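FWIW, what I have in mind is something along these lines at the end of
both functions (modulo the exact length argument):

    clean_and_invalidate_dcache_va_range(new_ptr, sizeof (*new_ptr) * len);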
Regards,
--
Julien Grall