Now that we build the early code without strict alignment and without suppressing the use of SIMD registers, ensure that the VFP unit is on before entering C code.
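For context: on AArch64, "turning the VFP unit on" amounts to disabling the FP/SIMD trap in CPACR_EL1, i.e. setting FPEN (bits [21:20]) to 0b11 so that FP and SIMD register accesses no longer trap. A minimal sketch of what an ArmEnableVFP-style routine does; the register choice and exact instruction sequence here are illustrative, not a quote of the ArmLib implementation:

  mrs    x0, cpacr_el1           // read FP/SIMD trap controls
  orr    x0, x0, #(0x3 << 20)    // CPACR_EL1.FPEN = 0b11: no trapping
  msr    cpacr_el1, x0           // write it back
  isb                            // synchronize the system register update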
While at it, simplify the mov_i macro, which is only used for 32-bit
quantities.

Signed-off-by: Ard Biesheuvel <a...@kernel.org>
---
 ArmVirtPkg/Library/ArmPlatformLibQemu/AArch64/ArmPlatformHelper.S | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/ArmVirtPkg/Library/ArmPlatformLibQemu/AArch64/ArmPlatformHelper.S b/ArmVirtPkg/Library/ArmPlatformLibQemu/AArch64/ArmPlatformHelper.S
index 05ccc7f9f043..1787d52fbf51 100644
--- a/ArmVirtPkg/Library/ArmPlatformLibQemu/AArch64/ArmPlatformHelper.S
+++ b/ArmVirtPkg/Library/ArmPlatformLibQemu/AArch64/ArmPlatformHelper.S
@@ -8,9 +8,7 @@
 #include <AsmMacroIoLibV8.h>
 
 .macro mov_i, reg:req, imm:req
-  movz   \reg, :abs_g3:\imm
-  movk   \reg, :abs_g2_nc:\imm
-  movk   \reg, :abs_g1_nc:\imm
+  movz   \reg, :abs_g1:\imm
   movk   \reg, :abs_g0_nc:\imm
 .endm
 
@@ -45,10 +43,9 @@ ASM_FUNC(ArmPlatformPeiBootAction)
 
   mrs    x0, CurrentEL            // check current exception level
-  tbz    x0, #3, 0f               // bail if above EL1
-  ret
+  tbnz   x0, #3, 0f               // omit early ID map if above EL1
 
-0:mov_i  x0, mairval
+  mov_i  x0, mairval
   mov_i  x1, tcrval
   adrp   x2, idmap
   orr    x2, x2, #0xff << 48      // set non-zero ASID
 
@@ -87,7 +84,8 @@ ASM_FUNC(ArmPlatformPeiBootAction)
 
   msr    sctlr_el1, x3            // enable MMU and caches
   isb
-  ret
+
+0:b      ArmEnableVFP             // enable SIMD before entering C code
 
 //UINTN
 //ArmPlatformGetCorePosition (
-- 
2.39.0
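For illustration, not part of the patch: with the GNU assembler's movw relocation specifiers, :abs_g1: places bits [31:16] of the absolute value (and checks that nothing is set above bit 31), while :abs_g0_nc: inserts bits [15:0] with no overflow check. Two instructions therefore cover any 32-bit constant, which is why the :abs_g3:/:abs_g2_nc: steps can be dropped; a mov_i x1, tcrval now expands to:

  movz   x1, :abs_g1:tcrval      // x1  = tcrval & 0xffff0000
  movk   x1, :abs_g0_nc:tcrval   // x1 |= tcrval & 0x0000ffff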