Hi Min,

This refactoring is already done by the SNP patch series:
https://edk2.groups.io/g/devel/message/77336?p=,,,20,0,0,0::Created,,posterid%3A5969970,20,2,20,83891510

It also appears that you are pulling some of the TDX logic into AmdSev.asm, such as:

+PostJump64BitAndLandHereSev:
+
+    ;
+    ; If it is Tdx guest, jump to exit point directly.
+    ; This is because following code may access the memory region which has
+    ; not been accepted. It is not allowed in Tdx guests.
+    ;
+    mov     eax, dword[TDX_WORK_AREA]
+    cmp     eax, 0x47584454 ; 'TDXG'
+    jz      GoodCompare

Why are we referring to the TDX work area inside AmdSev.asm?

I will pull my refactoring patch out of the SNP series and submit it separately so that you can build on top of it. This will simplify the review process.

thanks

On 7/27/21 12:42 AM, Min Xu wrote:
> RFC: https://bugzilla.tianocore.org/show_bug.cgi?id=3429
>
> AmdSev.asm includes the below routines:
>  - CheckSevFeatures
>    Check if Secure Encrypted Virtualization (SEV) features are enabled.
>  - PreSetCr3ForPageTables64Sev
>    It is called before SetCr3ForPageTables64 in SEV guests.
>  - PostSetCr3PageTables64Sev
>    It is called after SetCr3PageTables64 in SEV guests.
>  - PostJump64BitAndLandHereSev
>    It is called after Jump64BitAndLandHere in SEV guests.
>  - #VC exception handling routines
>
> These routines are extracted from PageTables64.asm and Flat32ToFlat64.asm.
> We need AMD engineers' help to review/validate the patch so that there is
> no regression. Thanks in advance!
>
> Note:
> In the above Pre/Post routines, dword[TDX_WORK_AREA] should be checked
> to see if it is 'TDXG' (a Tdx guest). This is because some memory regions,
> for example byte[SEV_ES_WORK_AREA], cannot be accessed in Tdx guests.
> Tdx requires that any memory region to be accessed be accepted first or
> initialized by the host VMM before the Td guest is launched.
>
> Cc: Ard Biesheuvel <ardb+tianoc...@kernel.org>
> Cc: Jordan Justen <jordan.l.jus...@intel.com>
> Cc: Brijesh Singh <brijesh.si...@amd.com>
> Cc: Erdem Aktas <erdemak...@google.com>
> Cc: James Bottomley <j...@linux.ibm.com>
> Cc: Jiewen Yao <jiewen....@intel.com>
> Cc: Tom Lendacky <thomas.lenda...@amd.com>
> Signed-off-by: Min Xu <min.m...@intel.com>
> ---
>  OvmfPkg/ResetVector/Ia32/AmdSev.asm | 526 ++++++++++++++++++++++++++++
>  1 file changed, 526 insertions(+)
>  create mode 100644 OvmfPkg/ResetVector/Ia32/AmdSev.asm
>
> diff --git a/OvmfPkg/ResetVector/Ia32/AmdSev.asm b/OvmfPkg/ResetVector/Ia32/AmdSev.asm
> new file mode 100644
> index 000000000000..962b7e169c61
> --- /dev/null
> +++ b/OvmfPkg/ResetVector/Ia32/AmdSev.asm
> @@ -0,0 +1,526 @@
> +;------------------------------------------------------------------------------
> +; @file
> +; AMD SEV routines
> +;
> +; Copyright (c) 2020, Advanced Micro Devices, Inc. All rights reserved.<BR>
> +; Copyright (c) 2021, Intel Corporation. All rights reserved.<BR>
> +;
> +; SPDX-License-Identifier: BSD-2-Clause-Patent
> +;
> +;------------------------------------------------------------------------------
> +
> +
> +BITS 32
> +;
> +; SEV-ES #VC exception handler support
> +;
> +; #VC handler local variable locations
> +;
> +%define VC_CPUID_RESULT_EAX         0
> +%define VC_CPUID_RESULT_EBX         4
> +%define VC_CPUID_RESULT_ECX         8
> +%define VC_CPUID_RESULT_EDX        12
> +%define VC_GHCB_MSR_EDX            16
> +%define VC_GHCB_MSR_EAX            20
> +%define VC_CPUID_REQUEST_REGISTER  24
> +%define VC_CPUID_FUNCTION          28
> +
> +; #VC handler total local variable size
> +;
> +%define VC_VARIABLE_SIZE           32
> +
> +; #VC handler GHCB CPUID request/response protocol values
> +;
> +%define GHCB_CPUID_REQUEST          4
> +%define GHCB_CPUID_RESPONSE         5
> +%define GHCB_CPUID_REGISTER_SHIFT  30
> +%define CPUID_INSN_LEN              2
> +
> +;
> +; Check if Secure Encrypted Virtualization (SEV) features are enabled.
> +;
> +; Register usage is tight in this routine, so multiple calls for the
> +; same CPUID and MSR data are performed to keep things simple.
> +;
> +; Modified:  EAX, EBX, ECX, EDX, ESP
> +;
> +; If SEV is enabled then EAX will be at least 32.
> +; If SEV is disabled then EAX will be zero.
> +;
> +CheckSevFeatures:
> +    ; Set the first byte of the workarea to zero to communicate to the SEC
> +    ; phase that SEV-ES is not enabled. If SEV-ES is enabled, the CPUID
> +    ; instruction will trigger a #VC exception where the first byte of the
> +    ; workarea will be set to one or, if CPUID is not being intercepted,
> +    ; the MSR check below will set the first byte of the workarea to one.
> +    mov     byte[SEV_ES_WORK_AREA], 0
> +
> +    ;
> +    ; Set up exception handlers to check for SEV-ES
> +    ; Load temporary RAM stack based on PCDs (see SevEsIdtVmmComm for
> +    ; stack usage)
> +    ; Establish exception handlers
> +    ;
> +    mov     esp, SEV_ES_VC_TOP_OF_STACK
> +    mov     eax, ADDR_OF(Idtr)
> +    lidt    [cs:eax]
> +
> +    ; Check if we have a valid (0x8000_001F) CPUID leaf
> +    ; CPUID raises a #VC exception if running as an SEV-ES guest
> +    mov     eax, 0x80000000
> +    cpuid
> +
> +    ; This check should fail on Intel or non-SEV AMD CPUs. In the future, if
> +    ; Intel CPUs support this CPUID leaf then we are guaranteed to have the
> +    ; exact same bit definition.
> +    cmp     eax, 0x8000001f
> +    jl      NoSev
> +
> +    ; Check for SEV memory encryption feature:
> +    ;   CPUID Fn8000_001F[EAX] - Bit 1
> +    ; CPUID raises a #VC exception if running as an SEV-ES guest
> +    mov     eax, 0x8000001f
> +    cpuid
> +    bt      eax, 1
> +    jnc     NoSev
> +
> +    ; Check if SEV memory encryption is enabled
> +    ;   MSR_0xC0010131 - Bit 0 (SEV enabled)
> +    mov     ecx, 0xc0010131
> +    rdmsr
> +    bt      eax, 0
> +    jnc     NoSev
> +
> +    ; Check for SEV-ES memory encryption feature:
> +    ;   CPUID Fn8000_001F[EAX] - Bit 3
> +    ; CPUID raises a #VC exception if running as an SEV-ES guest
> +    mov     eax, 0x8000001f
> +    cpuid
> +    bt      eax, 3
> +    jnc     GetSevEncBit
> +
> +    ; Check if SEV-ES is enabled
> +    ;   MSR_0xC0010131 - Bit 1 (SEV-ES enabled)
> +    mov     ecx, 0xc0010131
> +    rdmsr
> +    bt      eax, 1
> +    jnc     GetSevEncBit
> +
> +    ; Set the first byte of the workarea to one to communicate to the SEC
> +    ; phase that SEV-ES is enabled.
> +    mov     byte[SEV_ES_WORK_AREA], 1
> +
> +GetSevEncBit:
> +    ; Get pte bit position to enable memory encryption
> +    ;   CPUID Fn8000_001F[EBX] - Bits 5:0
> +    ;
> +    and     ebx, 0x3f
> +    mov     eax, ebx
> +
> +    ; The encryption bit position is always above 31
> +    sub     ebx, 32
> +    jns     SevSaveMask
> +
> +    ; Encryption bit was reported as 31 or below, enter a HLT loop
> +SevEncBitLowHlt:
> +    cli
> +    hlt
> +    jmp     SevEncBitLowHlt
> +
> +SevSaveMask:
> +    xor     edx, edx
> +    bts     edx, ebx
> +
> +    mov     dword[SEV_ES_WORK_AREA_ENC_MASK], 0
> +    mov     dword[SEV_ES_WORK_AREA_ENC_MASK + 4], edx
> +    jmp     SevExit
> +
> +NoSev:
> +    ;
> +    ; Perform an SEV-ES sanity check by seeing if a #VC exception occurred.
> +    ;
> +    cmp     byte[SEV_ES_WORK_AREA], 0
> +    jz      NoSevPass
> +
> +    ;
> +    ; A #VC was received, yet CPUID indicates no SEV-ES support, something
> +    ; isn't right.
> +    ;
> +NoSevEsVcHlt:
> +    cli
> +    hlt
> +    jmp     NoSevEsVcHlt
> +
> +NoSevPass:
> +    xor     eax, eax
> +
> +SevExit:
> +    ;
> +    ; Clear exception handlers and stack
> +    ;
> +    push    eax
> +    mov     eax, ADDR_OF(IdtrClear)
> +    lidt    [cs:eax]
> +    pop     eax
> +    mov     esp, 0
> +
> +    OneTimeCallRet CheckSevFeatures
> +
> +;
> +; Called before SetCr3ForPageTables64 in SEV guests
> +;
> +; Modified:  EAX, EBX, ECX, EDX, ESP
> +;
> +PreSetCr3ForPageTables64Sev:
> +    ; In Tdx TDX_WORK_AREA was set to 'TDXG'.
> +    ; CheckSevFeatures cannot be called in Tdx guest because SEV_ES_WORK_AREA
> +    ; cannot be accessed in this situation. Any memory region to be accessed
> +    ; in Td guest should be accepted first.
> +    cmp     dword[TDX_WORK_AREA], 0x47584454 ; 'TDXG'
> +    jz      ExitPreSetCr3ForPageTables64Sev
> +
> +    OneTimeCall CheckSevFeatures
> +
> +ExitPreSetCr3ForPageTables64Sev:
> +    OneTimeCallRet PreSetCr3ForPageTables64Sev
> +
> +;
> +; It is called in SEV after SetCr3PageTables64
> +;
> +; Modified:  EAX, EBX, ECX, EDX, ESP
> +;
> +PostSetCr3PageTables64Sev:
> +    ; In Tdx TDX_WORK_AREA was set to 'TDXG'.
> +    ; CheckSevFeatures cannot be called in Tdx because SEV_ES_WORK_AREA
> +    ; cannot be accessed in this situation. Any memory region to be accessed
> +    ; in Td guest should be accepted first.
> +    cmp     dword[TDX_WORK_AREA], 0x47584454 ; 'TDXG'
> +    jz      ExitPostSetCr3PageTables64Sev
> +
> +    mov     eax, cr4
> +    bts     eax, 5                      ; enable PAE
> +    mov     cr4, eax
> +
> +    mov     ecx, 0xc0000080
> +    rdmsr
> +    bts     eax, 8                      ; set LME
> +    wrmsr
> +
> +    ;
> +    ; SEV-ES mitigation check support
> +    ;
> +    xor     ebx, ebx
> +
> +    cmp     byte[SEV_ES_WORK_AREA], 0
> +    jz      EnablePaging
> +
> +    ;
> +    ; SEV-ES is active, perform a quick sanity check against the reported
> +    ; encryption bit position. This is to help mitigate against attacks where
> +    ; the hypervisor reports an incorrect encryption bit position.
> +    ;
> +    ; This is the first step in a two step process. Before paging is enabled
> +    ; writes to memory are encrypted. Using the RDRAND instruction (available
> +    ; on all SEV capable processors), write 64-bits of random data to the
> +    ; SEV_ES_WORK_AREA and maintain the random data in registers (register
> +    ; state is protected under SEV-ES). This will be used in the second step.
> +    ;
> +RdRand1:
> +    rdrand  ecx
> +    jnc     RdRand1
> +    mov     dword[SEV_ES_WORK_AREA_RDRAND], ecx
> +RdRand2:
> +    rdrand  edx
> +    jnc     RdRand2
> +    mov     dword[SEV_ES_WORK_AREA_RDRAND + 4], edx
> +
> +    ;
> +    ; Use EBX instead of the SEV_ES_WORK_AREA memory to determine whether to
> +    ; perform the second step.
> +    ;
> +    mov     ebx, 1
> +
> +EnablePaging:
> +    mov     eax, cr0
> +    bts     eax, 31                     ; set PG
> +    mov     cr0, eax                    ; enable paging
> +
> +ExitPostSetCr3PageTables64Sev:
> +
> +    OneTimeCallRet PostSetCr3PageTables64Sev
> +
> +;
> +; Start of #VC exception handling routines
> +;
> +
> +SevEsIdtNotCpuid:
> +    ;
> +    ; Use VMGEXIT to request termination.
> +    ;   1 - #VC was not for CPUID
> +    ;
> +    mov     eax, 1
> +    jmp     SevEsIdtTerminate
> +
> +SevEsIdtNoCpuidResponse:
> +    ;
> +    ; Use VMGEXIT to request termination.
> +    ;   2 - GHCB_CPUID_RESPONSE not received
> +    ;
> +    mov     eax, 2
> +
> +SevEsIdtTerminate:
> +    ;
> +    ; Use VMGEXIT to request termination. At this point the reason code is
> +    ; located in EAX, so shift it left 16 bits to the proper location.
> +    ;
> +    ; EAX[11:0]  => 0x100 - request termination
> +    ; EAX[15:12] => 0x1   - OVMF
> +    ; EAX[23:16] => 0xXX  - REASON CODE
> +    ;
> +    shl     eax, 16
> +    or      eax, 0x1100
> +    xor     edx, edx
> +    mov     ecx, 0xc0010130
> +    wrmsr
> +    ;
> +    ; Issue VMGEXIT - NASM doesn't support the vmmcall instruction in 32-bit
> +    ; mode, so work around this by temporarily switching to 64-bit mode.
> +    ;
> +BITS 64
> +    rep     vmmcall
> +BITS 32
> +
> +    ;
> +    ; We shouldn't come back from the VMGEXIT, but if we do, just loop.
> +    ;
> +SevEsIdtHlt:
> +    hlt
> +    jmp     SevEsIdtHlt
> +    iret
> +
> +    ;
> +    ; Total stack usage for the #VC handler is 44 bytes:
> +    ;   - 12 bytes for the exception IRET (after popping error code)
> +    ;   - 32 bytes for the local variables.
> +    ;
> +SevEsIdtVmmComm:
> +    ;
> +    ; If we're here, then we are an SEV-ES guest and this
> +    ; was triggered by a CPUID instruction
> +    ;
> +    ; Set the first byte of the workarea to one to communicate that
> +    ; a #VC was taken.
> +    mov     byte[SEV_ES_WORK_AREA], 1
> +
> +    pop     ecx                         ; Error code
> +    cmp     ecx, 0x72                   ; Be sure it was CPUID
> +    jne     SevEsIdtNotCpuid
> +
> +    ; Set up local variable room on the stack
> +    ;   CPUID function         : + 28
> +    ;   CPUID request register : + 24
> +    ;   GHCB MSR (EAX)         : + 20
> +    ;   GHCB MSR (EDX)         : + 16
> +    ;   CPUID result (EDX)     : + 12
> +    ;   CPUID result (ECX)     : + 8
> +    ;   CPUID result (EBX)     : + 4
> +    ;   CPUID result (EAX)     : + 0
> +    sub     esp, VC_VARIABLE_SIZE
> +
> +    ; Save the CPUID function being requested
> +    mov     [esp + VC_CPUID_FUNCTION], eax
> +
> +    ; The GHCB CPUID protocol uses the following mapping to request
> +    ; a specific register:
> +    ;   0 => EAX, 1 => EBX, 2 => ECX, 3 => EDX
> +    ;
> +    ; Set EAX as the first register to request. This will also be used as a
> +    ; loop variable to request all register values (EAX to EDX).
> +    xor     eax, eax
> +    mov     [esp + VC_CPUID_REQUEST_REGISTER], eax
> +
> +    ; Save current GHCB MSR value
> +    mov     ecx, 0xc0010130
> +    rdmsr
> +    mov     [esp + VC_GHCB_MSR_EAX], eax
> +    mov     [esp + VC_GHCB_MSR_EDX], edx
> +
> +NextReg:
> +    ;
> +    ; Setup GHCB MSR
> +    ;   GHCB_MSR[63:32] = CPUID function
> +    ;   GHCB_MSR[31:30] = CPUID register
> +    ;   GHCB_MSR[11:0]  = CPUID request protocol
> +    ;
> +    mov     eax, [esp + VC_CPUID_REQUEST_REGISTER]
> +    cmp     eax, 4
> +    jge     VmmDone
> +
> +    shl     eax, GHCB_CPUID_REGISTER_SHIFT
> +    or      eax, GHCB_CPUID_REQUEST
> +    mov     edx, [esp + VC_CPUID_FUNCTION]
> +    mov     ecx, 0xc0010130
> +    wrmsr
> +
> +    ;
> +    ; Issue VMGEXIT - NASM doesn't support the vmmcall instruction in 32-bit
> +    ; mode, so work around this by temporarily switching to 64-bit mode.
> +    ;
> +BITS 64
> +    rep     vmmcall
> +BITS 32
> +
> +    ;
> +    ; Read GHCB MSR
> +    ;   GHCB_MSR[63:32] = CPUID register value
> +    ;   GHCB_MSR[31:30] = CPUID register
> +    ;   GHCB_MSR[11:0]  = CPUID response protocol
> +    ;
> +    mov     ecx, 0xc0010130
> +    rdmsr
> +    mov     ecx, eax
> +    and     ecx, 0xfff
> +    cmp     ecx, GHCB_CPUID_RESPONSE
> +    jne     SevEsIdtNoCpuidResponse
> +
> +    ; Save returned value
> +    shr     eax, GHCB_CPUID_REGISTER_SHIFT
> +    mov     [esp + eax * 4], edx
> +
> +    ; Next register
> +    inc     word [esp + VC_CPUID_REQUEST_REGISTER]
> +
> +    jmp     NextReg
> +
> +VmmDone:
> +    ;
> +    ; At this point we have all CPUID register values. Restore the GHCB MSR,
> +    ; set the return register values and return.
> +    ;
> +    mov     eax, [esp + VC_GHCB_MSR_EAX]
> +    mov     edx, [esp + VC_GHCB_MSR_EDX]
> +    mov     ecx, 0xc0010130
> +    wrmsr
> +
> +    mov     eax, [esp + VC_CPUID_RESULT_EAX]
> +    mov     ebx, [esp + VC_CPUID_RESULT_EBX]
> +    mov     ecx, [esp + VC_CPUID_RESULT_ECX]
> +    mov     edx, [esp + VC_CPUID_RESULT_EDX]
> +
> +    add     esp, VC_VARIABLE_SIZE
> +
> +    ; Update the EIP value to skip over the now handled CPUID instruction
> +    ; (the CPUID instruction has a length of 2)
> +    add     word [esp], CPUID_INSN_LEN
> +    iret
> +
> +ALIGN   2
> +
> +Idtr:
> +    dw      IDT_END - IDT_BASE - 1      ; Limit
> +    dd      ADDR_OF(IDT_BASE)           ; Base
> +
> +IdtrClear:
> +    dw      0                           ; Limit
> +    dd      0                           ; Base
> +
> +ALIGN   16
> +
> +;
> +; The Interrupt Descriptor Table (IDT)
> +;   This will be used to determine if SEV-ES is enabled. Upon execution
> +;   of the CPUID instruction, a VMM Communication Exception will occur.
> +;   This will tell us if SEV-ES is enabled. We can use the current value
> +;   of the GHCB MSR to determine the SEV attributes.
> +;
> +IDT_BASE:
> +;
> +; Vectors 0 - 28 (No handlers)
> +;
> +%rep 29
> +    dw      0                           ; Offset low bits 15..0
> +    dw      0x10                        ; Selector
> +    db      0                           ; Reserved
> +    db      0x8E                        ; Gate Type (IA32_IDT_GATE_TYPE_INTERRUPT_32)
> +    dw      0                           ; Offset high bits 31..16
> +%endrep
> +;
> +; Vector 29 (VMM Communication Exception)
> +;
> +    dw      (ADDR_OF(SevEsIdtVmmComm) & 0xffff)   ; Offset low bits 15..0
> +    dw      0x10                                  ; Selector
> +    db      0                                     ; Reserved
> +    db      0x8E                                  ; Gate Type (IA32_IDT_GATE_TYPE_INTERRUPT_32)
> +    dw      (ADDR_OF(SevEsIdtVmmComm) >> 16)      ; Offset high bits 31..16
> +;
> +; Vectors 30 - 31 (No handlers)
> +;
> +%rep 2
> +    dw      0                           ; Offset low bits 15..0
> +    dw      0x10                        ; Selector
> +    db      0                           ; Reserved
> +    db      0x8E                        ; Gate Type (IA32_IDT_GATE_TYPE_INTERRUPT_32)
> +    dw      0                           ; Offset high bits 31..16
> +%endrep
> +IDT_END:
> +
> +BITS 64
> +
> +;
> +; Called after Jump64BitAndLandHere
> +;
> +PostJump64BitAndLandHereSev:
> +
> +    ;
> +    ; If it is Tdx guest, jump to exit point directly.
> +    ; This is because following code may access the memory region which has
> +    ; not been accepted. It is not allowed in Tdx guests.
> +    ;
> +    mov     eax, dword[TDX_WORK_AREA]
> +    cmp     eax, 0x47584454 ; 'TDXG'
> +    jz      GoodCompare
> +
> +    ;
> +    ; Check if the second step of the SEV-ES mitigation is to be performed.
> +    ;
> +    test    ebx, ebx
> +    jz      InsnCompare
> +
> +    ;
> +    ; SEV-ES is active, perform the second step of the encryption bit position
> +    ; mitigation check. The ECX and EDX registers contain data from RDRAND that
> +    ; was stored to memory in encrypted form. If the encryption bit position is
> +    ; valid, the contents of ECX and EDX will match the memory location.
> +    ;
> +    cmp     dword[SEV_ES_WORK_AREA_RDRAND], ecx
> +    jne     SevEncBitHlt
> +    cmp     dword[SEV_ES_WORK_AREA_RDRAND + 4], edx
> +    jne     SevEncBitHlt
> +
> +    ;
> +    ; If SEV or SEV-ES is active, perform a quick sanity check against
> +    ; the reported encryption bit position. This is to help mitigate
> +    ; against attacks where the hypervisor reports an incorrect encryption
> +    ; bit position. If SEV is not active, this check will always succeed.
> +    ;
> +    ; The cmp instruction compares the first four bytes of the cmp instruction
> +    ; itself (which will be read decrypted if SEV or SEV-ES is active and the
> +    ; encryption bit position is valid) against the immediate within the
> +    ; instruction (an instruction fetch is always decrypted correctly by
> +    ; hardware) based on RIP relative addressing.
> +    ;
> +InsnCompare:
> +    cmp     dword[rel InsnCompare], 0xFFF63D81
> +    je      GoodCompare
> +
> +    ;
> +    ; The hypervisor provided an incorrect encryption bit position, do not
> +    ; proceed.
> +    ;
> +SevEncBitHlt:
> +    cli
> +    hlt
> +    jmp     SevEncBitHlt
> +
> +GoodCompare:
> +    OneTimeCallRet PostJump64BitAndLandHereSev