On 01.12.2021 20:07, Andrew Cooper wrote:
> On 01/12/2021 08:20, Jan Beulich wrote:
>> On 26.11.2021 22:22, Andrew Cooper wrote:
>>> With altcall, we convert indirect branches into direct ones.  With that
>>> complete, none of the potential targets need an endbr64 instruction.
>> Assuming that no other hooks remain which re-use the same function. I
>> think this constraint wants at least mentioning explicitly.
> 
> Fair point, but I think it is entirely reasonable to expect logic not to
> mix and match altcall on the same hook.

Take XSM's silo_xsm_ops and dummy_ops as an example. Given what
xsm_fixup_ops() does, it would be entirely benign if silo_xsm_ops
set any or all of its currently unset hooks to what dummy_ops has.
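
For readers without the code to hand, xsm_fixup_ops() works roughly
along these lines (an approximation with illustrative hook names, not
the verbatim source):

    /* Point each hook still NULL in a module's table at what dummy_ops
     * has, so an unset hook in e.g. silo_xsm_ops ends up referencing
     * the very same function dummy_ops does. */
    static void xsm_fixup_ops(struct xsm_ops *ops)
    {
        if ( !ops->domain_create )
            ops->domain_create = dummy_ops.domain_create;
        if ( !ops->evtchn_send )
            ops->evtchn_send = dummy_ops.evtchn_send;
        /* ... one such statement (macro-generated in reality) per hook ... */
    }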

>>> Furthermore, removing the endbr64 instructions is a security defence-in-depth
>>> improvement, because it limits the options available to an attacker who has
>>> managed to hijack a function pointer.
>>>
>>> Introduce a new .init.data.cf_clobber section.  Have _apply_alternatives()
>>> walk over the entire section, looking for any pointers into .text, and clobber
>>> an endbr64 instruction if found.  This is some minor structure (ab)use but it
>>> works alarmingly well.
>> Iirc you've said more than once that non-function-pointer data in
>> those structures is fine; I'm not convinced. What if a sequence of
>> sub-pointer-size fields has a value looking like a pointer into
>> .text? This may not be very likely, but would result in corruption
>> that may be hard to associate with anything. Of course, with the
>> is_endbr64() check and with a build time check of there not being
>> any stray ENDBR64 patterns in .text, that issue would disappear.
>> But we aren't quite there yet.
> 
> I disagree with "not very likely" and put it firmly in the "not
> plausible" category.
> 
> To cause a problem, you need an aligned something which isn't actually a
> function pointer with a bit pattern falling within [0xffff82d040200000,
> 0xffff82d04039e1ba) which hits an ENDBR64 pattern.  Removing the stray
> ENDBR64s doesn't prevent such a bit pattern pointing at a real (wrong)
> function.

Why "aligned" in "aligned something"? And I also don't see what you're
trying to tell me with the last sentence. It's still .text corruption
that would result if such a pattern (crossing an insn boundary)
happened to be pointed at.
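
Just to make the aliasing concern concrete (a contrived example; the
field values are made up, but the resulting address lies in the .text
range you quote above):

    struct example {
        uint32_t lo;    /* happens to hold 0x40201000 */
        uint32_t hi;    /* happens to hold 0xffff82d0 */
    };

    /* Read through an unsigned long * at 8-byte granularity, the two
     * fields combine (little-endian) to 0xffff82d040201000 - a value
     * indistinguishable from a pointer into .text. */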

> These structures are almost exclusively compile time generated.
> 
> So yes - it's not impossible, but it's also not going to happen
> accidentally.

I wonder how you mean to exclude such accidents. It occurs to me that
checking the linked binary for the pattern isn't going to be enough.
Such a pattern could also form with alternatives patching. (It's all
quite unlikely, yes, but imo we need to fully exclude the possibility.)
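
Concretely, the check I have in mind is a plain byte-wise scan, along
these lines (just a sketch, not a proposed implementation), which would
also catch patterns formed across instruction boundaries:

    /* Count ENDBR64 byte patterns (F3 0F 1E FA) in [p, p + len),
     * including ones straddling instruction boundaries, which a
     * disassembly-based scan would miss. */
    static unsigned int count_endbr64(const unsigned char *p, size_t len)
    {
        unsigned int hits = 0;
        size_t i;

        for ( i = 0; i + 4 <= len; i++ )
            if ( p[i] == 0xf3 && p[i + 1] == 0x0f &&
                 p[i + 2] == 0x1e && p[i + 3] == 0xfa )
                hits++;

        return hits;
    }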

>>> --- a/xen/arch/x86/alternative.c
>>> +++ b/xen/arch/x86/alternative.c
>>> @@ -173,6 +173,9 @@ text_poke(void *addr, const void *opcode, size_t len)
>>>      return memcpy(addr, opcode, len);
>>>  }
>>>  
>>> +extern unsigned long __initdata_cf_clobber_start[];
>>> +extern unsigned long __initdata_cf_clobber_end[];
>> const please. I also would find it quite a bit better if these
>> were suitably typed such that ...
>>
>>> @@ -329,6 +332,41 @@ static void init_or_livepatch _apply_alternatives(struct alt_instr *start,
>>>          add_nops(buf + a->repl_len, total_len - a->repl_len);
>>>          text_poke(orig, buf, total_len);
>>>      }
>>> +
>>> +    /*
>>> +     * Clobber endbr64 instructions now that altcall has finished optimising
>>> +     * all indirect branches to direct ones.
>>> +     */
>>> +    if ( force && cpu_has_xen_ibt )
>>> +    {
>>> +        unsigned long *val;
>>> +        unsigned int clobbered = 0;
>>> +
>>> +        /*
>>> +         * This is some minor structure (ab)use.  We walk the entire contents
>>> +         * of .init.data.cf_clobber as if it were an array of pointers.
>>> +         *
>>> +         * If the pointer points into .text, and has an endbr64 instruction,
>>> +         * nop out the endbr64.  This causes the pointer to no longer be a
>>> +         * legal indirect branch target under CET-IBT.  This is a
>>> +         * defence-in-depth measure, to reduce the options available to an
>>> +         * adversary who has managed to hijack a function pointer.
>>> +         */
>>> +        for ( val = __initdata_cf_clobber_start;
>>> +              val < __initdata_cf_clobber_end;
>>> +              val++ )
>>> +        {
>>> +            void *ptr = (void *)*val;
>> ... no cast was needed here.
> 
> Unless you know what this type is, I already tried and am stuck. 
> Everything else requires more horrible casts on val.

It's as simple as I thought it would be; the proposed patch is at the
end of this mail (the two //temp-marked #define-s were needed so I
could build-test it without pulling in further patches of yours). No
new casts at all, and the one I wanted to see eliminated is gone.

>>> --- a/xen/include/xen/init.h
>>> +++ b/xen/include/xen/init.h
>>> @@ -18,6 +18,8 @@
>>>  #define __init_call(lvl)  __used_section(".initcall" lvl ".init")
>>>  #define __exit_call       __used_section(".exitcall.exit")
>>>  
>>> +#define __initdata_cf_clobber __section(".init.data.cf_clobber")
>> Just to repeat what I've said elsewhere: I think we want a const
>> version of this as well.
> 
> I can, but does it really matter?  initconst is merged into initdata and
> not actually read-only to begin with.

My remark wasn't about the actual mapping properties at all. What I'm
after is the compiler being able to spot modifications. If I see a
struct instance marked "const" and if I know the thing builds okay, I
know I don't need to go hunt for possible writes to this struct
instance. When it's non-const, to be sure there's no possible conflict
with the patching (yours or just the altcall part), I'd need to find
and verify all instances where the object gets written to.
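
As a trivial example of what the compiler then catches (hypothetical
names):

    struct ops { void (*hook)(void); };

    static void hook_a(void) {}
    static void hook_b(void) {}

    static const struct ops my_ops = { .hook = hook_a };

    void tinker(void)
    {
        my_ops.hook = hook_b; /* error: assignment of member 'hook' in
                                 read-only object */
    }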

Jan

**********************************************************************

--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -28,6 +28,9 @@
 #include <asm/nops.h>
 #include <xen/livepatch.h>
 
+#define cpu_has_xen_ibt true//temp
+#define is_endbr64(p) false//temp
+
 #define MAX_PATCH_LEN (255-1)
 
 extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
@@ -174,6 +177,9 @@ text_poke(void *addr, const void *opcode
     cpuid_eax(0);
 }
 
+extern void *const __initdata_cf_clobber_start[];
+extern void *const __initdata_cf_clobber_end[];
+
 /*
  * Replace instructions with better alternatives for this CPU type.
  * This runs before SMP is initialized to avoid SMP problems with
@@ -309,6 +315,41 @@ static void init_or_livepatch _apply_alt
         add_nops(buf + a->repl_len, total_len - a->repl_len);
         text_poke(orig, buf, total_len);
     }
+
+    /*
+     * Clobber endbr64 instructions now that altcall has finished optimising
+     * all indirect branches to direct ones.
+     */
+    if ( force && cpu_has_xen_ibt )
+    {
+        void *const *val;
+        unsigned int clobbered = 0;
+
+        /*
+         * This is some minor structure (ab)use.  We walk the entire contents
+         * of .init.data.cf_clobber as if it were an array of pointers.
+         *
+         * If the pointer points into .text, and has an endbr64 instruction,
+         * nop out the endbr64.  This causes the pointer to no longer be a
+         * legal indirect branch target under CET-IBT.  This is a
+         * defence-in-depth measure, to reduce the options available to an
+         * adversary who has managed to hijack a function pointer.
+         */
+        for ( val = __initdata_cf_clobber_start;
+              val < __initdata_cf_clobber_end;
+              val++ )
+        {
+            void *ptr = *val;
+
+            if ( !is_kernel_text(ptr) || !is_endbr64(ptr) )
+                continue;
+
+            add_nops(ptr, 4);
+            clobbered++;
+        }
+
+        printk("altcall: Optimised away %u endbr64 instructions\n", clobbered);
+    }
 }
 
 void init_or_livepatch apply_alternatives(struct alt_instr *start,
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -217,6 +217,11 @@ SECTIONS
        *(.initcall1.init)
        __initcall_end = .;
 
+       . = ALIGN(POINTER_ALIGN);
+       __initdata_cf_clobber_start = .;
+       *(.init.data.cf_clobber)
+       __initdata_cf_clobber_end = .;
+
        *(.init.data)
        *(.init.data.rel)
        *(.init.data.rel.*)
--- a/xen/include/xen/init.h
+++ b/xen/include/xen/init.h
@@ -18,6 +18,8 @@
 #define __init_call(lvl)  __used_section(".initcall" lvl ".init")
 #define __exit_call       __used_section(".exitcall.exit")
 
+#define __initdata_cf_clobber __section(".init.data.cf_clobber")
+
 /* These macros are used to mark some functions or 
  * initialized data (doesn't apply to uninitialized data)
  * as `initialization' functions. The kernel can take this
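
For completeness, a consumer would then opt a structure instance into
the clobbering pass like this (hypothetical ops structure and hooks):

    static struct some_ops __initdata_cf_clobber some_ops = {
        .setup    = some_setup,
        .teardown = some_teardown,
    };

i.e. the instance moves into .init.data.cf_clobber, where
_apply_alternatives() will find its function pointers and nop out the
targets' endbr64 instructions once altcall patching is done.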

