On Fri, Sep 13, 2019 at 09:22:37AM +0200, Borislav Petkov wrote:
> In order to patch on machines which don't set X86_FEATURE_ERMS, I need
> to do a "reversed" patching of sorts, i.e., patch when the x86 feature
> flag is NOT set. See the below changes in alternative.c which basically
> add a flags field to struct alt_instr and thus control the patching
> behavior in apply_alternatives().
> 
> The result is this:
> 
> static __always_inline void *memset(void *dest, int c, size_t n)
> {
>         void *ret, *dummy;
> 
>         asm volatile(ALTERNATIVE_2_REVERSE("rep; stosb",
>                                            "call memset_rep",  X86_FEATURE_ERMS,
>                                            "call memset_orig", X86_FEATURE_REP_GOOD)
>                 : "=&D" (ret), "=a" (dummy)
>                 : "0" (dest), "a" (c), "c" (n)
>                 /* clobbers used by memset_orig() and memset_rep() */
>                 : "rsi", "rdx", "r8", "r9", "memory");
> 
>         return dest;
> }

I think this also needs ASM_CALL_CONSTRAINT.

Doesn't this break on older non-ERMS CPUs when the memset() is done
early, before alternative patching?

Could it instead do this?

        ALTERNATIVE_2("call memset_orig",
                      "call memset_rep",        X86_FEATURE_REP_GOOD,
                      "rep; stosb",             X86_FEATURE_ERMS)

Then the "reverse alternatives" feature wouldn't be needed anyway.

-- 
Josh