On 01.03.2022 15:51, Andrew Cooper wrote:
> On 01/03/2022 11:59, Jan Beulich wrote:
>> On 14.02.2022 13:56, Andrew Cooper wrote:
>>> @@ -330,6 +333,41 @@ static void init_or_livepatch _apply_alternatives(struct alt_instr *start,
>>>          add_nops(buf + a->repl_len, total_len - a->repl_len);
>>>          text_poke(orig, buf, total_len);
>>>      }
>>> +
>>> +    /*
>>> +     * Clobber endbr64 instructions now that altcall has finished optimising
>>> +     * all indirect branches to direct ones.
>>> +     */
>>> +    if ( force && cpu_has_xen_ibt )
>> Btw, this is now also entered when the function is called from
>> apply_alternatives() (i.e. when livepatching), but ...
>>
>>> +    {
>>> +        void *const *val;
>>> +        unsigned int clobbered = 0;
>>> +
>>> +        /*
>>> +         * This is some minor structure (ab)use.  We walk the entire contents
>>> +         * of .init.{ro,}data.cf_clobber as if it were an array of pointers.
>>> +         *
>>> +         * If the pointer points into .text, and at an endbr64 instruction,
>>> +         * nop out the endbr64.  This causes the pointer to no longer be a
>>> +         * legal indirect branch target under CET-IBT.  This is a
>>> +         * defence-in-depth measure, to reduce the options available to an
>>> +         * adversary who has managed to hijack a function pointer.
>>> +         */
>>> +        for ( val = __initdata_cf_clobber_start;
>>> +              val < __initdata_cf_clobber_end;
>> ... these being the main binary's boundaries, no action would be taken on
>> the livepatch binary. Hence (also due to having been here before
>> during boot), all that I understand will happen ...
>>
>>> +              val++ )
>>> +        {
>>> +            void *ptr = *val;
>>> +
>>> +            if ( !is_kernel_text(ptr) || !is_endbr64(ptr) )
>>> +                continue;
>>> +
>>> +            add_nops(ptr, 4);
>>> +            clobbered++;
>>> +        }
>>> +
>>> +        printk("altcall: Optimised away %u endbr64 instructions\n", clobbered);
>> ... is that this message gets logged once per patch load (with a count
>> of 0). I think the enclosing if() wants to be amended with
>> "&& system_state < SYS_STATE_active". If you agree, I can easily
>> make a patch.
> 
> Hmm.  There are other livepatching fixes going on, but they're starting
> with fixing the build system breakage.  (The major livepatching fix is
> to adjust how we patch an old function that has an ENDBR64 at the start.)
> 
> That said, a livepatch needs to contain a section equivalent to
> __initdata_cf_clobber, to be processed during load, dependent on
> cpu_has_xen_ibt.

IOW you're saying altcall patching can occur in live patches? If so, then ...

> Perhaps the best option is to break the clobber out into a helper that
> takes a start/end pair and returns the number clobbered.  That way, it
> can be reused by the livepatch logic, and independently of this printk().

... yes, parametrizing would be necessary.
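
Something along these lines perhaps (just a sketch - the helper name and
exact signature are made up here, and for the livepatch case the
is_kernel_text() check would presumably also need to cover the patch's
own .text):

unsigned int init_or_livepatch seal_endbr64(void *const *start,
                                            void *const *end)
{
    void *const *val;
    unsigned int clobbered = 0;

    for ( val = start; val < end; val++ )
    {
        void *ptr = *val;

        if ( !is_kernel_text(ptr) || !is_endbr64(ptr) )
            continue;

        /* ENDBR64 is 4 bytes; replace it with NOPs. */
        add_nops(ptr, 4);
        clobbered++;
    }

    return clobbered;
}

_apply_alternatives() would then keep its printk() around a call passing
the __initdata_cf_clobber_{start,end} bounds, while the livepatch loading
logic could invoke the helper with the patch's own section bounds.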

Jan

