On 08/03/2022 10:29, Bjoern Doebel wrote:
> @@ -104,18 +122,34 @@ void noinline arch_livepatch_revive(void)
>  
>  int arch_livepatch_verify_func(const struct livepatch_func *func)
>  {
> +    BUILD_BUG_ON(sizeof(struct x86_livepatch_meta) != LIVEPATCH_OPAQUE_SIZE);
> +
>      /* If NOPing.. */
>      if ( !func->new_addr )
>      {
>          /* Only do up to maximum amount we can put in the ->opaque. */
> -        if ( func->new_size > sizeof(func->opaque) )
> +        if ( func->new_size > sizeof_field(struct x86_livepatch_meta,
> +                                           instruction) )
>              return -EOPNOTSUPP;
>  
>          if ( func->old_size < func->new_size )
>              return -EINVAL;
>      }
> -    else if ( func->old_size < ARCH_PATCH_INSN_SIZE )
> -        return -EINVAL;
> +    else
> +    {
> +        /*
> +         * Space needed now depends on whether the target function
> +         * starts with an ENDBR64 instruction.
> +         */
> +        uint8_t needed;
> +
> +        needed = ARCH_PATCH_INSN_SIZE;
> +        if ( is_endbr64(func->old_addr) )
> +            needed += ENDBR64_LEN;

I don't think this will work for cf_clobber targets.  The ENDBR gets
converted to NOP4 and fails this check, but the altcalls calling
old_func had their displacements adjusted by +4.

The is_endbr64() check will fail, so the 5-byte jmp will be written at
the start of the function, corrupting the instruction stream for the
altcall()'d callers.
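
For illustration, the verify logic would need to accept the clobbered
encoding as well as a live ENDBR64.  A minimal sketch, assuming a
hypothetical helper is_endbr64_poison() which recognises the specific
NOP4 encoding the ENDBR was converted to (no such helper exists yet):

    /*
     * Sketch only: treat a clobbered ENDBR64 (now a NOP4) the same as
     * a live one, so the jmp still lands after the first 4 bytes.
     * is_endbr64_poison() is an assumed helper, not an existing API.
     */
    needed = ARCH_PATCH_INSN_SIZE;
    if ( is_endbr64(func->old_addr) || is_endbr64_poison(func->old_addr) )
        needed += ENDBR64_LEN;

For this to work, the clobbering side would also have to use a
distinguishable NOP4 encoding rather than the default one, so the two
cases can be told apart after the fact.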

Let me write an incremental patch to help.

~Andrew
