On Wed, Sep 12, 2018 at 03:09:35AM -0600, Jan Beulich wrote:
> The function does two translations in one go for a single guest access.
> Any failure of the first translation step (guest linear -> guest
> physical), resulting in #PF, ought to take precedence over any failure
> of the second step (guest physical -> host physical).

If my understanding is correct, this is done so that any #PF that
would arise from the access is injected into the guest, even when an
earlier frame also fails its gfn -> mfn translation, which would
previously have masked the #PF.
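Concretely, the case I have in mind would be something like:

    /*
     * Made-up example: a write crossing a page boundary, where the
     * first frame's gfn is paged out (HVMTRANS_gfn_paged_out) but
     * the second frame's linear -> gfn walk faults
     * (HVMTRANS_bad_linear_to_gfn).  With this change the #PF for
     * the second frame gets injected, instead of the whole access
     * being cut short with X86EMUL_RETRY for the first frame.
     */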

> Bail out of the
> loop early solely when translation produces HVMTRANS_bad_linear_to_gfn,
> and record the most relevant of perhaps multiple different errors
> otherwise. (The choice of ZERO_BLOCK_PTR as sentinel is arbitrary.)
> 
> Signed-off-by: Jan Beulich <jbeul...@suse.com>
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -531,6 +531,20 @@ static int hvmemul_do_mmio_addr(paddr_t
>      return hvmemul_do_io_addr(1, mmio_gpa, reps, size, dir, df, ram_gpa);
>  }
>  
> +static void *update_map_err(void *err, void *new)
> +{
> +    if ( err == ZERO_BLOCK_PTR || err == ERR_PTR(~X86EMUL_OKAY) )
> +        return new;
> +
> +    if ( new == ERR_PTR(~X86EMUL_OKAY) )
> +        return err;
> +
> +    if ( err == ERR_PTR(~X86EMUL_RETRY) )
> +        return new;
> +
> +    return err;
> +}
> +
>  /*
>   * Map the frame(s) covering an individual linear access, for writeable
>   * access.  May return NULL for MMIO, or ERR_PTR(~X86EMUL_*) for other errors
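FWIW, it took me a moment to work out the resulting precedence, so
just to confirm my reading of update_map_err() (highest priority
first):

    /*
     * My reading, highest priority first:
     *
     *   NULL / ERR_PTR(~X86EMUL_UNHANDLEABLE)   (first one seen wins)
     *   ERR_PTR(~X86EMUL_RETRY)
     *   ZERO_BLOCK_PTR / ERR_PTR(~X86EMUL_OKAY) ("no error recorded")
     */

If that's the intended ordering it might be worth spelling it out in
a comment above the function.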
> @@ -544,7 +558,7 @@ static void *hvmemul_map_linear_addr(
>      struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      struct vcpu *curr = current;
> -    void *err, *mapping;
> +    void *err = ZERO_BLOCK_PTR, *mapping;
>      unsigned int nr_frames = ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
>          (linear >> PAGE_SHIFT) + 1;
>      unsigned int i;
> @@ -600,27 +614,28 @@ static void *hvmemul_map_linear_addr(
>              goto out;
>  
>          case HVMTRANS_bad_gfn_to_mfn:
> -            err = NULL;
> -            goto out;
> +            err = update_map_err(err, NULL);
> +            continue;
>  
>          case HVMTRANS_gfn_paged_out:
>          case HVMTRANS_gfn_shared:
> -            err = ERR_PTR(~X86EMUL_RETRY);
> -            goto out;
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_RETRY));
> +            continue;
>  
>          default:
> -            goto unhandleable;
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_UNHANDLEABLE));
> +            continue;
>          }
>  
>          *mfn++ = page_to_mfn(page);

I have to admit I find it weird that, since we no longer exit the
loop when HVMTRANS_bad_gfn_to_mfn is returned, the item at mfn[0]
might point to the gfn -> mfn translation for the second half of the
access. AFAICT that would happen if the first half of the access
fails to translate with an error other than
HVMTRANS_bad_linear_to_gfn and the second half is successful.

I guess it doesn't matter much, because the vmap below will be
skipped, but it might still be worth adding a comment.
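Something like this, maybe (just a suggestion, wording up to you):

    /*
     * Note that a failed translation doesn't advance mfn, so on a
     * partially failed access mfn[0] may end up holding the
     * translation of a later frame.  That's benign: err is then
     * != ZERO_BLOCK_PTR, the vmap() is skipped, and the out path
     * drops whatever references were taken regardless of which slot
     * they ended up in.
     */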

In fact, once a translation has failed, the following ones could use
the cheaper paging_gva_to_gfn(), as we no longer care about getting
the underlying page and just need to figure out whether the access
would trigger a #PF?
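Something along these lines is what I had in mind (rough sketch,
untested; the pfec in/out handling is hand-waved, and the paged-out /
shared special cases that hvm_translate_get_page() deals with would
need replicating):

    if ( err != ZERO_BLOCK_PTR )
    {
        uint32_t pfec2 = pfec;

        /* Only find out whether this frame would fault. */
        if ( paging_gva_to_gfn(curr, addr, &pfec2) == gfn_x(INVALID_GFN) )
        {
            x86_emul_pagefault(pfec2 & ~PFEC_implicit, addr,
                               &hvmemul_ctxt->ctxt);
            err = ERR_PTR(~X86EMUL_EXCEPTION);
            goto out;
        }
        continue;
    }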

Thanks, Roger.
