>>> On 21.06.17 at 17:12, <andrew.coop...@citrix.com> wrote:
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -235,12 +235,16 @@ hvm_emulate_insn_fetch(enum x86_segment seg,
>  {
>      struct sh_emulate_ctxt *sh_ctxt =
>          container_of(ctxt, struct sh_emulate_ctxt, ctxt);
> -    unsigned int insn_off = offset - sh_ctxt->insn_buf_eip;
> +    /* Careful, as offset can wrap or truncate WRT insn_buf_eip. */
> +    uint8_t insn_off = offset - sh_ctxt->insn_buf_eip;
>  
>      ASSERT(seg == x86_seg_cs);
>  
> -    /* Fall back if requested bytes are not in the prefetch cache. */
> -    if ( unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
> +    /*
> +     * Fall back if requested bytes are not in the prefetch cache, but always
> +     * perform the zero-length read for segmentation purposes.
> +     */
> +    if ( !bytes || unlikely((insn_off + bytes) > sh_ctxt->insn_buf_bytes) )
>          return hvm_read(seg, offset, p_data, bytes,
>                          hvm_access_insn_fetch, sh_ctxt);

I only notice this now - patch 1 has an (apparently unnecessary)
insn_off > sizeof() check ahead of the insn_off + bytes > sizeof()
one. If you see a need for it there, why not add it here too? In
any event I'd like to ask for the two pieces of code to match up
in the end (i.e. I'm fine with the adjustment being made on either
side), and with that
Reviewed-by: Jan Beulich <jbeul...@suse.com>

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
