On Mi, 2017-09-20 at 14:37 +0000, Paul Durrant wrote:
> >
> > -----Original Message-----
> > From: Jan Beulich [mailto:jbeul...@suse.com]
> > Sent: 20 September 2017 13:24
> > To: Alexandru Isaila <aisa...@bitdefender.com>
> > Cc: suravee.suthikulpa...@amd.com; Andrew Cooper
> > <andrew.coop...@citrix.com>; Paul Durrant <paul.durr...@citrix.com>;
> > Wei Liu <wei.l...@citrix.com>; George Dunlap <George.Dunlap@citrix.com>;
> > Ian Jackson <ian.jack...@citrix.com>; jun.nakaj...@intel.com;
> > Kevin Tian <kevin.t...@intel.com>; sstabell...@kernel.org;
> > xen-de...@lists.xen.org; boris.ostrov...@oracle.com;
> > konrad.w...@oracle.com; Tim (Xen.org) <t...@xen.org>
> > Subject: Re: [PATCH v4 3/3] x86/hvm: Implement hvmemul_write() using
> > real mappings
> >
> > >>> On 20.09.17 at 11:22, <aisa...@bitdefender.com> wrote:
> > > +static void *hvmemul_map_linear_addr(
> > > +    unsigned long linear, unsigned int bytes, uint32_t pfec,
> > > +    struct hvm_emulate_ctxt *hvmemul_ctxt)
> > > +{
> > > +    struct vcpu *curr = current;
> > > +    void *err, *mapping;
> > > +
> > > +    /* First and final gfns which need mapping. */
> > > +    unsigned long frame = linear >> PAGE_SHIFT, first = frame;
> > > +    unsigned long final = (linear + bytes - !!bytes) >> PAGE_SHIFT;
> > > +
> > > +    /*
> > > +     * mfn points to the next free slot.  All used slots have a page
> > > +     * reference held on them.
> > > +     */
> > > +    mfn_t *mfn = &hvmemul_ctxt->mfn[0];
> > > +
> > > +    /*
> > > +     * The caller has no legitimate reason for trying a zero-byte
> > > +     * write, but final is calculated to fail safe in release builds.
> > > +     *
> > > +     * The maximum write size depends on the number of adjacent mfns[]
> > > +     * which can be vmap()'d, accounting for possible misalignment
> > > +     * within the region.  The higher level emulation callers are
> > > +     * responsible for ensuring that mfns[] is large enough for the
> > > +     * requested write size.
> > > +     */
> > > +    if ( bytes == 0 ||
> > > +         final - first >= ARRAY_SIZE(hvmemul_ctxt->mfn) )
> > > +    {
> > > +        ASSERT_UNREACHABLE();
> > > +        goto unhandleable;
> > > +    }
> > > +
> > > +    do {
> > > +        enum hvm_translation_result res;
> > > +        struct page_info *page;
> > > +        pagefault_info_t pfinfo;
> > > +        p2m_type_t p2mt;
> > > +
> > > +        /* Error checking.  Confirm that the current slot is clean. */
> > > +        ASSERT(mfn_x(*mfn) == 0);
> > > +
> > > +        res = hvm_translate_get_page(curr, frame << PAGE_SHIFT, true,
> > > +                                     pfec, &pfinfo, &page, NULL, &p2mt);
> > > +
> > > +        switch ( res )
> > > +        {
> > > +        case HVMTRANS_okay:
> > > +            break;
> > > +
> > > +        case HVMTRANS_bad_linear_to_gfn:
> > > +            x86_emul_pagefault(pfinfo.ec, pfinfo.linear,
> > > +                               &hvmemul_ctxt->ctxt);
> > > +            err = ERR_PTR(~X86EMUL_EXCEPTION);
> > > +            goto out;
> > > +
> > > +        case HVMTRANS_bad_gfn_to_mfn:
> > > +            err = NULL;
> > > +            goto out;
> > > +
> > > +        case HVMTRANS_gfn_paged_out:
> > > +        case HVMTRANS_gfn_shared:
> > > +            err = ERR_PTR(~X86EMUL_RETRY);
> > > +            goto out;
> > > +
> > > +        default:
> > > +            goto unhandleable;
> > > +        }
> > > +
> > > +        if ( p2m_is_discard_write(p2mt) )
> > > +        {
> > > +            err = ERR_PTR(~X86EMUL_OKAY);
> > > +            goto out;
> > > +        }
> > > +
> > > +        *mfn++ = _mfn(page_to_mfn(page));
> > > +
> > > +    } while ( ++frame < final );
> > Interesting - I had specifically pointed out in a reply to v3 that the
> > increment of mfn _cannot_ be moved down here: You're now leaking a page
> > ref on the p2m_is_discard_write() error path afaict.
> It could be left here if a put_page() is added to the above error path,
> which I'd clearly deluded myself was already there.

I think it's clearer to move it back.

Alex

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
