On 9/9/2016 1:26 PM, Yu Zhang wrote:
>>> On 02.09.16 at 12:47, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> @@ -5551,7 +5553,35 @@ static int hvmop_map_mem_type_to_ioreq_server(
>      if ( rc != 0 )
>          goto out;
>
> -    rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags);
> +    if ( gfn == 0 )
> +        rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags);
> +
> +    /*
> +     * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
> +     * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
> +     */
> +    if ( op.flags == 0 && rc == 0 )
> +    {
> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        while ( read_atomic(&p2m->ioreq.entry_count) &&
> +                gfn <= p2m->max_mapped_pfn )
> +        {
> +            unsigned long gfn_end = gfn + HVMOP_op_mask;
> +
> +            p2m_finish_type_change(d, gfn, gfn_end,
> +                                   p2m_ioreq_server, p2m_ram_rw);
> +
> +            /* Check for continuation if it's not the last iteration. */
> +            if ( ++gfn_end <= p2m->max_mapped_pfn &&
> +                 hypercall_preempt_check() )
> +            {
> +                rc = -ERESTART;
> +                *iter = gfn_end;
> +                goto out;
> +            }
> +        }
"gfn" doesn't get updated, so if no preemption happens here, the
same GFN range will get acted on a second time (and so on, until
eventually preemption becomes necessary).
Oh, right. Thanks for pointing this out. :)
Also when you don't really need to use "goto", please try to avoid
it - here you could easily use just "break" instead.
OK.
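Something like the following is what I have in mind for the next version (an untested sketch, keeping the variable names from the hunk above) - advance gfn after each chunk and use break instead of goto:

    if ( op.flags == 0 && rc == 0 )
    {
        struct p2m_domain *p2m = p2m_get_hostp2m(d);

        while ( read_atomic(&p2m->ioreq.entry_count) &&
                gfn <= p2m->max_mapped_pfn )
        {
            unsigned long gfn_end = gfn + HVMOP_op_mask;

            p2m_finish_type_change(d, gfn, gfn_end,
                                   p2m_ioreq_server, p2m_ram_rw);

            /* Advance to the next chunk so no range is handled twice. */
            gfn = gfn_end + 1;

            /* Check for continuation if it's not the last iteration. */
            if ( gfn <= p2m->max_mapped_pfn && hypercall_preempt_check() )
            {
                rc = -ERESTART;
                *iter = gfn;
                break;
            }
        }
    }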
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -545,6 +545,9 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
>                      e.ipat = ipat;
>                      if ( e.recalc && p2m_is_changeable(e.sa_p2mt) )
>                      {
> +                        if ( e.sa_p2mt == p2m_ioreq_server )
> +                            p2m->ioreq.entry_count--;
ISTR that George had asked you to put this accounting elsewhere.
Yes. You have a good memory. :)
George's suggestion was to put it inside atomic_write_ept_entry(), which is quite a core routine and is EPT-only. My suggestion is to put it inside p2m_change_type_one() and the resolve_misconfig()/do_recalc() routines as well, so that we can support both Intel EPT and AMD NPT - like p2m_change_entry_type_global() does.
I gave a more detailed explanation in a reply on Aug 30 in the v5 thread. :)
And then I'd really like you to add assertions making sure this
count doesn't underflow.
OK.
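Something along these lines (just a sketch against the hunk above):

    if ( e.sa_p2mt == p2m_ioreq_server )
    {
        ASSERT(p2m->ioreq.entry_count > 0);
        p2m->ioreq.entry_count--;
    }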
> @@ -965,7 +968,8 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
>      if ( is_epte_valid(ept_entry) )
>      {
>          if ( (recalc || ept_entry->recalc) &&
> -             p2m_is_changeable(ept_entry->sa_p2mt) )
> +             p2m_is_changeable(ept_entry->sa_p2mt) &&
> +             (ept_entry->sa_p2mt != p2m_ioreq_server) )
>              *t = p2m_is_logdirty_range(p2m, gfn, gfn) ? p2m_ram_logdirty
>                                                        : p2m_ram_rw;
>          else
Considering this (and at least one more similar adjustment further
down), is it really appropriate to include p2m_ioreq_server in the
set of "changeable" types?
Well, I agree p2m_ioreq_server does behave differently from p2m_ram_logdirty, but removing p2m_ioreq_server from the changeable types would require more p2m_ioreq_server-specific code in routines like resolve_misconfig(), do_recalc(), p2m_change_entry_type_global(), etc. I tried that approach and abandoned it later. :)
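For reference, keeping it in the changeable set is roughly this (quoting asm-x86/p2m.h from memory, so the exact definition may differ):

    /* Types that can be subject to bulk transitions. */
    #define P2M_CHANGEABLE_TYPES (p2m_to_mask(p2m_ram_rw) \
                                  | p2m_to_mask(p2m_ram_logdirty) \
                                  | p2m_to_mask(p2m_ioreq_server) )

    #define p2m_is_changeable(_t) (p2m_to_mask(_t) & P2M_CHANGEABLE_TYPES)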
> +void p2m_finish_type_change(struct domain *d,
> +                            unsigned long start, unsigned long end,
> +                            p2m_type_t ot, p2m_type_t nt)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_type_t t;
> +    unsigned long gfn = start;
> +
> +    ASSERT(start <= end);
> +    ASSERT(ot != nt);
> +    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> +
> +    p2m_lock(p2m);
> +
> +    end = (end > p2m->max_mapped_pfn) ? p2m->max_mapped_pfn : end;
min() or an if() without else.
Got it. Thanks
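I.e. something like:

    end = min(end, p2m->max_mapped_pfn);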
> +    while ( gfn <= end )
> +    {
> +        get_gfn_query_unlocked(d, gfn, &t);
> +
> +        if ( t == ot )
> +            p2m_change_type_one(d, gfn, t, nt);
> +
> +        gfn++;
> +    }
> +
> +    p2m_unlock(p2m);
> +}
And then I wonder why p2m_change_type_range() can't be used
instead of adding this new function.
Well, like p2m_change_entry_type_global(), p2m_change_type_range() also does the change asynchronously. This routine is introduced here to reset the p2m type synchronously.
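To illustrate the difference (a rough sketch, assuming the current p2m_change_type_range() signature and the new helper from this patch):

    /*
     * Asynchronous: only marks the range for recalculation; each entry is
     * actually converted later, e.g. on an EPT misconfiguration fault.
     */
    p2m_change_type_range(d, start, end, p2m_ioreq_server, p2m_ram_rw);

    /*
     * Synchronous: walks the range and converts every remaining
     * p2m_ioreq_server entry to p2m_ram_rw before returning.
     */
    p2m_finish_type_change(d, start, end, p2m_ioreq_server, p2m_ram_rw);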
Jan
Thanks
Yu