On 5/10/2017 12:29 AM, Jan Beulich wrote:
On 05.04.17 at 10:59, <yu.c.zh...@linux.intel.com> wrote:
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -411,14 +411,17 @@ static int dm_op(domid_t domid,
             while ( read_atomic(&p2m->ioreq.entry_count) &&
                     first_gfn <= p2m->max_mapped_pfn )
             {
+                bool changed = false;
+
                 /* Iterate p2m table for 256 gfns each time. */
                 p2m_finish_type_change(d, _gfn(first_gfn), 256,
-                                       p2m_ioreq_server, p2m_ram_rw);
+                                       p2m_ioreq_server, p2m_ram_rw, &changed);
                 first_gfn += 256;
                 /* Check for continuation if it's not the last iteration. */
                 if ( first_gfn <= p2m->max_mapped_pfn &&
+                     changed &&
                      hypercall_preempt_check() )
                 {
                     rc = -ERESTART;
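[For reference, a rough sketch of the matching change to p2m_finish_type_change() that the hunk above relies on. It is reconstructed from the existing helper, so the details (locking, loop shape, omitted assertions) are assumptions rather than a quote from the patch:]

/*
 * Sketch only: the new bool *changed out parameter reports whether at
 * least one entry in the scanned range was actually converted, so the
 * caller can decide whether a continuation check is worthwhile.
 */
void p2m_finish_type_change(struct domain *d,
                            gfn_t first_gfn, unsigned long max_nr,
                            p2m_type_t ot, p2m_type_t nt,
                            bool *changed)
{
    struct p2m_domain *p2m = p2m_get_hostp2m(d);
    p2m_type_t t;
    unsigned long gfn = gfn_x(first_gfn);
    unsigned long last_gfn = gfn + max_nr - 1;

    *changed = false;

    p2m_lock(p2m);

    last_gfn = min(last_gfn, p2m->max_mapped_pfn);
    while ( gfn <= last_gfn )
    {
        get_gfn_query_unlocked(d, gfn, &t);

        if ( t == ot )
        {
            p2m_change_type_one(d, gfn, t, nt);
            *changed = true;    /* at least one entry was converted */
        }

        gfn++;
    }

    p2m_unlock(p2m);
}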
I appreciate and support the intention, but you're opening up a
potentially long-lasting loop here in case few or no changes need to
be made. You need to check for preemption every so many iterations
even if you've never seen "changed" come back set.
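[A minimal sketch of that suggestion, kept close to the hunk above; the "iter" counter and the every-256-chunks threshold are illustrative choices, not part of the actual patch:]

    unsigned int iter = 0;

    while ( read_atomic(&p2m->ioreq.entry_count) &&
            first_gfn <= p2m->max_mapped_pfn )
    {
        bool changed = false;

        /* Iterate p2m table for 256 gfns each time. */
        p2m_finish_type_change(d, _gfn(first_gfn), 256,
                               p2m_ioreq_server, p2m_ram_rw, &changed);

        first_gfn += 256;

        /*
         * Check for continuation if it's not the last iteration: when an
         * entry was actually changed, but also at least once every 256
         * chunks even if nothing changed, so that a sweep over a large,
         * mostly untouched p2m remains preemptible.
         */
        if ( first_gfn <= p2m->max_mapped_pfn &&
             (changed || !(++iter & 0xff)) &&
             hypercall_preempt_check() )
        {
            rc = -ERESTART;
            /* ... set up the continuation as in the original code ... */
            break;
        }
    }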
Thanks for your comments, Jan.
Indeed, this patch is problematic. Another thought: since the current
p2m sweeping implementation disables live migration while there are
ioreq server entries left, and George had previously proposed a generic
p2m change solution, I'd like to defer this optimization and fold it
into that generic solution in a future Xen release. :-)
Yu
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel