Hi,

At 21:13 +0800 on 04 Dec (1417724006), Yu Zhang wrote:
> A new p2m type, p2m_mmio_write_dm, is added to trap and emulate
> write operations on the GPU's page tables.  Handling of this new
> p2m type is similar to the existing p2m_ram_ro in most condition
> checks; the only difference is the final policy of emulation vs.
> drop.  For p2m_ram_ro pages, write operations do not trigger the
> device model and are discarded later in __hvm_copy(), while for
> p2m_mmio_write_dm pages, writes go to the device model via the
> ioreq server.
>
> Signed-off-by: Yu Zhang <yu.c.zh...@linux.intel.com>
> Signed-off-by: Wei Ye <wei...@intel.com>
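For context, the policy split described in the commit message amounts
to roughly the following at the point where a guest write is handled.
This is a simplified sketch, not code from the patch:
send_write_to_ioreq_server() is a hypothetical stand-in for the real
ioreq-server dispatch path, and p2mt/gpa/p_data/size stand in for the
locals in __hvm_copy() and the emulation code:

    switch ( p2mt )
    {
    case p2m_ram_ro:
        /* Read-only RAM: the write is silently discarded; the
         * device model is never involved. */
        break;

    case p2m_mmio_write_dm:
        /* Write-protected by the device model: package the write
         * as an ioreq and hand it to the registered ioreq server
         * for emulation (e.g. a GPU page-table update). */
        rc = send_write_to_ioreq_server(gpa, p_data, size);
        break;

    default:
        /* Ordinary RAM: perform the write directly. */
        memcpy(p, p_data, size);
        break;
    }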
Thanks for this -- only two comments:

> @@ -5978,7 +5982,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              goto param_fail4;
>          }
>          if ( !p2m_is_ram(t) &&
> -             (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) )
> +             (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
> +             t != p2m_mmio_write_dm )

I think Jan already brought this up, and maybe I missed your answer:
this relaxation looks wrong to me.  I would have thought that
transitions between p2m_mmio_write_dm and p2m_ram_rw/p2m_ram_logdirty
would be the only ones you would want to allow (a sketch of one way
to express that follows at the end of this mail).

> @@ -111,7 +112,8 @@ typedef unsigned int p2m_query_t;
>  #define P2M_RO_TYPES (p2m_to_mask(p2m_ram_logdirty)     \
>                        | p2m_to_mask(p2m_ram_ro)         \
>                        | p2m_to_mask(p2m_grant_map_ro)   \
> -                      | p2m_to_mask(p2m_ram_shared) )
> +                      | p2m_to_mask(p2m_ram_shared) \
> +                      | p2m_to_mask(p2m_mmio_write_dm))

Nit: please align the '\' with the others above it.

Cheers,

Tim.
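For concreteness, the tighter check suggested above might look
something like the following -- illustrative only, not from the patch
series.  Since p2m_ram_rw and p2m_ram_logdirty already pass the
p2m_is_ram() check, only the reverse direction (converting a
p2m_mmio_write_dm page back to plain RAM, i.e. HVMMEM_ram_rw) needs
an extra clause:

    /* Reject the type change unless the current type is RAM, this
     * is the existing hole -> mmio_dm case, or a p2m_mmio_write_dm
     * page is being converted back to ordinary RAM. */
    if ( !p2m_is_ram(t) &&
         (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
         (t != p2m_mmio_write_dm || a.hvmmem_type != HVMMEM_ram_rw) )
    {
        put_gfn(d, pfn);
        goto param_fail4;
    }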