On 9/5/2016 9:31 PM, Jan Beulich wrote:
On 02.09.16 at 12:47, <yu.c.zh...@linux.intel.com> wrote:
@@ -178,8 +179,27 @@ static int hvmemul_do_io(
break;
case X86EMUL_UNHANDLEABLE:
{
- struct hvm_ioreq_server *s =
- hvm_select_ioreq_server(curr->domain, &p);
+ struct hvm_ioreq_server *s = NULL;
+ p2m_type_t p2mt = p2m_invalid;
+
+ if ( is_mmio )
+ {
+ unsigned long gmfn = paddr_to_pfn(addr);
+
+ (void) get_gfn_query_unlocked(currd, gmfn, &p2mt);
+
+ if ( p2mt == p2m_ioreq_server && dir == IOREQ_WRITE )
+ {
+ unsigned int flags;
+
+ s = p2m_get_ioreq_server(currd, &flags);
+ if ( !(flags & XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE) )
+ s = NULL;
+ }
+ }
+
+ if ( !s && p2mt != p2m_ioreq_server )
+ s = hvm_select_ioreq_server(currd, &p);
What I recall is that we had agreed that p2m_ioreq_server pages would be
treated as ordinary RAM ones as long as no server can be found. The type
check here contradicts that. Is there a reason?
Thanks, Jan. I gave my explanation in my Sep 6 reply. :)
If s is NULL for a p2m_ioreq_server page, we do not wish to traverse the
rangesets in hvm_select_ioreq_server() again; the access may well be the
read side of a read-modify-write emulation.
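To illustrate (a simplified sketch of the intended dispatch; the flags
check is omitted for brevity, and the treat-as-RAM fall-through is my
reading of the surrounding hvmemul_do_io() code, not part of this hunk):

    /* Write to a p2m_ioreq_server page: forward to the mapped server. */
    if ( p2mt == p2m_ioreq_server && dir == IOREQ_WRITE )
        s = p2m_get_ioreq_server(currd, &flags);

    /*
     * A read from a p2m_ioreq_server page (e.g. the read half of a
     * read-modify-write instruction) leaves s NULL, so the access
     * completes like ordinary RAM, without re-traversing the rangesets
     * in hvm_select_ioreq_server().
     */
    if ( !s && p2mt != p2m_ioreq_server )
        s = hvm_select_ioreq_server(currd, &p);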
+static int hvmop_map_mem_type_to_ioreq_server(
+ XEN_GUEST_HANDLE_PARAM(xen_hvm_map_mem_type_to_ioreq_server_t) uop)
+{
+ xen_hvm_map_mem_type_to_ioreq_server_t op;
+ struct domain *d;
+ int rc;
+
+ if ( copy_from_guest(&op, uop, 1) )
+ return -EFAULT;
+
+ rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+ if ( rc != 0 )
+ return rc;
+
+ rc = -EINVAL;
+ if ( !is_hvm_domain(d) )
+ goto out;
+
+ if ( op.pad != 0 )
+ goto out;
This, I think, should be done first thing after having copied in the
structure. No need to look up the domain or do anything else if this is
not zero.
Right. Thanks!
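So something like this (a sketch, keeping the rest of the function as is):

    if ( copy_from_guest(&op, uop, 1) )
        return -EFAULT;

    /* Reject a non-zero pad field before taking any domain reference. */
    if ( op.pad != 0 )
        return -EINVAL;

    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
    if ( rc != 0 )
        return rc;

    rc = -EINVAL;
    if ( !is_hvm_domain(d) )
        goto out;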
+int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
+ uint32_t type, uint32_t flags)
+{
+ struct hvm_ioreq_server *s;
+ int rc;
+
+ /* For now, only HVMMEM_ioreq_server is supported. */
+ if ( type != HVMMEM_ioreq_server )
+ return -EINVAL;
+
+ /* For now, only write emulation is supported. */
+ if ( flags & ~(XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE) )
+ return -EINVAL;
+
+ domain_pause(d);
+ spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+
+ rc = -ENOENT;
+ list_for_each_entry ( s,
+ &d->arch.hvm_domain.ioreq_server.list,
+ list_entry )
+ {
+ if ( s == d->arch.hvm_domain.default_ioreq_server )
+ continue;
+
+ if ( s->id == id )
+ {
+ rc = p2m_set_ioreq_server(d, flags, s);
+ break;
+ }
+ }
+
+ spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+ domain_unpause(d);
+ return rc;
+}
Blank line before final return statement of a function please.
Got it.
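I.e. the tail of the function becomes:

    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
    domain_unpause(d);

    return rc;
}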
+int p2m_set_ioreq_server(struct domain *d,
+ unsigned int flags,
+ struct hvm_ioreq_server *s)
+{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+ int rc;
+
+ /*
+ * Use lock to prevent concurrent setting requirements
+     * from multiple ioreq servers.
+ */
"Concurrent setting requirements"? DYM "attempts"?
Yep. "attempts" is more accurate.
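So the comment will become:

    /*
     * Use lock to prevent concurrent setting attempts
     * from multiple ioreq servers.
     */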
Jan