Using memcpy() may result in multiple individual byte accesses
(depending on how memcpy() is implemented and how the resulting insns,
e.g. REP MOVSB, get carried out in hardware), which isn't what we
want/need for carrying out guest insns as correctly as possible. Fall
back to memcpy() only for misaligned accesses as well as ones that are
not 2, 4, or 8 bytes in size.

Suggested-by: Andrew Cooper <andrew.coop...@citrix.com>
Signed-off-by: Jan Beulich <jbeul...@suse.com>
---
RFC: Besides wanting to hear whether this is considered acceptable and
     sufficient (or whether the linear_write() path is thought to also
     need playing with), the question is whether we'd want to extend
     this to reads as well. linear_{read,write}() currently don't use
     hvmemul_map_linear_addr(), i.e. in both cases I'd also need to
     fiddle with __hvm_copy() (perhaps by making the construct below a
     helper function; a sketch of that idea follows).

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1352,7 +1352,14 @@ static int hvmemul_write(
     if ( !mapping )
         return linear_write(addr, bytes, p_data, pfec, hvmemul_ctxt);
 
-    memcpy(mapping, p_data, bytes);
+    /* For aligned accesses use single (and hence atomic) MOV insns. */
+    switch ( bytes | ((unsigned long)mapping & (bytes - 1)) )
+    {
+    case 2: write_u16_atomic(mapping, *(uint16_t *)p_data); break;
+    case 4: write_u32_atomic(mapping, *(uint32_t *)p_data); break;
+    case 8: write_u64_atomic(mapping, *(uint64_t *)p_data); break;
+    default: memcpy(mapping, p_data, bytes);                break;
+    }
 
     hvmemul_unmap_linear_addr(mapping, addr, bytes, hvmemul_ctxt);
 
