-----Original Message-----
From: Yu, Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: 26 February 2016 06:59
To: Paul Durrant; xen-de...@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] docs/design: introduce
HVMMEM_ioreq_serverX types
Hi Paul,
Thanks a lot for your help on this! And below are my questions.
On 2/25/2016 11:49 PM, Paul Durrant wrote:
This patch adds a new 'designs' subdirectory under docs as a repository
for this and future design proposals.
Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
---
For convenience this document can also be viewed in PDF at:
http://xenbits.xen.org/people/pauldu/hvmmem_ioreq_server.pdf
---
 docs/designs/hvmmem_ioreq_server.md | 63 +++++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
create mode 100755 docs/designs/hvmmem_ioreq_server.md
diff --git a/docs/designs/hvmmem_ioreq_server.md b/docs/designs/hvmmem_ioreq_server.md
new file mode 100755
index 0000000..47fa715
--- /dev/null
+++ b/docs/designs/hvmmem_ioreq_server.md
@@ -0,0 +1,63 @@
+HVMMEM\_ioreq\_serverX
+----------------------
+
+Background
+==========
+
+The concept of the IOREQ server was introduced to allow multiple distinct
+device emulators to serve a single VM. The XenGT project uses an IOREQ
+server to provide mediated pass-through of Intel GPUs to guests and, as
+part of the mediation, needs to intercept accesses to GPU page-tables (or
+GTTs) that reside in guest RAM.
+
+The current implementation of this sets the type of GTT pages to type
+HVMMEM\_mmio\_write\_dm, which causes Xen to emulate writes to such pages,
+and then maps the guest physical addresses of those pages to the XenGT
+IOREQ server using the HVMOP\_map\_io\_range\_to\_ioreq\_server hypercall.
+However, because the number of GTTs is potentially large, using this
+approach does not scale well.
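
For illustration, the existing flow might look roughly like this from the
emulator's side, using the libxenctrl wrappers for the hypercalls named
above. This is only a sketch: signatures can differ between Xen versions,
and track_gtt_page(), gfn and id are placeholder names rather than actual
XenGT code.

#include <xenctrl.h>

/* Write-protect one shadowed GTT page and steer the emulated writes to
 * the given IOREQ server, registering one range per page (the approach
 * that does not scale well). */
static int track_gtt_page(xc_interface *xch, domid_t domid,
                          ioservid_t id, uint64_t gfn)
{
    uint64_t start = gfn << XC_PAGE_SHIFT;
    uint64_t end = start + XC_PAGE_SIZE - 1;
    int rc;

    /* Make writes to the page trap to Xen for emulation. */
    rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_mmio_write_dm, gfn, 1);
    if (rc < 0)
        return rc;

    /* Route the emulated writes to this IOREQ server as an MMIO range. */
    return xc_hvm_map_io_range_to_ioreq_server(xch, domid, id,
                                               1 /* MMIO */, start, end);
}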
+
+Proposal
+========
+
+Because the number of spare types available in the P2M type-space is
+currently very limited, it is proposed that HVMMEM\_mmio\_write\_dm be
+replaced by a single new type HVMMEM\_ioreq\_server. In future, if the
+P2M type-space is increased, this can be renamed to HVMMEM\_ioreq\_server0
+and new HVMMEM\_ioreq\_server1, HVMMEM\_ioreq\_server2, etc. types can
+be added.
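
Concretely, the change to the public type enumeration might look something
like the sketch below; the surrounding entries are those currently in
xen/include/public/hvm/hvm_op.h, but the exact layout after the proposal
is an assumption.

typedef enum {
    HVMMEM_ram_rw,        /* normal read/write guest RAM */
    HVMMEM_ram_ro,        /* read-only; writes are discarded */
    HVMMEM_mmio_dm,       /* reads and writes go to the device model */
    HVMMEM_ioreq_server   /* would take the slot of HVMMEM_mmio_write_dm */
} hvmmem_type_t;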
+
+Accesses to a page of type HVMMEM\_ioreq\_serverX should be the same as
+HVMMEM\_ram\_rw until the type is _claimed_ by an IOREQ server. Furthermore
Sorry, do you mean that even when a gfn is set to the HVMMEM_ioreq_serverX
type, its access rights in the P2M still remain unchanged? So the new
hypercall pair, HVMOP_[un]map_mem_type_to_ioreq_server, is also responsible
for the PTE updates on the access bits?
If that is true, I'm afraid this would be time-consuming, because the
map/unmap will have to traverse all the P2M structures to detect the PTEs
with the HVMMEM_ioreq_serverX flag set. Yet in XenGT, setting this flag is
triggered dynamically with the construction/destruction of shadow PPGTTs.
But I'm not sure how severe the performance penalty would be, given the
frequent EPT table walks and EPT TLB flushes.
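
To make sure I understand the intended usage, the flow I have in mind is
roughly the sketch below. xc_hvm_set_mem_type() already exists; the "claim"
wrapper is hypothetical, named after the proposed
HVMOP_map_mem_type_to_ioreq_server hypercall, and its parameters (and the
HVMMEM_ioreq_server placeholder) are my assumptions, not an actual API.

#include <xenctrl.h>

/* Placeholder: the proposal would reuse the HVMMEM_mmio_write_dm slot. */
#define HVMMEM_ioreq_server HVMMEM_mmio_write_dm

/* Hypothetical libxenctrl wrapper for the proposed hypercall. */
int xc_hvm_map_mem_type_to_ioreq_server(xc_interface *xch, domid_t domid,
                                        ioservid_t id, uint16_t type);

static int setup_gtt_tracking(xc_interface *xch, domid_t domid,
                              ioservid_t id, const uint64_t *gfns,
                              unsigned int nr)
{
    unsigned int i;
    int rc;

    /* Claim the type once, instead of mapping one range per GTT page. */
    rc = xc_hvm_map_mem_type_to_ioreq_server(xch, domid, id,
                                             HVMMEM_ioreq_server);
    if (rc < 0)
        return rc;

    /* Mark each shadowed GTT page; writes to them should now be sent to
     * the IOREQ server identified by 'id'. */
    for (i = 0; i < nr; i++) {
        rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ioreq_server,
                                 gfns[i], 1);
        if (rc < 0)
            return rc;
    }

    return 0;
}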