map_pages_to_xen(), modify_xen_mappings() etc. To fix this, this patch
checks the _PAGE_PRESENT and _PAGE_PSE flags of the corresponding
L2/L3 entry after the spinlock is obtained.
Signed-off-by: Min He
Signed-off-by: Yi Zhang
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
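The re-check pattern described above, as a minimal self-contained
sketch (the types, the lock and free_l1_table() are stand-ins, not
the actual Xen patch):

    #include <stdint.h>

    #define _PAGE_PRESENT 0x001
    #define _PAGE_PSE     0x080

    typedef struct { uint64_t flags; } l2_pgentry_t;      /* stand-in */
    extern void spin_lock(void *lock), spin_unlock(void *lock);
    extern void *map_pgdir_lock;                          /* stand-in */
    extern void free_l1_table(l2_pgentry_t *pl2e);        /* stand-in */

    /* Re-validate the L2 entry under the lock: another CPU may have
     * replaced the L1 table with a 2M superpage in the meantime, in
     * which case there is no L1 table left to free, and freeing it
     * twice is exactly the race being fixed. */
    static void maybe_free_l1(l2_pgentry_t *pl2e)
    {
        spin_lock(map_pgdir_lock);
        if ( (pl2e->flags & _PAGE_PRESENT) && !(pl2e->flags & _PAGE_PSE) )
            free_l1_table(pl2e);
        spin_unlock(map_pgdir_lock);
    }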
Please try to have a cover letter
reference a superpage.
Therefore the logic to enumerate the L1/L2 page table and to
reset the corresponding L2/L3 PTE needs to be protected with a
spinlock, and the _PAGE_PRESENT and _PAGE_PSE flags need to be
checked after the lock is obtained.
Signed-off-by: Yu Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper
, for the corresponding L2/L3 entry.
Signed-off-by: Min He
Signed-off-by: Yi Zhang
Signed-off-by: Yu Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper
Changes in v3:
According to comments from Jan Beulich:
- use a local variable instead of dereferencing the pointer to pte to check the flag.
- also check
On 11/13/2017 5:31 PM, Jan Beulich wrote:
On 10.11.17 at 15:05, wrote:
On 11/10/2017 5:49 PM, Jan Beulich wrote:
I'm not certain this is important enough a fix to consider for 4.10,
and you seem to think it's good enough if this gets applied only
after the tree would be branched, as you didn
On 11/10/2017 5:49 PM, Jan Beulich wrote:
On 10.11.17 at 08:18, wrote:
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4844,9 +4844,19 @@ int map_pages_to_xen(
{
unsigned long base_mfn;
-pl1e = l2e_to_l1e(*pl2e);
if ( l
On 11/10/2017 5:57 PM, Jan Beulich wrote:
On 10.11.17 at 08:18, wrote:
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5097,6 +5097,17 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
*/
if ( (nf & _PAGE_PRESENT) || ((v != e) && (
lock, and checking the PSE flag of the `pl2e`.
Note: the PSE flag of `pl3e` is also checked before its re-consolidation,
for the same reason as for `pl2e`: we cannot presume the contents
of the target superpage.
Signed-off-by: Min He
Signed-off-by: Yi Zhang
Signed-off-by: Yu Zhang
---
Cc: Jan
Otherwise, the paging structure may be freed more than once if
the same routine is invoked simultaneously on different CPUs.
Signed-off-by: Yu Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper
---
xen/arch/x86/mm.c | 31 +++
1 file changed, 31 insertions(+)
diff --git a/xen
` with the lock will fix this race condition.
Signed-off-by: Min He
Signed-off-by: Yi Zhang
Signed-off-by: Yu Zhang
Oh, one more thing: Is it really the case that all three of you
contributed to the patch? We don't use the Linux model of
everyone through whose hands a patch passes adding a
On 11/9/2017 5:19 PM, Jan Beulich wrote:
On 09.11.17 at 16:29, wrote:
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4844,9 +4844,10 @@ int map_pages_to_xen(
{
unsigned long base_mfn;
-pl1e = l2e_to_l1e(*pl2e);
if ( lo
Signed-off-by: Yi Zhang
Signed-off-by: Yu Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper
---
xen/arch/x86/mm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a20fdca..9c9afa1 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
On 8/22/2017 8:44 PM, Julien Grall wrote:
Hi,
On 22/08/17 11:22, Yu Zhang wrote:
On 8/21/2017 6:15 PM, Julien Grall wrote:
Hi Paul,
On 21/08/17 11:11, Paul Durrant wrote:
-Original Message-
From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of
Julien Grall
Sent
face to support XenGT (v7)
- XEN-43
- Yu Zhang
- Paul Durrant
I think this is either done or obsolete now. Not sure which.
CCed Yu Zhang to tell which one.
Thanks, Julien. This is done now. :)
Yu
Cheers,
On 8/15/2017 6:28 PM, Andrew Cooper wrote:
On 15/08/17 04:18, Boqun Feng (Intel) wrote:
Add a "umip" test for the User-Model Instruction Prevention. The test
simply tries to run sgdt/sidt/sldt/str/smsw in guest user-mode with
CR4_UMIP = 1.
Signed-off-by: Boqun Feng (Intel)
Reviewed-by: Andrew Cooper
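The probe such a test performs is, in essence, the following (a
sketch of one of the five instructions; the descriptor struct is
local to the example):

    #include <stdint.h>

    struct __attribute__((packed)) desc_ptr {
        uint16_t limit;
        uint64_t base;
    };

    /* Execute SGDT from user mode. With CR4.UMIP set this should
     * raise #GP(0); a test would install a fault handler and verify
     * that the fault is actually seen. */
    static void probe_sgdt(void)
    {
        struct desc_ptr dtr;

        asm volatile ( "sgdt %0" : "=m" (dtr) );
    }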
On 7/20/2017 7:24 PM, Andrew Cooper wrote:
On 20/07/17 11:36, Yu Zhang wrote:
On 7/20/2017 6:42 PM, Andrew Cooper wrote:
On 20/07/17 11:10, Yu Zhang wrote:
On 7/17/2017 6:53 PM, Juergen Gross wrote:
Hey,
I took a few notes at the 5-level-paging session at the summit.
I hope there
On 7/20/2017 6:42 PM, Andrew Cooper wrote:
On 20/07/17 11:10, Yu Zhang wrote:
On 7/17/2017 6:53 PM, Juergen Gross wrote:
Hey,
I took a few notes at the 5-level-paging session at the summit.
I hope there isn't any major stuff missing...
Participants (at least naming the active
On 7/17/2017 6:53 PM, Juergen Gross wrote:
Hey,
I took a few notes at the 5-level-paging session at the summit.
I hope there isn't any major stuff missing...
Participants (at least naming the active ones): Andrew Cooper,
Jan Beulich, Yu Zhang and myself (the list is just from my m
On 5/10/2017 12:29 AM, Jan Beulich wrote:
On 05.04.17 at 10:59, wrote:
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -411,14 +411,17 @@ static int dm_op(domid_t domid,
while ( read_atomic(&p2m->ioreq.entry_count) &&
first_gfn <= p2m->max_mapped
On 5/8/2017 7:12 PM, George Dunlap wrote:
On 08/05/17 11:52, Zhang, Xiong Y wrote:
On 06.05.17 at 03:51, wrote:
On 05.05.17 at 05:52, wrote:
'commit 1679e0df3df6 ("x86/ioreq server: asynchronously reset
outstanding p2m_ioreq_server entries")' will call
p2m_change_entry_type_global() which
On 4/28/2017 3:45 PM, Zhang, Xiong Y wrote:
I found that this patch doesn't work; the reason is inline. And we need
to propose a fix for this.
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 7e0da81..d72b7bd 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -384,15 +384,5
On 4/20/2017 6:23 PM, Andrew Cooper wrote:
On 20/04/17 11:10, Yu Zhang wrote:
On 4/20/2017 6:01 PM, Jan Beulich wrote:
On 20.04.17 at 11:53, wrote:
On 4/20/2017 5:47 PM, Jan Beulich wrote:
On 20.04.17 at 09:15, wrote:
And back to the schedule of this feature, are you working on it?
Or
On 4/20/2017 6:01 PM, Jan Beulich wrote:
On 20.04.17 at 11:53, wrote:
On 4/20/2017 5:47 PM, Jan Beulich wrote:
On 20.04.17 at 09:15, wrote:
And back to the schedule of this feature, are you working on it? Or any
specific plan?
Well, the HVM side is basically ready (as said, the single hun
On 4/20/2017 5:47 PM, Jan Beulich wrote:
On 20.04.17 at 09:15, wrote:
And back to the schedule of this feature, are you working on it? Or any
specific plan?
Well, the HVM side is basically ready (as said, the single hunk needed
to support UMIP when hardware supports it could be easily split
On 4/19/2017 10:09 PM, Andrew Cooper wrote:
On 19/04/17 15:07, Jan Beulich wrote:
On 19.04.17 at 15:58, wrote:
On 19/04/17 14:50, Yu Zhang wrote:
On 4/19/2017 9:34 PM, Jan Beulich wrote:
On 19.04.17 at 13:44, wrote:
On 4/19/2017 7:19 PM, Jan Beulich wrote:
On 19.04.17 at 11:48, wrote
On 4/19/2017 9:34 PM, Jan Beulich wrote:
On 19.04.17 at 13:44, wrote:
On 4/19/2017 7:19 PM, Jan Beulich wrote:
On 19.04.17 at 11:48, wrote:
Does hypervisor need to differentiate dom0 kernel and its
user space?
If we want to para-virtualize the feature, then yes. Otherwise
we can't assume
On 4/19/2017 7:19 PM, Jan Beulich wrote:
On 19.04.17 at 11:48, wrote:
On 4/19/2017 5:18 PM, Jan Beulich wrote:
On 19.04.17 at 10:48, wrote:
I saw that commit 8c14e5f provides emulations for UMIP affected
instructions. But I realized that xen does not have logic to expose UMIP
feature to
On 4/19/2017 5:59 PM, Andrew Cooper wrote:
On 19/04/17 10:48, Yu Zhang wrote:
On 4/19/2017 5:18 PM, Jan Beulich wrote:
On 19.04.17 at 10:48, wrote:
I saw that commit 8c14e5f provides emulations for UMIP affected
instructions. But I realized that xen does not have logic to expose UMIP
On 4/19/2017 5:18 PM, Jan Beulich wrote:
On 19.04.17 at 10:48, wrote:
I saw that commit 8c14e5f provides emulations for UMIP affected
instructions. But I realized that xen does not have logic to expose UMIP
feature to guests - you have sent out one in
https://lists.xenproject.org/archives/h
Hi Jan,
I saw that commit 8c14e5f provides emulations for UMIP affected
instructions. But I realized that xen does not have logic to expose UMIP
feature to guests - you have sent out one in
https://lists.xenproject.org/archives/html/xen-devel/2016-12/msg00552.html
to emulate the cpuid leaf, b
and p2m-ept.c).
Signed-off-by: Yu Zhang
Signed-off-by: George Dunlap
Reviewed-by: Paul Durrant
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jun Nakajima
Cc: Kevin Tian
Note: this is patch 5/6 of the ioreq server patch series v12,
with the other patches having been revi
t as type p2m_ioreq_server; if not, we reset
it to p2m_ram as appropriate.
To avoid code duplication, lift recalc_type() out of p2m-pt.c and use
it for all type recalculations (both in p2m-pt.c and p2m-ept.c).
Signed-off-by: Yu Zhang
Signed-off-by: George Dunlap
Reviewed-by: Paul Durrant
---
xen/a
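The shape of the lifted helper, as a self-contained sketch (the enum,
the struct and the logdirty check are stand-ins; the real function in
Xen's p2m code handles more cases):

    #include <stdbool.h>

    typedef enum { p2m_ram_rw, p2m_ram_logdirty, p2m_ioreq_server } p2m_type_t;
    struct p2m_domain { struct { const void *server; } ioreq; };
    extern bool p2m_is_logdirty_range(struct p2m_domain *p2m,
                                      unsigned long start, unsigned long end);

    /* Given that a recalculation is pending, decide an entry's new type. */
    static p2m_type_t recalc_type(bool recalc, p2m_type_t t,
                                  struct p2m_domain *p2m, unsigned long gfn)
    {
        if ( !recalc )
            return t;

        switch ( t )
        {
        case p2m_ioreq_server:
            /* Keep the type while an ioreq server still claims it;
             * reset it to plain RAM otherwise. */
            return p2m->ioreq.server ? p2m_ioreq_server : p2m_ram_rw;
        default:
            /* Everything else resolves against the logdirty ranges. */
            return p2m_is_logdirty_range(p2m, gfn, gfn)
                   ? p2m_ram_logdirty : p2m_ram_rw;
        }
    }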
On 4/7/2017 7:28 PM, Jan Beulich wrote:
On 07.04.17 at 12:50, wrote:
On 4/7/2017 6:28 PM, George Dunlap wrote:
On 07/04/17 11:14, Yu Zhang wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
On 4/7/2017 6:26 PM, Jan Beulich wrote:
On 07.04.17 at 11:53, wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long
On 4/7/2017 6:28 PM, George Dunlap wrote:
On 07/04/17 11:14, Yu Zhang wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain
*p2m
On 4/7/2017 6:22 PM, George Dunlap wrote:
On 07/04/17 10:53, Yu Zhang wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain
*p2m
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
e.ipat = ipat;
if ( e.
mapped. And
since the sweeping of the p2m table could be time-consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
Reviewed-by: Jan Beulich
Reviewed-by: George Dunlap
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
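The hypercall-continuation idea mentioned here, sketched with
stand-in names (the batch size and the error convention are
assumptions, not the hypervisor's actual values):

    #define SWEEP_BATCH 256   /* assumed entries handled per invocation */
    #define ERESTART    85    /* stand-in for Xen's continuation errno  */

    extern void reset_one_entry(unsigned long gfn);       /* stand-in */

    /* Sweep at most SWEEP_BATCH entries, then ask the hypercall layer
     * to create a continuation, so a huge p2m cannot monopolise the
     * CPU inside a single hypercall. */
    static int sweep_p2m(unsigned long *first_gfn, unsigned long last_gfn)
    {
        unsigned long end = *first_gfn + SWEEP_BATCH;

        while ( *first_gfn <= last_gfn && *first_gfn < end )
            reset_one_entry((*first_gfn)++);

        /* Non-zero: re-enter later with the updated first_gfn. */
        return ( *first_gfn <= last_gfn ) ? -ERESTART : 0;
    }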
p2m table. The core reason is our current
implementation of p2m_change_entry_type_global() lacks information
to resync p2m_ioreq_server entries correctly if global_logdirty is
on.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Geo
because both reads and writes will go to the device model.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
changes in v4:
- Added "Reviewed-by: Jan Beulich " with one comment
change in hvm
led.
b> only after one ioreq server claims its ownership of p2m_ioreq_server,
will the p2m type change to p2m_ioreq_server be allowed.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim Deegan
Reviewed-by: Jan Beulich
Reviewed-by: George Dunlap
---
Note: this patch shall be a
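Point b> amounts to a guard of roughly this shape (sketch; the
bookkeeping field p2m->ioreq.server is an assumption about how the
series tracks the claiming server):

    typedef enum { p2m_ram_rw, p2m_ioreq_server } p2m_type_t;
    struct p2m_domain { struct { const void *server; } ioreq; };
    #define EINVAL 22   /* stand-in */

    /* Refuse to set an entry to p2m_ioreq_server while no ioreq
     * server has claimed the type via the map DMOP. */
    static int check_type_change(const struct p2m_domain *p2m, p2m_type_t nt)
    {
        if ( nt == p2m_ioreq_server && !p2m->ioreq.server )
            return -EINVAL;
        return 0;
    }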
server X. This wrapper shall be updated when such a change
is made.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
Acked-by: Wei Liu
---
Cc: Paul Durrant
Cc: Ian Jackson
Cc: Wei Liu
changes in v3:
- Added "Acked-by: Wei Liu ".
changes in v2:
- According to Paul and Wei's
). Later, a per-event-channel
lock was introduced in commit de6acb7 for sending events, so we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
changes in v4:
- According to comments from Jan
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
hvm operations to let one ioreq server claim its ownership of ram
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (6):
x86/ioreq server
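In the emulation path this means a dispatch of roughly the following
shape (sketch; the helper names and the flag are stand-ins for the
series' actual bookkeeping):

    struct hvm_ioreq_server;
    typedef struct ioreq ioreq_t;
    enum { IOREQ_READ, IOREQ_WRITE };
    enum { p2m_ram_rw, p2m_ioreq_server };
    #define MEM_ACCESS_WRITE (1u << 1)   /* stand-in flag */

    extern struct hvm_ioreq_server *get_claiming_server(unsigned int *flags);
    extern int send_to_ioreq_server(struct hvm_ioreq_server *s, ioreq_t *p);
    extern int handle_as_ram(ioreq_t *p);

    /* Writes to a claimed p2m_ioreq_server page go to the owning
     * server; reads, and pages nobody claims, behave as plain RAM. */
    static int dispatch_access(int p2mt, int dir, ioreq_t *p)
    {
        if ( p2mt == p2m_ioreq_server && dir == IOREQ_WRITE )
        {
            unsigned int flags;
            struct hvm_ioreq_server *s = get_claiming_server(&flags);

            if ( s && (flags & MEM_ACCESS_WRITE) )
                return send_to_ioreq_server(s, p);
        }
        return handle_as_ram(p);
    }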
On 4/6/2017 10:25 PM, George Dunlap wrote:
On 06/04/17 14:19, Yu Zhang wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
asynchronously with the current p2m_change_entry_type_global()
interface.
New
Sorry, forgot cc.
Please ignore this thread.
Yu
On 4/6/2017 9:18 PM, Yu Zhang wrote:
A new device model wrapper is added for the newly introduced
DMOP - XEN_DMOP_map_mem_type_to_ioreq_server.
Since currently this DMOP only supports the emulation of write
operations, attempts to trigger the
server X. This wrapper shall be updated when such a change
is made.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
---
Cc: Paul Durrant
Cc: Ian Jackson
Cc: Wei Liu
changes in v2:
- According to Paul and Wei's comments: drop the compat wrapper changes.
- Added "Reviewed-by: Pa
). Later, a per-event-channel
lock was introduced in commit de6acb7 for sending events, so we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
xen/arch/x86/hvm/hvm.c | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/xen/arch
On 4/6/2017 3:48 PM, Jan Beulich wrote:
On 05.04.17 at 20:04, wrote:
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -288,6 +288,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
put_gfn(d, gmfn);
return 1;
}
+if ( unlikely(p2mt == p2m_i
On 4/6/2017 2:02 AM, Yu Zhang wrote:
On 4/6/2017 1:28 AM, Yu Zhang wrote:
On 4/6/2017 1:18 AM, Yu Zhang wrote:
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10
On 4/6/2017 1:28 AM, Yu Zhang wrote:
On 4/6/2017 1:18 AM, Yu Zhang wrote:
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr
On 4/6/2017 1:18 AM, Yu Zhang wrote:
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang
wrote
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang
wrote:
After an ioreq server has unmapped, the
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang
wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This
On 4/5/2017 11:11 PM, George Dunlap wrote:
On 05/04/17 16:10, George Dunlap wrote:
On 05/04/17 09:59, Yu Zhang wrote:
Previously, p2m_finish_type_change() was triggered to iterate and
clean up the p2m table when an ioreq server unmaps from memory type
HVMMEM_ioreq_server. And the current
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
asynchronously with the current p2m_change_entry_type_global
On 4/5/2017 6:46 PM, Jan Beulich wrote:
On 05.04.17 at 12:26, wrote:
On 4/5/2017 6:33 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 06:21:16PM +0800, Yu Zhang wrote:
So this series is OK for merge. And with compat wrapper dropped while
committing,
we do not need to send the V11, right
On 4/5/2017 6:33 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 06:21:16PM +0800, Yu Zhang wrote:
On 4/5/2017 6:08 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 02:53:42PM +0800, Yu Zhang wrote:
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote
On 4/5/2017 6:20 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 11:08:46AM +0100, Wei Liu wrote:
On Wed, Apr 05, 2017 at 02:53:42PM +0800, Yu Zhang wrote:
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote:
-Original Message-
From: Yu
On 4/5/2017 6:08 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 02:53:42PM +0800, Yu Zhang wrote:
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote:
-Original Message-
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: 02 April 2017
On 4/5/2017 5:21 PM, Jan Beulich wrote:
On 05.04.17 at 08:53, wrote:
Or, with the other patches having received "Reviewed-by", we can just
drop the useless code of this patch.
Any suggestions?
Without the libxc wrapper, the new DMOP is effectively dead code
too. All or nothing, imo.
Thanks Jan. But I
change is performed for the just-finished
iterations, which means p2m_finish_type_change() will return quite
soon. So in such a scenario, we can allow the p2m iteration to continue
without checking for hypercall pre-emption.
Signed-off-by: Yu Zhang
---
Note: this patch shall only be accepted after
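Reconstructed from the fragments quoted elsewhere in these threads,
the loop in question looks roughly like this (the batch size and the
exact helper signatures are assumptions):

    while ( read_atomic(&p2m->ioreq.entry_count) &&
            first_gfn <= p2m->max_mapped_pfn )
    {
        /* Reset a fixed-size chunk of p2m_ioreq_server entries. */
        p2m_finish_type_change(d, _gfn(first_gfn), 256,
                               p2m_ioreq_server, p2m_ram_rw);
        first_gfn += 256;

        /* Only consider a continuation while more work remains; if
         * the chunk just finished dropped entry_count to zero, the
         * loop condition above ends the sweep on its own. */
        if ( first_gfn <= p2m->max_mapped_pfn &&
             hypercall_preempt_check() )
        {
            rc = -ERESTART;
            break;
        }
    }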
we need to guarantee
the p2m table is clean before another ioreq server is mapped. And
since the sweeping of the p2m table could be time-consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
albeit I think ...
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen
On 4/3/2017 10:36 PM, Jan Beulich wrote:
So this produces the same -EINVAL as the earlier check in context
above. I think it would be nice if neither did - -EINUSE for the first
(which we don't have, so -EOPNOTSUPP would seem the second
best option there) and -EBUSY for the second would seem mo
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote:
-Original Message-
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: 02 April 2017 13:24
To: xen-devel@lists.xen.org
Cc: zhiyuan...@intel.com; Paul Durrant; Ian Jackson; Wei Liu
mapped. And
since the sweeping of the p2m table could be time-consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
changes in v3:
- According to comments from Paul: use max_nr, instead of
led.
b> only after one ioreq server claims its ownership of p2m_ioreq_server,
will the p2m type change to p2m_ioreq_server be allowed.
c> this patch shall be accepted together with the following ones in
this series.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim Deegan
server X. This wrapper shall be updated when such a change
is made.
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Ian Jackson
Cc: Wei Liu
---
tools/libs/devicemodel/core.c | 25 +
tools/libs/devicemodel/include/xendevicemodel.h | 18
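Device-model-side usage would then look something like this (sketch;
the wrapper and constant names are educated guesses from the header
and the DMOP named in these messages, and dmod, domid and ioservid
are assumed to have been set up already):

    #include <xendevicemodel.h>

    /* Claim write accesses to HVMMEM_ioreq_server pages for this
     * server; passing flags == 0 would disclaim the ownership again. */
    int rc = xendevicemodel_map_mem_type_to_ioreq_server(
                 dmod, domid, ioservid,
                 HVMMEM_ioreq_server,
                 XEN_DMOP_IOREQ_MEM_ACCESS_WRITE);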
On 3/24/2017 5:37 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Wednesday, March 22, 2017 6:12 PM
On 3/22/2017 4:10 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Tuesday, March 21, 2017 10:53 AM
After an ioreq server has
On 3/24/2017 6:37 PM, Jan Beulich wrote:
On 24.03.17 at 10:05, wrote:
On 3/23/2017 5:00 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:29 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -949,6 +949
On 3/24/2017 6:19 PM, Jan Beulich wrote:
On 24.03.17 at 10:05, wrote:
On 3/23/2017 4:57 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:21 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
@@ -177,8 +178,64 @@ static int hvmemul_do_io(
break;
ca
On 3/24/2017 5:26 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Wednesday, March 22, 2017 6:13 PM
On 3/22/2017 3:49 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Tuesday, March 21, 2017 10:53 AM
A new DMOP
On 3/23/2017 5:02 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:39 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -385,16 +385,51 @@ static int dm_op(domid_t domid,
case XEN_DMOP_map_mem_type_t
On 3/23/2017 5:00 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:29 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -949,6 +949,14 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d,
ioservid_t
On 3/23/2017 4:57 PM, Jan Beulich wrote:
On 23.03.17 at 04:23, wrote:
On 3/22/2017 10:21 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
---
xen/arch/x86/hvm/dm.c | 37 ++--
xen/arch/x86/hvm/emulate.c | 65 ---
xen
On 3/22/2017 10:21 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
---
xen/arch/x86/hvm/dm.c | 37 ++--
xen/arch/x86/hvm/emulate.c | 65 ---
xen/arch/x86/hvm/ioreq.c | 38 +
xen/arch/x86/mm/h
On 3/22/2017 10:39 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -385,16 +385,51 @@ static int dm_op(domid_t domid,
case XEN_DMOP_map_mem_type_to_ioreq_server:
{
-const struct xen_dm_op_map_mem_type_to_io
On 3/22/2017 10:29 PM, Jan Beulich wrote:
On 21.03.17 at 03:52, wrote:
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -949,6 +949,14 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock
On 3/22/2017 3:49 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Tuesday, March 21, 2017 10:53 AM
A new DMOP - XEN_DMOP_map_mem_type_to_ioreq_server, is added to let
one ioreq server claim/disclaim its responsibility for the handling of guest
pages with p2m
On 3/22/2017 4:10 PM, Tian, Kevin wrote:
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: Tuesday, March 21, 2017 10:53 AM
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
asynchronously with the
On 3/21/2017 9:49 PM, Paul Durrant wrote:
-Original Message-
[snip]
+if ( (first_gfn > 0) || (data->flags == 0 && rc == 0) )
+{
+struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+while ( read_atomic(&p2m->ioreq.entry_count) &&
+
On 3/21/2017 6:00 PM, Paul Durrant wrote:
-Original Message-
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: 21 March 2017 02:53
To: xen-devel@lists.xen.org
Cc: zhiyuan...@intel.com; Paul Durrant; Jan Beulich; Andrew Cooper;
George Dunlap
Subject: [PATCH v9 5/5] x86/ioreq
mapped. And
since the sweeping of the p2m table could be time-consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
changes in v2:
- According to comments from Jan and Andrew: do not use the
HVMOP
both XEN_DMOP_map_mem_type_to_ioreq_server and p2m_ioreq_server
are only supported for HVMs with HAP enabled.
Also note that only after one ioreq server claims its ownership
of p2m_ioreq_server, will the p2m type change to p2m_ioreq_server
be allowed.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim Deegan
because both reads and writes will go to the device model.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
changes in v3:
- According to comments from Jan: clarify comments in hvmemul_do_io().
changes in v2:
- According to
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jun Nakajima
Cc: Kevin Tian
changes in v4:
- According to comments from Jan: use ASSERT() instead of 'if'
condition in p2m_change_type_one().
- According to comments from Jan:
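The ASSERT() change mentioned first follows the usual hardening
pattern (illustrative only; the exact condition in
p2m_change_type_one() is not visible in these previews):

    /* Before: silently tolerate a state callers must never produce. */
    if ( ot == nt )
        return;

    /* After: document the invariant and catch violations in debug
     * builds. */
    ASSERT(ot != nt);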
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
hvm operations to let one ioreq server claim its ownership of ram
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (5):
x86/ioreq server