change is performed for the just finished
iterations, which means p2m_finish_type_change() will return quite
soon. So in such a scenario, we can allow the p2m iteration to continue
without checking for hypercall preemption.
Signed-off-by: Yu Zhang
---
Note: this patch shall only be accepted after
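A rough illustration of the idea above, not the actual Xen code (every name below is a made-up stand-in): a bounded sweep that skips the preemption check when only a small tail of entries remains.
#include <stdbool.h>
#include <stdio.h>

#define SWEEP_CHUNK 256   /* entries handled between preemption checks */

/* Hypothetical stand-ins for the real p2m structures and helpers. */
static bool entry_needs_reset(unsigned long gfn) { return (gfn % 7) == 0; }
static void reset_entry(unsigned long gfn) { (void)gfn; }
static bool preemption_pending(void) { return false; /* placeholder */ }

/*
 * Sweep [start, end); return the gfn to continue from, or 'end' when done.
 * The preemption check is skipped when the remaining range is small enough
 * that finishing it is cheaper than setting up a continuation.
 */
static unsigned long sweep_range(unsigned long start, unsigned long end)
{
    unsigned long gfn;

    for ( gfn = start; gfn < end; gfn++ )
    {
        if ( entry_needs_reset(gfn) )
            reset_entry(gfn);

        /* Only consider yielding if a full chunk of work still remains. */
        if ( (gfn - start) % SWEEP_CHUNK == 0 &&
             end - gfn > SWEEP_CHUNK &&
             preemption_pending() )
            return gfn;          /* caller re-enters from here */
    }

    return end;
}

int main(void)
{
    unsigned long next = sweep_range(0, 4096);
    printf("sweep stopped at gfn %lu\n", next);
    return 0;
}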
On 4/5/2017 5:21 PM, Jan Beulich wrote:
On 05.04.17 at 08:53, wrote:
Or, with the other patches having received "Reviewed-by" tags, we can just drop the
useless code of this patch.
Any suggestions?
Without the libxc wrapper, the new DMOP is effectively dead code
too. All or nothing, imo.
Thanks Jan. But I
On 4/5/2017 6:08 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 02:53:42PM +0800, Yu Zhang wrote:
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote:
-Original Message-
From: Yu Zhang [mailto:yu.c.zh...@linux.intel.com]
Sent: 02 April 2017
On 4/5/2017 6:20 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 11:08:46AM +0100, Wei Liu wrote:
On Wed, Apr 05, 2017 at 02:53:42PM +0800, Yu Zhang wrote:
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote:
-Original Message-
From: Yu
On 4/5/2017 6:33 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 06:21:16PM +0800, Yu Zhang wrote:
On 4/5/2017 6:08 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 02:53:42PM +0800, Yu Zhang wrote:
On 4/3/2017 5:28 PM, Wei Liu wrote:
On Mon, Apr 03, 2017 at 09:13:20AM +0100, Paul Durrant wrote
On 4/5/2017 6:46 PM, Jan Beulich wrote:
On 05.04.17 at 12:26, wrote:
On 4/5/2017 6:33 PM, Wei Liu wrote:
On Wed, Apr 05, 2017 at 06:21:16PM +0800, Yu Zhang wrote:
So this series is OK to merge. And with the compat wrapper dropped while
committing,
we do not need to send a v11, right
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
asynchronously with the current p2m_change_entry_type_global
On 4/5/2017 11:11 PM, George Dunlap wrote:
On 05/04/17 16:10, George Dunlap wrote:
On 05/04/17 09:59, Yu Zhang wrote:
Previously, p2m_finish_type_change() was triggered to iterate over and
clean up the p2m table when an ioreq server unmaps from memory type
HVMMEM_ioreq_server. And the current
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang
wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang
wrote:
After an ioreq server has unmapped, the
On 4/6/2017 1:18 AM, Yu Zhang wrote:
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr 2, 2017 at 1:24 PM, Yu Zhang
wrote
On 4/6/2017 1:28 AM, Yu Zhang wrote:
On 4/6/2017 1:18 AM, Yu Zhang wrote:
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10:41 PM, George Dunlap wrote:
On Sun, Apr
On 4/6/2017 2:02 AM, Yu Zhang wrote:
On 4/6/2017 1:28 AM, Yu Zhang wrote:
On 4/6/2017 1:18 AM, Yu Zhang wrote:
On 4/6/2017 1:01 AM, George Dunlap wrote:
On 05/04/17 17:32, Yu Zhang wrote:
On 4/6/2017 12:35 AM, George Dunlap wrote:
On 05/04/17 17:22, Yu Zhang wrote:
On 4/5/2017 10
On 4/6/2017 3:48 PM, Jan Beulich wrote:
On 05.04.17 at 20:04, wrote:
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -288,6 +288,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
put_gfn(d, gmfn);
return 1;
}
+if ( unlikely(p2mt == p2m_i
server X. This wrapper shall be updated when such a change
is made.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
---
Cc: Paul Durrant
Cc: Ian Jackson
Cc: Wei Liu
changes in v2:
- According to Paul and Wei's comments: drop the compat wrapper changes.
- Added "Reviewed-by: Pa
). Later, a per-event channel
lock was introduced in commit de6acb7, to send events. So we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
xen/arch/x86/hvm/hvm.c | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/xen/arch
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
HVM operations to let one ioreq server claim ownership of RAM
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (6):
x86/ioreq server
led.
only after one ioreq server claims its ownership of p2m_ioreq_server,
will the p2m type change to p2m_ioreq_server be allowed.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim Deegan
Reviewed-by: Jan Beulich
Reviewed-by: George Dunlap
---
Note: this patch shall be a
). Later, a per-event channel
lock was introduced in commit de6acb7, to send events. So we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
xen/arch/x86/hvm/hvm.c | 7 +--
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/xen/arch
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
HVM operations to let one ioreq server claim ownership of RAM
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (6):
x86/ioreq server
led.
only after one ioreq server claims its ownership of p2m_ioreq_server,
will the p2m type change to p2m_ioreq_server be allowed.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim Deegan
Reviewed-by: Jan Beulich
Reviewed-by: George Dunlap
---
Note: this patch shall be a
because both reads and writes will go to the device model.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
changes in v4:
- Added "Reviewed-by: Jan Beulich " with one comment
change in hvm
server X. This wrapper shall be updated when such a change
is made.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
---
Cc: Paul Durrant
Cc: Ian Jackson
Cc: Wei Liu
changes in v2:
- According to Paul and Wei's comments: drop the compat wrapper changes.
- Added "Reviewed-by: Pa
mapped. And
since the sweeping of the p2m table could be time-consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
Reviewed-by: Jan Beulich
Reviewed-by: George Dunlap
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
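A minimal sketch of the hypercall-continuation pattern referred to here; the batch size, helper names and restart return value are assumptions for illustration, not the real Xen interface.
#include <stdint.h>
#include <stdio.h>

#define BATCH        512          /* entries processed per invocation */
#define RET_DONE     0
#define RET_RESTART  (-1)         /* caller should re-issue the hypercall */

/* Hypothetical per-entry work. */
static void reset_one_entry(uint64_t gfn) { (void)gfn; }

/*
 * Pseudo-hypercall: sweep up to BATCH entries starting at *iter.
 * On RET_RESTART, *iter has been advanced so the next call resumes
 * exactly where this one stopped.
 */
static int sweep_hypercall(uint64_t *iter, uint64_t nr_entries)
{
    uint64_t done = 0;

    while ( *iter < nr_entries )
    {
        reset_one_entry(*iter);
        (*iter)++;

        if ( ++done == BATCH && *iter < nr_entries )
            return RET_RESTART;
    }

    return RET_DONE;
}

int main(void)
{
    uint64_t iter = 0, total = 2000;
    int rc, calls = 0;

    do {
        rc = sweep_hypercall(&iter, total);
        calls++;
    } while ( rc == RET_RESTART );

    printf("swept %llu entries in %d calls\n",
           (unsigned long long)total, calls);
    return 0;
}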
p2m table. The core reason is that our current
implementation of p2m_change_entry_type_global() lacks information
to resync p2m_ioreq_server entries correctly if global_logdirty is
on.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Geo
Sorry, forgot cc.
Please ignore this thread.
Yu
On 4/6/2017 9:18 PM, Yu Zhang wrote:
A new device model wrapper is added for the newly introduced
DMOP - XEN_DMOP_map_mem_type_to_ioreq_server.
Since currently this DMOP only supports the emulation of write
operations, attempts to trigger the
On 4/6/2017 10:25 PM, George Dunlap wrote:
On 06/04/17 14:19, Yu Zhang wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
asynchronously with the current p2m_change_entry_type_global()
interface.
New
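Conceptually, the asynchronous part behaves like a lazy, generation-based recalculation. A toy model, with invented types and fields rather than the real p2m code:
#include <stdio.h>

enum p2m_type { RAM_RW, IOREQ_SERVER };

struct entry {
    enum p2m_type type;
    unsigned int  recalc_gen;     /* generation at which type was last valid */
};

static unsigned int global_gen;   /* bumped by a "global type change" */
static int ioreq_server_mapped;   /* is an ioreq server still attached? */

#define NR_ENTRIES 8
static struct entry table[NR_ENTRIES];

/* Cheap O(1) "global change": just invalidate everything lazily. */
static void change_entry_type_global(void) { global_gen++; }

/* Lazy fix-up on access, mirroring the recalc-on-misconfig idea. */
static enum p2m_type get_type(unsigned int i)
{
    struct entry *e = &table[i];

    if ( e->recalc_gen != global_gen )
    {
        if ( e->type == IOREQ_SERVER && !ioreq_server_mapped )
            e->type = RAM_RW;     /* reset stale ioreq_server entries */
        e->recalc_gen = global_gen;
    }
    return e->type;
}

int main(void)
{
    table[3].type = IOREQ_SERVER;
    ioreq_server_mapped = 0;      /* server has just unmapped */
    change_entry_type_global();   /* async: no sweep here */

    printf("entry 3 now %s\n",
           get_type(3) == RAM_RW ? "p2m_ram_rw" : "p2m_ioreq_server");
    return 0;
}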
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
HVM operations to let one ioreq server claim ownership of RAM
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (6):
x86/ioreq server
). Later, a per-event channel
lock was introduced in commit de6acb7, to send events. So we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
changes in v4:
- According to comments from Jan
server X. This wrapper shall be updated when such a change
is made.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
Acked-by: Wei Liu
---
Cc: Paul Durrant
Cc: Ian Jackson
Cc: Wei Liu
changes in v3:
- Added "Acked-by: Wei Liu ".
changes in v2:
- According to Paul and Wei's
because both reads and writes will go to the device model.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
changes in v4:
- Added "Reviewed-by: Jan Beulich " with one comment
change in hvm
led.
only after one ioreq server claims its ownership of p2m_ioreq_server,
will the p2m type change to p2m_ioreq_server be allowed.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim Deegan
Reviewed-by: Jan Beulich
Reviewed-by: George Dunlap
---
Note: this patch shall be a
p2m table. The core reason is that our current
implementation of p2m_change_entry_type_global() lacks information
to resync p2m_ioreq_server entries correctly if global_logdirty is
on.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Geo
mapped. And
since the sweeping of the p2m table could be time-consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang
Reviewed-by: Paul Durrant
Reviewed-by: Jan Beulich
Reviewed-by: George Dunlap
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain *p2m,
unsigned long gfn)
e.ipat = ipat;
if ( e.
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain *p2m,
unsigned long gfn)
e.ipat = ipat;
if ( e.
On 4/7/2017 6:22 PM, George Dunlap wrote:
On 07/04/17 10:53, Yu Zhang wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain
*p2m
On 4/7/2017 6:28 PM, George Dunlap wrote:
On 07/04/17 11:14, Yu Zhang wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain
*p2m
On 4/7/2017 6:26 PM, Jan Beulich wrote:
On 07.04.17 at 11:53, wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -544,6 +544,12 @@ static int resolve_misconfig(struct p2m_domain *p2m,
unsigned long
On 4/7/2017 7:28 PM, Jan Beulich wrote:
On 07.04.17 at 12:50, wrote:
On 4/7/2017 6:28 PM, George Dunlap wrote:
On 07/04/17 11:14, Yu Zhang wrote:
On 4/7/2017 5:40 PM, Jan Beulich wrote:
On 06.04.17 at 17:53, wrote:
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
t as type p2m_ioreq_server; if not, we reset
it to p2m_ram as appropriate.
To avoid code duplication, lift recalc_type() out of p2m-pt.c and use
it for all type recalculations (both in p2m-pt.c and p2m-ept.c).
Signed-off-by: Yu Zhang
Signed-off-by: George Dunlap
Reviewed-by: Paul Durrant
---
xen/a
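A very condensed sketch of what a shared recalc helper decides; the signature and the ioreq-server ownership check are stand-ins, not the actual p2m-pt.c/p2m-ept.c code.
#include <stdbool.h>
#include <stdio.h>

enum p2m_type { p2m_ram_rw, p2m_ram_logdirty, p2m_ioreq_server };

/* Stand-in: does any ioreq server currently own the p2m_ioreq_server type? */
static bool ioreq_server_owns_type(void) { return false; }

/* Stand-in: is global log-dirty mode active? */
static bool global_logdirty(void) { return false; }

/*
 * Decide the type an entry should have after recalculation.  Both the
 * p2m-pt and p2m-ept recalculation paths could share a helper like this.
 */
static enum p2m_type recalc_type(enum p2m_type cur)
{
    if ( cur == p2m_ioreq_server )
        return ioreq_server_owns_type() ? p2m_ioreq_server : p2m_ram_rw;

    /* Other changeable types follow the usual log-dirty rules. */
    return global_logdirty() ? p2m_ram_logdirty : p2m_ram_rw;
}

int main(void)
{
    printf("stale ioreq_server entry recalculates to %d (0 == p2m_ram_rw)\n",
           recalc_type(p2m_ioreq_server));
    return 0;
}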
and p2m-ept.c).
Signed-off-by: Yu Zhang
Signed-off-by: George Dunlap
Reviewed-by: Paul Durrant
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jun Nakajima
Cc: Kevin Tian
Note: this is the 5/6 patch for the ioreq server patch series v12.
with other patches got revi
On 8/16/2016 9:35 PM, George Dunlap wrote:
On 12/07/16 10:02, Yu Zhang wrote:
This patch resets p2m_ioreq_server entries back to p2m_ram_rw,
after an ioreq server has unmapped. The resync is done both
asynchronously with the current p2m_change_entry_type_global()
interface, and synchronously
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
HVM operations to let one ioreq server claim ownership of RAM
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (4):
x86/ioreq server
). Later, a per-event channel
lock was introduced in commit de6acb7, to send events. So we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
---
xen/arch/x86/hvm/hvm.c | 8 ++--
1 file changed, 2 insertions(+), 6
because both reads and writes will go to the device model.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
---
xen/arch/x86/hvm/emulate.c | 45 -
1 file changed, 44 insertions(+), 1 deletion
to be emulated or to be resynced).
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Jun Nakajima
Cc: Kevin Tian
changes in v2:
- Move the calculation of the ioreq server page entry_count into
p2m_change_type_one() so that we do not n
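The entry_count bookkeeping mentioned above can be pictured as follows (a toy version; the structure layout is invented for illustration):
#include <stdio.h>

enum p2m_type { p2m_ram_rw, p2m_ioreq_server };

struct p2m {
    unsigned long ioreq_entry_count;   /* pages currently typed ioreq_server */
};

/*
 * Toy p2m_change_type_one(): adjust the count on every transition to or
 * from p2m_ioreq_server, so "are any entries left?" is an O(1) question
 * when the ioreq server later detaches.
 */
static void change_type_one(struct p2m *p2m, enum p2m_type ot, enum p2m_type nt)
{
    if ( ot == p2m_ram_rw && nt == p2m_ioreq_server )
        p2m->ioreq_entry_count++;
    else if ( ot == p2m_ioreq_server && nt == p2m_ram_rw )
        p2m->ioreq_entry_count--;
}

int main(void)
{
    struct p2m p2m = { 0 };

    change_type_one(&p2m, p2m_ram_rw, p2m_ioreq_server);
    change_type_one(&p2m, p2m_ram_rw, p2m_ioreq_server);
    change_type_one(&p2m, p2m_ioreq_server, p2m_ram_rw);

    printf("outstanding p2m_ioreq_server entries: %lu\n",
           p2m.ioreq_entry_count);
    return 0;
}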
both HVMOP_map_mem_type_to_ioreq_server and p2m_ioreq_server
are only supported for HVMs with HAP enabled.
Also note that only after one ioreq server claims its ownership
of p2m_ioreq_server, will the p2m type change to p2m_ioreq_server
be allowed.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim Deegan
-
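The two restrictions stated above can be read as a pair of simple guards. Sketch only; the field names are placeholders, not the actual hypervisor structures.
#include <stdbool.h>
#include <stdio.h>
#include <errno.h>

struct domain {
    bool hap_enabled;        /* HAP (EPT/NPT) in use for this guest */
    int  ioreq_owner_id;     /* id of the server that claimed the type, or -1 */
};

/* Guard for the map-mem-type-to-ioreq-server operation itself. */
static int map_mem_type_to_ioreq_server(struct domain *d, int server_id)
{
    if ( !d->hap_enabled )
        return -EOPNOTSUPP;          /* shadow-paging guests not supported */

    d->ioreq_owner_id = server_id;
    return 0;
}

/* Guard for changing a page to p2m_ioreq_server. */
static int set_page_ioreq_server(struct domain *d)
{
    if ( !d->hap_enabled || d->ioreq_owner_id < 0 )
        return -EINVAL;              /* no server has claimed the type yet */

    /* ... perform the actual type change here ... */
    return 0;
}

int main(void)
{
    struct domain d = { .hap_enabled = true, .ioreq_owner_id = -1 };

    printf("before claim: %d\n", set_page_ioreq_server(&d));  /* rejected */
    map_mem_type_to_ioreq_server(&d, 1);
    printf("after claim:  %d\n", set_page_ioreq_server(&d));  /* allowed */
    return 0;
}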
On 8/30/2016 8:17 PM, Yu Zhang wrote:
On 8/16/2016 9:35 PM, George Dunlap wrote:
On 12/07/16 10:02, Yu Zhang wrote:
This patch resets p2m_ioreq_server entries back to p2m_ram_rw,
after an ioreq server has unmapped. The resync is done both
asynchronously with the current
On 9/6/2016 4:13 PM, Jan Beulich wrote:
On 06.09.16 at 10:03, wrote:
-Original Message-
From: Jan Beulich [mailto:jbeul...@suse.com]
Sent: 06 September 2016 08:58
To: George Dunlap ; Yu Zhang
Cc: Andrew Cooper ; Paul Durrant
; George Dunlap ;
JunNakajima ; Kevin Tian ;
zhiyuan
On 9/2/2016 6:47 PM, Yu Zhang wrote:
XenGT leverages ioreq server to track and forward the accesses to GPU
I/O resources, e.g. the PPGTT (per-process graphic translation tables).
Currently, the ioreq server uses a rangeset to track the BDF/PIO/MMIO ranges
to be emulated. To select an ioreq server
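Roughly, the selection problem being addressed looks like this (illustrative only; the real hvm_select_ioreq_server() walks per-server rangesets, and the p2m-type path is what this series adds):
#include <stdio.h>

enum p2m_type { p2m_ram_rw, p2m_ioreq_server };

#define NO_SERVER   (-1)
#define GPU_SERVER   1            /* hypothetical XenGT ioreq server id */

/* Old approach (sketch): look the address up in per-server MMIO ranges. */
static int select_by_range(unsigned long addr)
{
    return (addr >= 0xe0000000UL && addr < 0xe1000000UL) ? GPU_SERVER
                                                         : NO_SERVER;
}

/* New approach (sketch): the page's p2m type names its owner directly. */
static int select_by_p2m_type(enum p2m_type t, int owner_of_ioreq_type)
{
    return t == p2m_ioreq_server ? owner_of_ioreq_type : NO_SERVER;
}

int main(void)
{
    /* A guest PPGTT page marked p2m_ioreq_server goes straight to XenGT,
       without consuming one of the limited rangeset entries. */
    printf("range lookup: %d, type lookup: %d\n",
           select_by_range(0x12345000UL),
           select_by_p2m_type(p2m_ioreq_server, GPU_SERVER));
    return 0;
}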
On 9/9/2016 1:22 PM, Yu Zhang wrote:
On 9/2/2016 6:47 PM, Yu Zhang wrote:
A new HVMOP - HVMOP_map_mem_type_to_ioreq_server, is added to
let one ioreq server claim/disclaim its responsibility for the
handling of guest pages with p2m type p2m_ioreq_server. Users
of this HVMOP can specify
On 9/5/2016 9:31 PM, Jan Beulich wrote:
On 02.09.16 at 12:47, wrote:
@@ -178,8 +179,27 @@ static int hvmemul_do_io(
break;
case X86EMUL_UNHANDLEABLE:
{
-struct hvm_ioreq_server *s =
-hvm_select_ioreq_server(curr->domain, &p);
+struct hvm_ioreq
On 9/9/2016 1:24 PM, Yu Zhang wrote:
On 9/2/2016 6:47 PM, Yu Zhang wrote:
Routine hvmemul_do_io() may need to peek the p2m type of a gfn to
select the ioreq server. For example, operations on gfns with
p2m_ioreq_server type will be delivered to a corresponding ioreq
server, and this
On 9/9/2016 1:26 PM, Yu Zhang wrote:
>>> On 02.09.16 at 12:47, wrote:
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -95,6 +95,41 @@ static const struct hvm_io_handler null_handler = {
> .ops = &null_ops
> };
>
>
On 9/9/2016 1:26 PM, Yu Zhang wrote:
>>> On 02.09.16 at 12:47, wrote:
> @@ -5551,7 +5553,35 @@ static int hvmop_map_mem_type_to_ioreq_server(
> if ( rc != 0 )
> goto out;
>
> -rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type,
op.flags
On 9/9/2016 4:09 PM, Jan Beulich wrote:
On 09.09.16 at 07:55, wrote:
On 9/5/2016 9:31 PM, Jan Beulich wrote:
On 02.09.16 at 12:47, wrote:
@@ -178,8 +179,27 @@ static int hvmemul_do_io(
break;
case X86EMUL_UNHANDLEABLE:
{
-struct hvm_ioreq_server *s =
-
On 9/9/2016 4:20 PM, Jan Beulich wrote:
On 09.09.16 at 09:24, wrote:
On 9/9/2016 1:26 PM, Yu Zhang wrote:
On 02.09.16 at 12:47, wrote:
@@ -965,7 +968,8 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
if ( is_epte_valid(ept_entry) )
{
if ( (recalc || ept_entry
On 9/9/2016 5:44 PM, Jan Beulich wrote:
On 09.09.16 at 11:24, wrote:
On 9/9/2016 4:20 PM, Jan Beulich wrote:
On 09.09.16 at 09:24, wrote:
On 9/9/2016 1:26 PM, Yu Zhang wrote:
On 02.09.16 at 12:47, wrote:
@@ -965,7 +968,8 @@ static mfn_t ept_get_entry(struct p2m_domain *p2m,
if
On 9/9/2016 6:09 PM, Jan Beulich wrote:
On 09.09.16 at 11:56, wrote:
On 9/9/2016 5:44 PM, Jan Beulich wrote:
On 09.09.16 at 11:24, wrote:
On 9/9/2016 4:20 PM, Jan Beulich wrote:
On 09.09.16 at 09:24, wrote:
On 9/9/2016 1:26 PM, Yu Zhang wrote:
On 02.09.16 at 12:47, wrote:
@@ -965,7
On 9/9/2016 6:09 PM, Jan Beulich wrote:
On 09.09.16 at 11:56, wrote:
On 9/9/2016 5:44 PM, Jan Beulich wrote:
On 09.09.16 at 11:24, wrote:
On 9/9/2016 4:20 PM, Jan Beulich wrote:
On 09.09.16 at 09:24, wrote:
On 9/9/2016 1:26 PM, Yu Zhang wrote:
On 02.09.16 at 12:47, wrote:
@@ -965,7
On 9/21/2016 9:04 PM, George Dunlap wrote:
On Fri, Sep 9, 2016 at 6:51 AM, Yu Zhang wrote:
On 9/2/2016 6:47 PM, Yu Zhang wrote:
A new HVMOP - HVMOP_map_mem_type_to_ioreq_server, is added to
let one ioreq server claim/disclaim its responsibility for the
handling of guest pages with p2m type
On 9/22/2016 7:32 PM, George Dunlap wrote:
On Thu, Sep 22, 2016 at 10:12 AM, Yu Zhang wrote:
On 9/21/2016 9:04 PM, George Dunlap wrote:
On Fri, Sep 9, 2016 at 6:51 AM, Yu Zhang
wrote:
On 9/2/2016 6:47 PM, Yu Zhang wrote:
A new HVMOP - HVMOP_map_mem_type_to_ioreq_server, is added to
let
On 9/23/2016 2:06 AM, George Dunlap wrote:
On Tue, Sep 20, 2016 at 3:57 AM, Yu Zhang wrote:
Well, for the logic of p2m type recalculation, similarities between
p2m_ioreq_server
and other changeable types exceed their differences. As to the special
cases, how
about we use a macro, i.e
On 9/22/2016 5:12 PM, Yu Zhang wrote:
On 9/21/2016 9:04 PM, George Dunlap wrote:
On Fri, Sep 9, 2016 at 6:51 AM, Yu Zhang
wrote:
On 9/2/2016 6:47 PM, Yu Zhang wrote:
A new HVMOP - HVMOP_map_mem_type_to_ioreq_server, is added to
let one ioreq server claim/disclaim its responsibility for
On 9/23/2016 6:35 PM, George Dunlap wrote:
On 22/09/16 17:02, Yu Zhang wrote:
On 9/22/2016 7:32 PM, George Dunlap wrote:
On Thu, Sep 22, 2016 at 10:12 AM, Yu Zhang
wrote:
On 9/21/2016 9:04 PM, George Dunlap wrote:
On Fri, Sep 9, 2016 at 6:51 AM, Yu Zhang
wrote:
On 9/2/2016 6:47 PM, Yu
On 7/12/2016 5:02 PM, Yu Zhang wrote:
XenGT leverages ioreq server to track and forward the accesses to GPU
I/O resources, e.g. the PPGTT (per-process graphic translation tables).
Currently, the ioreq server uses a rangeset to track the BDF/PIO/MMIO ranges
to be emulated. To select an ioreq server
On 8/8/2016 11:40 PM, Jan Beulich wrote:
On 12.07.16 at 11:02, wrote:
@@ -178,8 +179,34 @@ static int hvmemul_do_io(
break;
case X86EMUL_UNHANDLEABLE:
{
-struct hvm_ioreq_server *s =
-hvm_select_ioreq_server(curr->domain, &p);
+struct hvm_iore
On 8/9/2016 12:29 AM, Jan Beulich wrote:
On 12.07.16 at 11:02, wrote:
@@ -5512,6 +5513,12 @@ static int hvmop_set_mem_type(
if ( rc )
goto out;
+if ( t == p2m_ram_rw && memtype[a.hvmmem_type] == p2m_ioreq_server )
+p2m->ioreq.entry_count++;
+
+
On 8/9/2016 4:20 PM, Paul Durrant wrote:
-Original Message-
From: Jan Beulich [mailto:jbeul...@suse.com]
Sent: 09 August 2016 09:11
To: Paul Durrant; Yu Zhang
Cc: Andrew Cooper; George Dunlap; Jun Nakajima; Kevin Tian;
zhiyuan...@intel.com; xen-devel@lists.xen.org; Tim (Xen.org
On 8/9/2016 4:13 PM, Jan Beulich wrote:
On 09.08.16 at 09:39, wrote:
On 8/9/2016 12:29 AM, Jan Beulich wrote:
On 12.07.16 at 11:02, wrote:
@@ -5512,6 +5513,12 @@ static int hvmop_set_mem_type(
if ( rc )
goto out;
+if ( t == p2m_ram_rw && memtype[a.hvmm
On 8/9/2016 5:45 PM, Jan Beulich wrote:
On 09.08.16 at 11:25, wrote:
On 8/9/2016 4:13 PM, Jan Beulich wrote:
On 09.08.16 at 09:39, wrote:
On 8/9/2016 12:29 AM, Jan Beulich wrote:
On 12.07.16 at 11:02, wrote:
@@ -5512,6 +5513,12 @@ static int hvmop_set_mem_type(
if ( rc )
On 8/8/2016 11:40 PM, Jan Beulich wrote:
On 12.07.16 at 11:02, wrote:
@@ -178,8 +179,34 @@ static int hvmemul_do_io(
break;
case X86EMUL_UNHANDLEABLE:
{
-struct hvm_ioreq_server *s =
-hvm_select_ioreq_server(curr->domain, &p);
+struct hvm_iore
On 8/10/2016 6:33 PM, Jan Beulich wrote:
On 10.08.16 at 10:09, wrote:
On 8/8/2016 11:40 PM, Jan Beulich wrote:
On 12.07.16 at 11:02, wrote:
@@ -178,8 +179,34 @@ static int hvmemul_do_io(
break;
case X86EMUL_UNHANDLEABLE:
{
-struct hvm_ioreq_server *s =
-
On 8/10/2016 6:43 PM, Paul Durrant wrote:
-Original Message-
From: Jan Beulich [mailto:jbeul...@suse.com]
Sent: 10 August 2016 11:33
To: Paul Durrant; Yu Zhang
Cc: Andrew Cooper; George Dunlap; Jun Nakajima; Kevin Tian;
zhiyuan...@intel.com; xen-devel@lists.xen.org; Tim (Xen.org
On 8/10/2016 6:43 PM, Yu Zhang wrote:
On 8/10/2016 6:33 PM, Jan Beulich wrote:
On 10.08.16 at 10:09, wrote:
On 8/8/2016 11:40 PM, Jan Beulich wrote:
On 12.07.16 at 11:02, wrote:
@@ -178,8 +179,34 @@ static int hvmemul_do_io(
break;
case X86EMUL_UNHANDLEABLE
On 8/11/2016 4:58 PM, Jan Beulich wrote:
On 11.08.16 at 10:47, wrote:
On 8/10/2016 6:43 PM, Yu Zhang wrote:
For " && p2mt != p2m_ioreq_server" condition, it is just to guarantee
that if a write
operation is trapped, and at the same period, device model changed the
status o
Hi Jan,
Previously I saw your UMIP patches merged in Xen, and we'd like to
run some unit tests here at Intel. I wonder, do you have any unit test
code for this feature, or any suggestions? :)
Thanks
Yu
Wah. Thank you, Andrew & Wei. :-)
On 3/2/2017 5:05 PM, Andrew Cooper wrote:
On 02/03/2017 08:42, Wei Liu wrote:
I wrote this a long time ago, before UMIP was merged.
Yu, since you asked, I might as well post it for your reference on how to
do it with XTF.
This series is not yet tested in any wa
because both reads and writes will go to the device model.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
changes in v2:
- According to comments from Jan: rename mem_ops to ioreq_server_ops.
- According to comments from Jan: use
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
HVM operations to let one ioreq server claim ownership of RAM
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (5):
x86/ioreq server
). Later, a per-event channel
lock was introduced in commit de6acb7, to send events. So we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
---
xen/arch/x86/hvm/hvm.c | 8 ++--
1 file
both XEN_DMOP_map_mem_type_to_ioreq_server and p2m_ioreq_server
are only supported for HVMs with HAP enabled.
Also note that only after one ioreq server claims its ownership
of p2m_ioreq_server, will the p2m type change to p2m_ioreq_server
be allowed.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim D
mapped. And
since the sweeping of the p2m table could be time-consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
changes in v1:
- This patch is split from patch 4 of the last version.
- According
nding p2m_ioreq_server entry left. The core reason is that our
current implementation of p2m_change_entry_type_global() cannot
tell the state of p2m_ioreq_server entries (cannot decide if an
entry is to be emulated or to be resynced).
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Coope
On 3/8/2017 10:06 PM, Paul Durrant wrote:
-Original Message-
From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of Yu
Zhang
Sent: 08 March 2017 13:32
To: xen-devel@lists.xen.org
Cc: zhiyuan...@intel.com
Subject: [Xen-devel] [PATCH v7 1/5] x86/ioreq server: Release the
. Now this new
patch series introduces a new mem type, HVMMEM_ioreq_server, and adds
HVM operations to let one ioreq server claim ownership of RAM
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
Yu Zhang (5):
x86/ioreq server
both XEN_DMOP_map_mem_type_to_ioreq_server and p2m_ioreq_server
are only supported for HVMs with HAP enabled.
Also note that only after one ioreq server claims its ownership
of p2m_ioreq_server, will the p2m type change to p2m_ioreq_server
be allowed.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
Acked-by: Tim D
because both reads and writes will go to the device model.
Signed-off-by: Paul Durrant
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
changes in v2:
- According to comments from Jan: rename mem_ops to ioreq_server_ops.
- According to comments from Jan: use
nding p2m_ioreq_server entry left. The core reason is that our
current implementation of p2m_change_entry_type_global() cannot
tell the state of p2m_ioreq_server entries (cannot decide if an
entry is to be emulated or to be resynced).
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Coope
mapped. And
since the sweeping of the p2m table could be time-consuming, it is done
with hypercall continuation.
Signed-off-by: Yu Zhang
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
changes in v1:
- This patch is split from patch 4 of the last version.
- According
). Later, a per-event channel
lock was introduced in commit de6acb7, to send events. So we do not
need to worry about the deadlock issue.
Signed-off-by: Yu Zhang
Reviewed-by: Jan Beulich
---
Cc: Paul Durrant
Cc: Jan Beulich
Cc: Andrew Cooper
---
xen/arch/x86/hvm/hvm.c | 8 ++--
1 file
On 3/11/2017 12:03 AM, Jan Beulich wrote:
On 08.03.17 at 16:33, wrote:
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -954,6 +954,26 @@ int p2m_change_type_one(struct domain *d, unsigned long
gfn,
p2m->default_access)
: -EBUSY;
+if ( !
On 3/10/2017 11:29 PM, Jan Beulich wrote:
On 08.03.17 at 16:33, wrote:
changes in v7:
- Use new ioreq server interface - XEN_DMOP_map_mem_type_to_ioreq_server.
- According to comments from George: removed domain_pause/unpause() in
hvm_map_mem_type_to_ioreq_server(), because it's to
On 3/10/2017 11:33 PM, Jan Beulich wrote:
On 08.03.17 at 16:33, wrote:
@@ -197,6 +217,10 @@ static int hvmemul_do_io(
* - If the IOREQ_MEM_ACCESS_WRITE flag is not set, treat it
* like a normal PIO or MMIO that doesn't have an ioreq
* server (i.e., by ig
On 3/11/2017 12:17 AM, Jan Beulich wrote:
On 08.03.17 at 16:33, wrote:
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -288,6 +288,7 @@ static int inject_event(struct domain *d,
return 0;
}
+#define DMOP_op_mask 0xff
static int dm_op(domid_t domid,
Please follow the
On 3/11/2017 12:59 AM, Andrew Cooper wrote:
On 08/03/17 15:33, Yu Zhang wrote:
After an ioreq server has unmapped, the remaining p2m_ioreq_server
entries need to be reset back to p2m_ram_rw. This patch does this
synchronously by iterating the p2m table.
The synchronous resetting is necessary
On 3/13/2017 7:20 PM, Jan Beulich wrote:
On 11.03.17 at 09:42, wrote:
On 3/10/2017 11:29 PM, Jan Beulich wrote:
On 08.03.17 at 16:33, wrote:
changes in v7:
- Use new ioreq server interface - XEN_DMOP_map_mem_type_to_ioreq_server.
- According to comments from George: removed domain_
On 3/13/2017 7:24 PM, Jan Beulich wrote:
On 11.03.17 at 09:42, wrote:
On 3/11/2017 12:03 AM, Jan Beulich wrote:
But there's a wider understanding issue I'm having here: What is
an "entry" here? Commonly I would assume this to refer to an
individual (4k) page, but it looks like you really mea
On 3/13/2017 7:32 PM, Jan Beulich wrote:
On 11.03.17 at 09:42, wrote:
On 3/11/2017 12:59 AM, Andrew Cooper wrote:
On 08/03/17 15:33, Yu Zhang wrote:
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -288,6 +288,7 @@ static int inject_event(struct domain *d,
return 0