> -----Original Message-----
> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: 16 March 2016 12:57
> To: xen-de...@lists.xenproject.org
> Cc: Paul Durrant
> Subject: [PATCH] x86/hvm/viridian: fix the TLB flush hypercall
>
> Commit b38d426a "flush remote tlbs by hypercall" added support to
> allow Windows to request a flush of remote TLBs via hypercall rather
> than by IPI. Unfortunately it seems that this code was broken in a
> couple of ways:
>
> 1) The allocation of the per-vcpu flush mask is gated on whether the
>    domain has viridian features enabled, but the call to allocate it
>    is made before the toolstack has enabled those features, so the
>    mask is never allocated.
>
> 2) One of the flush hypercall variants is a rep op, but the code does
>    not update the output data with the reps completed, so the guest
>    spins re-issuing the hypercall because it believes it still has
>    uncompleted reps.
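
For issue 2, the shape of the fix is easy to show in isolation. The
flush call is a rep op: the guest passes a rep count in the hypercall
input value, and the handler must report the reps completed in the
output value. A minimal sketch, using an illustrative layout rather
than the actual definitions in xen/arch/x86/hvm/viridian.c:

    #include <stdint.h>

    /* Illustrative layouts only: a 12-bit rep count in the input
     * value and a 12-bit "reps completed" field in the output. */
    union hypercall_input {
        uint64_t raw;
        struct {
            uint16_t call_code;
            uint16_t flags;
            uint32_t rep_count:12;
            uint32_t rsvd1:4;
            uint32_t rep_start:12;
            uint32_t rsvd2:4;
        };
    };

    union hypercall_output {
        uint64_t raw;
        struct {
            uint16_t result;
            uint16_t rsvd1;
            uint32_t rep_complete:12;
            uint32_t rsvd2:20;
        };
    };

    /* Once the flush has been done, report every rep as completed;
     * otherwise the guest re-issues the hypercall to finish the reps
     * it believes are still outstanding. */
    static void complete_reps(const union hypercall_input *input,
                              union hypercall_output *output)
    {
        output->rep_complete = input->rep_count;
    }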
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2576,12 +2576,9 @@ int hvm_vcpu_initialise(struct vcpu *v)
>      if ( rc != 0 )
>          goto fail6;
>
> -    if ( is_viridian_domain(d) )
> -    {
> -        rc = viridian_vcpu_init(v);
> -        if ( rc != 0 )
> -            goto fail7;
> -    }
> +    rc = viridian_vcpu_init(v);
> +    if ( rc != 0 )
> +        goto fail7;
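
The hunk above addresses issue 1: viridian_vcpu_init() now runs for
every HVM vCPU instead of only when is_viridian_domain() is already
true, because the toolstack sets the viridian feature flags after the
vCPUs have been initialised. A sketch of what the unconditional
init/deinit pair then has to do; the flush_cpumask field path here is
an assumption, not a quote from the tree:

    /* Sketch: allocate the scratch cpumask used by the Viridian
     * TLB-flush hypercall for every HVM vCPU, whether or not the
     * viridian feature flags have been set yet. */
    int viridian_vcpu_init(struct vcpu *v)
    {
        if ( !alloc_cpumask_var(&v->arch.hvm_vcpu.viridian.flush_cpumask) )
            return -ENOMEM;

        return 0;
    }

    void viridian_vcpu_deinit(struct vcpu *v)
    {
        free_cpumask_var(v->arch.hvm_vcpu.viridian.flush_cpumask);
    }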
> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.coop...@citrix.com]
> Sent: 16 March 2016 13:20
> To: Paul Durrant; xen-de...@lists.xenproject.org
> Cc: Keir (Xen.org); Jan Beulich
> Subject: Re: [PATCH] x86/hvm/viridian: fix the TLB flush hypercall
>
> On 16/03/16 13:00, Paul Durrant wrote:
> -----Original Message-----
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 16 March 2016 13:32
> To: Paul Durrant
> Cc: Andrew Cooper; xen-de...@lists.xenproject.org; Keir (Xen.org)
> Subject: Re: [PATCH] x86/hvm/viridian: fix the TLB flush hypercall
>
> That said, I now wonder anyway why this is a per-vCPU mask
> instead of a per-pCPU one: there's no need for every vCPU in
> the system to have its own, afaics.
> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.coop...@citrix.com]
> Sent: 16 March 2016 13:37
> To: Jan Beulich; Paul Durrant
> Cc: xen-de...@lists.xenproject.org; Keir (Xen.org)
> Subject: Re: [PATCH] x86/hvm/viridian: fix the TLB flush hypercall
>
> On 16/03/16 13:31, Jan Beulich wrote:
> > That said, I now wonder anyway why this is a per-vCPU mask
> > instead of a per-pCPU one: there's no need for every vCPU in
> > the system to have its own, afaics.
>
> If every vcpu makes a viridian hypercall at the same time, Xen would
> end up clobbering the same mask.
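
For context, the two layouts being weighed look roughly as follows (a
sketch; the names are illustrative):

    /* Per-vCPU: each vCPU owns its mask, so concurrent hypercalls
     * from different vCPUs cannot disturb each other's in-flight
     * state, even if a vCPU is preempted mid-flush. */
    struct viridian_vcpu {
        cpumask_var_t flush_cpumask;
        /* ... */
    };

    /* Per-pCPU: one scratch mask per physical CPU.  Only safe if the
     * flush runs to completion without the vCPU being rescheduled
     * while the mask is live. */
    static DEFINE_PER_CPU(cpumask_t, flush_scratch_mask);

The trade-off is memory (one mask per vCPU in the system) against
having to guarantee that the mask is never live across a preemption
point.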