> -----Original Message-----
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 17 March 2016 08:12
> To: Paul Durrant
> Cc: Andrew Cooper; xen-de...@lists.xenproject.org; Keir (Xen.org)
> Subject: RE: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
>
> >>> On 16.03.16 at 18:35, wrote:
> >> From: Jan Beulich [mailto:jbeul...@suse.com]
> >> Sent: 16 March 2016 15:36
> >> >>> On 16.03.16 at 15:21, wrote:
> >> > @@ -656,7 +647,9 @@ int viridian_hypercall(struct cpu_user_regs *regs)
> >> >          * so we may unnecessarily IPI some CPUs.
> >> >          */
>
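[For context on the hunk quoted above: the sketch below shows the kind of loop such a comment typically sits next to. It is illustrative only, not the actual patch; input_params, vcpu_mask, pcpu_mask and ipi_cpumask are assumed names, while for_each_vcpu(), cpumask_clear(), cpumask_set_cpu(), this_cpu(), v->vcpu_id and v->processor are existing Xen interfaces.]

    /*
     * Sketch only (assumed names): translate the guest-supplied vCPU
     * mask into a mask of pCPUs to interrupt.  A target vCPU can be
     * re-scheduled after v->processor is sampled and before the IPI
     * goes out, which is why the quoted comment notes that some CPUs
     * may be IPI'd unnecessarily.
     */
    uint64_t vcpu_mask = input_params.vcpu_mask;   /* assumed input field */
    cpumask_t *pcpu_mask = &this_cpu(ipi_cpumask); /* scratch mask, see below */
    struct vcpu *v;

    cpumask_clear(pcpu_mask);
    for_each_vcpu ( current->domain, v )
        if ( v->vcpu_id < 64 && (vcpu_mask & (1ULL << v->vcpu_id)) )
            cpumask_set_cpu(v->processor, pcpu_mask);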
>>> On 16.03.16 at 15:21, wrote:
> Commit b38d426a "flush remote tlbs by hypercall" added support to allow
> Windows to request a flush of remote TLBs via hypercall rather than IPI.
> Unfortunately it seems that this code was broken in a couple of ways:
>
> 1) The allocation of the per-vcpu ipi mask is gated on whether the
>    domain has viridian [...]
>
> v2:
> - Move to per-pcpu ipi mask.
> - Use smp_send_event_check_mask() to IPI rather than flush_tlb_mask().
> ---
>  xen/arch/x86/hvm/hvm.c      | 12
>  xen/arch/x86/hvm/viridian.c | 19 ++-
>  xen/include/asm-x86/hvm/v
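[Similarly, a rough sketch of what the two v2 bullets above describe, not the patch itself; ipi_cpumask and pcpu_mask are assumed names, while DEFINE_PER_CPU(), cpumask_empty(), smp_send_event_check_mask() and flush_tlb_mask() are the existing Xen interfaces referred to.]

    /*
     * Sketch only: a per-pCPU scratch mask needs no per-vCPU allocation,
     * sidestepping the gating issue in point 1 of the commit message.
     */
    static DEFINE_PER_CPU(cpumask_t, ipi_cpumask);

    ...

    /*
     * The v2 change swaps flush_tlb_mask() for a plain event-check IPI:
     * the remote pCPUs are only interrupted, rather than additionally
     * having their host TLBs flushed.
     */
    if ( !cpumask_empty(pcpu_mask) )
        smp_send_event_check_mask(pcpu_mask);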