On 04/01/19 09:54, lantianyu1...@gmail.com wrote:
>               rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
>                                         PT_PAGE_TABLE_LEVEL, slot);
> -             __rmap_write_protect(kvm, rmap_head, false);
> +             flush |= __rmap_write_protect(kvm, rmap_head, false);
>  
>               /* clear the first set bit */
>               mask &= mask - 1;
>       }
> +
> +     if (flush && kvm_available_flush_tlb_with_range()) {
> +             kvm_flush_remote_tlbs_with_address(kvm,
> +                             slot->base_gfn + gfn_offset,
> +                             hweight_long(mask));

Mask is zero here (the "mask &= mask - 1" loop above has already
consumed it), so this probably won't work.
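
Something like the following might work (untested sketch, reusing the
names from your patch): latch the mask before the loop consumes it, and
size the flush by the span of set bits rather than by hweight, since
the set bits need not be contiguous:

	unsigned long orig_mask = mask;	/* the loop below consumes mask */

	while (mask) {
		rmap_head = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
					  PT_PAGE_TABLE_LEVEL, slot);
		flush |= __rmap_write_protect(kvm, rmap_head, false);

		/* clear the first set bit */
		mask &= mask - 1;
	}

	if (flush && kvm_available_flush_tlb_with_range()) {
		/* one ranged flush covering the whole span of set bits */
		kvm_flush_remote_tlbs_with_address(kvm,
				slot->base_gfn + gfn_offset + __ffs(orig_mask),
				__fls(orig_mask) - __ffs(orig_mask) + 1);
		flush = false;
	}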

In addition, I suspect calling the hypercall once for every 64 pages is
not very efficient.  Passing a flush list into
kvm_mmu_write_protect_pt_masked, and flushing in
kvm_arch_mmu_enable_log_dirty_pt_masked, isn't efficient either because
kvm_arch_mmu_enable_log_dirty_pt_masked is also called once per word.
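
For reference, the generic loop looks roughly like this (paraphrased
from kvm_get_dirty_log_protect in virt/kvm/kvm_main.c), so no matter
where the flush lives, the arch hook only ever sees one word at a time:

	for (i = 0; i < n / sizeof(long); i++) {
		if (!dirty_bitmap[i])
			continue;

		flush = true;
		mask = xchg(&dirty_bitmap[i], 0);
		dirty_bitmap_buffer[i] = mask;

		/* one arch call, and thus one hypercall, per 64-page word */
		offset = i * BITS_PER_LONG;
		kvm_arch_mmu_enable_log_dirty_pt_masked(kvm, memslot,
							offset, mask);
	}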

I don't have any good ideas, except for moving the whole
kvm_clear_dirty_log_protect loop into architecture-specific code (which
is not the direction we want---architectures should share more code, not
less).
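
Just to make concrete what that would mean (a hypothetical sketch, all
names invented, not a proposal): each arch would get the whole bitmap
and could then batch the flush itself:

	/* hypothetical arch hook, replacing the per-word call */
	void kvm_arch_mmu_clear_dirty_log(struct kvm *kvm,
					  struct kvm_memory_slot *slot,
					  unsigned long *dirty_bitmap,
					  unsigned long nr_words)
	{
		unsigned long i;
		bool flush = false;

		for (i = 0; i < nr_words; i++) {
			if (!dirty_bitmap[i])
				continue;
			/* hypothetical helper: write-protect the pages for
			 * one word, return true if any SPTE was changed */
			flush |= write_protect_word(kvm, slot,
						    i * BITS_PER_LONG,
						    dirty_bitmap[i]);
		}

		/* a single ranged flush for the whole ioctl */
		if (flush)
			kvm_flush_remote_tlbs_with_address(kvm, slot->base_gfn,
							   slot->npages);
	}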

Paolo

> +             flush = false;
> +     }
> +
