On 06/28/2012 10:01 AM, Takuya Yoshikawa wrote:
> This makes it possible to loop over rmap_pde arrays in the same way as
> we do over rmap so that we can optimize kvm_handle_hva_range() easily in
> the following patch.
> 
> Signed-off-by: Takuya Yoshikawa <yoshikawa.tak...@oss.ntt.co.jp>
> ---
>  arch/x86/include/asm/kvm_host.h |    2 +-
>  arch/x86/kvm/mmu.c              |    6 +++---
>  arch/x86/kvm/x86.c              |   11 +++++++++++
>  3 files changed, 15 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5aab8d4..aea1673 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -499,11 +499,11 @@ struct kvm_vcpu_arch {
>  };
> 
>  struct kvm_lpage_info {
> -     unsigned long rmap_pde;
>       int write_count;
>  };
> 
>  struct kvm_arch_memory_slot {
> +     unsigned long *rmap_pde[KVM_NR_PAGE_SIZES - 1];
>       struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
>  };
> 

It looks a little more complex than before - we need to manage more allocated/freed buffers.

Why not just introduce a function to get the next rmap? Something like this:

static unsigned long *get_next_rmap(unsigned long *rmap, int level)
{
        struct kvm_lpage_info *linfo;

        if (level == 1)
                return rmap + 1;

        linfo = container_of(rmap, struct kvm_lpage_info, rmap_pde);

        return &(++linfo)->rmap_pde;
}

[ Completely untested ]
