On 06/29/2012 01:39 AM, Avi Kivity wrote:
> But I still think it's the right thing since it simplifies the code.
> Maybe we should add a prefetch() on write_count to mitigate the
> overhead, if it starts showing up in profiles.
>
A long time ago, there was a discussion about dropping prefetch in the kernel [...]

On Thu, 28 Jun 2012 20:39:55 +0300
Avi Kivity wrote:
> > Note: write_count: 4 bytes, rmap_pde: 8 bytes. So we are wasting
> > extra paddings by packing them into lpage_info.
>
> The wastage is quite low since it's just 4 bytes per 2MB.
Yes.
> >> Why not just introduce a function to get the next [...]

On 06/28/2012 06:45 AM, Takuya Yoshikawa wrote:
> On Thu, 28 Jun 2012 11:12:51 +0800
> Xiao Guangrong wrote:
>
>> > struct kvm_arch_memory_slot {
>> > + unsigned long *rmap_pde[KVM_NR_PAGE_SIZES - 1];
>> > struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
>> > };
>> >
>>
>> It looks a little more complex than before - need to manage more alloc-ed/freed [...]

On Thu, 28 Jun 2012 11:12:51 +0800
Xiao Guangrong wrote:
> > struct kvm_arch_memory_slot {
> > + unsigned long *rmap_pde[KVM_NR_PAGE_SIZES - 1];
> > struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
> > };
> >
>
> It looks a little more complex than before - need to manage more alloc-ed/freed [...]

On 06/28/2012 10:01 AM, Takuya Yoshikawa wrote:
> This makes it possible to loop over rmap_pde arrays in the same way as
> we do over rmap so that we can optimize kvm_handle_hva_range() easily in
> the following patch.
>
> Signed-off-by: Takuya Yoshikawa
> ---
> arch/x86/include/asm/kvm_host.h | [...]