On Wed, 25 Sep 2019 11:12:11 +0800
Peter Xu <pet...@redhat.com> wrote:

> On Tue, Sep 24, 2019 at 10:47:50AM -0400, Igor Mammedov wrote:
> 
> [...]
> 
> > @@ -2877,6 +2912,7 @@ static bool kvm_accel_has_memory(MachineState *ms, 
> > AddressSpace *as,
> >  
> >      for (i = 0; i < kvm->nr_as; ++i) {
> >          if (kvm->as[i].as == as && kvm->as[i].ml) {
> > +            size = MIN(kvm_max_slot_size, size);
> >              return NULL != kvm_lookup_matching_slot(kvm->as[i].ml,
> >                                                      start_addr, size);
> >          }  
> 
> Ideally we could also check that the whole (start_addr, size) region
> is covered by KVM memslots here, but with current code I can't think
> of a case where the result doesn't match with only checking the 1st
> memslot. So I assume it's fine.
Yep, it's a micro-optimization that works on the assumption that the whole
memory section is always covered by memslots; the original semantics worked
only if start_addr/size covered the whole memory section.

The sole user, mtree_print_flatview(), is not performance-sensitive,
so if you'd like I can post an additional patch that iterates
over the whole range.
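
Something along the lines of the following rough sketch (untested; it
walks the section in kvm_max_slot_size-sized chunks and requires every
chunk to have a matching slot, instead of only checking the first one):

    for (i = 0; i < kvm->nr_as; ++i) {
        if (kvm->as[i].as == as && kvm->as[i].ml) {
            while (size) {
                /* clamp each lookup to the per-slot size limit */
                hwaddr slot_size = MIN(kvm_max_slot_size, size);

                if (!kvm_lookup_matching_slot(kvm->as[i].ml,
                                              start_addr, slot_size)) {
                    return false;
                }
                start_addr += slot_size;
                size -= slot_size;
            }
            return true;
        }
    }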

> Reviewed-by: Peter Xu <pet...@redhat.com>
> 
