On 04/01/19 17:30, Sean Christopherson wrote:
>> +
>> +            if (kvm_available_flush_tlb_with_range()
>> +                && (tmp_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)) {
>> +                    struct kvm_mmu_page *leaf_sp = page_header(sp->spt[i]
>> +                                    & PT64_BASE_ADDR_MASK);
>> +                    list_add(&leaf_sp->flush_link, &flush_list);
>> +            }
>> +
>> +            set_spte_ret |= tmp_spte_ret;
>> +
>>      }
>>  
>>      if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
>> -            kvm_flush_remote_tlbs(vcpu->kvm);
>> +            kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);
> This is a bit confusing and potentially fragile.  It's not obvious that
> kvm_flush_remote_tlbs_with_list() is guaranteed to call
> kvm_flush_remote_tlbs() when kvm_available_flush_tlb_with_range() is
> false, and you're relying on the kvm_flush_remote_tlbs_with_list() call
> chain to never optimize away the empty list case.  Rechecking
> kvm_available_flush_tlb_with_range() isn't expensive.
> 

Alternatively, do not check it during the loop: always build the
flush_list, and always call kvm_flush_remote_tlbs_with_list.  The
function can then check whether the list is empty, and the OR-ing of
tmp_spte_ret into set_spte_ret on every iteration goes away.

Paolo
