On Fri, Sep 19, 2008 at 05:29:28PM -0700, Avi Kivity wrote:
> Marcelo Tosatti wrote:
>> There is not much point in write protecting large mappings. This
>> can only happen when a page is shadowed during the window between
>> is_largepage_backed and mmu_lock acquisition. Zap the entry instead, so
>> the next page fault will find a shadowed page via is_largepage_backed and
>> fall back to 4k translations.
>>
>> Simplifies out of sync shadow.
>>
>> Signed-off-by: Marcelo Tosatti <[EMAIL PROTECTED]>
>>
>> Index: kvm/arch/x86/kvm/mmu.c
>> ===================================================================
>> --- kvm.orig/arch/x86/kvm/mmu.c
>> +++ kvm/arch/x86/kvm/mmu.c
>> @@ -1180,11 +1180,16 @@ static int set_spte(struct kvm_vcpu *vcp
>> || (write_fault && !is_write_protection(vcpu) && !user_fault)) {
>> struct kvm_mmu_page *shadow;
>> + if (largepage && has_wrprotected_page(vcpu->kvm, gfn)) {
>> + ret = 1;
>> + spte = shadow_trap_nonpresent_pte;
>> + goto set_pte;
>> + }
>
> Don't we need to drop a reference to the page?
mmu_set_spte will do it if necessary:
	if (!was_rmapped) {
		rmap_add(vcpu, shadow_pte, gfn, largepage);
		if (!is_rmap_pte(*shadow_pte))
			kvm_release_pfn_clean(pfn);
But as noted, this part is wrong:
@@ -307,11 +307,10 @@ static int FNAME(shadow_walk_entry)(stru
return 1;
}
- if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
+ if (is_shadow_present_pte(*sptep))
return 0;
- if (is_large_pte(*sptep))
- rmap_remove(vcpu->kvm, sptep);
+ WARN_ON (is_large_pte(*sptep));
Since it might still be necessary to replace a read-only large mapping
(which rmap_write_protect will not nuke) with a normal pte pointer.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html