author     Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>   2012-02-05 20:42:41 +0900
committer  Avi Kivity <avi@redhat.com>                         2012-03-08 14:10:12 +0200
commit     6dbf79e7164e9a86c1e466062c48498142ae6128
tree       2ffc942d370f5f239df05a154e0354ab06fc0489
parent     10166744b80a41c30d82bc6e11140f5b28d257ab
KVM: Fix write protection race during dirty logging
This patch fixes a race introduced by:
  commit 95d4c16ce78cb6b7549a09159c409d52ddd18dae
  KVM: Optimize dirty logging by rmap_write_protect()
While pages are being write-protected for dirty logging, other threads may
also try to protect a page, in mmu_sync_children() or kvm_mmu_get_page().
In such a case, because get_dirty_log() releases mmu_lock before flushing
the TLBs, the following race can occur:
    A (get_dirty_log)          B (another thread)

    lock(mmu_lock)
    clear pte.w
    unlock(mmu_lock)
                               lock(mmu_lock)
                               pte.w is already cleared
                               unlock(mmu_lock)
                               skip TLB flush
                               return
    ...
    TLB flush
Though thread B assumes the page has already been write-protected when it
returns, the stale TLB entry breaks that assumption: writes through that
entry can still reach the page.
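
To make the interleaving concrete, the sketch below maps the timeline above
onto the primitives involved. This is simplified pseudo-C for illustration
only, not the actual KVM code: pte_w stands in for the write-permission bit
of the shadow PTE, flush_tlb() for kvm_flush_remote_tlbs(), and
skip_tlb_flush() is a hypothetical placeholder for B taking its fast path.

	/* Thread A (get_dirty_log), before this patch: */
	spin_lock(&mmu_lock);
	pte_w = 0;                 /* write-protect the page            */
	spin_unlock(&mmu_lock);    /* <-- window opens: pte_w is clear, */
	flush_tlb();               /*     but the TLB is still stale    */

	/* Thread B (mmu_sync_children() or kvm_mmu_get_page()),
	 * running inside that window: */
	spin_lock(&mmu_lock);
	if (!pte_w)                /* already clear, so B ...           */
		skip_tlb_flush();  /* ... skips its own TLB flush       */
	spin_unlock(&mmu_lock);
	/* B returns believing the page is protected, yet a stale,
	 * writable TLB entry survives until A finally flushes.    */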
This patch fixes the problem by making get_dirty_log() hold mmu_lock until
it has flushed the TLBs.
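
In the same illustrative sketch, the fixed ordering leaves no window in
which B can observe a cleared pte_w while a stale TLB entry survives:

	/* Thread A after this patch (illustration only): */
	spin_lock(&mmu_lock);
	pte_w = 0;                 /* write-protect the page            */
	flush_tlb();               /* flush while still holding the     */
	spin_unlock(&mmu_lock);    /* lock, before B can inspect pte_w  */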
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
 arch/x86/kvm/x86.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e86f9b22eaca..3df0b7a140b0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3065,6 +3065,8 @@ static void write_protect_slot(struct kvm *kvm,
 			       unsigned long *dirty_bitmap,
 			       unsigned long nr_dirty_pages)
 {
+	spin_lock(&kvm->mmu_lock);
+
 	/* Not many dirty pages compared to # of shadow pages. */
 	if (nr_dirty_pages < kvm->arch.n_used_mmu_pages) {
 		unsigned long gfn_offset;
@@ -3072,16 +3074,13 @@ static void write_protect_slot(struct kvm *kvm,
 		for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages) {
 			unsigned long gfn = memslot->base_gfn + gfn_offset;
 
-			spin_lock(&kvm->mmu_lock);
 			kvm_mmu_rmap_write_protect(kvm, gfn, memslot);
-			spin_unlock(&kvm->mmu_lock);
 		}
 		kvm_flush_remote_tlbs(kvm);
-	} else {
-		spin_lock(&kvm->mmu_lock);
+	} else
 		kvm_mmu_slot_remove_write_access(kvm, memslot->id);
-		spin_unlock(&kvm->mmu_lock);
-	}
+
+	spin_unlock(&kvm->mmu_lock);
 }
 
 /*