author | Sean Christopherson <sean.j.christopherson@intel.com> | 2019-09-12 19:46:12 -0700
committer | Paolo Bonzini <pbonzini@redhat.com> | 2019-09-24 14:36:06 +0200
commit | 9a5c034c9abaef81ad9df0221638785a088942b5
tree | 12f8134e59e817a5b575a22ae63cb93151ca87cc /arch
parent | ca333add6933ad9732ba2c6671f133d7367ad96c
KVM: x86/mmu: Skip invalid pages during zapping iff root_count is zero
Do not skip invalid shadow pages when zapping obsolete pages if a
page's root_count has reached zero; in that case the page can be
zapped and freed immediately.
Update the comment accordingly.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
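To make the effect of the changed check concrete, here is a small self-contained toy model. It is purely illustrative, not kernel code: the shadow_page struct, its fields and the skip_page() helper are simplified stand-ins for kvm_mmu_page, sp->role.invalid, sp->root_count and the list walk in kvm_zap_obsolete_pages().

/* Toy model of the check changed by this patch; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct shadow_page {
	const char *name;
	bool invalid;	/* stand-in for sp->role.invalid */
	int root_count;	/* stand-in for sp->root_count */
};

/* Mirrors the new condition used in the zap loop's walk. */
static bool skip_page(const struct shadow_page *sp)
{
	/* Old check: skip every invalid page. */
	/* New check: only skip invalid pages still pinned by a root. */
	return sp->invalid && sp->root_count;
}

int main(void)
{
	const struct shadow_page pages[] = {
		{ "valid page",               false, 0 },
		{ "invalid, root_count == 2", true,  2 },	/* still skipped */
		{ "invalid, root_count == 0", true,  0 },	/* now zapped at once */
	};

	for (unsigned i = 0; i < sizeof(pages) / sizeof(pages[0]); i++)
		printf("%-24s -> %s\n", pages[i].name,
		       skip_page(&pages[i]) ? "skip" : "zap");
	return 0;
}

With the old check the third page would also be skipped; with the new check it falls through the skip and is zapped and freed immediately.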
Diffstat (limited to 'arch')
-rw-r--r-- | arch/x86/kvm/mmu.c | 9
1 file changed, 5 insertions, 4 deletions
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5f0864000360..5269aa057dfa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5693,11 +5693,12 @@ restart:
 			break;
 
 		/*
-		 * Since we are reversely walking the list and the invalid
-		 * list will be moved to the head, skip the invalid page
-		 * can help us to avoid the infinity list walking.
+		 * Skip invalid pages with a non-zero root count, zapping pages
+		 * with a non-zero root count will never succeed, i.e. the page
+		 * will get thrown back on active_mmu_pages and we'll get stuck
+		 * in an infinite loop.
 		 */
-		if (sp->role.invalid)
+		if (sp->role.invalid && sp->root_count)
 			continue;
 
 		/*
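For completeness, a second illustrative sketch (again not kernel code; try_zap() is a hypothetical stand-in for the real zap path) of why invalid pages that still have a non-zero root_count must remain skipped: zapping such a page cannot complete, so revisiting it makes no forward progress.

/*
 * Illustration (not kernel code) of the hazard described by the new comment:
 * an invalid page that is still pinned by a root cannot be freed, so retrying
 * it over and over would never terminate.
 */
#include <stdbool.h>
#include <stdio.h>

struct page {
	bool invalid;
	int root_count;
	bool freed;
};

/* Hypothetical stand-in for the zap path: fails while roots still exist. */
static bool try_zap(struct page *p)
{
	if (p->root_count)
		return false;	/* page stays on the list */
	p->freed = true;
	return true;
}

int main(void)
{
	struct page pinned = { .invalid = true, .root_count = 1, .freed = false };
	int attempts = 0;

	/* Without the "&& sp->root_count" skip, this retry loop would spin
	 * forever; cap the attempts so the demo terminates. */
	while (!pinned.freed && attempts < 3) {
		attempts++;
		if (!try_zap(&pinned))
			printf("attempt %d: zap failed, page re-queued\n", attempts);
	}
	printf("no progress after %d attempts; skipping such pages avoids the loop\n",
	       attempts);
	return 0;
}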