author    Sean Christopherson <sean.j.christopherson@intel.com>    2020-01-08 12:24:46 -0800
committer Paolo Bonzini <pbonzini@redhat.com>    2020-01-27 20:00:08 +0100
commit    293e306e7faac4eafaefb9518a1cd6eaecad88e9 (patch)
tree      5f602370c71d3ed81ef44618078f55c870e108f3 /arch/x86/kvm/mmu/paging_tmpl.h
parent    d32ec81bab670e599e645e1d1d5231d62de7d0d6 (diff)
KVM: x86/mmu: Fold max_mapping_level() into kvm_mmu_hugepage_adjust()
Fold max_mapping_level() into kvm_mmu_hugepage_adjust() now that HugeTLB mappings are handled in kvm_mmu_hugepage_adjust(), i.e. there isn't a need to pre-calculate the max mapping level. Co-locating all hugepage checks eliminates a memslot lookup, at the cost of performing the __mmu_gfn_lpage_is_disallowed() checks while holding mmu_lock.

The latency of lpage_is_disallowed() is likely negligible relative to the rest of the code run while holding mmu_lock, and can be offset to some extent by eliminating the mmu_gfn_lpage_is_disallowed() check in set_spte() in a future patch. Eliminating the check in set_spte() is made possible by performing the initial lpage_is_disallowed() checks while holding mmu_lock.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
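As a rough illustration of the refactoring pattern described above (a self-contained sketch, not the actual KVM code; the types and function names below are simplified stand-ins), folding the check into the adjust helper means the memslot lookup and the hugepage-disallowed test happen once, in the same place, instead of the caller pre-computing a ceiling via its own lookup. The trade-off noted in the commit message still applies: the disallowed check now runs while the lock is held.

/*
 * Self-contained sketch, not kernel code: hypothetical minimal types
 * and names that mimic the shape of the change described above.
 */
#include <stdio.h>

struct memslot {
	int disallow_lpage;	/* hugepage mappings forbidden for this slot */
};

/* Stand-in for the memslot lookup the old pre-calculation also needed. */
static struct memslot *lookup_slot(unsigned long gfn)
{
	static struct memslot slot = { .disallow_lpage = 0 };
	(void)gfn;
	return &slot;
}

/*
 * Folded helper: it does the lookup and the lpage-disallowed check itself,
 * so the caller no longer pre-computes a max level (one lookup total, at
 * the cost of running this check while the MMU lock would be held).
 */
static int hugepage_adjust(unsigned long gfn, int max_level)
{
	struct memslot *slot = lookup_slot(gfn);

	if (slot->disallow_lpage)
		return 1;		/* fall back to a 4K mapping */
	return max_level;
}

int main(void)
{
	/* Caller simply passes its own ceiling (e.g. walker.level). */
	printf("mapping level: %d\n", hugepage_adjust(0x1000, 2));
	return 0;
}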
Diffstat (limited to 'arch/x86/kvm/mmu/paging_tmpl.h')
-rw-r--r--    arch/x86/kvm/mmu/paging_tmpl.h    2
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 885da924827c..4e1ef0473663 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -832,8 +832,6 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gpa_t addr, u32 error_code,
 	else
 		max_level = walker.level;
 
-	max_level = max_mapping_level(vcpu, walker.gfn, max_level);
-
 	mmu_seq = vcpu->kvm->mmu_notifier_seq;
 	smp_rmb();