path: root/arch/arm64/kvm/va_layout.c
Age  Commit message  Author
2020-07-28  KVM: arm64: Make nVHE ASLR conditional on RANDOMIZE_BASE  (David Brazdil)
If there are spare bits in non-VHE hyp VA, KVM unconditionally replaces them
with a random tag chosen at init. Disable this if the kernel is built without
RANDOMIZE_BASE to align with kernel behavior.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200721094445.82184-2-dbrazdil@google.com
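A minimal user-space sketch of the gating described above. The helper name
and the tag_lsb/tag_msb parameters are illustrative, the macro stands in for
IS_ENABLED(CONFIG_RANDOMIZE_BASE), and random() merely stands in for the
kernel's random source; this is not KVM's implementation:

	/* sketch: pick a random hyp VA tag, but leave it zero when the
	 * kernel is built without RANDOMIZE_BASE */
	#include <stdint.h>
	#include <stdlib.h>

	#ifndef CONFIG_RANDOMIZE_BASE
	#define CONFIG_RANDOMIZE_BASE 0	/* stand-in for IS_ENABLED() */
	#endif

	static uint64_t pick_hyp_va_tag(int tag_lsb, int tag_msb)
	{
		if (!CONFIG_RANDOMIZE_BASE || tag_msb < tag_lsb)
			return 0;	/* no ASLR build, or no spare bits */

		uint64_t width = (uint64_t)(tag_msb - tag_lsb + 1);
		uint64_t mask  = width >= 64 ? ~0ULL : ((1ULL << width) - 1);

		/* random() is only a stand-in for get_random_long() */
		return ((uint64_t)random() & mask) << tag_lsb;
	}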
2020-01-19  arm64: kvm: Fix IDMAP overlap with HYP VA  (Russell King)
Booting 5.4 on LX2160A reveals that KVM is non-functional:

  kvm: Limiting the IPA size due to kernel Virtual Address limit
  kvm [1]: IPA Size Limit: 43bits
  kvm [1]: IDMAP intersecting with HYP VA, unable to continue
  kvm [1]: error initializing Hyp mode: -22

Debugging shows:

  kvm [1]: IDMAP page: 81a26000
  kvm [1]: HYP VA range: 0:22ffffffff

as RAM is located at:

  80000000-fbdfffff : System RAM
  2080000000-237fffffff : System RAM

Comparing this with the same kernel on Armada 8040 shows:

  kvm: Limiting the IPA size due to kernel Virtual Address limit
  kvm [1]: IPA Size Limit: 43bits
  kvm [1]: IDMAP page: 2a26000
  kvm [1]: HYP VA range: 4800000000:493fffffff
  ...
  kvm [1]: Hyp mode initialized successfully

which indicates that hyp_va_msb is set, and is always set to the opposite
value of the idmap page to avoid the overlap. This does not happen with the
LX2160A.

Further debugging shows vabits_actual = 39, kva_msb = 38 on LX2160A and
kva_msb = 33 on Armada 8040. Looking at the bit layout of the HYP VA, there
is still one bit available for hyp_va_msb. Set this bit appropriately. This
allows KVM to be functional on the LX2160A, but without any HYP VA
randomisation:

  kvm: Limiting the IPA size due to kernel Virtual Address limit
  kvm [1]: IPA Size Limit: 43bits
  kvm [1]: IDMAP page: 81a24000
  kvm [1]: HYP VA range: 4000000000:62ffffffff
  ...
  kvm [1]: Hyp mode initialized successfully

Fixes: ed57cac83e05 ("arm64: KVM: Introduce EL2 VA randomisation")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
[maz: small additional cleanups, preserved case where the tag is legitimately
 0 and we can just use the mask, Fixes tag]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/E1ilAiY-0000MA-RG@rmk-PC.armlinux.org.uk
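A small stand-alone sketch of the overlap condition that the "IDMAP
intersecting with HYP VA" error above reports. The helper name and signature
are illustrative, not KVM's init code:

	/* sketch: the idmap page is mapped 1:1 (VA == PA), so it must not
	 * land inside the VA window chosen for the HYP mappings */
	#include <stdbool.h>
	#include <stdint.h>

	static bool idmap_intersects_hyp(uint64_t idmap_page_pa,
					 uint64_t hyp_va_start,
					 uint64_t hyp_va_end)
	{
		return idmap_page_pa >= hyp_va_start &&
		       idmap_page_pa <= hyp_va_end;
	}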
2019-12-06  arm64: KVM: Invoke compute_layout() before alternatives are applied  (Sebastian Andrzej Siewior)
compute_layout() is invoked as part of an alternative fixup under
stop_machine(). This function invokes get_random_long() which acquires a
sleeping lock on -RT which can not be acquired in this context.

Rename compute_layout() to kvm_compute_layout() and invoke it before
stop_machine() applies the alternatives. Add a __init prefix to
kvm_compute_layout() because the caller has it, too (and so the code can be
discarded after boot).

Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
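A tiny stand-alone sketch of the resulting call ordering. Everything except
the kvm_compute_layout() name is an illustrative stub, and the bodies only
print what the real functions would do:

	/* sketch of ordering only: the sleepable work runs before the
	 * atomic stop_machine() section that applies the alternatives */
	#include <stdio.h>

	static void kvm_compute_layout(void)
	{
		puts("compute HYP VA layout (may call get_random_long, may sleep)");
	}

	static void apply_alternatives(void)
	{
		puts("patch alternatives (atomic, no sleeping locks allowed)");
	}

	int main(void)
	{
		kvm_compute_layout();	/* called first, from sleepable __init context */
		apply_alternatives();	/* stands in for the stop_machine() callback */
		return 0;
	}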
2019-08-09  arm64: mm: Introduce vabits_actual  (Steve Capper)
In order to support 52-bit kernel addresses detectable at boot time, one
needs to know the actual VA_BITS detected. A new variable vabits_actual is
introduced in this commit and employed for the KVM hypervisor layout, KASAN,
fault handling and phys-to/from-virt translation where there would normally
be compile time constants.

In order to maintain performance in phys_to_virt, another variable
physvirt_offset is introduced.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
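A rough user-space sketch of the physvirt_offset idea: translation is driven
by a value set once the VA size is known, instead of a compile-time constant.
The helper names and the assumption that the offset is PHYS_OFFSET minus
PAGE_OFFSET are illustrative, not the kernel's macros:

	/* sketch: run-time phys/virt translation offset */
	#include <stdint.h>

	static int64_t physvirt_offset;	/* assumed PHYS_OFFSET - PAGE_OFFSET,
					 * filled in at boot */

	static uintptr_t phys_to_virt_sketch(uint64_t pa)
	{
		return (uintptr_t)(pa - physvirt_offset);
	}

	static uint64_t virt_to_phys_sketch(uintptr_t va)
	{
		return (uint64_t)va + physvirt_offset;
	}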
2019-07-05  KVM: arm64: Abstract the size of the HYP vectors pre-amble  (James Morse)
The EL2 vector hardening feature causes KVM to generate vectors for each type
of CPU present in the system. The generated sequences already do some of the
early guest-exit work (i.e. saving registers). To avoid duplication the
generated vectors branch to the original vector just after the preamble. This
size is hard coded.

Adding new instructions to the HYP vector causes strange side effects, which
are difficult to debug as the affected code is patched in at runtime.

Add KVM_VECTOR_PREAMBLE to tell kvm_patch_vector_branch() how big the
preamble is. The valid_vect macro can then validate this at build time.

Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
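A minimal sketch of what the constant buys: the branch target into the
original vector becomes its entry point plus a named preamble size, rather
than a magic number. The constant's value and the helper below are
illustrative only:

	/* sketch: skip the original vector's preamble when branching into
	 * it from a generated (hardened) vector slot */
	#include <stdint.h>

	#define KVM_VECTOR_PREAMBLE_SKETCH	(2 * 4)	/* e.g. two 4-byte insns */

	static uint64_t preamble_exit(uint64_t vector_entry_va)
	{
		return vector_entry_va + KVM_VECTOR_PREAMBLE_SKETCH;
	}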
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner)
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify it
  under the terms of the gnu general public license version 2 as published
  by the free software foundation this program is distributed in the hope
  that it will be useful but without any warranty without even the implied
  warranty of merchantability or fitness for a particular purpose see the
  gnu general public license for more details you should have received a
  copy of the gnu general public license along with this program if not see
  http www gnu org licenses

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 503 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Enrico Weigelt <info@metux.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-03-19  arm64: KVM: Allow mapping of vectors outside of the RAM region  (Marc Zyngier)
We're now ready to map our vectors in weird and wonderful locations. On
enabling ARM64_HARDEN_EL2_VECTORS, a vector slot gets allocated if this
hasn't been already done via ARM64_HARDEN_BRANCH_PREDICTOR and gets mapped
outside of the normal RAM region, next to the idmap.

That way, being able to obtain VBAR_EL2 doesn't reveal the mapping of the
rest of the hypervisor code.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-03-19  arm64: KVM: Allow far branches from vector slots to the main vectors  (Marc Zyngier)
So far, the branch from the vector slots to the main vectors can at most be
4GB from the main vectors (the reach of ADRP), and this distance is known at
compile time. If we were to remap the slots to an unrelated VA, things would
break badly.

A way to achieve VA independence would be to load the absolute address of
the vectors (__kvm_hyp_vector), either using a constant pool or a series of
movs, followed by an indirect branch. This patch implements the latter
solution, using another instance of a patching callback. Note that since we
have to save a register pair on the stack, we branch to the *second*
instruction in the vectors in order to compensate for it. This also results
in having to adjust this balance in the invalid vector entry point.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
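For illustration, a small stand-alone sketch of the "series of movs" idea: a
64-bit address is materialised as one movz plus three movk immediates before
an indirect branch. The target address below is made up, the program only
prints the sequence, and it is not KVM's patching code:

	/* sketch: split a 64-bit address into four 16-bit move immediates */
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t target = 0xffff800010a00800ULL;	/* illustrative */

		for (int shift = 0; shift < 64; shift += 16)
			printf("mov%c x0, #0x%04llx, lsl #%d\n",
			       shift ? 'k' : 'z',
			       (unsigned long long)((target >> shift) & 0xffff),
			       shift);
		printf("br   x0\n");	/* indirect branch to the loaded address */
		return 0;
	}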
2018-03-19  arm64: KVM: Introduce EL2 VA randomisation  (Marc Zyngier)
The main idea behind randomising the EL2 VA is that we usually have a few
spare bits between the most significant bit of the VA mask and the most
significant bit of the linear mapping.

Those bits could be a bunch of zeroes, and could be useful to move things
around a bit. Of course, the more memory you have, the less randomisation
you get...

Alternatively, these bits could be the result of KASLR, in which case they
are already random. But it would be nice to have a *different* randomisation,
just to make the job of a potential attacker a bit more difficult.

Inserting these random bits is a bit involved. We don't have a spare register
(short of rewriting all the kern_hyp_va call sites), and the immediate we
want to insert is too random to be used with the ORR instruction. The best
option I could come up with is the following sequence:

	and x0, x0, #va_mask
	ror x0, x0, #first_random_bit
	add x0, x0, #(random & 0xfff)
	add x0, x0, #(random >> 12), lsl #12
	ror x0, x0, #(63 - first_random_bit)

making it a fairly long sequence, but one that a decent CPU should be able to
execute without breaking a sweat.

It is of course NOPed out on VHE. The last 4 instructions can also be turned
into NOPs if it appears that there are no free bits to use.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
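A compact user-space sketch of the net effect of that sequence, with
illustrative parameter names: the VA is masked and a random tag is dropped
into the otherwise-zero spare bits (the rotate/add dance above exists only
because the tag is too random for an ORR immediate):

	/* sketch: equivalent end result of the patched sequence above */
	#include <stdint.h>

	static uint64_t insert_hyp_va_tag(uint64_t va, uint64_t va_mask,
					  uint64_t tag, int tag_lsb)
	{
		/* the bits above tag_lsb are zero after masking, so an OR
		 * (or an ADD) places the random tag in the spare bits */
		return (va & va_mask) | (tag << tag_lsb);
	}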
2018-03-19  arm64: KVM: Dynamically compute the HYP VA mask  (Marc Zyngier)
As we're moving towards a much more dynamic way to compute our HYP VA, let's
express the mask in a slightly different way.

Instead of comparing the idmap position to the "low" VA mask, we directly
compute the mask by taking into account the idmap's (VA_BITS - 1) bit.

No functional change.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
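A stand-alone sketch of that computation in plain user-space C, with
illustrative names rather than the kernel's helpers:

	/* sketch: build the HYP VA mask so its top bit is the opposite of
	 * the idmap page's (VA_BITS - 1) bit */
	#include <stdint.h>

	static uint64_t compute_va_mask(uint64_t idmap_addr, int va_bits)
	{
		uint64_t msb = 1ULL << (va_bits - 1);
		uint64_t hyp_va_msb = (idmap_addr & msb) ^ msb;	/* flip the idmap's bit */

		return (msb - 1) | hyp_va_msb;	/* low bits plus the chosen MSB */
	}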
2018-03-19  arm64: KVM: Dynamically patch the kernel/hyp VA mask  (Marc Zyngier)
So far, we're using a complicated sequence of alternatives to patch the
kernel/hyp VA mask on non-VHE, and NOP out the masking altogether when on
VHE.

The newly introduced dynamic patching gives us the opportunity to simplify
that code by patching a single instruction with the correct mask (instead of
the mind bending cumulative masking we have at the moment) or even a single
NOP on VHE. This also adds some initial code that will allow the patching
callback to switch to a more complex patching.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
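A minimal sketch of the post-patch behaviour (not the patching machinery
itself), with an illustrative helper name: on VHE the patched instruction is
a NOP so the kernel VA is used directly, while on non-VHE a single AND
applies the run-time computed mask:

	/* sketch: what the kernel-to-hyp VA conversion boils down to */
	#include <stdbool.h>
	#include <stdint.h>

	static uint64_t kern_hyp_va_sketch(uint64_t kern_va, uint64_t va_mask,
					   bool has_vhe)
	{
		return has_vhe ? kern_va		/* VHE: patched to a NOP */
			       : (kern_va & va_mask);	/* nVHE: single AND with the mask */
	}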