2020-05-15  arm64: scs: Add shadow stacks for SDEI  (Sami Tolvanen)
This change adds per-CPU shadow call stacks for the SDEI handler. Similarly to how the kernel stacks are handled, we add separate shadow stacks for normal and critical events.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
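For illustration, the per-CPU bookkeeping this describes plausibly takes the following shape; a minimal sketch based only on the text above, with the variable names assumed rather than verified against the tree:

    #include <linux/percpu.h>

    #ifdef CONFIG_SHADOW_CALL_STACK
    /* One shadow call stack per CPU for each SDEI priority, mirroring the
     * normal/critical split of the regular SDEI stacks (names assumed). */
    DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_normal_ptr);
    DEFINE_PER_CPU(unsigned long *, sdei_shadow_call_stack_critical_ptr);
    #endif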
2020-05-15  arm64: Implement Shadow Call Stack  (Sami Tolvanen)
This change implements shadow stack switching, initial SCS set-up, and interrupt shadow stacks for arm64.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  arm64: Disable SCS for hypervisor code  (Sami Tolvanen)
Disable SCS for code that runs at a different exception level by adding __noscs to __hyp_text.

Suggested-by: James Morse <james.morse@arm.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  arm64: vdso: Disable Shadow Call Stack  (Sami Tolvanen)
Shadow stacks are only available in the kernel, so disable SCS instrumentation for the vDSO.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  arm64: efi: Restore register x18 if it was corrupted  (Sami Tolvanen)
If we detect a corrupted x18, restore the register before jumping back to potentially SCS instrumented code. This is safe, because the wrapper is called with preemption disabled and a separate shadow stack is used for interrupt handling.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  arm64: Preserve register x18 when CPU is suspended  (Sami Tolvanen)
Don't lose the current task's shadow stack when the CPU is suspended.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  arm64: Reserve register x18 from general allocation with SCS  (Sami Tolvanen)
Reserve the x18 register from general allocation when SCS is enabled, because the compiler uses the register to store the current task's shadow stack pointer. Note that all external kernel modules must also be compiled with -ffixed-x18 if the kernel has SCS enabled.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  scs: Disable when function graph tracing is enabled  (Sami Tolvanen)
With SCS the return address is taken from the shadow stack and the value in the frame record has no effect. The mcount based graph tracer hooks returns by modifying frame records on the (regular) stack, and thus is not compatible. The patchable-function-entry graph tracer used for DYNAMIC_FTRACE_WITH_REGS modifies the LR before it is saved to the shadow stack, and is compatible.

Modifying the mcount based graph tracer to work with SCS would require a mechanism to determine the corresponding slot on the shadow stack (and to pass this through the ftrace infrastructure), and we expect that everyone will eventually move to the patchable-function-entry based graph tracer anyway, so for now let's disable SCS when the mcount-based graph tracer is enabled.

SCS and patchable-function-entry are both supported from LLVM 10.x.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  scs: Add support for stack usage debugging  (Sami Tolvanen)
Implements CONFIG_DEBUG_STACK_USAGE for shadow stacks. When enabled, also prints out the highest shadow stack usage per process.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
[will: rewrote most of scs_check_usage()]
Signed-off-by: Will Deacon <will@kernel.org>
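A sketch of what the usage check can look like, assuming the shadow stack is zero-initialized and grows upwards so that the high-water mark is the last non-zero slot; helper names such as task_scs() and __scs_magic() are assumptions, not necessarily the merged code:

    static void scs_check_usage(struct task_struct *tsk)
    {
        static unsigned long highest;
        unsigned long *p, prev, curr = highest, used = 0;

        if (!IS_ENABLED(CONFIG_DEBUG_STACK_USAGE))
            return;

        /* Walk from the base until the first never-written (zero) slot. */
        for (p = task_scs(tsk); p < __scs_magic(tsk); ++p) {
            if (!READ_ONCE_NOCHECK(*p))
                break;
            used += sizeof(*p);
        }

        /* Keep the global maximum monotonic with a cmpxchg loop. */
        while (used > curr) {
            prev = cmpxchg_relaxed(&highest, curr, used);
            if (prev == curr) {
                pr_info("%s (%d): highest shadow stack usage: %lu bytes\n",
                        tsk->comm, task_pid_nr(tsk), used);
                break;
            }
            curr = prev;
        }
    }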
2020-05-15  scs: Add page accounting for shadow call stack allocations  (Sami Tolvanen)
This change adds accounting for the memory allocated for shadow stacks.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  scs: Add support for Clang's Shadow Call Stack (SCS)  (Sami Tolvanen)
This change adds generic support for Clang's Shadow Call Stack, which uses a shadow stack to protect return addresses from being overwritten by an attacker. Details are available here:

  https://clang.llvm.org/docs/ShadowCallStack.html

Note that security guarantees in the kernel differ from the ones documented for user space. The kernel must store addresses of shadow stacks in memory, which means an attacker capable of reading and writing arbitrary memory may be able to locate them and hijack control flow by modifying the stacks.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
[will: Numerous cosmetic changes]
Signed-off-by: Will Deacon <will@kernel.org>
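A condensed sketch of the shape of the generic interface (the size, flags and task_scs() accessor are assumptions; the authoritative version is include/linux/scs.h in the tree):

    #define SCS_SIZE    SZ_1K                       /* holds return addresses only */
    #define GFP_SCS     (GFP_KERNEL | __GFP_ZERO)   /* zero-init so usage is measurable */

    static struct kmem_cache *scs_cache;

    int scs_prepare(struct task_struct *tsk, int node)
    {
        void *s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);

        if (!s)
            return -ENOMEM;

        task_scs(tsk) = s;      /* x18 will point at the first free slot */
        return 0;
    }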
2020-05-12  arm64: bti: Fix support for userspace only BTI  (Mark Brown)
When setting PTE_MAYBE_GP we check system_supports_bti(), but this is also true for systems where only CONFIG_BTI (the userspace-only option) is set, causing us to enable BTI on some kernel text. Add an extra check for the kernel mode option, using an ifdef due to line length.

Fixes: c8027285e366 ("arm64: Set GP bit in kernel page tables to enable BTI for the kernel")
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200512113950.29996-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
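Reconstructed from the description above (not a verbatim diff), the fix reads roughly:

    /* Only mark kernel mappings as guarded when kernel mode BTI is
     * configured; system_supports_bti() alone is also true on
     * userspace-only BTI configurations. */
    #ifdef CONFIG_ARM64_BTI_KERNEL
    #define PTE_MAYBE_GP    (system_supports_bti() ? PTE_GP : 0)
    #else
    #define PTE_MAYBE_GP    0
    #endif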
2020-05-12  arm64: cpufeature: Add "or" to mitigations for multiple errata  (Geert Uytterhoeven)
Several actions are not mitigations for a single erratum, but for multiple errata. However, printing a line like

  CPU features: detected: ARM errata 1165522, 1530923

may give the false impression that all of the listed errata have been detected. This can confuse the user, who may think his Cortex-A55 is suddenly affected by a Cortex-A76 erratum.

Add "or" to all descriptions for mitigations for multiple errata, to make it clear that only one or more of the errata printed are applicable, and not necessarily all of them.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200512145255.5520-1-geert+renesas@glider.be
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-12  arm64: kconfig: Update and comment GCC version check for kernel BTI  (Will Deacon)
Some versions of GCC are known to suffer from a BTI code generation bug, meaning that CONFIG_CC_HAS_BRANCH_PROT_PAC_RET_BTI cannot be solely used to determine whether or not we can compile the kernel with BTI enabled. Update the BTI Kconfig entry to refer to the relevant GCC bugzilla entry (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697) and update the check now that the fix has been merged into GCC release 10.1.

Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-11  ACPI: IORT: Add extra message "applying workaround" for off-by-1 issue  (Hanjun Guo)
As we have already applied a workaround for the off-by-1 issue, add an extra "applying workaround" message so that people are less alarmed when they see the FW_BUG message in the boot log.

Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/1588910198-8348-1-git-send-email-guohanjun@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-11  Documentation/vmcoreinfo: Add documentation for 'KERNELPACMASK'  (Amit Daniel Kachhap)
Add documentation for the KERNELPACMASK variable being added to the vmcoreinfo. It indicates the mask of the PAC bits in signed kernel pointers when the Armv8.3-A Pointer Authentication feature is present.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Baoquan He <bhe@redhat.com>
Link: https://lore.kernel.org/r/1589202116-18265-2-git-send-email-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-11  arm64/crash_core: Export KERNELPACMASK in vmcoreinfo  (Amit Daniel Kachhap)
Recently the arm64 Linux kernel added support for the Armv8.3-A Pointer Authentication feature. If this feature is enabled in the kernel and the hardware supports address authentication, then return addresses are signed and stored in the stack to prevent ROP-style attacks. Kdump will now dump the kernel with signed lr values in the stack.

Any user analysis tool for this kernel dump may need the kernel PAC mask information in vmcoreinfo to generate the correct return address for stacktrace purposes, as well as to resolve the symbol name. This patch is similar to commit ec6e822d1a22d0eef ("arm64: expose user PAC bit positions via ptrace"), which exposes the PAC mask information via ptrace interfaces.

The config guard ARM64_PTR_AUTH is removed from asm/compiler.h so that macros like ptrauth_kernel_pac_mask can be used unguarded. This config protection is confusing, as the pointer authentication feature may be missing at runtime even though the config is present.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1589202116-18265-1-git-send-email-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
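A sketch of the export itself (the exact format string is assumed):

    /* arch/arm64/kernel/crash_core.c, roughly: */
    void arch_crash_save_vmcoreinfo(void)
    {
        /* existing VA_BITS, kimage_voffset, PHYS_OFFSET exports elided */
        vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
                              system_supports_address_auth() ?
                              ptrauth_kernel_pac_mask() : 0);
    }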
2020-05-11  bpf, arm64: Optimize ADD,SUB,JMP BPF_K using arm64 add/sub immediates  (Luke Nelson)
The current code for BPF_{ADD,SUB} BPF_K loads the BPF immediate to a temporary register before performing the addition/subtraction. Similarly, BPF_JMP BPF_K cases load the immediate to a temporary register before comparison.

This patch introduces optimizations that use arm64 immediate add, sub, cmn, or cmp instructions when the BPF immediate fits. If the immediate does not fit, it falls back to using a temporary register.

Example of generated code for BPF_ALU64_IMM(BPF_ADD, R0, 2):

without optimization:

  24: mov x10, #0x2
  28: add x7, x7, x10

with optimization:

  24: add x7, x7, #0x2

The code could use A64_{ADD,SUB}_I directly and check if it returns AARCH64_BREAK_FAULT, similar to how logical immediates are handled. However, aarch64_insn_gen_add_sub_imm from insn.c prints error messages when the immediate does not fit, and it's simpler to check if the immediate fits ahead of time.

Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20200508181547.24783-4-luke.r.nels@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
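The fits-check itself is short, since AArch64 add/sub immediates are 12 bits wide, optionally shifted left by 12; this helper is close to, though not guaranteed identical to, the merged one:

    /* Is imm encodable as a 12-bit unsigned immediate, optionally shifted
     * left by 12 bits? */
    static inline bool is_addsub_imm(u32 imm)
    {
        /* Either imm12 or imm12 << 12. */
        return !(imm & ~0xfff) || !(imm & ~0xfff000);
    }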
2020-05-11  bpf, arm64: Optimize AND,OR,XOR,JSET BPF_K using arm64 logical immediates  (Luke Nelson)
The current code for BPF_{AND,OR,XOR,JSET} BPF_K loads the immediate to a temporary register before use. This patch changes the code to avoid using a temporary register when the BPF immediate is encodable using an arm64 logical immediate instruction. If the encoding fails (due to the immediate not being encodable), it falls back to using a temporary register.

Example of generated code for BPF_ALU32_IMM(BPF_AND, R0, 0x80000001):

without optimization:

  24: mov  w10, #0x8000ffff
  28: movk w10, #0x1
  2c: and  w7, w7, w10

with optimization:

  24: and  w7, w7, #0x80000001

Since the encoding process is quite complex, the JIT reuses existing functionality in arch/arm64/kernel/insn.c for encoding logical immediates rather than duplicate it in the JIT.

Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/r/20200508181547.24783-3-luke.r.nels@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
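The emit path with its fallback plausibly looks like the following fragment (the A64_AND_I macro name is an assumption; emit_a64_mov_i() is the JIT's existing immediate loader):

    u32 a64_insn = A64_AND_I(is64, dst, dst, imm);

    if (a64_insn != AARCH64_BREAK_FAULT) {
        emit(a64_insn, ctx);                     /* single "and Rd, Rn, #imm" */
    } else {
        emit_a64_mov_i(is64, tmp, imm, ctx);     /* build imm in tmp... */
        emit(A64_AND(is64, dst, dst, tmp), ctx); /* ...then register form */
    }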
2020-05-11  arm64: insn: Fix two bugs in encoding 32-bit logical immediates  (Luke Nelson)
This patch fixes two issues present in the current function for encoding arm64 logical immediates when using the 32-bit variants of instructions.

First, the code does not correctly reject an all-ones 32-bit immediate, and returns an undefined instruction encoding.

Second, the code incorrectly rejects some 32-bit immediates that are actually encodable as logical immediates. The root cause is that the code uses a default mask of 64-bit all-ones, even for 32-bit immediates. This causes an issue later on when the default mask is used to fill the top bits of the immediate with ones, shown here:

    /*
     * Pattern: 0..01..10..01..1
     *
     * Fill the unused top bits with ones, and check if
     * the result is a valid immediate (all ones with a
     * contiguous ranges of zeroes).
     */
    imm |= ~mask;
    if (!range_of_ones(~imm))
        return AARCH64_BREAK_FAULT;

To see the problem, consider an immediate of the form 0..01..10..01..1, where the upper 32 bits are zero, such as 0x80000001. The code checks if ~(imm | ~mask) contains a range of ones: the incorrect mask yields 1..10..01..10..0, which fails the check; the correct mask yields 0..01..10..0, which succeeds.

The fix for both issues is to generate a correct mask based on the instruction immediate size, and use the mask to check for all-ones, all-zeroes, and values wider than the mask.

Currently, arch/arm64/kvm/va_layout.c is the only user of this function, which uses 64-bit immediates and therefore won't trigger these bugs.

We tested the new code against llvm-mc with all 1,302 encodable 32-bit logical immediates and all 5,334 encodable 64-bit logical immediates.

Fixes: ef3935eeebff ("arm64: insn: Add encoder for bitwise operations using literals")
Suggested-by: Will Deacon <will@kernel.org>
Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200508181547.24783-2-luke.r.nels@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
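The shape of the fix, reconstructed from the description above:

    u64 mask = GENMASK(esz - 1, 0);     /* esz is 32 or 64, per the variant */

    /* Can't encode full zeroes, full ones, or a value wider than the mask. */
    if (!imm || imm == mask || imm & ~mask)
        return AARCH64_BREAK_FAULT;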
2020-05-07  arm64: vdso: Map the vDSO text with guarded pages when built for BTI  (Mark Brown)
The kernel is responsible for mapping the vDSO into userspace processes, including mapping the text section as executable. Handle the mapping of the vDSO for BTI similarly, mapping the text section as guarded pages so the BTI annotations in the vDSO become effective when they are present.

This will mean that we can have BTI active for the vDSO in processes that do not otherwise support BTI. This should not be an issue for any expected use of the vDSO.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-12-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
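A sketch of the mapping change; gp_flags and vdso_info.cm are assumed local names:

    /* Only request guarded pages when the CPU actually supports BTI. */
    unsigned long gp_flags = system_supports_bti() ? VM_ARM64_BTI : 0;

    vdso_mapping = _install_special_mapping(mm, vdso_base, vdso_text_len,
                                            VM_READ | VM_EXEC | gp_flags |
                                            VM_MAYREAD | VM_MAYWRITE |
                                            VM_MAYEXEC,
                                            vdso_info.cm);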
2020-05-07  arm64: vdso: Force the vDSO to be linked as BTI when built for BTI  (Mark Brown)
When the kernel and hence vDSO are built with BTI enabled, force the linker to link the vDSO as BTI. This will cause the linker to warn if any of the input files do not have the BTI annotation, ensuring that we don't silently fail to provide a vDSO that is built and annotated for BTI when the kernel is being built with BTI.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-11-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: vdso: Annotate for BTI  (Mark Brown)
Generate BTI annotations for all assembly files included in the 64-bit vDSO.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-10-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: asm: Provide a mechanism for generating ELF note for BTI  (Mark Brown)
ELF files built for BTI should have a program property note section which identifies them as such. The linker expects to find this note in all object files it is linking into a BTI annotated output; the compiler will ensure that this happens for C files, but for assembler files we need to do this in the source, so provide a macro which can be used for this purpose. To support likely future requirements for additional notes, we split the definition of the flags to set for BTI code from the macro that creates the note itself.

This is mainly for use in the vDSO, which should be a normal ELF shared library and should therefore include BTI annotations when built for BTI.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200506195138.22086-9-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
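For reference, a conceptual C view of the note the macro emits; the real macro uses assembler directives, and the constants below are the standard ELF gABI / AArch64 ABI values:

    struct gnu_property_note {
        u32  namesz;     /* 4: "GNU" plus NUL */
        u32  descsz;     /* 16 on LP64: one property, padded to 8 bytes */
        u32  type;       /* NT_GNU_PROPERTY_TYPE_0 (5) */
        char name[4];    /* "GNU" */
        u32  pr_type;    /* GNU_PROPERTY_AARCH64_FEATURE_1_AND (0xc0000000) */
        u32  pr_datasz;  /* 4 */
        u32  pr_data;    /* GNU_PROPERTY_AARCH64_FEATURE_1_BTI (bit 0),
                            optionally | ..._PAC (bit 1) */
        u32  pr_pad;     /* alignment */
    };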
2020-05-07  arm64: bti: Provide Kconfig for kernel mode BTI  (Mark Brown)
Now that all the code is in place, provide a Kconfig option allowing users to enable BTI for the kernel if their toolchain supports it, defaulting it on since this has security benefits. This is a separate configuration option since we currently don't support secondary CPUs that lack BTI if the boot CPU supports it.

Code generation issues mean that current GCC 9 versions are not able to produce usable BTI binaries, so we disable support for building with GCC versions prior to 10; once a fix is backported to GCC 9 the dependencies will be updated.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-8-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: mm: Mark executable text as guarded pages  (Mark Brown)
When the kernel is built for BTI and running on a system which supports it, make all executable text guarded pages to ensure that loadable module and JITed BPF code is protected by BTI.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-7-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: bpf: Annotate JITed code for BTI  (Mark Brown)
In order to extend the protection offered by BTI to all code executing in kernel mode we need to annotate JITed BPF code appropriately for BTI. To do this we need to add a landing pad to the start of each BPF function and also immediately after the function prologue if we are emitting a function which can be tail called. Jumps within BPF functions are all to immediate offsets and therefore do not require landing pads.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-6-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
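A fragment showing where the landing pads go (the A64_BTI_C/A64_BTI_J encoding macro names are assumptions):

    /* Entry point: reached via BLR, so a "bti c" landing pad. */
    if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
        emit(A64_BTI_C, ctx);

    /* ... function prologue ... */

    /* Tail calls branch (BR) past the prologue, so emit a "bti j"
     * landing pad here as well. */
    if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL))
        emit(A64_BTI_J, ctx);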
2020-05-07  arm64: Set GP bit in kernel page tables to enable BTI for the kernel  (Mark Brown)
Now that the kernel is built with BTI annotations, enable the feature by setting the GP bit in the stage 1 translation tables. This is done based on the features supported by the boot CPU so that we do not need to rewrite the translation tables.

In order to avoid potential issues on big.LITTLE systems when there is a mix of BTI and non-BTI capable CPUs in the system and we have enabled kernel mode BTI, we change BTI to be a _STRICT_BOOT_CPU_FEATURE when we have kernel BTI. This will prevent any CPUs that don't support BTI being started if the boot CPU supports BTI, rather than simply not using BTI as we do when supporting BTI only in userspace. The main concern is the possibility of BTYPE being preserved by a CPU that does not implement BTI when a thread is migrated to it, resulting in an incorrect state which could generate an exception when the thread migrates back to a CPU that does support BTI. If we encounter practical systems which mix BTI and non-BTI CPUs we will need to revisit this implementation.

Since we currently do not generate landing pads in the BPF JIT, we only map the base kernel text in this way.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-5-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
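The cpufeature type switch described above, roughly (a sketch of the capability entry; the capability name is assumed):

    {
        .desc = "Branch Target Identification",
        .capability = ARM64_BTI,
    #ifdef CONFIG_ARM64_BTI_KERNEL
        .type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
    #else
        .type = ARM64_CPUCAP_SYSTEM_FEATURE,
    #endif
        /* .matches, .sys_reg and field details elided */
    },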
2020-05-07  arm64: asm: Override SYM_FUNC_START when building the kernel with BTI  (Mark Brown)
When the kernel is built for BTI, override SYM_FUNC_START and related macros to add a BTI landing pad to the start of all global functions, ensuring that they are BTI safe. The ; at the end of the BTI_x macros is for the benefit of the macro-generated functions in xen-hypercall.S.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
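A sketch of the override; "hint 34" is the encoding of "bti c", so the macro also assembles on toolchains that predate BTI (exact guards assumed):

    /* arch/arm64/include/asm/linkage.h, roughly: */
    #define BTI_C hint 34 ;

    #define SYM_FUNC_START(name)                        \
        SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)      \
        BTI_C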
2020-05-07  arm64: bti: Support building kernel C code using BTI  (Mark Brown)
When running with BTI enabled we need to ask the compiler to enable generation of BTI landing pads beyond those generated as a result of pointer authentication instructions being landing pads. Since the two features are practically speaking unlikely to be used separately we will make kernel mode BTI depend on pointer authentication in order to simplify the Makefile.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200506195138.22086-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: Document why we enable PAC support for leaf functions  (Mark Brown)
Document the fact that we enable pointer authentication protection for leaf functions since there is some narrow potential for ROP protection benefits and little overhead has been observed.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20200506195138.22086-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-07  arm64: vdso: Add --eh-frame-hdr to ldflags  (Vincenzo Frascino)
LLVM's unwinder depends on the .eh_frame_hdr section being present for unwinding. However, when compiling Linux with GCC the section is not present in the vdso library object, and when compiling with Clang it is present but has zero length. With GCC the problem was not spotted because the libgcc unwinder does not require the .eh_frame_hdr section to be present.

Add --eh-frame-hdr to ldflags to correctly generate and populate the section for both GCC and LLVM.

Fixes: 28b1a824a4f44 ("arm64: vdso: Substitute gettimeofday() with C implementation")
Reported-by: Tamas Zsoldos <tamas.zsoldos@arm.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Tamas Zsoldos <tamas.zsoldos@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200507104049.47834-1-vincenzo.frascino@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-05  arm64: cacheflush: Fix KGDB trap detection  (Daniel Thompson)
flush_icache_range() contains a bodge to avoid issuing IPIs when the kgdb trap handler is running, because issuing IPIs is unsafe (and not needed) in this execution context. However the current test, based on kgdb_connected, is flawed: it both over-matches and under-matches.

The over-match occurs because kgdb_connected is set when gdb attaches to the stub and remains set during normal running. This is relatively harmless because in almost all cases irqs_disabled() will be false.

The under-match is more serious. When kdb is used instead of kgdb to access the debugger, kgdb_connected is not set in all the places that the debug core updates sw breakpoints (and hence flushes the icache). This can lead to deadlock.

Fix by replacing the ad-hoc check with the proper kgdb macro. This also allows us to drop the #ifdef wrapper.

Fixes: 3b8c9f1cdfc5 ("arm64: IPI each CPU after invalidating the I-cache for kernel mappings")
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20200504170518.2959478-1-daniel.thompson@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
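The replacement test, per the description above, sketched in the context of flush_icache_range():

    static inline void flush_icache_range(unsigned long start, unsigned long end)
    {
        __flush_icache_range(start, end);

        /* Don't IPI the other CPUs from the debugger's trap handler: IRQs
         * are masked and the IPIs would deadlock. in_dbg_master() covers
         * both kgdb and kdb, and needs no #ifdef CONFIG_KGDB wrapper. */
        if (in_dbg_master())
            return;

        kick_all_cpus_sync();
    }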
2020-05-05  Merge branches 'for-next/asm' and 'for-next/insn' into for-next/bti  (Will Deacon)
Merge in dependencies for in-kernel Branch Target Identification support.

* for-next/asm:
  arm64: Disable old style assembly annotations
  arm64: kernel: Convert to modern annotations for assembly functions
  arm64: entry: Refactor and modernise annotation for ret_to_user
  x86/asm: Provide a Kconfig symbol for disabling old assembly annotations
  x86/32: Remove CONFIG_DOUBLEFAULT

* for-next/insn:
  arm64: insn: Report PAC and BTI instructions as skippable
  arm64: insn: Don't assume unrecognized HINTs are skippable
  arm64: insn: Provide a better name for aarch64_insn_is_nop()
  arm64: insn: Add constants for new HINT instruction decode
2020-05-05  Merge branch 'for-next/bti-user' into for-next/bti  (Will Deacon)
Merge in user support for Branch Target Identification, which narrowly missed the cut for 5.7 after a late ABI concern.

* for-next/bti-user:
  arm64: bti: Document behaviour for dynamically linked binaries
  arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTY
  arm64: BTI: Add Kconfig entry for userspace BTI
  mm: smaps: Report arm64 guarded pages in smaps
  arm64: mm: Display guarded pages in ptdump
  KVM: arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: BTI: Reset BTYPE when skipping emulated instructions
  arm64: traps: Shuffle code to eliminate forward declarations
  arm64: unify native/compat instruction skipping
  arm64: BTI: Decode BYTPE bits when printing PSTATE
  arm64: elf: Enable BTI at exec based on ELF program properties
  elf: Allow arch to tweak initial mmap prot flags
  arm64: Basic Branch Target Identification support
  ELF: Add ELF program property parsing support
  ELF: UAPI and Kconfig additions for ELF program properties
2020-05-05  arm64: cpufeature: Group indexed system register definitions by name  (Will Deacon)
Some system registers contain an index in the name (e.g. ID_MMFR<n>_EL1) and, while this index often follows the register encoding, newer additions to the architecture are necessarily tacked on the end. Sorting these registers by encoding therefore becomes a bit of a mess.

Group the indexed system register definitions by name so that it's easier to read and will hopefully reduce the chance of us accidentally introducing duplicate definitions in the future.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-05  arm64: cpufeature: Extend comment to describe absence of field info  (Will Deacon)
When a feature register field is omitted from the description of the register, the corresponding bits are treated as STRICT RES0, including for KVM guests. This is subtly different to declaring the field as HIDDEN/STRICT/EXACT/0, so update the comment to call this out.

Signed-off-by: Will Deacon <will@kernel.org>
2020-05-05  arm64: Sort vendor-specific errata  (Geert Uytterhoeven)
Sort configuration options for vendor-specific errata by vendor, to increase uniformity. Move ARM64_WORKAROUND_REPEAT_TLBI up, as it is also selected by ARM64_ERRATUM_1286807.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  firmware: arm_sdei: Drop check for /firmware/ node and always register driver  (Sudeep Holla)
As with most drivers, let us register this driver unconditionally by dropping the checks for the presence of firmware nodes (DT) or entries (ACPI). Further, as mentioned in the commit acafce48b07b ("firmware: arm_sdei: Fix DT platform device creation"), the core takes care of the creation of the platform device when the appropriate device node is found and probe is called accordingly. Let us check only for the presence of an ACPI firmware entry before creating the platform device, and flag a warning if we fail.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20200422122823.1390-1-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64/cpuinfo: Move device_initcall() near cpuinfo_regs_init()  (Anshuman Khandual)
This moves device_initcall() near cpuinfo_regs_init(), making the calling sequence clear. Besides, it is standard practice to have device_initcall() (any __initcall for that matter) just after the function it actually calls.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1588595377-4503-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: insn: Report PAC and BTI instructions as skippable  (Mark Brown)
The PAC and BTI instructions can be safely skipped, so report them as such, allowing them to be probed.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-5-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: insn: Don't assume unrecognized HINTs are skippable  (Mark Brown)
Currently the kernel assumes that any HINT which it does not explicitly recognise is skippable. This is not robust as new instructions may be added which need special handling, and in any case software should only be using explicit NOP instructions for deliberate NOPs.

This has the effect of rendering PAC and BTI instructions unprobeable, which means that probes can't be inserted on the first instruction of functions built with those features.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: insn: Provide a better name for aarch64_insn_is_nop()  (Mark Brown)
The current aarch64_insn_is_nop() has exactly one caller, which uses it solely to identify if the instruction is a HINT that can safely be stepped, requiring us to list things that aren't NOPs and make things more confusing than they need to be. Rename the function to reflect the actual usage and make things more clear.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: insn: Add constants for new HINT instruction decode  (Mark Brown)
Add constants for decoding newer instructions defined in the HINT space. Since we are now decoding both the op2 and CRm fields, rename the enum as well; this is compatible with what the existing users are doing.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
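A sketch of the added decode constants: the HINT immediate lives in bits [11:5] (CRm:op2), hence the << 5, and the values follow the architectural HINT numbers (BTI = #32, BTI C = #34, and so on), though the exact enumerator spelling here is assumed:

    enum aarch64_insn_hint_cr_op {
        AARCH64_INSN_HINT_NOP     = 0x00 << 5,
        AARCH64_INSN_HINT_PACIASP = 0x19 << 5,  /* HINT #25 */
        AARCH64_INSN_HINT_AUTIASP = 0x1d << 5,  /* HINT #29 */
        AARCH64_INSN_HINT_BTI     = 0x20 << 5,  /* HINT #32 */
        AARCH64_INSN_HINT_BTIC    = 0x22 << 5,  /* HINT #34 */
        AARCH64_INSN_HINT_BTIJ    = 0x24 << 5,  /* HINT #36 */
        AARCH64_INSN_HINT_BTIJC   = 0x26 << 5,  /* HINT #38 */
    };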
2020-05-04  arm64: Unify WORKAROUND_SPECULATIVE_AT_{NVHE,VHE}  (Andrew Scull)
Errata 1165522, 1319367 and 1530923 each allow TLB entries to be allocated as a result of a speculative AT instruction. In order to avoid mandating VHE on certain affected CPUs, apply the workaround to both the nVHE and the VHE case for all affected CPUs.

Signed-off-by: Andrew Scull <ascull@google.com>
Acked-by: Will Deacon <will@kernel.org>
CC: Marc Zyngier <maz@kernel.org>
CC: James Morse <james.morse@arm.com>
CC: Suzuki K Poulose <suzuki.poulose@arm.com>
CC: Will Deacon <will@kernel.org>
CC: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20200504094858.108917-1-ascull@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: Disable old style assembly annotations  (Mark Brown)
Now that we have converted arm64 over to the new style SYM_ assembler annotations, select ARCH_USE_SYM_ANNOTATIONS so the old macros aren't available and we don't regress.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200501115430.37315-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: kernel: Convert to modern annotations for assembly functions  (Mark Brown)
In an effort to clarify and simplify the annotation of assembly functions in the kernel, new macros have been introduced. These replace ENTRY and ENDPROC and also add a new annotation for static functions which previously had no ENTRY equivalent. Update the annotations in the core kernel code to the new macros.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200501115430.37315-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  arm64: entry: Refactor and modernise annotation for ret_to_user  (Mark Brown)
As part of an effort to clarify and clean up the assembler annotations, new macros have been introduced which annotate the start and end of blocks of code in assembler files. Currently ret_to_user has an out of line slow path work_pending placed above the main function, which makes annotating the start and end of these blocks of code awkward.

Since work_pending is only referenced from within ret_to_user, try to make things a bit clearer by moving it after the current ret_to_user and then marking both ret_to_user and work_pending as part of a single ret_to_user code block.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200501115430.37315-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2020-05-04  ACPI/IORT: work around num_ids ambiguity  (Ard Biesheuvel)
The ID mapping table structure of the IORT table describes the size of a range using a num_ids field carrying the number of IDs in the region minus one. This has been misinterpreted in the past in the parsing code, and firmware is known to have shipped in which this results in an ambiguity, where regions that should be adjacent have an overlap of one value.

So let's work around this by detecting this case specifically: when resolving an ID translation, allow one that matches right at the end of a multi-ID region to be superseded by a subsequent one. To prevent potential regressions on broken firmware that happened to work before, only take the subsequent match into account if it occurs at the start of a mapping region.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Link: https://lore.kernel.org/r/20200501161014.5935-3-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
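A conceptual check capturing the quirk (not the kernel's exact code; acpi_iort_id_mapping is the ACPICA table structure):

    #include <acpi/actbl2.h>

    /* A match at the very end of one mapping region may be superseded,
     * but only by a match at the start of a subsequent region. */
    static bool iort_match_may_supersede(u32 id,
                                         const struct acpi_iort_id_mapping *prev,
                                         const struct acpi_iort_id_mapping *next)
    {
        return id == prev->input_base + prev->id_count &&  /* off-by-1 edge */
               id == next->input_base;                     /* start of next */
    }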
2020-05-04  Revert "ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map()"  (Ard Biesheuvel)
This reverts commit 3c23b83a88d00383e1d498cfa515249aa2fe0238.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20200501161014.5935-2-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>