path: root/arch
Age  Commit message  Author
2020-12-24  Merge tag 'riscv-for-linus-5.11-rc1' of …  Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux Pull RISC-V fix from Palmer Dabbelt "Avoid trying to initialize memory regions outside the usable range" * tag 'riscv-for-linus-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: RISC-V: Fix usage of memblock_enforce_memory_limit
2020-12-24  Merge tag 'powerpc-5.11-2' of …  Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux Pull powerpc fixes from Michael Ellerman: - Four commits fixing various things in the new C VDSO code - One fix for a 32-bit VMAP stack bug - Two minor build fixes Thanks to Cédric Le Goater, Christophe Leroy, and Will Springer. * tag 'powerpc-5.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: powerpc/32: Fix vmap stack - Properly set r1 before activating MMU on syscall too powerpc/vdso: Fix DOTSYM for 32-bit LE VDSO powerpc/vdso: Don't pass 64-bit ABI cflags to 32-bit VDSO powerpc/vdso: Block R_PPC_REL24 relocations powerpc/smp: Add __init to init_big_cores() powerpc/time: Force inlining of get_tb() powerpc/boot: Fix build of dts/fsl
2020-12-24  Merge tag 'irq-core-2020-12-23' of …  Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull irq updates from Thomas Gleixner: "This is the second attempt after the first one failed miserably and got zapped to unblock the rest of the interrupt related patches. A treewide cleanup of interrupt descriptor (ab)use with all sorts of racy accesses, inefficient and disfunctional code. The goal is to remove the export of irq_to_desc() to prevent these things from creeping up again" * tag 'irq-core-2020-12-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (30 commits) genirq: Restrict export of irq_to_desc() xen/events: Implement irq distribution xen/events: Reduce irq_info:: Spurious_cnt storage size xen/events: Only force affinity mask for percpu interrupts xen/events: Use immediate affinity setting xen/events: Remove disfunct affinity spreading xen/events: Remove unused bind_evtchn_to_irq_lateeoi() net/mlx5: Use effective interrupt affinity net/mlx5: Replace irq_to_desc() abuse net/mlx4: Use effective interrupt affinity net/mlx4: Replace irq_to_desc() abuse PCI: mobiveil: Use irq_data_get_irq_chip_data() PCI: xilinx-nwl: Use irq_data_get_irq_chip_data() NTB/msi: Use irq_has_action() mfd: ab8500-debugfs: Remove the racy fiddling with irq_desc pinctrl: nomadik: Use irq_has_action() drm/i915/pmu: Replace open coded kstat_irqs() copy drm/i915/lpe_audio: Remove pointless irq_to_desc() usage s390/irq: Use irq_desc_kstat_cpu() in show_msi_interrupt() parisc/irq: Use irq_desc_kstat_cpu() in show_interrupts() ...
2020-12-24  Merge tag 'efi_updates_for_v5.11' of …  Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull EFI updates from Borislav Petkov: "These got delayed due to a last minute ia64 build issue which got fixed in the meantime. EFI updates collected by Ard Biesheuvel: - Don't move BSS section around pointlessly in the x86 decompressor - Refactor helper for discovering the EFI secure boot mode - Wire up EFI secure boot to IMA for arm64 - Some fixes for the capsule loader - Expose the RT_PROP table via the EFI test module - Relax DT and kernel placement restrictions on ARM with a few followup fixes: - fix the build breakage on IA64 caused by recent capsule loader changes - suppress a type mismatch build warning in the expansion of EFI_PHYS_ALIGN on ARM" * tag 'efi_updates_for_v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: efi: arm: force use of unsigned type for EFI_PHYS_ALIGN efi: ia64: disable the capsule loader efi: stub: get rid of efi_get_max_fdt_addr() efi/efi_test: read RuntimeServicesSupported efi: arm: reduce minimum alignment of uncompressed kernel efi: capsule: clean scatter-gather entries from the D-cache efi: capsule: use atomic kmap for transient sglist mappings efi: x86/xen: switch to efi_get_secureboot_mode helper arm64/ima: add ima_arch support ima: generalize x86/EFI arch glue for other EFI architectures efi: generalize efi_get_secureboot efi/libstub: EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER should not default to yes efi/x86: Only copy the compressed kernel image in efi_relocate_kernel() efi/libstub/x86: simplify efi_is_native()
2020-12-22  Merge tag 'kbuild-v5.11' of …  Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild Pull Kbuild updates from Masahiro Yamada: - Use /usr/bin/env for shebang lines in scripts - Remove useless -Wnested-externs warning flag - Update documents - Refactor log handling in modpost - Stop building modules without MODULE_LICENSE() tag - Make the insane combination of 'static' and EXPORT_SYMBOL an error - Improve genksyms to handle _Static_assert() * tag 'kbuild-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: Documentation/kbuild: Document platform dependency practises Documentation/kbuild: Document COMPILE_TEST dependencies genksyms: Ignore module scoped _Static_assert() modpost: turn static exports into error modpost: turn section mismatches to error from fatal() modpost: change license incompatibility to error() from fatal() modpost: turn missing MODULE_LICENSE() into error modpost: refactor error handling and clarify error/fatal difference modpost: rename merror() to error() kbuild: don't hardcode depmod path kbuild: doc: document subdir-y syntax kbuild: doc: clarify the difference between extra-y and always-y kbuild: doc: split if_changed explanation to a separate section kbuild: doc: merge 'Special Rules' and 'Custom kbuild commands' sections kbuild: doc: fix 'List directories to visit when descending' section kbuild: doc: replace arch/$(ARCH)/ with arch/$(SRCARCH)/ kbuild: doc: update the description about kbuild Makefiles Makefile.extrawarn: remove -Wnested-externs warning tweewide: Fix most Shebang lines
2020-12-22  Merge branch 'akpm' (patches from Andrew)  Linus Torvalds
Merge KASAN updates from Andrew Morton. This adds a new hardware tag-based mode to KASAN. The new mode is similar to the existing software tag-based KASAN, but relies on arm64 Memory Tagging Extension (MTE) to perform memory and pointer tagging (instead of shadow memory and compiler instrumentation). By Andrey Konovalov and Vincenzo Frascino. * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (60 commits) kasan: update documentation kasan, mm: allow cache merging with no metadata kasan: sanitize objects when metadata doesn't fit kasan: clarify comment in __kasan_kfree_large kasan: simplify assign_tag and set_tag calls kasan: don't round_up too much kasan, mm: rename kasan_poison_kfree kasan, mm: check kasan_enabled in annotations kasan: add and integrate kasan boot parameters kasan: inline (un)poison_range and check_invalid_free kasan: open-code kasan_unpoison_slab kasan: inline random_tag for HW_TAGS kasan: inline kasan_reset_tag for tag-based modes kasan: remove __kasan_unpoison_stack kasan: allow VMAP_STACK for HW_TAGS mode kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK kasan: introduce set_alloc_info kasan: rename get_alloc/free_info kasan: simplify quarantine_put call site kselftest/arm64: check GCR_EL1 after context switch ...
2020-12-22  Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux  Linus Torvalds
Pull ARM updates from Russell King: - Rework phys/virt translation - Add KASan support - Move DT out of linear map region - Use more PC-relative addressing in assembly - Remove FP emulation handling while in kernel mode - Link with '-z norelro' - remove old check for GCC <= 4.2 in ARM unwinder code - disable big endian if using clang's linker * tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm: (46 commits) ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section ARM: 9038/1: Link with '-z norelro' ARM: 9037/1: uncompress: Add OF_DT_MAGIC macro ARM: 9036/1: uncompress: Fix dbgadtb size parameter name ARM: 9035/1: uncompress: Add be32tocpu macro ARM: 9033/1: arm/smp: Drop the macro S(x,s) ARM: 9032/1: arm/mm: Convert PUD level pgtable helper macros into functions ARM: 9031/1: hyp-stub: remove unused .L__boot_cpu_mode_offset symbol ARM: 9044/1: vfp: use undef hook for VFP support detection ARM: 9034/1: __div64_32(): straighten up inline asm constraints ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode ARM: 9029/1: Make iwmmxt.S support Clang's integrated assembler ARM: 9028/1: disable KASAN in call stack capturing routines ARM: 9026/1: unwind: remove old check for GCC <= 4.2 ARM: 9025/1: Kconfig: CPU_BIG_ENDIAN depends on !LD_IS_LLD ARM: 9024/1: Drop useless cast of "u64" to "long long" ARM: 9023/1: Spelling s/mmeory/memory/ ARM: 9022/1: Change arch/arm/lib/mem*.S to use WEAK instead of .weak ARM: kvm: replace open coded VA->PA calculations with adr_l call ARM: head.S: use PC relative insn sequence to calculate PHYS_OFFSET ...
2020-12-22  Merge tag 'dma-mapping-5.11' of git://git.infradead.org/users/hch/dma-mapping  Linus Torvalds
Pull dma-mapping updates from Christoph Hellwig: - support for a partial IOMMU bypass (Alexey Kardashevskiy) - add a DMA API benchmark (Barry Song) - misc fixes (Tiezhu Yang, tangjianqiang) * tag 'dma-mapping-5.11' of git://git.infradead.org/users/hch/dma-mapping: selftests/dma: add test application for DMA_MAP_BENCHMARK dma-mapping: add benchmark support for streaming DMA APIs dma-contiguous: fix a typo error in a comment dma-pool: no need to check return value of debugfs_create functions powerpc/dma: Fallback to dma_ops when persistent memory present dma-mapping: Allow mixing bypass and mapped DMA operation
2020-12-22  x86/split-lock: Avoid returning with interrupts enabled  Andi Kleen
When a split lock is detected always make sure to disable interrupts before returning from the trap handler. The kernel exit code assumes that all exits run with interrupts disabled, otherwise the SWAPGS sequence can race against interrupts and cause recursing page faults and later panics. The problem will only happen on CPUs with split lock disable functionality, so Icelake Server, Tiger Lake, Snow Ridge, Jacobsville. Fixes: ca4c6a9858c2 ("x86/traps: Make interrupt enable/disable symmetric in C code") Fixes: bce9b042ec73 ("x86/traps: Disable interrupts in exc_aligment_check()") # v5.8+ Signed-off-by: Andi Kleen <ak@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Tony Luck <tony.luck@intel.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
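As a rough sketch of the constraint described above (not the actual patch; handle_ac_trap is a hypothetical name, the real handler is exc_alignment_check() in arch/x86/kernel/traps.c), any path that conditionally enabled interrupts must disable them again before returning:

    static void handle_ac_trap(struct pt_regs *regs, long error_code)
    {
        cond_local_irq_enable(regs);
        if (handle_user_split_lock(regs, error_code))
            goto out;
        /* ... normal alignment-check handling ... */
    out:
        /* must run on every exit path: the exit code assumes IRQs are off */
        cond_local_irq_disable(regs);
    }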
2020-12-22  kasan: allow VMAP_STACK for HW_TAGS mode  Andrey Konovalov
Even though hardware tag-based mode currently doesn't support checking vmalloc allocations, it doesn't use shadow memory and works with VMAP_STACK as is. Change VMAP_STACK definition accordingly. Link: https://lkml.kernel.org/r/ecdb2a1658ebd88eb276dee2493518ac0e82de41.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/I3552cbc12321dec82cd7372676e9372a2eb452ac Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Marco Elver <elver@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Dmitry Vyukov <dvyukov@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK  Andrey Konovalov
There's a config option CONFIG_KASAN_STACK that has to be enabled for KASAN to use stack instrumentation and perform validity checks for stack variables. There's no need to unpoison stack when CONFIG_KASAN_STACK is not enabled. Only call kasan_unpoison_task_stack[_below]() when CONFIG_KASAN_STACK is enabled. Note, that CONFIG_KASAN_STACK is an option that is currently always defined when CONFIG_KASAN is enabled, and therefore has to be tested with #if instead of #ifdef. Link: https://lkml.kernel.org/r/d09dd3f8abb388da397fd11598c5edeaa83fe559.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/If8a891e9fe01ea543e00b576852685afec0887e3 Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Marco Elver <elver@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Dmitry Vyukov <dvyukov@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
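A minimal sketch of the check this implies (illustrative only; the wrapper name is hypothetical and the real call sites differ). Because CONFIG_KASAN_STACK is always defined as 0 or 1 when CONFIG_KASAN is on, its value is tested with #if rather than its existence with #ifdef:

    static inline void example_unpoison(struct task_struct *tsk)
    {
    #if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
        kasan_unpoison_task_stack(tsk);  /* only when stack instrumentation is on */
    #endif
    }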
2020-12-22  kasan, arm64: enable CONFIG_KASAN_HW_TAGS  Andrey Konovalov
Hardware tag-based KASAN is now ready, enable the configuration option. Link: https://lkml.kernel.org/r/a6fa50d3bb6b318e05c6389a44095be96442b8b0.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  kasan, arm64: print report from tag fault handler  Andrey Konovalov
Add error reporting for hardware tag-based KASAN. When CONFIG_KASAN_HW_TAGS is enabled, print a KASAN report from the arm64 tag fault handler. SAS bits aren't set in ESR for all faults reported in EL1, so it's impossible to find out the size of the access that caused the fault. Adapt the KASAN reporting code to handle this case. Link: https://lkml.kernel.org/r/b559c82b6a969afedf53b4694b475f0234067a1a.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  kasan, arm64: implement HW_TAGS runtime  Andrey Konovalov
Provide implementation of KASAN functions required for the hardware tag-based mode. Those include core functions for memory and pointer tagging (tags_hw.c) and bug reporting (report_tags_hw.c). Also adapt common KASAN code to support the new mode. Link: https://lkml.kernel.org/r/cfd0fbede579a6b66755c98c88c108e54f9c56bf.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  kasan, arm64: expand CONFIG_KASAN checks  Andrey Konovalov
Some #ifdef CONFIG_KASAN checks are only relevant for software KASAN modes (either related to shadow memory or compiler instrumentation). Expand those into CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS. Link: https://lkml.kernel.org/r/e6971e432dbd72bb897ff14134ebb7e169bdcf0c.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  kasan, x86, s390: update undef CONFIG_KASAN  Andrey Konovalov
With the introduction of hardware tag-based KASAN, some kernel checks of the form "#ifdef CONFIG_KASAN" will be updated to "#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)". x86 and s390 use a trick to #undef CONFIG_KASAN for some of the code that isn't linked with the KASAN runtime and shouldn't have any KASAN annotations. Also #undef CONFIG_KASAN_GENERIC together with CONFIG_KASAN. Link: https://lkml.kernel.org/r/9d84bfaaf8fabe0fc89f913c9e420a30bd31a260.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Marco Elver <elver@google.com> Acked-by: Vasily Gorbik <gor@linux.ibm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
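A minimal sketch of the pattern the message describes, assuming a decompressor-style source file that must not carry KASAN instrumentation (the placement is hypothetical, not a specific in-tree file):

    /* hypothetical early-boot/decompressor file, not linked with the KASAN runtime */
    #undef CONFIG_KASAN          /* existing trick: drop KASAN annotations here          */
    #undef CONFIG_KASAN_GENERIC  /* new: also keep "#if defined(CONFIG_KASAN_GENERIC) ..." checks off */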
2020-12-22  arm64: kasan: add arch layer for memory tagging helpers  Andrey Konovalov
This patch adds a set of arch_*() memory tagging helpers, currently only defined for arm64 when hardware tag-based KASAN is enabled. These helpers will be used by the KASAN runtime to implement the hardware tag-based mode. The arch-level indirection is introduced to simplify adding hardware tag-based KASAN support for other architectures in the future by defining the appropriate arch_*() macros. Link: https://lkml.kernel.org/r/fc9e5bb71201c03131a2fc00a74125723568dda9.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  arm64: kasan: align allocations for HW_TAGS  Andrey Konovalov
Hardware tag-based KASAN uses the memory tagging approach, which requires all allocations to be aligned to the memory granule size. Align the allocations to MTE_GRANULE_SIZE via ARCH_SLAB_MINALIGN when CONFIG_KASAN_HW_TAGS is enabled. Link: https://lkml.kernel.org/r/fe64131606b1c2aabfd34ae99554c0d9df18eb19.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
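This boils down to a one-line definition of the arch minimum slab alignment; a sketch of the shape (the real change lives in the arm64 cache header):

    #ifdef CONFIG_KASAN_HW_TAGS
    #define ARCH_SLAB_MINALIGN  MTE_GRANULE_SIZE   /* 16-byte MTE granules */
    #endif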
2020-12-22  arm64: mte: switch GCR_EL1 in kernel entry and exit  Vincenzo Frascino
When MTE is present, the GCR_EL1 register contains the tag mask that allows tags to be excluded from random generation via the IRG instruction. With the introduction of the new Tag-Based KASAN API that provides a mechanism to reserve tags for special reasons, the MTE implementation has to make sure that the GCR_EL1 setting for the kernel does not affect the userspace processes and vice versa. Save and restore the kernel/user mask in GCR_EL1 on kernel entry and exit. Link: https://lkml.kernel.org/r/578b03294708cc7258fad0dc9c2a2e809e5a8214.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  arm64: mte: convert gcr_user into an exclude mask  Vincenzo Frascino
The gcr_user mask is a per-thread mask that represents the tags that are excluded from random generation when the Memory Tagging Extension is present and an 'irg' instruction is invoked. gcr_user affects the behavior on EL0 only. Currently that mask is an include mask, controlled by the user via prctl(), while GCR_EL1 accepts an exclude mask. Convert the include mask into an exclude one to simplify setting the register. Note: This change will affect gcr_kernel (for EL1), introduced with a future patch. Link: https://lkml.kernel.org/r/946dd31be833b660334c4f93410acf6d6c4cf3c4.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
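A sketch of the conversion, assuming the 16-bit exclude field of GCR_EL1 (SYS_GCR_EL1_EXCL_MASK); the thread field name is assumed, not taken from the source, and this is illustrative rather than the exact patch:

    /* user supplies an "include" mask via prctl(); GCR_EL1.EXCL wants "exclude" */
    u64 incl = arg & SYS_GCR_EL1_EXCL_MASK;
    current->thread.gcr_user_excl = ~incl & SYS_GCR_EL1_EXCL_MASK;  /* assumed field name */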
2020-12-22  arm64: kasan: allow enabling in-kernel MTE  Vincenzo Frascino
Hardware tag-based KASAN relies on the Memory Tagging Extension (MTE) feature and requires it to be enabled. This patch adds a new mte_enable_kernel() helper that enables MTE in Synchronous mode in EL1 and is intended to be called from the KASAN runtime during initialization. The Tag Checking operation causes a synchronous data abort as a consequence of a tag check fault when MTE is configured in synchronous mode. As part of this change, enable the match-all tag for EL1 to allow the kernel to access user pages without faulting. This is required because the kernel does not have knowledge of the tags set by the user in a page. Note: For MTE, the TCF bit field in SCTLR_EL1 affects only EL1, in a similar way as TCF0 affects EL0. MTE is built on top of the Top Byte Ignore (TBI) feature, hence we enable TBI as part of this patch as well. Link: https://lkml.kernel.org/r/7352b0a0899af65c2785416c8ca6bf3845b66fa1.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
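A minimal sketch of what such a helper boils down to, assuming the SCTLR_ELx_TCF_* definitions from the arm64 MTE support (not the exact implementation):

    void mte_enable_kernel(void)
    {
        /* switch EL1 tag check faults to synchronous reporting */
        sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, SCTLR_ELx_TCF_SYNC);
        isb();
    }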
2020-12-22  arm64: mte: add in-kernel tag fault handler  Vincenzo Frascino
Add the implementation of the in-kernel fault handler. When a tag fault happens on a kernel address: * MTE is disabled on the current CPU, * the execution continues. When a tag fault happens on a user address: * the kernel executes do_bad_area() and panics. The tag fault handler for kernel addresses is currently empty and will be filled in by a future commit. Link: https://lkml.kernel.org/r/20201203102628.GB2224@gaia Link: https://lkml.kernel.org/r/ad31529b073e22840b7a2246172c2b67747ed7c4.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> [catalin.marinas@arm.com: ensure CONFIG_ARM64_PAN is enabled with MTE] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  arm64: mte: reset the page tag in page->flags  Vincenzo Frascino
For compatibility with the other modes, hardware tag-based KASAN stores the tag associated with a page in page->flags. Due to this, the kernel faults on access when it allocates a page with an initial tag and the user changes the tags. Reset the tag associated by the kernel with a page in all the meaningful places to prevent kernel faults on access. Note: An alternative to this approach could be to modify page_to_virt(). This, though, could end up being racy: if a CPU checks the PG_mte_tagged bit and decides that the page is not tagged, but another CPU maps the same page with PROT_MTE so that it becomes tagged, the subsequent kernel access would fail. Link: https://lkml.kernel.org/r/9073d4e973747a6f78d5bdd7ebe17f290d087096.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
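The reset amounts to re-synchronising the kernel's recorded tag wherever a page is handed out or its user tags may have changed; a minimal sketch using the existing page-tag helper (the placement is illustrative only):

    /* e.g. right after a page is allocated, or after its user tags change */
    page_kasan_tag_reset(page);   /* make the tag kept in page->flags match again */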
2020-12-22  arm64: mte: add in-kernel MTE helpers  Vincenzo Frascino
Provide helper functions to manipulate allocation and pointer tags for kernel addresses. Low-level helper functions (mte_assign_*, written in assembly) operate tag values from the [0x0, 0xF] range. High-level helper functions (mte_get/set_*) use the [0xF0, 0xFF] range to preserve compatibility with normal kernel pointers that have 0xFF in their top byte. MTE_GRANULE_SIZE and related definitions are moved to mte-def.h header that doesn't have any dependencies and is safe to include into any low-level header. Link: https://lkml.kernel.org/r/c31bf759b4411b2d98cdd801eb928e241584fd1f.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
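The two tag ranges mentioned above relate by a fixed offset in the pointer's top byte; a small arithmetic sketch (plain C, no kernel helpers) of how an allocation tag maps onto a pointer tag:

    /* allocation tags are 0x0..0xF; the MTE tag sits in address bits 59:56, and
     * the rest of the top byte is kept at 0xF so that untagged kernel pointers
     * (top byte 0xFF) remain valid: pointer tags therefore span 0xF0..0xFF */
    unsigned char ptr_tag = 0xF0 | (alloc_tag & 0xF);
    unsigned long tagged  = (addr & ~(0xFFUL << 56)) | ((unsigned long)ptr_tag << 56);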
2020-12-22  arm64: enable armv8.5-a asm-arch option  Vincenzo Frascino
Hardware tag-based KASAN relies on Memory Tagging Extension (MTE) which is an armv8.5-a architecture extension. Enable the correct asm option when the compiler supports it in order to allow the usage of ALTERNATIVE()s with MTE instructions. Link: https://lkml.kernel.org/r/d03d1157124ea3532eaeb77507988733f5734986.1606161801.git.andreyknvl@google.com Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  kasan, arm64: rename kasan_init_tags and mark as __init  Andrey Konovalov
Rename kasan_init_tags() to kasan_init_sw_tags() as the upcoming hardware tag-based KASAN mode will have its own initialization routine. Also similarly to kasan_init() mark kasan_init_tags() as __init. Link: https://lkml.kernel.org/r/71e52af72a09f4b50c8042f16101c60e50649fbb.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  kasan, arm64: move initialization message  Andrey Konovalov
Software tag-based KASAN mode is fully initialized with kasan_init_tags(), while the generic mode only requires kasan_init(). Move the initialization message for tag-based mode into kasan_init_tags(). Also fix pr_fmt() usage for KASAN code: generic.c doesn't need it as it doesn't use any printing functions; tag-based mode should use "kasan:" instead of KBUILD_MODNAME (which stands for file name). Link: https://lkml.kernel.org/r/29a30ea4e1750450dd1f693d25b7b6cb05913ecf.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
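The pr_fmt() fix mentioned above amounts to defining the prefix explicitly near the top of the tag-based KASAN source file; a sketch:

    /* must appear before any #include that pulls in printk */
    #define pr_fmt(fmt) "kasan: " fmt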
2020-12-22  kasan, arm64: only use kasan_depth for software modes  Andrey Konovalov
This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Hardware tag-based KASAN won't use kasan_depth. Only define and use it when one of the software KASAN modes are enabled. No functional changes for software modes. Link: https://lkml.kernel.org/r/e16f15aeda90bc7fb4dfc2e243a14b74cc5c8219.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-22  kasan, arm64: only init shadow for software modes  Andrey Konovalov
This is a preparatory commit for the upcoming addition of a new hardware tag-based (MTE-based) KASAN mode. Hardware tag-based KASAN won't be using shadow memory. Only initialize it when one of the software KASAN modes are enabled. No functional changes for software modes. Link: https://lkml.kernel.org/r/d1742eea2cd728d150d49b144e49b6433405c7ba.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-12-21  RISC-V: Fix usage of memblock_enforce_memory_limit  Atish Patra
memblock_enforce_memory_limit accepts the maximum memory size not the maximum address that can be handled by kernel. Fix the function invocation accordingly. Fixes: 1bd14a66ee52 ("RISC-V: Remove any memblock representing unusable memory area") Cc: stable@vger.kernel.org Reported-by: Bin Meng <bin.meng@windriver.com> Tested-by: Bin Meng <bin.meng@windriver.com> Acked-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Atish Patra <atish.patra@wdc.com> Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
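A sketch of the distinction (the variable names here are hypothetical): the argument is a size in bytes of usable memory, not a physical end address:

    /* correct: pass the amount of memory the kernel may use */
    memblock_enforce_memory_limit(memory_limit);

    /* wrong: the function expects a size, not the highest address the
     * kernel can handle */
    /* memblock_enforce_memory_limit(max_mapped_addr); */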
2020-12-21  Merge tag 'clk-for-linus' of …  Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux Pull clk updates from Stephen Boyd: "The core framework got some nice improvements this time around. We gained the ability to get struct clk pointers from a struct clk_hw so that clk providers can consume the clks they provide, if they need to do something like that. This has been a long missing part of the clk provider API that will help us move away from exposing a struct clk pointer in the struct clk_hw. Tracepoints are added for the clk_set_rate() "range" functions, similar to the tracepoints we already have for clk_set_rate() and we added a column to debugfs to help developers understand the hardware enable state of clks in case firmware or bootloader state is different than what is expected. Overall the core changes are mostly improving the clk driver writing experience. At the driver level, we have the usual collection of driver updates and new drivers for new SoCs. This time around the Qualcomm folks introduced a good handful of clk drivers for various parts of three or four SoCs. The SiFive folks added a new clk driver for their FU740 SoCs, coming in second on the diffstat and then Atmel AT91 and Amlogic SoCs had lots of work done after that for various new features. One last thing to note in the driver area is that the i.MX driver has gained a new binding to support SCU clks after being on the list for many months. It uses a two cell binding which is sort of rare in clk DT bindings. Beyond that we have the usual set of driver fixes and tweaks that come from more testing and finding out that some configuration was wrong or that a driver could support being built as a module. Summary: Core: - Add some trace points for clk_set_rate() "range" functions - Add hardware enable information to clk_summary debugfs - Replace clk-provider.h with of_clk.h when possible - Add devm variant of clk_notifier_register() - Add clk_hw_get_clk() to generate a struct clk from a struct clk_hw New Drivers: - Bindings for Canaan K210 SoC clks - Support for SiFive FU740 PRCI - Camera clks on Qualcomm SC7180 SoCs - GCC and RPMh clks on Qualcomm SDX55 SoCs - RPMh clks on Qualcomm SM8350 SoCs - LPASS clks on Qualcomm SM8250 SoCs Updates: - DVFS support for AT91 clk driver - Update git repo branch for Renesas clock drivers - Add camera (CSI) and video-in (VIN) clocks on Renesas R-Car V3U - Add RPC (QSPI/HyperFLASH) clocks on Renesas RZ/G2M, RZ/G2N, and RZ/G2E - Stop using __raw_*() I/O accessors in Renesas clk drivers - One more conversion of DT bindings to json-schema - Make i.MX clk-gate2 driver more flexible - New two cell binding for i.MX SCU clks - Drop of_match_ptr() in i.MX8 clk drivers - Add arch dependencies for Rockchip clk drivers - Fix i2s on Rockchip rk3066 - Add MIPI DSI clks on Amlogic axg and g12 SoCs - Support modular builds of Amlogic clk drivers - Fix an Amlogic Video PLL clock dependency - Samsung Kconfig dependencies updates for better compile test coverage - Refactoring of the Samsung PLL clocks driver - Small Tegra driver cleanups - Minor fixes to Ingenic and VC5 clk drivers - Cleanup patches to remove unused variables and plug memory leaks" * tag 'clk-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux: (134 commits) dt-binding: clock: Document canaan,k210-clk bindings dt-bindings: Add Canaan vendor prefix clk: vc5: Use "idt,voltage-microvolt" instead of "idt,voltage-microvolts" clk: ingenic: Fix divider calculation with div tables clk: sunxi-ng: Make sure divider tables have sentinel clk: s2mps11: Fix a resource 
leak in error handling paths in the probe function clk: mvebu: a3700: fix the XTAL MODE pin to MPP1_9 clk: si5351: Wait for bit clear after PLL reset clk: at91: sam9x60: remove atmel,osc-bypass support clk: at91: sama7g5: register cpu clock clk: at91: clk-master: re-factor master clock clk: at91: sama7g5: do not allow cpu pll to go higher than 1GHz clk: at91: sama7g5: decrease lower limit for MCK0 rate clk: at91: sama7g5: remove mck0 from parent list of other clocks clk: at91: clk-sam9x60-pll: allow runtime changes for pll clk: at91: sama7g5: add 5th divisor for mck0 layout and characteristics clk: at91: clk-master: add 5th divisor for mck master clk: at91: sama7g5: allow SYS and CPU PLLs to be exported and referenced in DT dt-bindings: clock: at91: add sama7g5 pll defines clk: at91: sama7g5: fix compilation error ...
2020-12-21  Merge tag 'm68knommu-for-v5.11' of …  Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu Pull m68knommu updates from Greg Ungerer: - cleanup of 68328 code - align BSS section to 32bit * tag 'm68knommu-for-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/gerg/m68knommu: m68k: m68328: remove duplicate code m68k: m68328: move platform code to separate files m68knommu: align BSS section to 4-byte boundaries
2020-12-21  powerpc/32: Fix vmap stack - Properly set r1 before activating MMU on syscall too  Christophe Leroy
We need r1 to be properly set before activating the MMU, otherwise any new exception taken while saving registers onto the stack in the syscall prologs will use the user stack, which is wrong and will even lock up or crash when KUAP is selected. Do that by switching the meaning of r11 and r1 until we have saved r1 to the stack: copy r1 into r11 and set up the new stack pointer in r1. To avoid complicating and impacting all generic and specific prolog code (and more), copy r1 back into r11 once r11 is saved onto the stack. We could get rid of copying r1 back and forth at the cost of rewriting everything to use r1 instead of r11 all the way when CONFIG_VMAP_STACK is set, but the effort is probably not worth it for now. Fixes: da7bb43ab9da ("powerpc/32: Fix vmap stack - Properly set r1 before activating MMU") Cc: stable@vger.kernel.org # v5.10+ Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a3d819d5c348cee9783a311d5d3f3ba9b48fd219.1608531452.git.christophe.leroy@csgroup.eu
2020-12-21  Merge branch 'devel-stable' into for-next  Russell King
2020-12-21  Merge branches 'fixes' and 'misc' into for-next  Russell King
2020-12-21  ARM: 9027/1: head.S: explicitly map DT even if it lives in the first physical section  Ard Biesheuvel
The early ATAGS/DT mapping code uses SECTION_SHIFT to mask low order bits of R2, and decides that no ATAGS/DTB were provided if the resulting value is 0x0. This means that on systems where DRAM starts at 0x0 (such as Raspberry Pi), no explicit mapping of the DT will be created if R2 points into the first 1 MB section of memory. This was not a problem before, because the decompressed kernel is loaded at the base of DRAM and mapped using sections as well, and so as long as the DT is referenced via a virtual address that uses the same translation (the linear map, in this case), things work fine. However, commit 7a1be318f579 ("9012/1: move device tree mapping out of linear region") changes this, and now the DT is referenced via a virtual address that is disjoint from the linear mapping of DRAM, and so we need the early code to create the DT mapping unconditionally. So let's create the early DT mapping for any value of R2 != 0x0. Reported-by: "kernelci.org bot" <bot@kernelci.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-12-21  ARM: 9038/1: Link with '-z norelro'  Nathan Chancellor
When linking a multi_v7_defconfig + CONFIG_KASAN=y kernel with LD=ld.lld, the following error occurs: $ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- LLVM=1 zImage ld.lld: error: section: .exit.data is not contiguous with other relro sections LLD defaults to '-z relro', which is unneeded for the kernel because program headers are not used nor is there any position independent code generation or linking for ARM. Add '-z norelro' to LDFLAGS_vmlinux to avoid this error. Link: https://github.com/ClangBuiltLinux/linux/issues/1189 Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
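The fix is a linker-flag addition in the ARM Makefile; a sketch of the shape only (the exact set of flags already on that line may differ):

    # arch/arm/Makefile
    LDFLAGS_vmlinux += -z norelro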
2020-12-21  ARM: 9037/1: uncompress: Add OF_DT_MAGIC macro  Geert Uytterhoeven
The DTB magic marker is stored as a 32-bit big-endian value, and thus depends on the CPU's endianness. Add a macro to define this value in native endianness, to reduce #ifdef clutter and (future) duplication. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
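A C-level illustration of what such a macro encodes (the in-tree version is an assembler macro; __ARMEB__ is the compiler's big-endian define): the DTB magic 0xd00dfeed is stored big-endian, so reading it as a native 32-bit word gives a byte-swapped value on little-endian CPUs:

    #ifdef __ARMEB__
    #define OF_DT_MAGIC 0xd00dfeed   /* big-endian CPU: matches as stored   */
    #else
    #define OF_DT_MAGIC 0xedfe0dd0   /* little-endian CPU: byte-swapped view */
    #endif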
2020-12-21  ARM: 9036/1: uncompress: Fix dbgadtb size parameter name  Geert Uytterhoeven
The dbgadtb macro is passed the size of the appended DTB, not the end address. Fixes: c03e41470e901123 ("ARM: 9010/1: uncompress: Print the location of appended DTB") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-12-21  ARM: 9035/1: uncompress: Add be32tocpu macro  Geert Uytterhoeven
DTB stores all values as 32-bit big-endian integers. Add a macro to convert such values to native CPU endianness, to reduce duplication. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-12-21  ARM: 9033/1: arm/smp: Drop the macro S(x,s)  Anshuman Khandual
Mapping between IPI type index and its string is direct without requiring an additional offset. Hence the existing macro S(x, s) is now redundant and can just be dropped. This also makes the code clean and simple. Cc: Marc Zyngier <maz@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-12-21  ARM: 9032/1: arm/mm: Convert PUD level pgtable helper macros into functions  Anshuman Khandual
Macros used as functions can be problematic from the compiler perspective. There was a build failure report caused primarily by an argument variable not being referenced. Hence convert the PUD level pgtable helper macros into functions in order to avoid such problems in the future. In the process, this fixes the argument order in set_pud(), which probably remained hidden because it was a macro. https://lore.kernel.org/linux-mm/202011020749.5XQ3Hfzc-lkp@intel.com/ https://lore.kernel.org/linux-mm/5fa49698.Vu2O3r+dU20UoEJ+%25lkp@intel.com/ Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-12-21  ARM: 9031/1: hyp-stub: remove unused .L__boot_cpu_mode_offset symbol  Ard Biesheuvel
Commit aaac3733171fca94 ("ARM: kvm: replace open coded VA->PA calculations with adr_l call") removed all uses of .L__boot_cpu_mode_offset, so there is no longer a need to define it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-12-21  ARM: 9044/1: vfp: use undef hook for VFP support detection  Ard Biesheuvel
Commit f77ac2e378be9dd6 ("ARM: 9030/1: entry: omit FP emulation for UND exceptions taken in kernel mode") failed to take into account that there is in fact a case where we relied on this code path: during boot, the VFP detection code issues a read of FPSID, which will trigger an undef exception on cores that lack VFP support. So let's reinstate this logic using an undef hook which is registered only for the duration of the initcall to vfp_init(), and which sets VFP_arch to a non-zero value - as before - if no VFP support is present. Fixes: f77ac2e378be9dd6 ("ARM: 9030/1: entry: omit FP emulation for UND ...") Reported-by: "kernelci.org bot" <bot@kernelci.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
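A sketch of the registration pattern described above, assuming the undef_hook API from arch/arm/kernel/traps.c; the mask/value fields are left permissive here and the handler body is simplified, so treat every detail as an assumption rather than the actual patch:

    static int vfp_detect(struct pt_regs *regs, unsigned int instr)
    {
        VFP_arch = UINT_MAX;   /* assumed convention: non-zero means "no VFP" */
        regs->ARM_pc += 4;     /* skip the faulting FPSID read */
        return 0;
    }

    static struct undef_hook vfp_detect_hook = {
        .instr_mask = 0,       /* hypothetical: match any undefined instruction */
        .instr_val  = 0,
        .fn         = vfp_detect,
    };

    /* in vfp_init(): the hook is live only around the probing FPSID read */
    register_undef_hook(&vfp_detect_hook);
    vfpsid = fmrx(FPSID);
    unregister_undef_hook(&vfp_detect_hook);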
2020-12-21  powerpc/vdso: Fix DOTSYM for 32-bit LE VDSO  Michael Ellerman
Skirmisher reported on IRC that the 32-bit LE VDSO was hanging. This turned out to be due to a branch to self in eg. __kernel_gettimeofday. Looking at the disassembly with objdump -dR shows why: 00000528 <__kernel_gettimeofday>: 528: f0 ff 21 94 stwu r1,-16(r1) 52c: a6 02 08 7c mflr r0 530: f0 ff 21 94 stwu r1,-16(r1) 534: 14 00 01 90 stw r0,20(r1) 538: 05 00 9f 42 bcl 20,4*cr7+so,53c <__kernel_gettimeofday+0x14> 53c: a6 02 a8 7c mflr r5 540: ff ff a5 3c addis r5,r5,-1 544: c4 fa a5 38 addi r5,r5,-1340 548: f0 00 a5 38 addi r5,r5,240 54c: 01 00 00 48 bl 54c <__kernel_gettimeofday+0x24> 54c: R_PPC_REL24 .__c_kernel_gettimeofday Because we don't process relocations for the VDSO, this branch remains a branch from 0x54c to 0x54c. With the preceding patch to prohibit R_PPC_REL24 relocations, we instead get a build failure: 0000054c R_PPC_REL24 .__c_kernel_gettimeofday 00000598 R_PPC_REL24 .__c_kernel_clock_gettime 000005e4 R_PPC_REL24 .__c_kernel_clock_gettime64 00000630 R_PPC_REL24 .__c_kernel_clock_getres 0000067c R_PPC_REL24 .__c_kernel_time arch/powerpc/kernel/vdso32/vdso32.so.dbg: dynamic relocations are not supported The root cause is that we're branching to `.__c_kernel_gettimeofday`. But this is 32-bit LE code, which doesn't use function descriptors, so there are no dot symbols. The reason we're trying to branch to a dot symbol is because we're using the DOTSYM macro, but the ifdefs we use to define the DOTSYM macro do not currently work for 32-bit LE. So like previous commits we need to differentiate if the current compilation unit is 64-bit, rather than the kernel as a whole. ie. switch from CONFIG_PPC64 to __powerpc64__. With that fixed 32-bit LE code gets the empty version of DOTSYM, which just resolves to the original symbol name, leading to a direct branch and no relocations: 000003f8 <__kernel_gettimeofday>: 3f8: f0 ff 21 94 stwu r1,-16(r1) 3fc: a6 02 08 7c mflr r0 400: f0 ff 21 94 stwu r1,-16(r1) 404: 14 00 01 90 stw r0,20(r1) 408: 05 00 9f 42 bcl 20,4*cr7+so,40c <__kernel_gettimeofday+0x14> 40c: a6 02 a8 7c mflr r5 410: ff ff a5 3c addis r5,r5,-1 414: f4 fb a5 38 addi r5,r5,-1036 418: f0 00 a5 38 addi r5,r5,240 41c: 85 06 00 48 bl aa0 <__c_kernel_gettimeofday> Fixes: ab037dd87a2f ("powerpc/vdso: Switch VDSO to generic C implementation.") Reported-by: "Will Springer <skirmisher@protonmail.com>" Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201218111619.1206391-3-mpe@ellerman.id.au
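A simplified sketch of the DOTSYM selection this implies, in the spirit of the kernel's ppc_asm.h (used from assembly through the C preprocessor, where pasting '.' onto the name is accepted; the real header also distinguishes ELFv1/ELFv2, which is omitted here). The key point is keying off __powerpc64__, which describes the current compilation unit, rather than CONFIG_PPC64, which describes the kernel as a whole:

    #define GLUE(a, b) a##b
    #ifdef __powerpc64__
    #define DOTSYM(a) GLUE(., a)   /* 64-bit ELFv1: call the dot symbol      */
    #else
    #define DOTSYM(a) a            /* 32-bit: no function descriptors at all */
    #endif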
2020-12-21  powerpc/vdso: Don't pass 64-bit ABI cflags to 32-bit VDSO  Michael Ellerman
When building the 32-bit VDSO, we are building 32-bit code as part of a 64-bit kernel build. That requires us to tweak the cflags to trick the compiler into building 32-bit code for us. The main way we do that is by passing -m32, but there are other options that affect code generation and ABI selection. In particular when building vgettimeofday.c, we end up passing -mcall-aixdesc because it's in KBUILD_CFLAGS, which causes the compiler to generate function descriptors, and dot symbols, eg: $ nm arch/powerpc/kernel/vdso32/vgettimeofday.o 000005d0 T .__c_kernel_clock_getres 00000024 D __c_kernel_clock_getres ... We get away with that at the moment because we also use the DOTSYM macro, and that is also incorrectly prepending a '.' in 32-bit VDSO code due to a separate bug. But we shouldn't be generating function descriptors for this file, there's no 32-bit ABI that includes function descriptors, so the resulting object file is some frankenstein and it's surprising that it even links. So filter out all the ABI-related options we add to CFLAGS for 64-bit builds, so that they're not used when building 32-bit code. With that we only see regular text symbols: $ nm arch/powerpc/kernel/vdso32/vgettimeofday.o 000005d0 T __c_kernel_clock_getres 00000000 T __c_kernel_clock_gettime 00000200 T __c_kernel_clock_gettime64 00000410 T __c_kernel_gettimeofday 00000650 T __c_kernel_time Fixes: ab037dd87a2f ("powerpc/vdso: Switch VDSO to generic C implementation.") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201218111619.1206391-2-mpe@ellerman.id.au
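A sketch of the filtering idea using kbuild's standard per-object flag removal; the variable name, flag list, and file name below are illustrative assumptions, not necessarily what the real vdso32 Makefile uses:

    # drop 64-bit ABI / code-generation flags before compiling 32-bit VDSO code
    ABI64FLAGS := -mcall-aixdesc -mabi=elfv1 -mabi=elfv2 -mcmodel=medium
    CFLAGS_REMOVE_vgettimeofday.o += $(ABI64FLAGS)
    CFLAGS_vgettimeofday.o += -m32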
2020-12-21  powerpc/vdso: Block R_PPC_REL24 relocations  Michael Ellerman
Add R_PPC_REL24 relocations to the list of relocations we do NOT support in the VDSO. These are generated in some cases and we do not support relocating them at runtime, so if they appear then the VDSO will not work at runtime, therefore it's preferable to break the build if we see them. Fixes: ab037dd87a2f ("powerpc/vdso: Switch VDSO to generic C implementation.") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201218111619.1206391-1-mpe@ellerman.id.au
2020-12-21  powerpc/smp: Add __init to init_big_cores()  Cédric Le Goater
It fixes this link warning: WARNING: modpost: vmlinux.o(.text.unlikely+0x2d98): Section mismatch in reference from the function init_big_cores.isra.0() to the function .init.text:init_thread_group_cache_map() The function init_big_cores.isra.0() references the function __init init_thread_group_cache_map(). This is often because init_big_cores.isra.0 lacks a __init annotation or the annotation of init_thread_group_cache_map is wrong. Fixes: 425752c63b6f ("powerpc: Detect the presence of big-cores via "ibm, thread-groups"") Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201221074154.403779-1-clg@kaod.org
2020-12-21  powerpc/time: Force inlining of get_tb()  Christophe Leroy
Force inlining of get_tb() in order to avoid getting following function in vdso32, leading to suboptimal performance in clock_gettime() 00000688 <.get_tb>: 688: 7c 6d 42 a6 mftbu r3 68c: 7c 8c 42 a6 mftb r4 690: 7d 2d 42 a6 mftbu r9 694: 7c 03 48 40 cmplw r3,r9 698: 40 e2 ff f0 bne+ 688 <.get_tb> 69c: 4e 80 00 20 blr Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/df05d53eed1210cf1aa76d1fb44aa0fab29c018e.1608488286.git.christophe.leroy@csgroup.eu
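A minimal sketch of the change in spirit, showing only the 64-bit form (the real 32-bit version loops over mftbu/mftb to read a consistent 64-bit timebase):

    static __always_inline u64 get_tb(void)
    {
        /* __always_inline keeps an out-of-line copy from landing in the VDSO */
        return mftb();
    }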
2020-12-21  powerpc/boot: Fix build of dts/fsl  Michael Ellerman
The lkp robot reported that some configs fail to build, for example mpc85xx_smp_defconfig, with: cc1: fatal error: opening output file arch/powerpc/boot/dts/fsl/.mpc8540ads.dtb.dts.tmp: No such file or directory This bisects to: cc8a51ca6f05 ("kbuild: always create directories of targets") Although that commit claims to be about in-tree builds, it somehow breaks out-of-tree builds. But presumably it's just exposing a latent bug in our Makefiles. We can fix it by adding to targets for dts/fsl in the same way that we do for dts. Fixes: cc8a51ca6f05 ("kbuild: always create directories of targets") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201215032906.473460-1-mpe@ellerman.id.au