path: root/arch
2019-11-30  s390: implement perf_arch_fetch_caller_regs  (Ilya Leoshkevich)
On s390 bpf_get_stack_raw_tp() returns 0 entries for both kernel and user stacks. While there is no practical unwinding solution for userspace on s390 at this moment, there certainly is a kernel unwinder. However, it is not properly integrated with BPF. In order to start unwinding, bpf_get_stack_raw_tp() obtains the current kernel register values using perf_fetch_caller_regs(), which is not implemented for s390. The actual unwinding then happens by passing those registers to perf_callchain_kernel(). Implement perf_arch_fetch_caller_regs() for s390, where __builtin_frame_address(0) points to back_chain. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
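A minimal sketch of what such a macro can look like, based on the commit's description that __builtin_frame_address(0) points at back_chain (field and struct names follow the s390 headers of that era; the merged patch may differ in detail):

    /* arch/s390/include/asm/perf_event.h (sketch) */
    #define perf_arch_fetch_caller_regs(regs, __ip) do {                       \
            /* record the caller's instruction address */                      \
            (regs)->psw.addr = (__ip);                                         \
            /* r15 is the stack pointer; __builtin_frame_address(0) points    \
             * at the back_chain field of the current stack frame */           \
            (regs)->gprs[15] = (unsigned long)__builtin_frame_address(0) -     \
                    offsetof(struct stack_frame, back_chain);                  \
    } while (0)

With psw.addr and gprs[15] filled in, perf_callchain_kernel() has enough state to start the kernel unwinder from the BPF helper's call site.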
2019-11-25  Merge tag 's390-5.5-1' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux Pull s390 updates from Vasily Gorbik: - Adjust PMU device drivers registration to avoid WARN_ON and few other perf improvements. - Enhance tracing in vfio-ccw. - Few stack unwinder fixes and improvements, convert get_wchan custom stack unwinding to generic api usage. - Fixes for mm helpers issues uncovered with tests validating architecture page table helpers. - Fix noexec bit handling when hardware doesn't support it. - Fix memleak and unsigned value compared with zero bugs in crypto code. Minor code simplification. - Fix crash during kdump with kasan enabled kernel. - Switch bug and alternatives from asm to asm_inline to improve inlining decisions. - Use 'depends on cc-option' for MARCH and TUNE options in Kconfig, add z13s and z14 ZR1 to TUNE descriptions. - Minor head64.S simplification. - Fix physical to logical CPU map for SMT. - Several cleanups in qdio code. - Other minor cleanups and fixes all over the code. * tag 's390-5.5-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (41 commits) s390/cpumf: Adjust registration of s390 PMU device drivers s390/smp: fix physical to logical CPU map for SMT s390/early: move access registers setup in C code s390/head64: remove unnecessary vdso_per_cpu_data setup s390/early: move control registers setup in C code s390/kasan: support memcpy_real with TRACE_IRQFLAGS s390/crypto: Fix unsigned variable compared with zero s390/pkey: use memdup_user() to simplify code s390/pkey: fix memory leak within _copy_apqns_from_user() s390/disassembler: don't hide instruction addresses s390/cpum_sf: Assign error value to err variable s390/cpum_sf: Replace function name in debug statements s390/cpum_sf: Use consistant debug print format for sampling s390/unwind: drop unnecessary code around calling ftrace_graph_ret_addr() s390: add error handling to perf_callchain_kernel s390: always inline current_stack_pointer() s390/mm: add mm_pxd_folded() checks to pxd_free() s390/mm: properly clear _PAGE_NOEXEC bit when it is not supported s390/mm: simplify page table helpers for large entries s390/mm: make pmd/pud_bad() report large entries as bad ...
2019-11-25  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 updates from Catalin Marinas:
 "Apart from the arm64-specific bits (core arch and perf, new arm64 selftests), it touches the generic cow_user_page() (reviewed by Kirill) together with a macro for x86 to preserve the existing behaviour on this architecture.

 Summary:
 - On ARMv8 CPUs without hardware updates of the access flag, avoid failing cow_user_page() on PFN mappings if the pte is old. The patches introduce an arch_faults_on_old_pte() macro, defined as false on x86. When true, cow_user_page() makes the pte young before attempting __copy_from_user_inatomic().
 - Convert the synchronous exception handling paths in arch/arm64/kernel/entry.S to C.
 - FTRACE_WITH_REGS support for arm64.
 - ZONE_DMA re-introduced on arm64 to support Raspberry Pi 4.
 - Several kselftest cases specific to arm64, together with a MAINTAINERS update for these files (moved to the ARM64 PORT entry).
 - Workaround for a Neoverse-N1 erratum where the CPU may fetch stale instructions under certain conditions.
 - Workaround for Cortex-A57 and A72 errata where the CPU may speculatively execute an AT instruction and associate a VMID with the wrong guest page tables (corrupting the TLB).
 - Perf updates for arm64: additional PMU topologies on HiSilicon platforms, support for CCN-512 interconnect, AXI ID filtering in the IMX8 DDR PMU, support for the CCPI2 uncore PMU in ThunderX2.
 - GICv3 optimisation to avoid a heavy barrier when accessing the ICC_PMR_EL1 register.
 - ELF HWCAP documentation updates and clean-up.
 - SMC calling convention conduit code clean-up.
 - KASLR diagnostics printed during boot.
 - NVIDIA Carmel CPU added to the KPTI whitelist.
 - Some arm64 mm clean-ups: use generic free_initrd_mem(), remove stale macro, simplify calculation in __create_pgd_mapping(), typos.
 - Kconfig clean-ups: CMDLINE_FORCE to depend on CMDLINE, choice for endianness to help with allmodconfig"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (93 commits)
  arm64: Kconfig: add a choice for endianness
  kselftest: arm64: fix spelling mistake "contiguos" -> "contiguous"
  arm64: Kconfig: make CMDLINE_FORCE depend on CMDLINE
  MAINTAINERS: Add arm64 selftests to the ARM64 PORT entry
  arm64: kaslr: Check command line before looking for a seed
  arm64: kaslr: Announce KASLR status on boot
  kselftest: arm64: fake_sigreturn_misaligned_sp
  kselftest: arm64: fake_sigreturn_bad_size
  kselftest: arm64: fake_sigreturn_duplicated_fpsimd
  kselftest: arm64: fake_sigreturn_missing_fpsimd
  kselftest: arm64: fake_sigreturn_bad_size_for_magic0
  kselftest: arm64: fake_sigreturn_bad_magic
  kselftest: arm64: add helper get_current_context
  kselftest: arm64: extend test_init functionalities
  kselftest: arm64: mangle_pstate_invalid_mode_el[123][ht]
  kselftest: arm64: mangle_pstate_invalid_daif_bits
  kselftest: arm64: mangle_pstate_invalid_compat_toggle and common utils
  kselftest: arm64: extend toplevel skeleton Makefile
  drivers/perf: hisi: update the sccl_id/ccl_id for certain HiSilicon platform
  arm64: mm: reserve CMA and crashkernel in ZONE_DMA32
  ...
2019-11-25  Merge tag 'linux-kselftest-5.5-rc1-kunit' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest  (Linus Torvalds)
Pull kselftest KUnit support from Shuah Khan:
 "This adds KUnit, a lightweight unit testing and mocking framework for the Linux kernel from Brendan Higgins.

 KUnit is not an end-to-end testing framework. It is currently supported on UML and sub-systems can write unit tests and run them in the UML env. KUnit documentation is included in this update.

 In addition, this KUnit update adds 3 new kunit tests:
 - proc sysctl test from Iurii Zaikin
 - the 'list' doubly linked list test from David Gow
 - ext4 tests for decoding extended timestamps from Iurii Zaikin

 In the future KUnit will be linked to the Kselftest framework to provide a way to trigger KUnit tests from user-space"

* tag 'linux-kselftest-5.5-rc1-kunit' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: (23 commits)
  lib/list-test: add a test for the 'list' doubly linked list
  ext4: add kunit test for decoding extended timestamps
  Documentation: kunit: Fix verification command
  kunit: Fix '--build_dir' option
  kunit: fix failure to build without printk
  MAINTAINERS: add proc sysctl KUnit test to PROC SYSCTL section
  kernel/sysctl-test: Add null pointer test for sysctl.c:proc_dointvec()
  MAINTAINERS: add entry for KUnit the unit testing framework
  Documentation: kunit: add documentation for KUnit
  kunit: defconfig: add defconfigs for building KUnit tests
  kunit: tool: add Python wrappers for running KUnit tests
  kunit: test: add tests for KUnit managed resources
  kunit: test: add the concept of assertions
  kunit: test: add tests for kunit test abort
  kunit: test: add support for test abort
  objtool: add kunit_try_catch_throw to the noreturn list
  kunit: test: add initial tests
  lib: enable building KUnit in lib/
  kunit: test: add the concept of expectations
  kunit: test: add assertion printing library
  ...
2019-11-21  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 fix from Will Deacon:
 "Ensure PAN is re-enabled following user fault in uaccess routines.

 After I thought we were done for 5.4, we had a report this week of a nasty issue that has been shown to leak data between different user address spaces thanks to corruption of entries in the TLB. In hindsight, we should have spotted this in review when the PAN code was merged back in v4.3, but hindsight is 20/20 and I'm trying not to beat myself up too much about it despite being fairly miserable.

 Anyway, the fix is "obvious" but the actual failure is more subtle, and is described in the commit message. I've included a fairly mechanical follow-up patch here as well, which moves this checking out into the C wrappers which is what we do for {get,put}_user() already and allows us to remove these bloody assembly macros entirely.

 The patches have passed kernelci [1] [2] [3] and CKI [4] tests overnight, as well as some targeted testing [5] for this particular issue.

 The first patch is tagged for stable and should be applied to 4.14, 4.19 and 5.3. I have separate backports for 4.4 and 4.9, which I'll send out once this has landed in your tree (although the original patch applies cleanly, it won't build for those two trees).

 Thanks to Pavel Tatashin for reporting this and Mark Rutland for helping to diagnose the issue and review/test the solution"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: uaccess: Remove uaccess_*_not_uao asm macros
  arm64: uaccess: Ensure PAN is re-enabled after unhandled uaccess fault
2019-11-20  arm64: uaccess: Remove uaccess_*_not_uao asm macros  (Pavel Tatashin)
It is safer and simpler to drop the uaccess assembly macros in favour of inline C functions. Although this bloats the Image size slightly, it aligns our user copy routines with '{get,put}_user()' and generally makes the code a lot easier to reason about. Cc: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com> [will: tweaked commit message and changed temporary variable names] Signed-off-by: Will Deacon <will@kernel.org>
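As an illustration, the inline C shape described here is roughly the following (a sketch only; the helper names uaccess_enable_not_uao(), uaccess_disable_not_uao() and __uaccess_mask_ptr() follow the arm64 uaccess code of that era, and the exact wrapper in the merged patch may differ):

    /* arch/arm64/include/asm/uaccess.h (sketch) */
    static inline unsigned long __must_check
    raw_copy_from_user(void *to, const void __user *from, unsigned long n)
    {
            unsigned long ret;

            uaccess_enable_not_uao();       /* disable PAN / switch TTBR0 around the copy */
            ret = __arch_copy_from_user(to, __uaccess_mask_ptr(from), n);
            uaccess_disable_not_uao();      /* always runs, even if the copy faulted */
            return ret;
    }

Because the enable/disable pair now lives in C around the assembly copy routine, an unhandled fault inside __arch_copy_from_user() can no longer skip the re-enable step.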
2019-11-20  arm64: uaccess: Ensure PAN is re-enabled after unhandled uaccess fault  (Pavel Tatashin)
A number of our uaccess routines ('__arch_clear_user()' and '__arch_copy_{in,from,to}_user()') fail to re-enable PAN if they encounter an unhandled fault whilst accessing userspace. For CPUs implementing both hardware PAN and UAO, this bug has no effect when both extensions are in use by the kernel. For CPUs implementing hardware PAN but not UAO, this means that a kernel using hardware PAN may execute portions of code with PAN inadvertently disabled, opening us up to potential security vulnerabilities that rely on userspace access from within the kernel which would usually be prevented by this mechanism. In other words, parts of the kernel run the same way as they would on a CPU without PAN implemented/emulated at all. For CPUs not implementing hardware PAN and instead relying on software emulation via 'CONFIG_ARM64_SW_TTBR0_PAN=y', the impact is unfortunately much worse. Calling 'schedule()' with software PAN disabled means that the next task will execute in the kernel using the page-table and ASID of the previous process even after 'switch_mm()', since the actual hardware switch is deferred until return to userspace. At this point, or if there is a intermediate call to 'uaccess_enable()', the page-table and ASID of the new process are installed. Sadly, due to the changes introduced by KPTI, this is not an atomic operation and there is a very small window (two instructions) where the CPU is configured with the page-table of the old task and the ASID of the new task; a speculative access in this state is disastrous because it would corrupt the TLB entries for the new task with mappings from the previous address space. As Pavel explains: | I was able to reproduce memory corruption problem on Broadcom's SoC | ARMv8-A like this: | | Enable software perf-events with PERF_SAMPLE_CALLCHAIN so userland's | stack is accessed and copied. | | The test program performed the following on every CPU and forking | many processes: | | unsigned long *map = mmap(NULL, PAGE_SIZE, PROT_READ|PROT_WRITE, | MAP_SHARED | MAP_ANONYMOUS, -1, 0); | map[0] = getpid(); | sched_yield(); | if (map[0] != getpid()) { | fprintf(stderr, "Corruption detected!"); | } | munmap(map, PAGE_SIZE); | | From time to time I was getting map[0] to contain pid for a | different process. Ensure that PAN is re-enabled when returning after an unhandled user fault from our uaccess routines. Cc: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Cc: <stable@vger.kernel.org> Fixes: 338d4f49d6f7 ("arm64: kernel: Add support for Privileged Access Never") Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com> [will: rewrote commit message] Signed-off-by: Will Deacon <will@kernel.org>
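For reference, the quoted reproducer fleshed out into a self-contained userspace program (a sketch: the per-CPU fork strategy and loop structure are illustrative, not copied from the original report; perf with PERF_SAMPLE_CALLCHAIN must be sampling system-wide while it runs):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sched.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    #define PAGE_SIZE 4096

    static void stress(void)
    {
            for (;;) {
                    unsigned long *map = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
                                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
                    if (map == MAP_FAILED)
                            exit(1);
                    map[0] = getpid();
                    sched_yield();
                    if (map[0] != (unsigned long)getpid())
                            fprintf(stderr, "Corruption detected!\n");
                    munmap(map, PAGE_SIZE);
            }
    }

    int main(void)
    {
            /* one worker per online CPU, each continually writing its own pid */
            long i, ncpus = sysconf(_SC_NPROCESSORS_ONLN);

            for (i = 0; i < ncpus; i++)
                    if (fork() == 0)
                            stress();
            while (wait(NULL) > 0)
                    ;
            return 0;
    }

Seeing another process's pid in map[0] indicates that the task briefly ran with the previous task's page tables, as described above.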
2019-11-20  s390/cpumf: Adjust registration of s390 PMU device drivers  (Thomas Richter)
Linux-next commit titled "perf/core: Optimize perf_init_event()" changed the semantics of PMU device driver registration. It was done to speed up the lookup/handling of PMU device driver specific events. It also enforces that only one PMU device driver will be registered of type PERF_TYPE_RAW. This change added these lines in function perf_pmu_register():

  ...
  +   ret = idr_alloc(&pmu_idr, pmu, max, 0, GFP_KERNEL);
  +   if (ret < 0)
  +           goto free_pdc;
  +
  +   WARN_ON(type >= 0 && ret != type);

The WARN_ON generates a message. We have 3 PMU device drivers, each registered as type PERF_TYPE_RAW. The cf_diag device driver (arch/s390/kernel/perf_cpumf_cf_diag.c) always hits the WARN_ON because it is the second PMU device driver (after the sampling device driver arch/s390/kernel/perf_cpumf_sf.c) which is registered as type 4 (PERF_TYPE_RAW). So when the sampling device driver is registered, ret has value 4. When the cf_diag device driver is registered with type 4, ret has a value of 5 and the WARN_ON fires.

Adjust the PMU device drivers for s390 to support the new semantics required by perf_pmu_register().

Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
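A sketch of the resulting registration pattern (the pmu variable names here are illustrative, not the driver's actual symbols; the point is that only one driver can claim a fixed type such as PERF_TYPE_RAW, and additional drivers should pass -1 so the core assigns a dynamic type via idr_alloc()):

    /* one PMU keeps the architectural raw type... */
    err = perf_pmu_register(&cpumf_sampling_pmu, "cpum_sf", PERF_TYPE_RAW);
    if (err)
            return err;

    /* ...while additional PMUs ask for a dynamically allocated type,
     * which avoids the WARN_ON(type >= 0 && ret != type) in the core */
    err = perf_pmu_register(&cf_diag_pmu, "cpum_cf_diag", -1);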
2019-11-20  s390/smp: fix physical to logical CPU map for SMT  (Heiko Carstens)
If an SMT capable system is not IPL'ed from the first CPU the setup of the physical to logical CPU mapping is broken: the IPL core gets CPU number 0, but then the next core gets CPU number 1. Correct would be that all SMT threads of CPU 0 get the subsequent logical CPU numbers. This is important since a lot of code (like e.g. the CPU topology code) assumes that CPU maps are set up like this.

If the mapping is broken the system will not IPL due to broken topology masks:

  [ 1.716341] BUG: arch topology broken
  [ 1.716342]      the SMT domain not a subset of the MC domain
  [ 1.716343] BUG: arch topology broken
  [ 1.716344]      the MC domain not a subset of the BOOK domain

This scenario can usually not happen since LPARs are always IPL'ed from CPU 0 and also re-IPL is initiated from CPU 0. However older kernels did initiate re-IPL on an arbitrary CPU. If therefore a re-IPL from an old kernel into a new kernel is initiated this may lead to a crash.

Fix this by setting up the physical to logical CPU mapping correctly.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-11-20  s390/early: move access registers setup in C code  (Vasily Gorbik)
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-11-20  s390/head64: remove unnecessary vdso_per_cpu_data setup  (Vasily Gorbik)
vdso_per_cpu_data lowcore value is only needed for fully functional exception handlers, which are activated in setup_lowcore_dat_off. The same function does init vdso_per_cpu_data via vdso_alloc_boot_cpu. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-11-20  s390/early: move control registers setup in C code  (Vasily Gorbik)
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-11-20  s390/kasan: support memcpy_real with TRACE_IRQFLAGS  (Vasily Gorbik)
Currently if the kernel is built with CONFIG_TRACE_IRQFLAGS and KASAN and used as crash kernel it crashes itself due to trace_hardirqs_off/trace_hardirqs_on being called with DAT off. This happens because trace_hardirqs_off/trace_hardirqs_on are instrumented and kasan code tries to perform access to shadow memory to validate memory accesses. Kasan shadow memory is populated with vmemmap, so all accesses require DAT on. memcpy_real could be called with DAT on or off (with kasan enabled DAT is set even before early code is executed). Make sure that trace_hardirqs_off/trace_hardirqs_on are called with DAT on and only actual __memcpy_real is called with DAT off. Also annotate __memcpy_real and _memcpy_real with __no_sanitize_address to avoid further problems due to switching DAT off. Reviewed-by: Philipp Rudo <prudo@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-11-20  s390/crypto: Fix unsigned variable compared with zero  (YueHaibing)
s390_crypto_shash_parmsize() has return type int, so its result should not be stored in an unsigned variable that is then compared with zero.

Reported-by: Hulk Robot <hulkci@huawei.com>
Fixes: 3c2eb6b76cab ("s390/crypto: Support for SHA3 via CPACF (MSA6)")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Signed-off-by: Joerg Schmidbauer <jschmidb@linux.vnet.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
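The bug pattern in miniature (a generic sketch, not the exact diff; the argument name func is illustrative):

    unsigned int n = s390_crypto_shash_parmsize(func);
    if (n < 0)                      /* never true: n is unsigned, so errors are missed */
            return -EINVAL;

    int rc = s390_crypto_shash_parmsize(func);
    if (rc < 0)                     /* fixed: a signed type lets the error check fire */
            return -EINVAL;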
2019-11-16  Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 fixes from Ingo Molnar:
 "Two fixes: disable unreliable HPET on Intel Coffee Lake platforms, and fix a lockdep splat in the resctrl code"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/resctrl: Fix potential lockdep warning
  x86/quirks: Disable HPET on Intel Coffe Lake platforms
2019-11-15  Merge tag 'mips_fixes_5.4_4' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux Pull MIPS fixes from Paul Burton: "A fix and simplification for SGI IP27 exception handlers, and a small MAINTAINERS update for Broadcom MIPS systems" * tag 'mips_fixes_5.4_4' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux: MAINTAINERS: Remove Kevin as maintainer of BMIPS generic platforms MIPS: SGI-IP27: fix exception handler replication
2019-11-15  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull more KVM fixes from Paolo Bonzini: - fixes for CONFIG_KVM_COMPAT=n - two updates to the IFU erratum - selftests build fix - brown paper bag fix * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: Add a comment describing the /dev/kvm no_compat handling KVM: x86/mmu: Take slots_lock when using kvm_mmu_zap_all_fast() KVM: Forbid /dev/kvm being opened by a compat task when CONFIG_KVM_COMPAT=n KVM: X86: Reset the three MSR list number variables to 0 in kvm_init_msr_list() selftests: kvm: fix build with glibc >= 2.30 kvm: x86: disable shattered huge page recovery for PREEMPT_RT.
2019-11-14  Merge tag 'kbuild-fixes-v5.4-3' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild Pull Kbuild fixes from Masahiro Yamada: - fix build error when compiling SPARC VDSO with CONFIG_COMPAT=y - pass correct --arch option to Sparse * tag 'kbuild-fixes-v5.4-3' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: kbuild: tell sparse about the $ARCH sparc: vdso: fix build error of vdso32
2019-11-14  KVM: x86/mmu: Take slots_lock when using kvm_mmu_zap_all_fast()  (Sean Christopherson)
Acquire the per-VM slots_lock when zapping all shadow pages as part of toggling nx_huge_pages. The fast zap algorithm relies on exclusivity (via slots_lock) to identify obsolete vs. valid shadow pages, because it uses a single bit for its generation number. Holding slots_lock also obviates the need to acquire a read lock on the VM's srcu.

Failing to take slots_lock when toggling nx_huge_pages allows multiple instances of kvm_mmu_zap_all_fast() to run concurrently, as the other user, KVM_SET_USER_MEMORY_REGION, does not take the global kvm_lock. (kvm_mmu_zap_all_fast() does take kvm->mmu_lock, but it can be temporarily dropped by kvm_zap_obsolete_pages(), so it is not enough to enforce exclusivity).

Concurrent fast zap instances cause obsolete shadow pages to be incorrectly identified as valid due to the single bit generation number wrapping, which results in stale shadow pages being left in KVM's MMU and leads to all sorts of undesirable behavior. The bug is easily confirmed by running with CONFIG_PROVE_LOCKING and toggling nx_huge_pages via its module param.

Note, until commit 4ae5acbc4936 ("KVM: x86/mmu: Take slots_lock when using kvm_mmu_zap_all_fast()", 2019-11-13) the fast zap algorithm used an ulong-sized generation instead of relying on exclusivity for correctness, but all callers except the recently added set_nx_huge_pages() needed to hold slots_lock anyways. Therefore, this patch does not have to be backported to stable kernels.

Given that toggling nx_huge_pages is by no means a fast path, force it to conform to the current approach instead of reintroducing the previous generation count.

Fixes: b8e8c8303ff28 ("kvm: mmu: ITLB_MULTIHIT mitigation", but NOT FOR STABLE)
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
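A sketch of the fixed toggle path (the helper name below is hypothetical; kvm_lock and kvm->slots_lock are both mutexes in this era, and vm_list holds every VM):

    /* called after the nx_huge_pages module param has been flipped */
    static void nx_huge_pages_zap_all_vms(void)
    {
            struct kvm *kvm;

            mutex_lock(&kvm_lock);
            list_for_each_entry(kvm, &vm_list, vm_list) {
                    /* slots_lock gives the fast zap the exclusivity it relies
                     * on for its single-bit generation number */
                    mutex_lock(&kvm->slots_lock);
                    kvm_mmu_zap_all_fast(kvm);
                    mutex_unlock(&kvm->slots_lock);
            }
            mutex_unlock(&kvm_lock);
    }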
2019-11-15  sparc: vdso: fix build error of vdso32  (Masahiro Yamada)
Since commit 54b8ae66ae1a ("kbuild: change *FLAGS_<basetarget>.o to take the path relative to $(obj)"), sparc allmodconfig fails to build as follows:

  CC      arch/sparc/vdso/vdso32/vclock_gettime.o
  unrecognized e_machine 18 arch/sparc/vdso/vdso32/vclock_gettime.o
  arch/sparc/vdso/vdso32/vclock_gettime.o: failed

The cause of the breakage is that the -pg flag is not being dropped. The vdso32 files are located in the vdso32/ subdirectory, but I missed updating the Makefile.

I removed the meaningless CFLAGS_REMOVE_vdso-note.o since it is only effective for C files. vdso-note.o is compiled from an assembly file:

  arch/sparc/vdso/vdso-note.S
  arch/sparc/vdso/vdso32/vdso-note.S

Fixes: 54b8ae66ae1a ("kbuild: change *FLAGS_<basetarget>.o to take the path relative to $(obj)")
Reported-by: Anatoly Pugachev <matorola@gmail.com>
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Tested-by: Anatoly Pugachev <matorola@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
2019-11-14  arm64: Kconfig: add a choice for endianness  (Anders Roxell)
When building allmodconfig with KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig, CONFIG_CPU_BIG_ENDIAN gets enabled, which tends not to be what most people want. Another concern that has come up is that ACPI isn't built for an allmodconfig kernel today since that also depends on !CPU_BIG_ENDIAN.

Rework so that we introduce a 'choice' and default the choice to CPU_LITTLE_ENDIAN. That means that when we build an allmodconfig kernel it will default to CPU_LITTLE_ENDIAN, which is what most people tend to want.

Reviewed-by: John Garry <john.garry@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-11-13  KVM: X86: Reset the three MSR list number variables to 0 in kvm_init_msr_list()  (Xiaoyao Li)
When applying commit 7a5ee6edb42e ("KVM: X86: Fix initialization of MSR lists"), it forgot to reset the three MSR list number variables to 0 while removing the useless conditionals.

Fixes: 7a5ee6edb42e (KVM: X86: Fix initialization of MSR lists)
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-11-13  kvm: x86: disable shattered huge page recovery for PREEMPT_RT.  (Paolo Bonzini)
If a huge page is recovered (and becomes non-executable) while another thread is executing it, the resulting contention on mmu_lock can cause latency spikes. Disabling recovery for PREEMPT_RT kernels fixes this issue.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-11-13  x86/resctrl: Fix potential lockdep warning  (Xiaochen Shen)
rdtgroup_cpus_write() and mkdir_rdt_prepare() call rdtgroup_kn_lock_live() -> kernfs_to_rdtgroup() to get 'rdtgrp', and then call the rdt_last_cmd_{clear,puts,...}() functions which will check if rdtgroup_mutex is held/requires its caller to hold rdtgroup_mutex. But if 'rdtgrp' returned from kernfs_to_rdtgroup() is NULL, rdtgroup_mutex is not held and calling rdt_last_cmd_{clear,puts,...}() will result in a self-incurred, potential lockdep warning. Remove the rdt_last_cmd_{clear,puts,...}() calls in these two paths. Just returning error should be sufficient to report to the user that the entry doesn't exist any more. [ bp: Massage. ] Fixes: 94457b36e8a5 ("x86/intel_rdt: Add diagnostics when writing the cpus file") Fixes: cfd0f34e4cd5 ("x86/intel_rdt: Add diagnostics when making directories") Signed-off-by: Xiaochen Shen <xiaochen.shen@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Tony Luck <tony.luck@intel.com> Reviewed-by: Fenghua Yu <fenghua.yu@intel.com> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: pei.p.jia@intel.com Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/1573079796-11713-1-git-send-email-xiaochen.shen@intel.com
2019-11-12  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull kvm fixes from Paolo Bonzini: "Fix unwinding of KVM_CREATE_VM failure, VT-d posted interrupts, DAX/ZONE_DEVICE, and module unload/reload" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: MMU: Do not treat ZONE_DEVICE pages as being reserved KVM: VMX: Introduce pi_is_pir_empty() helper KVM: VMX: Do not change PID.NDST when loading a blocked vCPU KVM: VMX: Consider PID.PIR to determine if vCPU has pending interrupts KVM: VMX: Fix comment to specify PID.ON instead of PIR.ON KVM: X86: Fix initialization of MSR lists KVM: fix placement of refcount initialization KVM: Fix NULL-ptr deref after kvm_create_vm fails
2019-11-12  Merge branch 'x86-pti-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 TSX Async Abort and iTLB Multihit mitigations from Thomas Gleixner:
 "The performance deterioration department is not proud at all of presenting the seventh installment of speculation mitigations and hardware misfeature workarounds:

 1) TSX Async Abort (TAA) - 'The Annoying Affair'

    TAA is a hardware vulnerability that allows unprivileged speculative access to data which is available in various CPU internal buffers by using asynchronous aborts within an Intel TSX transactional region.

    The mitigation depends on a microcode update providing a new MSR which allows TSX to be disabled in the CPU. CPUs which have no microcode update can be mitigated by disabling TSX in the BIOS if the BIOS provides a tunable.

    Newer CPUs will have a bit set which indicates that the CPU is not vulnerable, but the MSR to disable TSX will be available nevertheless as it is an architected MSR. That means the kernel provides the ability to disable TSX on the kernel command line, which is useful as TSX is a truly useful mechanism to accelerate side channel attacks of all sorts.

 2) iTLB Multihit (NX) - 'No eXcuses'

    iTLB Multihit is an erratum where some Intel processors may incur a machine check error, possibly resulting in an unrecoverable CPU lockup, when an instruction fetch hits multiple entries in the instruction TLB. This can occur when the page size is changed along with either the physical address or cache type. A malicious guest running on a virtualized system can exploit this erratum to perform a denial of service attack.

    The workaround is that KVM marks huge pages in the extended page tables as not executable (NX). If the guest attempts to execute in such a page, the page is broken down into 4k pages which are marked executable. The workaround comes with a mechanism to recover these shattered huge pages over time.

 Both issues come with full documentation in the hardware vulnerabilities section of the Linux kernel user's and administrator's guide.

 Thanks to all patch authors and reviewers who had the extraordinary privilege to be exposed to this nuisance. Special thanks to Borislav Petkov for polishing the final TAA patch set and to Paolo Bonzini for shepherding the KVM iTLB workarounds and providing also the backports to stable kernels for those!"

* 'x86-pti-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/speculation/taa: Fix printing of TAA_MSG_SMT on IBRS_ALL CPUs
  Documentation: Add ITLB_MULTIHIT documentation
  kvm: x86: mmu: Recovery of shattered NX large pages
  kvm: Add helper function for creating VM worker threads
  kvm: mmu: ITLB_MULTIHIT mitigation
  cpu/speculation: Uninline and export CPU mitigations helpers
  x86/cpu: Add Tremont to the cpu vulnerability whitelist
  x86/bugs: Add ITLB_MULTIHIT bug infrastructure
  x86/tsx: Add config options to set tsx=on|off|auto
  x86/speculation/taa: Add documentation for TSX Async Abort
  x86/tsx: Add "auto" option to the tsx= cmdline parameter
  kvm/x86: Export MDS_NO=0 to guests when TSX is enabled
  x86/speculation/taa: Add sysfs reporting for TSX Async Abort
  x86/speculation/taa: Add mitigation for TSX Async Abort
  x86/cpu: Add a "tsx=" cmdline option with TSX disabled by default
  x86/cpu: Add a helper function x86_read_arch_cap_msr()
  x86/msr: Add the IA32_TSX_CTRL MSR
2019-11-12  x86/quirks: Disable HPET on Intel Coffe Lake platforms  (Kai-Heng Feng)
Some Coffee Lake platforms have a skewed HPET timer once the SoCs entered PC10, which in consequence marks TSC as unstable because HPET is used as watchdog clocksource for TSC. Harry Pan tried to work around it in the clocksource watchdog code [1] thereby creating a circular dependency between HPET and TSC. This also ignores the fact, that HPET is not only unsuitable as watchdog clocksource on these systems, it becomes unusable in general. Disable HPET on affected platforms. Suggested-by: Feng Tang <feng.tang@intel.com> Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=203183 Link: https://lore.kernel.org/lkml/20190516090651.1396-1-harry.pan@intel.com/ [1] Link: https://lkml.kernel.org/r/20191016103816.30650-1-kai.heng.feng@canonical.com
2019-11-12  s390/disassembler: don't hide instruction addresses  (Ilya Leoshkevich)
Due to kptr_restrict, JITted BPF code is now displayed like this: 000000000b6ed1b2: ebdff0800024 stmg %r13,%r15,128(%r15) 000000004cde2ba0: 41d0f040 la %r13,64(%r15) 00000000fbad41b0: a7fbffa0 aghi %r15,-96 Leaking kernel addresses to dmesg is not a concern in this case, because this happens only when JIT debugging is explicitly activated, which only root can do. Use %px in this particular instance, and also to print an instruction address in show_code and PCREL (e.g. brasl) arguments in print_insn. While at present functionally equivalent to %016lx, %px is recommended by Documentation/core-api/printk-formats.rst for such cases. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-11-12  s390/cpum_sf: Assign error value to err variable  (Thomas Richter)
When starting the CPU Measurement sampling facility using qsi() function, this function may return an error value. This error value is referenced in the else part of the if statement to dump its value in a debug statement. Right now this value is always zero because it has not been assigned a value. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-11-12  s390/cpum_sf: Replace function name in debug statements  (Thomas Richter)
Replace hard coded function names in debug statements by the "%s ...", __func__ construct suggested by checkpatch.pl script. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-11-12  s390/cpum_sf: Use consistant debug print format for sampling  (Thomas Richter)
Use a consistent debug print format of the form "variable blank value". Also add a leading 0x for all hex values.

Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2019-11-12  KVM: MMU: Do not treat ZONE_DEVICE pages as being reserved  (Sean Christopherson)
Explicitly exempt ZONE_DEVICE pages from kvm_is_reserved_pfn() and instead manually handle ZONE_DEVICE on a case-by-case basis. For things like page refcounts, KVM needs to treat ZONE_DEVICE pages like normal pages, e.g. put pages grabbed via gup(). But for flows such as setting A/D bits or shifting refcounts for transparent huge pages, KVM needs to avoid processing ZONE_DEVICE pages as the flows in question lack the underlying machinery for proper handling of ZONE_DEVICE pages.

This fixes a hang reported by Adam Borowski[*] in dev_pagemap_cleanup() when running a KVM guest backed with /dev/dax memory, as KVM straight up doesn't put any references to ZONE_DEVICE pages acquired by gup().

Note, Dan Williams proposed an alternative solution of doing put_page() on ZONE_DEVICE pages immediately after gup() in order to simplify the auditing needed to ensure is_zone_device_page() is called if and only if the backing device is pinned (via gup()). But that approach would break kvm_vcpu_{un}map() as KVM requires the page to be pinned from map() 'til unmap() when accessing guest memory, unlike KVM's secondary MMU, which coordinates with mmu_notifier invalidations to avoid creating stale page references, i.e. doesn't rely on pages being pinned.

[*] http://lkml.kernel.org/r/20190919115547.GA17963@angband.pl

Reported-by: Adam Borowski <kilobyte@angband.pl>
Analyzed-by: David Hildenbrand <david@redhat.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Cc: stable@vger.kernel.org
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
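The case-by-case handling boils down to a helper along these lines (a sketch; the exact function bodies and call sites in the merged patch may differ):

    bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
    {
            /* a zero refcount here would mean the device memory is already
             * being torn down, so treat that as "not a usable device page" */
            if (!pfn_valid(pfn) || WARN_ON_ONCE(!page_count(pfn_to_page(pfn))))
                    return false;

            return is_zone_device_page(pfn_to_page(pfn));
    }

    bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
    {
            if (pfn_valid(pfn))
                    /* ZONE_DEVICE pages look reserved, but they must be
                     * refcounted like normal pages (gup/put_page) */
                    return PageReserved(pfn_to_page(pfn)) &&
                           !kvm_is_zone_device_pfn(pfn);

            return true;
    }

Flows that genuinely cannot cope with ZONE_DEVICE memory (A/D bit updates, THP refcount shifting) then check kvm_is_zone_device_pfn() explicitly instead of relying on the "reserved" catch-all.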
2019-11-12  KVM: VMX: Introduce pi_is_pir_empty() helper  (Joao Martins)
Streamline the PID.PIR check and change its call sites to use the newly added helper. Suggested-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-11-12  KVM: VMX: Do not change PID.NDST when loading a blocked vCPU  (Joao Martins)
When a vCPU enters block phase, pi_pre_block() inserts the vCPU into a per pCPU linked list of all vCPUs that are blocked on this pCPU. Afterwards, it changes PID.NV to POSTED_INTR_WAKEUP_VECTOR, whose handler (wakeup_handler()) is responsible for kicking (unblocking) any vCPU on that linked list that now has pending posted interrupts.

While the vCPU is blocked (in kvm_vcpu_block()), it may be preempted, which will cause vmx_vcpu_pi_put() to set PID.SN. If the vCPU is later scheduled to run on a different pCPU, vmx_vcpu_pi_load() will clear PID.SN but will also *overwrite PID.NDST to this different pCPU*, instead of keeping it pointed at the original pCPU on which the vCPU entered block phase.

This results in an issue because when a posted interrupt is delivered, wakeup_handler() will be executed and fail to find the blocked vCPU on its per pCPU linked list of all vCPUs that are blocked on this pCPU. This is because the vCPU is placed on a *different* per pCPU linked list, i.e. that of the original pCPU on which it entered block phase.

The regression is introduced by commit c112b5f50232 ("KVM: x86: Recompute PID.ON when clearing PID.SN"). Therefore, partially revert it and reintroduce the condition in vmx_vcpu_pi_load() responsible for avoiding changing PID.NDST when loading a blocked vCPU.

Fixes: c112b5f50232 ("KVM: x86: Recompute PID.ON when clearing PID.SN")
Tested-by: Nathan Ni <nathan.ni@oracle.com>
Co-developed-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-11-12  KVM: VMX: Consider PID.PIR to determine if vCPU has pending interrupts  (Joao Martins)
Commit 17e433b54393 ("KVM: Fix leak vCPU's VMCS value into other pCPU") introduced vmx_dy_apicv_has_pending_interrupt() in order to determine if a vCPU has a pending posted interrupt. This routine is used by kvm_vcpu_on_spin() when searching for a new runnable vCPU to schedule on the pCPU instead of a vCPU doing a busy loop.

vmx_dy_apicv_has_pending_interrupt() determines if a vCPU has a pending posted interrupt solely based on PID.ON. However, when a vCPU is preempted, vmx_vcpu_pi_put() sets PID.SN, which causes raised posted interrupts to only set a bit in PID.PIR without setting PID.ON (and without sending a notification vector), as depicted in VT-d manual section 5.2.3 "Interrupt-Posting Hardware Operation".

Therefore, checking PID.ON is insufficient to determine if a vCPU has pending posted interrupts; instead we should also check if there is some bit set in PID.PIR when PID.SN=1.

Fixes: 17e433b54393 ("KVM: Fix leak vCPU's VMCS value into other pCPU")
Reviewed-by: Jagannathan Raman <jag.raman@oracle.com>
Co-developed-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
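In code, the check described above amounts to roughly the following (a sketch; pi_test_on(), pi_test_sn() and vcpu_to_pi_desc() follow the existing VMX posted-interrupt helpers, and the PIR is a 256-bit pending-vector bitmap):

    static bool pi_is_pir_empty(struct pi_desc *pi_desc)
    {
            return bitmap_empty((unsigned long *)pi_desc->pir, NR_VECTORS);
    }

    static bool vmx_dy_apicv_has_pending_interrupt(struct kvm_vcpu *vcpu)
    {
            struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);

            /* PID.ON set: definitely pending.  PID.SN set: notifications were
             * suppressed while preempted, so scan PID.PIR directly. */
            return pi_test_on(pi_desc) ||
                   (pi_test_sn(pi_desc) && !pi_is_pir_empty(pi_desc));
    }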
2019-11-12  KVM: VMX: Fix comment to specify PID.ON instead of PIR.ON  (Liran Alon)
The Outstanding Notification (ON) bit is part of the Posted Interrupt Descriptor (PID) as opposed to the Posted Interrupts Register (PIR). The latter is a bitmap for pending vectors. Reviewed-by: Joao Martins <joao.m.martins@oracle.com> Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-11-12  KVM: X86: Fix initialization of MSR lists  (Chenyi Qiang)
The three MSR lists (msrs_to_save[], emulated_msrs[] and msr_based_features[]) are global arrays of kvm.ko, which are adjusted (supported MSRs are copied forward to override the unsupported MSRs) when kvm-{intel,amd}.ko is loaded, but they are not reset to their initial value when kvm-{intel,amd}.ko is removed. Thus, at the next installation, kvm-{intel,amd}.ko will do operations on the modified arrays with some MSRs lost and some MSRs duplicated.

So define three constant arrays to hold the initial MSR lists and initialize msrs_to_save[], emulated_msrs[] and msr_based_features[] based on the constant arrays.

Cc: stable@vger.kernel.org
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
[Remove now useless conditionals. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
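A sketch of the approach for one of the three lists (the constant-array and counter names follow the commit text, the filter loop body is illustrative; the counter reset matches the 2019-11-13 follow-up fix logged above):

    static const u32 msrs_to_save_all[] = { MSR_IA32_SYSENTER_CS, /* ... */ };

    static u32 msrs_to_save[ARRAY_SIZE(msrs_to_save_all)];
    static unsigned num_msrs_to_save;

    static void kvm_init_msr_list(void)
    {
            u32 dummy[2];
            unsigned i;

            /* rebuild from the constant source on every module load, so a
             * previous insmod/rmmod cycle cannot leave stale entries behind */
            num_msrs_to_save = 0;

            for (i = 0; i < ARRAY_SIZE(msrs_to_save_all); i++) {
                    if (rdmsr_safe(msrs_to_save_all[i], &dummy[0], &dummy[1]) < 0)
                            continue;       /* host does not expose this MSR */
                    msrs_to_save[num_msrs_to_save++] = msrs_to_save_all[i];
            }
    }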
2019-11-11  arm64: Kconfig: make CMDLINE_FORCE depend on CMDLINE  (Anders Roxell)
When building allmodconfig with KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig, CONFIG_CMDLINE_FORCE gets enabled, which forces the user to pass the full cmdline to CONFIG_CMDLINE="...".

Rework so that CONFIG_CMDLINE_FORCE gets set only if CONFIG_CMDLINE is set to something other than an empty string.

Suggested-by: John Garry <john.garry@huawei.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-11-10  Merge tag 'armsoc-fixes' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc Pull ARM SoC fixes from Olof Johansson: "A set of fixes that have trickled in over the last couple of weeks: - MAINTAINER update for Cavium/Marvell ThunderX2 - stm32 tweaks to pinmux for Joystick/Camera, and RAM allocation for CAN interfaces - i.MX fixes for voltage regulator GPIO mappings, fixes voltage scaling issues - More i.MX fixes for various issues on i.MX eval boards: interrupt storm due to u-boot leaving pins in new states, fixing power button config, a couple of compatible-string corrections. - Powerdown and Suspend/Resume fixes for Allwinner A83-based tablets - A few documentation tweaks and a fix of a memory leak in the reset subsystem" * tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: MAINTAINERS: update Cavium ThunderX2 maintainers ARM: dts: stm32: change joystick pinctrl definition on stm32mp157c-ev1 ARM: dts: stm32: remove OV5640 pinctrl definition on stm32mp157c-ev1 ARM: dts: stm32: Fix CAN RAM mapping on stm32mp157c ARM: dts: stm32: relax qspi pins slew-rate for stm32mp157 arm64: dts: zii-ultra: fix ARM regulator GPIO handle ARM: sunxi: Fix CPU powerdown on A83T ARM: dts: sun8i-a83t-tbs-a711: Fix WiFi resume from suspend arm64: dts: imx8mn: fix compatible string for sdma arm64: dts: imx8mm: fix compatible string for sdma reset: fix reset_control_ops kerneldoc comment ARM: dts: imx6-logicpd: Re-enable SNVS power key soc: imx: gpc: fix initialiser format ARM: dts: imx6qdl-sabreauto: Fix storm of accelerometer interrupts arm64: dts: ls1028a: fix a compatible issue reset: fix reset_control_get_exclusive kerneldoc comment reset: fix reset_control_lookup kerneldoc comment reset: fix of_reset_control_get_count kerneldoc comment reset: fix of_reset_simple_xlate kerneldoc comment reset: Fix memory leak in reset_control_array_put()
2019-11-10  Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 fixes from Thomas Gleixner:
 "A small set of fixes for x86:
 - Make the tsc=reliable/nowatchdog command line parameter work again. It was broken with the introduction of the early TSC clocksource.
 - Prevent the evaluation of exception stacks before they are set up. This causes a crash in dumpstack because the stack walk termination gets screwed up.
 - Prevent a NULL pointer dereference in the resource control file system.
 - Avoid bogus warnings about APIC id mismatch related to the LDR which can happen when the LDR is not in use and therefore not initialized. Only evaluate that when the APIC is in logical destination mode"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/tsc: Respect tsc command line paraemeter for clocksource_tsc_early
  x86/dumpstack/64: Don't evaluate exception stacks before setup
  x86/apic/32: Avoid bogus LDR warnings
  x86/resctrl: Prevent NULL pointer dereference when reading mondata
2019-11-10  Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull timer fixes from Thomas Gleixner:
 "A small set of fixes for timekeeping and clocksource drivers:
 - VDSO data was updated conditionally on the availability of a VDSO capable clocksource. This causes the VDSO functions which do not depend on a VDSO capable clocksource to operate on stale data. Always update unconditionally.
 - Prevent a double free in the mediatek driver
 - Use the proper helper in the sh_mtu2 driver so it won't attempt to initialize non-existing interrupts"

* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  timekeeping/vsyscall: Update VDSO data unconditionally
  clocksource/drivers/sh_mtu2: Do not loop using platform_get_irq_by_name()
  clocksource/drivers/mediatek: Fix error handling
2019-11-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Linus Torvalds)
Pull networking fixes from David Miller:

 1) BPF sample build fixes from Björn Töpel
 2) Fix powerpc bpf tail call implementation, from Eric Dumazet.
 3) DCCP leaks jiffies on the wire, fix also from Eric Dumazet.
 4) Fix crash in ebtables when using dnat target, from Florian Westphal.
 5) Fix port disable handling when removing bcm_sf2 driver, from Florian Fainelli.
 6) Fix kTLS sk_msg trim on fallback to copy mode, from Jakub Kicinski.
 7) Various KCSAN fixes all over the networking, from Eric Dumazet.
 8) Memory leaks in mlx5 driver, from Alex Vesker.
 9) SMC interface refcounting fix, from Ursula Braun.
 10) TSO descriptor handling fixes in stmmac driver, from Jose Abreu.
 11) Add a TX lock to synchronize the kTLS TX path properly with crypto operations. From Jakub Kicinski.
 12) Sock refcount during shutdown fix in vsock/virtio code, from Stefano Garzarella.
 13) Infinite loop in Intel ice driver, from Colin Ian King.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (108 commits)
  ixgbe: need_wakeup flag might not be set for Tx
  i40e: need_wakeup flag might not be set for Tx
  igb/igc: use ktime accessors for skb->tstamp
  i40e: Fix for ethtool -m issue on X722 NIC
  iavf: initialize ITRN registers with correct values
  ice: fix potential infinite loop because loop counter being too small
  qede: fix NULL pointer deref in __qede_remove()
  net: fix data-race in neigh_event_send()
  vsock/virtio: fix sock refcnt holding during the shutdown
  net: ethernet: octeon_mgmt: Account for second possible VLAN header
  mac80211: fix station inactive_time shortly after boot
  net/fq_impl: Switch to kvmalloc() for memory allocation
  mac80211: fix ieee80211_txq_setup_flows() failure path
  ipv4: Fix table id reference in fib_sync_down_addr
  ipv6: fixes rt6_probe() and fib6_nh->last_probe init
  net: hns: Fix the stray netpoll locks causing deadlock in NAPI path
  net: usb: qmi_wwan: add support for DW5821e with eSIM support
  CDC-NCM: handle incomplete transfer of MTU
  nfc: netlink: fix double device reference drop
  NFC: st21nfca: fix double free
  ...
2019-11-08  Merge branches 'for-next/elf-hwcap-docs', 'for-next/smccc-conduit-cleanup', ↵  (Catalin Marinas)
'for-next/zone-dma', 'for-next/relax-icc_pmr_el1-sync', 'for-next/double-page-fault', 'for-next/misc', 'for-next/kselftest-arm64-signal' and 'for-next/kaslr-diagnostics' into for-next/core * for-next/elf-hwcap-docs: : Update the arm64 ELF HWCAP documentation docs/arm64: cpu-feature-registers: Rewrite bitfields that don't follow [e, s] docs/arm64: cpu-feature-registers: Documents missing visible fields docs/arm64: elf_hwcaps: Document HWCAP_SB docs/arm64: elf_hwcaps: sort the HWCAP{, 2} documentation by ascending value * for-next/smccc-conduit-cleanup: : SMC calling convention conduit clean-up firmware: arm_sdei: use common SMCCC_CONDUIT_* firmware/psci: use common SMCCC_CONDUIT_* arm: spectre-v2: use arm_smccc_1_1_get_conduit() arm64: errata: use arm_smccc_1_1_get_conduit() arm/arm64: smccc/psci: add arm_smccc_1_1_get_conduit() * for-next/zone-dma: : Reintroduction of ZONE_DMA for Raspberry Pi 4 support arm64: mm: reserve CMA and crashkernel in ZONE_DMA32 dma/direct: turn ARCH_ZONE_DMA_BITS into a variable arm64: Make arm64_dma32_phys_limit static arm64: mm: Fix unused variable warning in zone_sizes_init mm: refresh ZONE_DMA and ZONE_DMA32 comments in 'enum zone_type' arm64: use both ZONE_DMA and ZONE_DMA32 arm64: rename variables used to calculate ZONE_DMA32's size arm64: mm: use arm64_dma_phys_limit instead of calling max_zone_dma_phys() * for-next/relax-icc_pmr_el1-sync: : Relax ICC_PMR_EL1 (GICv3) accesses when ICC_CTLR_EL1.PMHE is clear arm64: Document ICC_CTLR_EL3.PMHE setting requirements arm64: Relax ICC_PMR_EL1 accesses when ICC_CTLR_EL1.PMHE is clear * for-next/double-page-fault: : Avoid a double page fault in __copy_from_user_inatomic() if hw does not support auto Access Flag mm: fix double page fault on arm64 if PTE_AF is cleared x86/mm: implement arch_faults_on_old_pte() stub on x86 arm64: mm: implement arch_faults_on_old_pte() on arm64 arm64: cpufeature: introduce helper cpu_has_hw_af() * for-next/misc: : Various fixes and clean-ups arm64: kpti: Add NVIDIA's Carmel core to the KPTI whitelist arm64: mm: Remove MAX_USER_VA_BITS definition arm64: mm: simplify the page end calculation in __create_pgd_mapping() arm64: print additional fault message when executing non-exec memory arm64: psci: Reduce the waiting time for cpu_psci_cpu_kill() arm64: pgtable: Correct typo in comment arm64: docs: cpu-feature-registers: Document ID_AA64PFR1_EL1 arm64: cpufeature: Fix typos in comment arm64/mm: Poison initmem while freeing with free_reserved_area() arm64: use generic free_initrd_mem() arm64: simplify syscall wrapper ifdeffery * for-next/kselftest-arm64-signal: : arm64-specific kselftest support with signal-related test-cases kselftest: arm64: fake_sigreturn_misaligned_sp kselftest: arm64: fake_sigreturn_bad_size kselftest: arm64: fake_sigreturn_duplicated_fpsimd kselftest: arm64: fake_sigreturn_missing_fpsimd kselftest: arm64: fake_sigreturn_bad_size_for_magic0 kselftest: arm64: fake_sigreturn_bad_magic kselftest: arm64: add helper get_current_context kselftest: arm64: extend test_init functionalities kselftest: arm64: mangle_pstate_invalid_mode_el[123][ht] kselftest: arm64: mangle_pstate_invalid_daif_bits kselftest: arm64: mangle_pstate_invalid_compat_toggle and common utils kselftest: arm64: extend toplevel skeleton Makefile * for-next/kaslr-diagnostics: : Provide diagnostics on boot for KASLR arm64: kaslr: Check command line before looking for a seed arm64: kaslr: Announce KASLR status on boot
2019-11-08  Merge tag 'arm64-fixes' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull arm64 fix from Will Deacon: "Fix pte_same() to avoid getting stuck on write fault. This single arm64 fix is a revert of 747a70e60b72 ("arm64: Fix copy-on-write referencing in HugeTLB"), not because that patch was wrong, but because it was broken by aa57157be69f ("arm64: Ensure VM_WRITE|VM_SHARED ptes are clean by default") which we merged in -rc6. We spotted the issue in Android (AOSP), where one of the JIT threads gets stuck on a write fault during boot because the faulting pte is marked as PTE_DIRTY | PTE_WRITE | PTE_RDONLY and the fault handler decides that there's nothing to do thanks to pte_same() masking out PTE_RDONLY. Thanks to John Stultz for reporting this and testing this so quickly, and to Steve Capper for confirming that the HugeTLB tests continue to pass" * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: arm64: Do not mask out PTE_RDONLY in pte_same()
2019-11-08  arm64: kaslr: Check command line before looking for a seed  (Mark Brown)
Now that we print diagnostics at boot the reason why we do not initialise KASLR matters. Currently we check for a seed before we check if the user has explicitly disabled KASLR on the command line which will result in misleading diagnostics so reverse the order of those checks. We still parse the seed from the DT early so that if the user has both provided a seed and disabled KASLR on the command line we still mask the seed on the command line. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-11-08  arm64: kaslr: Announce KASLR status on boot  (Mark Brown)
Currently the KASLR code is silent at boot unless it forces on KPTI in which case a message will be printed for that. This can lead to users incorrectly believing their system has the feature enabled when it in fact does not, and if they notice the problem the lack of any diagnostics makes it harder to understand the problem. Add an initcall which prints a message showing the status of KASLR during boot to make the status clear. This is particularly useful in cases where we don't have a seed. It seems to be a relatively common error for system integrators and administrators to enable KASLR in their configuration but not provide the seed at runtime, often due to seed provisioning breaking at some later point after it is initially enabled and verified. Signed-off-by: Mark Brown <broonie@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
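The reporting hook is essentially a boot-time initcall; a minimal sketch follows (the status enum, its values and the message strings here are illustrative placeholders, not necessarily those used by the merged patch):

    enum kaslr_status {
            KASLR_ENABLED,
            KASLR_DISABLED_CMDLINE,
            KASLR_DISABLED_NO_SEED,
    };

    /* set by the early KASLR init code depending on why/whether it ran */
    static enum kaslr_status kaslr_status __initdata;

    static int __init kaslr_report_status(void)
    {
            switch (kaslr_status) {
            case KASLR_ENABLED:
                    pr_info("KASLR enabled\n");
                    break;
            case KASLR_DISABLED_CMDLINE:
                    pr_info("KASLR disabled on command line\n");
                    break;
            case KASLR_DISABLED_NO_SEED:
                    pr_warn("KASLR disabled due to lack of seed\n");
                    break;
            }
            return 0;
    }
    core_initcall(kaslr_report_status);

The "no seed" case in particular turns a silent misconfiguration (KASLR built in, but no seed provisioned at runtime) into a visible boot message.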
2019-11-08  Merge branch 'for-next/perf' into for-next/core  (Catalin Marinas)
- Support for additional PMU topologies on HiSilicon platforms - Support for CCN-512 interconnect PMU - Support for AXI ID filtering in the IMX8 DDR PMU - Support for the CCPI2 uncore PMU in ThunderX2 - Driver cleanup to use devm_platform_ioremap_resource() * for-next/perf: drivers/perf: hisi: update the sccl_id/ccl_id for certain HiSilicon platform perf/imx_ddr: Dump AXI ID filter info to userspace docs/perf: Add AXI ID filter capabilities information perf/imx_ddr: Add driver for DDR PMU in i.MX8MPlus perf/imx_ddr: Add enhanced AXI ID filter support bindings: perf: imx-ddr: Add new compatible string docs/perf: Add explanation for DDR_CAP_AXI_ID_FILTER_ENHANCED quirk arm64: perf: Simplify the ARMv8 PMUv3 event attributes drivers/perf: Add CCPI2 PMU support in ThunderX2 UNCORE driver. Documentation: perf: Update documentation for ThunderX2 PMU uncore driver Documentation: Add documentation for CCN-512 DTS binding perf: arm-ccn: Enable stats for CCN-512 interconnect perf/smmuv3: use devm_platform_ioremap_resource() to simplify code perf/arm-cci: use devm_platform_ioremap_resource() to simplify code perf/arm-ccn: use devm_platform_ioremap_resource() to simplify code perf: xgene: use devm_platform_ioremap_resource() to simplify code perf: hisi: use devm_platform_ioremap_resource() to simplify code
2019-11-07  x86/speculation/taa: Fix printing of TAA_MSG_SMT on IBRS_ALL CPUs  (Josh Poimboeuf)
For new IBRS_ALL CPUs, the Enhanced IBRS check at the beginning of cpu_bugs_smt_update() causes the function to return early, unintentionally skipping the MDS and TAA logic. This is not a problem for MDS, because there appears to be no overlap between IBRS_ALL and MDS-affected CPUs. So the MDS mitigation would be disabled and nothing would need to be done in this function anyway. But for TAA, the TAA_MSG_SMT string will never get printed on Cascade Lake and newer. The check is superfluous anyway: when 'spectre_v2_enabled' is SPECTRE_V2_IBRS_ENHANCED, 'spectre_v2_user' is always SPECTRE_V2_USER_NONE, and so the 'spectre_v2_user' switch statement handles it appropriately by doing nothing. So just remove the check. Fixes: 1b42f017415b ("x86/speculation/taa: Add mitigation for TSX Async Abort") Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tyler Hicks <tyhicks@canonical.com> Reviewed-by: Borislav Petkov <bp@suse.de>
2019-11-07  Merge branch 'arm64/ftrace-with-regs' of ↵  (Catalin Marinas)
git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux into for-next/core FTRACE_WITH_REGS support for arm64. * 'arm64/ftrace-with-regs' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux: arm64: ftrace: minimize ifdeffery arm64: implement ftrace with regs arm64: asm-offsets: add S_FP arm64: insn: add encoder for MOV (register) arm64: module/ftrace: intialize PLT at load time arm64: module: rework special section handling module/ftrace: handle patchable-function-entry ftrace: add ftrace_init_nop()
2019-11-07  arm64: mm: reserve CMA and crashkernel in ZONE_DMA32  (Nicolas Saenz Julienne)
With the introduction of ZONE_DMA in arm64 we moved the default CMA and crashkernel reservation into that area. This caused a regression on big machines that need big CMA and crashkernel reservations. Note that ZONE_DMA is only 1GB big. Restore the previous behavior as the wide majority of devices are OK with reserving these in ZONE_DMA32. The ones that need them in ZONE_DMA will configure it explicitly. Fixes: 1a8e1cef7603 ("arm64: use both ZONE_DMA and ZONE_DMA32") Reported-by: Qian Cai <cai@lca.pw> Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>