From cd5d5e602f502895e47e18cd46804d6d7014e65c Mon Sep 17 00:00:00 2001
From: Christophe Leroy
Date: Thu, 1 Jul 2021 11:17:08 +0000
Subject: powerpc/mm: Fix lockup on kernel exec fault

The powerpc kernel is not prepared to handle exec faults from kernel.
Especially, the function is_exec_fault() will return 'false' when an
exec fault is taken by kernel, because the check is based on reading
current->thread.regs->trap, which contains the trap from user.

For instance, when provoking a LKDTM EXEC_USERSPACE test,
current->thread.regs->trap is set to SYSCALL trap (0xc00), and the
fault taken by the kernel is not seen as an exec fault by
set_access_flags_filter().

Commit d7df2443cd5f ("powerpc/mm: Fix spurious segfaults on radix with
autonuma") made it clear and handled it properly. But later on commit
d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute faults")
removed that handling, introducing a test based on error_code. And
here is the problem, because on the 603 all upper bits of SRR1 get
cleared when the TLB instruction miss handler bails out to ISI.

Until commit cbd7e6ca0210 ("powerpc/fault: Avoid heavy
search_exception_tables() verification"), an exec fault from kernel at
a userspace address was indirectly caught by the lack of entry for
that address in the exception tables. But after that commit the kernel
mainly relies on KUAP or on core mm handling to catch wrong user
accesses. Here the access is not wrong, so mm handles it.

It is a minor fault because PAGE_EXEC is not set;
set_access_flags_filter() should set PAGE_EXEC and voila. But as
is_exec_fault() returns false as explained in the beginning,
set_access_flags_filter() bails out without setting the PAGE_EXEC
flag, which leads to a forever minor exec fault.

As the kernel is not prepared to handle such exec faults, the thing to
do is to fire in bad_kernel_fault() for any exec fault taken by the
kernel, as it was prior to commit d3ca587404b3.

Fixes: d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute faults")
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Christophe Leroy
Acked-by: Nicholas Piggin
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/024bb05105050f704743a0083fe3548702be5706.1625138205.git.christophe.leroy@csgroup.eu
---
 arch/powerpc/mm/fault.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 34f641d4a2fe..a8d0ce85d39a 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -199,9 +199,7 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
 {
        int is_exec = TRAP(regs) == INTERRUPT_INST_STORAGE;
 
-       /* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */
-       if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT |
-                                     DSISR_PROTFAULT))) {
+       if (is_exec) {
                pr_crit_ratelimited("kernel tried to execute %s page (%lx) - exploit attempt? (uid: %d)\n",
                                    address >= TASK_SIZE ? "exec-protected" : "user",
                                    address,
--
cgit v1.2.3
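Aside: the root cause above is easiest to see in isolation. Below is a
minimal standalone C model (simplified types and an invented main(),
not the kernel's actual code) of why a check based on
current->thread.regs cannot classify a kernel-mode exec fault: that
register frame still holds the trap taken from user mode.

#include <stdbool.h>
#include <stdio.h>

#define INTERRUPT_SYSCALL      0xc00 /* trap recorded for the syscall */
#define INTERRUPT_INST_STORAGE 0x400 /* trap of the exec fault itself */

struct pt_regs { unsigned long trap; };
struct thread  { struct pt_regs *regs; };

/* Models is_exec_fault(): it consults only the saved *user* frame. */
static bool is_exec_fault(const struct thread *thr)
{
        return thr->regs && thr->regs->trap == INTERRUPT_INST_STORAGE;
}

int main(void)
{
        /* LKDTM EXEC_USERSPACE: the kernel faults while executing a
         * user page, but thread.regs still says SYSCALL (0xc00). */
        struct pt_regs uregs = { .trap = INTERRUPT_SYSCALL };
        struct thread thr = { .regs = &uregs };

        /* Prints 0: the kernel-mode ISI is invisible to this check,
         * so set_access_flags_filter() never sets PAGE_EXEC and the
         * same minor fault repeats forever. */
        printf("is_exec_fault() = %d\n", is_exec_fault(&thr));
        return 0;
}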
From 419ac821766cbdb9fd85872bb3f1a589df05c94c Mon Sep 17 00:00:00 2001
From: "Naveen N. Rao"
Date: Thu, 1 Jul 2021 20:38:58 +0530
Subject: powerpc/bpf: Fix detecting BPF atomic instructions

Commit 91c960b0056672 ("bpf: Rename BPF_XADD and prepare to encode
other atomics in .imm") converted BPF_XADD to BPF_ATOMIC and added a
way to distinguish instructions based on the immediate field. Existing
JIT implementations were updated to check for the immediate field and
to reject programs utilizing anything more than BPF_ADD (such as
BPF_FETCH) in the immediate field.

However, the check added to powerpc64 JIT did not look at the correct
BPF instruction. Due to this, such programs would be accepted and
incorrectly JIT'ed resulting in soft lockups, as seen with the atomic
bounds test. Fix this by looking at the correct immediate value.

Fixes: 91c960b0056672 ("bpf: Rename BPF_XADD and prepare to encode other atomics in .imm")
Reported-by: Jiri Olsa
Signed-off-by: Naveen N. Rao
Tested-by: Jiri Olsa
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/4117b430ffaa8cd7af042496f87fd7539e4f17fd.1625145429.git.naveen.n.rao@linux.vnet.ibm.com
---
 arch/powerpc/net/bpf_jit_comp64.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 5cad5b5a7e97..de8595880fee 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -667,7 +667,7 @@ emit_clear:
                 * BPF_STX ATOMIC (atomic ops)
                 */
                case BPF_STX | BPF_ATOMIC | BPF_W:
-                       if (insn->imm != BPF_ADD) {
+                       if (imm != BPF_ADD) {
                                pr_err_ratelimited(
                                        "eBPF filter atomic op code %02x (@%d) unsupported\n",
                                        code, i);
@@ -689,7 +689,7 @@ emit_clear:
                        PPC_BCC_SHORT(COND_NE, tmp_idx);
                        break;
                case BPF_STX | BPF_ATOMIC | BPF_DW:
-                       if (insn->imm != BPF_ADD) {
+                       if (imm != BPF_ADD) {
                                pr_err_ratelimited(
                                        "eBPF filter atomic op code %02x (@%d) unsupported\n",
                                        code, i);
--
cgit v1.2.3
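Aside: the "did not look at the correct BPF instruction" bug is a plain
C array/pointer mix-up. A standalone sketch (illustrative program and
names, not the JIT's real data flow): 'insn' is the base of the
instruction array, so insn->imm always reads instruction 0's immediate
rather than that of the instruction being compiled.

#include <stdio.h>

/* Simplified stand-in for struct bpf_insn. */
struct bpf_insn { int imm; };

#define BPF_ADD   0x00
#define BPF_FETCH 0x01

int main(void)
{
        /* Instruction 0 happens to carry imm == BPF_ADD; instruction 1
         * is an atomic op with BPF_FETCH, which the JIT can't emit. */
        struct bpf_insn prog[] = { { .imm = BPF_ADD }, { .imm = BPF_FETCH } };
        const struct bpf_insn *insn = prog; /* array base */
        int i = 1;                          /* currently JIT'ing insn 1 */
        int imm = insn[i].imm;              /* per-instruction immediate */

        /* Buggy check: insn->imm is insn[0].imm, so the unsupported
         * BPF_FETCH program is accepted and mis-JIT'ed (prints 0). */
        printf("buggy rejects: %d\n", insn->imm != BPF_ADD);
        /* Fixed check: uses the current instruction's imm (prints 1). */
        printf("fixed rejects: %d\n", imm != BPF_ADD);
        return 0;
}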
From 307e5042c7bdae15308ef2e9b848833b84122eb0 Mon Sep 17 00:00:00 2001
From: "Naveen N. Rao"
Date: Thu, 1 Jul 2021 20:38:59 +0530
Subject: powerpc/bpf: Reject atomic ops in ppc32 JIT

Commit 91c960b0056672 ("bpf: Rename BPF_XADD and prepare to encode
other atomics in .imm") converted BPF_XADD to BPF_ATOMIC and updated
all JIT implementations to reject JIT'ing instructions with an
immediate value different from BPF_ADD. However, ppc32 BPF JIT was
implemented around the same time and didn't include the same change.
Update the ppc32 JIT accordingly.

Fixes: 51c66ad849a7 ("powerpc/bpf: Implement extended BPF on PPC32")
Cc: stable@vger.kernel.org # v5.13+
Signed-off-by: Naveen N. Rao
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/426699046d89fe50f66ecf74bd31c01eda976ba5.1625145429.git.naveen.n.rao@linux.vnet.ibm.com
---
 arch/powerpc/net/bpf_jit_comp32.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index cbe5b399ed86..34bb1583fc0c 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -773,9 +773,17 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
                        break;
 
                /*
-                * BPF_STX XADD (atomic_add)
+                * BPF_STX ATOMIC (atomic ops)
                 */
-               case BPF_STX | BPF_XADD | BPF_W: /* *(u32 *)(dst + off) += src */
+               case BPF_STX | BPF_ATOMIC | BPF_W:
+                       if (imm != BPF_ADD) {
+                               pr_err_ratelimited("eBPF filter atomic op code %02x (@%d) unsupported\n",
+                                                  code, i);
+                               return -ENOTSUPP;
+                       }
+
+                       /* *(u32 *)(dst + off) += src */
+
                        bpf_set_seen_register(ctx, tmp_reg);
                        /* Get offset into TMP_REG */
                        EMIT(PPC_RAW_LI(tmp_reg, off));
@@ -789,7 +797,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
                        PPC_BCC_SHORT(COND_NE, (ctx->idx - 3) * 4);
                        break;
 
-               case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */
+               case BPF_STX | BPF_ATOMIC | BPF_DW: /* *(u64 *)(dst + off) += src */
                        return -EOPNOTSUPP;
 
                /*
--
cgit v1.2.3

From 3f601608b71c3ca1e199898cd16f09d707fedb56 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?C=C3=A9dric=20Le=20Goater?=
Date: Thu, 1 Jul 2021 17:24:12 +0200
Subject: powerpc/xive: Fix error handling when allocating an IPI
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is a smatch warning:

  arch/powerpc/sysdev/xive/common.c:1161 xive_request_ipi()
  warn: unsigned 'xid->irq' is never less than zero.

Fixes: fd6db2892eba ("powerpc/xive: Modernize XIVE-IPI domain with an 'alloc' handler")
Cc: stable@vger.kernel.org # v5.13
Reported-by: kernel test robot
Signed-off-by: Cédric Le Goater
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/20210701152412.1507612-1-clg@kaod.org
---
 arch/powerpc/sysdev/xive/common.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
index a8304327072d..dbdbbc2f1dc5 100644
--- a/arch/powerpc/sysdev/xive/common.c
+++ b/arch/powerpc/sysdev/xive/common.c
@@ -1153,11 +1153,10 @@ static int __init xive_request_ipi(void)
                 * Since the HW interrupt number doesn't have any meaning,
                 * simply use the node number.
                 */
-               xid->irq = irq_domain_alloc_irqs(ipi_domain, 1, node, &info);
-               if (xid->irq < 0) {
-                       ret = xid->irq;
+               ret = irq_domain_alloc_irqs(ipi_domain, 1, node, &info);
+               if (ret < 0)
                        goto out_free_xive_ipis;
-               }
+
+               xid->irq = ret;
 
                snprintf(xid->name, sizeof(xid->name), "IPI-%d", node);
--
cgit v1.2.3
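Aside: the smatch warning flags a classic C pitfall, worth seeing
outside the kernel. A standalone demonstration (illustrative values,
not the XIVE code itself): comparing an unsigned variable against zero
with '<' is always false, so an error return stored into an unsigned
field is silently lost.

#include <stdio.h>

int main(void)
{
        unsigned int irq;  /* like xid->irq: an unsigned field */
        int ret = -12;     /* e.g. -ENOMEM from an allocator */

        irq = (unsigned int)ret; /* wraps to a huge positive value */
        if (irq < 0)             /* always false: unsigned comparison */
                printf("error handled\n");
        else
                printf("error lost, irq=%u\n", irq);

        /* The fix: test the signed return value first, then assign. */
        if (ret < 0)
                printf("error handled, ret=%d\n", ret);
        return 0;
}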
From 1df3af6dc3cfe643f43d46f202bd44861ccbdb99 Mon Sep 17 00:00:00 2001
From: Nicholas Piggin
Date: Tue, 6 Jul 2021 15:13:10 +1000
Subject: powerpc/64e: Fix system call illegal mtmsrd instruction

BookE does not have mtmsrd; switch to using wrteei to enable MSR[EE].

Fixes: dd152f70bdc1 ("powerpc/64s: system call avoid setting MSR[RI] until we set MSR[EE]")
Reported-by: Christian Zigotzky
Signed-off-by: Nicholas Piggin
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/20210706051310.608992-1-npiggin@gmail.com
---
 arch/powerpc/kernel/interrupt_64.S | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
index 4063e8a3f704..d4212d2ff0b5 100644
--- a/arch/powerpc/kernel/interrupt_64.S
+++ b/arch/powerpc/kernel/interrupt_64.S
@@ -311,9 +311,13 @@ END_BTB_FLUSH_SECTION
         * trace_hardirqs_off().
         */
        li      r11,IRQS_ALL_DISABLED
-       li      r12,-1 /* Set MSR_EE and MSR_RI */
       stb      r11,PACAIRQSOFTMASK(r13)
+#ifdef CONFIG_PPC_BOOK3S
+       li      r12,-1 /* Set MSR_EE and MSR_RI */
        mtmsrd  r12,1
+#else
+       wrteei  1
+#endif
 
        /* Calling convention has r9 = orig r0, r10 = regs */
        mr      r9,r0
--
cgit v1.2.3

From 2c669ef6979c370f98d4b876e54f19613c81e075 Mon Sep 17 00:00:00 2001
From: Valentin Schneider
Date: Wed, 7 Jul 2021 19:38:31 +0100
Subject: powerpc/preempt: Don't touch the idle task's preempt_count during hotplug

Powerpc currently resets a CPU's idle task preempt_count to 0 before
said task starts executing the secondary startup routine (and becomes
an idle task proper).

This conflicts with commit f1a0a376ca0c ("sched/core: Initialize the
idle task with preemption disabled"), which initializes all of the
idle tasks' preempt_count to PREEMPT_DISABLED during smp_init().

Note that this was superfluous before said commit, as back then the
hotplug machinery would invoke init_idle() via idle_thread_get(),
which would have already reset the CPU's idle task's preempt_count to
PREEMPT_ENABLED.

Get rid of this preempt_count write.

Fixes: f1a0a376ca0c ("sched/core: Initialize the idle task with preemption disabled")
Reported-by: Bharata B Rao
Signed-off-by: Valentin Schneider
Tested-by: Guenter Roeck
Tested-by: Bharata B Rao
Signed-off-by: Michael Ellerman
Link: https://lore.kernel.org/r/20210707183831.2106509-1-valentin.schneider@arm.com
---
 arch/powerpc/platforms/cell/smp.c    | 3 ---
 arch/powerpc/platforms/pseries/smp.c | 3 ---
 2 files changed, 6 deletions(-)

diff --git a/arch/powerpc/platforms/cell/smp.c b/arch/powerpc/platforms/cell/smp.c
index c855a0aeb49c..d7ab868aab54 100644
--- a/arch/powerpc/platforms/cell/smp.c
+++ b/arch/powerpc/platforms/cell/smp.c
@@ -78,9 +78,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
 
        pcpu = get_hard_smp_processor_id(lcpu);
 
-       /* Fixup atomic count: it exited inside IRQ handler. */
-       task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count = 0;
-
        /*
         * If the RTAS start-cpu token does not exist then presume the
         * cpu is already spinning.
diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
index 096629f54576..f47429323eee 100644
--- a/arch/powerpc/platforms/pseries/smp.c
+++ b/arch/powerpc/platforms/pseries/smp.c
@@ -105,9 +105,6 @@ static inline int smp_startup_cpu(unsigned int lcpu)
                return 1;
        }
 
-       /* Fixup atomic count: it exited inside IRQ handler. */
-       task_thread_info(paca_ptrs[lcpu]->__current)->preempt_count = 0;
-
        /*
         * If the RTAS start-cpu token does not exist then presume the
         * cpu is already spinning.
--
cgit v1.2.3
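Aside: the ordering problem the last patch fixes can be modeled in a
few lines. This is a toy standalone sketch (invented names, not
scheduler code): generic code hands the arch an idle task that is
already preempt-disabled, and the removed write silently undid that.

#include <stdio.h>

#define PREEMPT_DISABLED 1

struct task { int preempt_count; };

/* Models f1a0a376ca0c: smp_init() initializes every idle task with
 * preemption disabled before the arch hotplug code runs. */
static void generic_smp_init(struct task *idle)
{
        idle->preempt_count = PREEMPT_DISABLED;
}

/* Models the removed powerpc code: zeroing the count makes the idle
 * task preemptible behind the scheduler's back. */
static void arch_smp_startup_cpu_old(struct task *idle)
{
        idle->preempt_count = 0;
}

int main(void)
{
        struct task idle;

        generic_smp_init(&idle);
        arch_smp_startup_cpu_old(&idle);
        printf("idle preempt_count = %d, expected %d\n",
               idle.preempt_count, PREEMPT_DISABLED);
        return 0;
}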