Age | Commit message | Author
2020-12-04 | powerpc/book3s64/kuap: Use Key 3 to implement KUAP with hash translation. | Aneesh Kumar K.V
Radix uses AMR key 0 and hash translation uses AMR key 3. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-18-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/kuap: Improve error reporting with KUAP | Aneesh Kumar K.V
With hash translation, use DSISR_KEYFAULT to identify a wrong access. With radix, look at the AMR value and the type of fault. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-17-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/kuap: Restrict access to userspace based on userspace AMR | Aneesh Kumar K.V
If an application has configured address protection such that read/write is denied using a pkey, then even the kernel should receive a fault when accessing that address. This patch uses the user AMR value stored in pt_regs.amr to achieve this. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-16-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode. | Aneesh Kumar K.V
Now that the kernel correctly stores/restores userspace AMR/IAMR values, avoid manipulating AMR and IAMR from the kernel on behalf of userspace. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-15-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/ptrace-view: Use pt_regs values instead of thread_struct based one. | Aneesh Kumar K.V
We will remove thread.amr/iamr/uamor in a later patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-14-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec | Aneesh Kumar K.V
On fork, we inherit from the parent and on exec, we should switch to default_amr values. Also, avoid changing the AMR register value within the kernel. The kernel now runs with different AMR values. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-13-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/pkeys: Inherit correctly on fork. | Aneesh Kumar K.V
The child's thread.kuap value is inherited from the parent in copy_thread_tls. We still need to make sure that when the child returns from fork in the kernel, we start with the kernel default AMR value. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-12-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel | Aneesh Kumar K.V
This prepares the kernel to operate with a different AMR/IAMR value than userspace. For this, AMR/IAMR need to be saved and restored on entry to and return from the kernel. With KUAP we modify the kernel AMR when accessing a user address from the kernel via the copy_to/from_user interfaces. We don't need to modify the IAMR value in a similar fashion. If MMU_FTR_PKEY is enabled we need to save AMR/IAMR in pt_regs on entering the kernel from userspace; if not, we can assume that AMR/IAMR is not modified from userspace. We need to save AMR if we have the MMU_FTR_BOOK3S_KUAP feature enabled and we are interrupted within the kernel. This is required so that if we get interrupted within copy_to/from_user we continue with the right AMR value. If we have MMU_FTR_BOOK3S_KUEP enabled we need to restore IAMR on return to userspace because the kernel will be running with a different IAMR value. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-11-aneesh.kumar@linux.ibm.com
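For reference, a minimal C-level sketch of the save/restore idea described above. In the kernel this lives in low-level entry/exit assembly; the function names here are illustrative only, and barriers/isync are omitted:

  /* Illustrative sketch, not the patch itself. mfspr()/mtspr() are the
   * usual SPR accessors; AMR_KUAP_BLOCKED is the "block all user
   * access" value. */
  static inline void sketch_entry_from_user(struct pt_regs *regs)
  {
          if (mmu_has_feature(MMU_FTR_PKEY))
                  regs->amr = mfspr(SPRN_AMR);       /* preserve userspace AMR */
          if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP))
                  mtspr(SPRN_AMR, AMR_KUAP_BLOCKED); /* kernel default */
  }

  static inline void sketch_exit_to_user(struct pt_regs *regs)
  {
          if (mmu_has_feature(MMU_FTR_PKEY))
                  mtspr(SPRN_AMR, regs->amr);        /* restore userspace AMR */
  }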
2020-12-04 | powerpc/exec: Set thread.regs early during exec | Aneesh Kumar K.V
In later patches during exec, we would like to access default regs.amr to control access to the user mapping. Having thread.regs set early makes the code changes simpler. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-10-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation | Aneesh Kumar K.V
This patch updates the kernel hash page table entries to use storage key 3 for the kernel mapping. This implies all kernel accesses will now use key 3 for READ/WRITE control. The patch also prevents the allocation of key 3 from userspace, and the UAMOR value is updated such that userspace cannot modify key 3. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-9-aneesh.kumar@linux.ibm.com
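As a reminder of the bit arithmetic involved: each storage key owns a 2-bit field in AMR/IAMR/UAMOR, with key k at shift 64 - 2*(k+1) from the least-significant end. A hedged sketch (the macro names are illustrative, mirroring the kernel's pkeyshift() convention, and default_uamor is assumed to be defined elsewhere):

  /* Key k occupies 2 bits; key 3 therefore sits at bits 57:56. */
  #define SKETCH_BITS_PER_KEY     2
  #define sketch_keyshift(k)      (64 - ((k) + 1) * SKETCH_BITS_PER_KEY)

  /* Clearing key 3's field in UAMOR stops userspace modifying it. */
  u64 uamor = default_uamor & ~(0x3UL << sketch_keyshift(3));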
2020-12-04 | powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP and MMU_FTR_KUEP | Aneesh Kumar K.V
This is in preparation for adding support for KUAP with hash translation: rename/move the KUAP-related functions to non-radix names, and move the feature bit closer to MMU_FTR_KUEP. MMU_FTR_KUEP is renamed to MMU_FTR_BOOK3S_KUEP to indicate the feature is only relevant to BOOK3S_64. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-8-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/kuep: Move KUEP related function outside radix | Aneesh Kumar K.V
The next set of patches adds support for KUEP with hash translation. In preparation for that, rename/move the KUEP-related functions to non-radix names. Also set MMU_FTR_KUEP and add the missing isync(). Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-7-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/kuap: Move KUAP related function outside radix | Aneesh Kumar K.V
The next set of patches adds support for KUAP with hash translation. In preparation for that, rename/move the KUAP-related functions to non-radix names. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-6-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init | Aneesh Kumar K.V
This patch consolidates the UAMOR update across the pkey, kuap and kuep features. The boot CPU initializes UAMOR via pkey init, and both radix and hash do the secondary CPU UAMOR init in early_init_mmu_secondary. We don't check for the mmu_feature in the radix secondary init because UAMOR is a supported SPR on all CPUs supporting radix translation. The old code was not updating UAMOR if we had smap disabled and smep enabled; this change handles that case. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-5-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/book3s64/kuap/kuep: Add PPC_PKEY config on book3s64 | Aneesh Kumar K.V
The config CONFIG_PPC_PKEY is used to select the base support that is required for PPC_MEM_KEYS, KUAP, and KUEP. Adding this dependency reduces the code complexity (in terms of #ifdefs) and enables us to move some of the initialization code to pkeys.c. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-4-aneesh.kumar@linux.ibm.com
2020-12-04 | KVM: PPC: BOOK3S: PR: Ignore UAMOR SPR | Aneesh Kumar K.V
With POWER7 and above we expect the CPU to support keys. The number of keys is firmware controlled based on the device tree. PR KVM does not expose key details via the device tree, hence when running with PR KVM we run with MMU_FTR_PKEY support disabled. But we can still get updates to UAMOR. Hence ignore accesses to it, and for mfspr return 0, indicating that no AMR/IAMR update is allowed. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-3-aneesh.kumar@linux.ibm.com
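A hedged sketch of the mfspr side of such emulation (emulate_mfspr() is a hypothetical stand-in for the PR KVM SPR emulation hook; the real change lives in the book3s PR emulation code):

  /* Illustrative only: report UAMOR as 0 instead of faulting the guest. */
  static int emulate_mfspr(int sprn, unsigned long *spr_val)
  {
          switch (sprn) {
          case SPRN_UAMOR:
                  *spr_val = 0;   /* no pkeys: nothing may be modified */
                  return 0;       /* handled */
          default:
                  return -1;      /* not handled here */
          }
  }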
2020-12-04 | powerpc: Add new macro to handle NESTED_IFCLR | Aneesh Kumar K.V
This will be used by the following patches. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201127044424.40686-2-aneesh.kumar@linux.ibm.com
2020-12-04 | powerpc/64s: Tidy machine check SLB logging | Nicholas Piggin
Since ISA v3.0, SLB no longer uses the slb_cache, and stab_rr is no longer correlated with SLB allocation. Move those to pre-3.0. While here, improve some alignments and reduce whitespace. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201128070728.825934-9-npiggin@gmail.com
2020-12-04 | powerpc/64s: Remove "Host" from MCE logging | Nicholas Piggin
"Host" caused machine check is printed when the kernel sees a MCE hit in this kernel or userspace, and "Guest" if it hit one of its guests. This is confusing when a guest kernel handles a hypervisor- delivered MCE, it also prints "Host". Just remove "Host". "Guest" is adequate to make the distinction. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201128070728.825934-8-npiggin@gmail.com
2020-12-04 | powerpc/64s/pseries: Add ERAT specific machine check handler | Nicholas Piggin
Don't treat ERAT MCEs as SLB MCEs: don't save the SLB, and use a specific ERAT flush to recover. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201128070728.825934-7-npiggin@gmail.com
2020-12-04 | powerpc/64s/powernv: Ratelimit harmless HMI error printing | Nicholas Piggin
Harmless HMI errors can be triggered by guests in some cases, and don't contain much useful information anyway. Ratelimit these to avoid flooding the console/logs. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Use dedicated ratelimit state, not printk_ratelimit()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201128070728.825934-6-npiggin@gmail.com
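For reference, a minimal sketch of the dedicated-ratelimit-state pattern referred to in the [mpe] note (the hmi_rs and report_harmless_hmi names are illustrative, not from the patch):

  #include <linux/ratelimit.h>

  /* A private state: unrelated printks can't consume this site's
   * budget, unlike the global printk_ratelimit(). */
  static DEFINE_RATELIMIT_STATE(hmi_rs, DEFAULT_RATELIMIT_INTERVAL,
                                DEFAULT_RATELIMIT_BURST);

  static void report_harmless_hmi(unsigned long hmer)
  {
          if (__ratelimit(&hmi_rs))
                  pr_info("Harmless HMI, HMER: 0x%016lx\n", hmer);
  }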
2020-12-04 | KVM: PPC: Book3S HV: Ratelimit machine check messages coming from guests | Nicholas Piggin
A number of machine check exceptions are triggerable by the guest. Ratelimit these to avoid a guest flooding the host console and logs. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Use dedicated ratelimit state, not printk_ratelimit()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201128070728.825934-5-npiggin@gmail.com
2020-12-04 | KVM: PPC: Book3S HV: Don't attempt to recover machine checks for FWNMI enabled guests | Nicholas Piggin
Guests that can deal with machine checks would actually prefer the hypervisor not to try to recover for them. For example, if SLB multi-hits are recovered by the hypervisor by clearing the SLB, then the guest will not be able to log the contents and debug its programming error. If guests don't register for FWNMI, they may not be so capable, and so the hypervisor will continue to recover for those. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201128070728.825934-4-npiggin@gmail.com
2020-12-04 | powerpc/64s/powernv: Allow KVM to handle guest machine check details | Nicholas Piggin
KVM has strategies to perform machine check recovery. If an MCE hits in a guest, have the low-level handler just decode and save the MCE, but not try to recover anything, so KVM can deal with it. The host does not own the SLBs and does not need to report the SLB state in case of a multi-hit, for example, or know about the virtual memory map of the guest. UE and memory poisoning of guest pages in the host is one thing that is possibly not completely robust at the moment, but this too needs to go via KVM (possibly via the guest and back out to the host via hcall) rather than being handled at a low level in the host handler. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201128070728.825934-3-npiggin@gmail.com
2020-12-04 | powerpc/ps3: make system bus's remove and shutdown callbacks return void | Uwe Kleine-König
The driver core ignores the return value of struct device_driver::remove because there is only little that can be done. For the shutdown callback it's ps3_system_bus_shutdown() which ignores the return value. To simplify the quest to make struct device_driver::remove return void, let struct ps3_system_bus_driver::remove return void, too. All users already unconditionally return 0, this commit makes it obvious that returning an error code is a bad idea and ensures future users behave accordingly. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Acked-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126165950.2554997-2-u.kleine-koenig@pengutronix.de
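A hedged sketch of what a driver looks like after this change (the mydrv names are hypothetical; the struct layout follows ps3_system_bus_driver):

  /* The remove callback now returns void: there is no caller that
   * could meaningfully act on an error anyway. */
  static void mydrv_remove(struct ps3_system_bus_device *dev)
  {
          /* release resources unconditionally */
  }

  static struct ps3_system_bus_driver mydrv = {
          .match_id  = PS3_MATCH_ID_SOUND,
          .core.name = "mydrv",
          .remove    = mydrv_remove,
  };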
2020-12-04 | ALSA: ppc: drop if block with always false condition | Uwe Kleine-König
The remove callback is only called for devices that were probed successfully before. As the matching probe function cannot complete without error if dev->match_id != PS3_MATCH_ID_SOUND, we don't have to check this here. Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201126165950.2554997-1-u.kleine-koenig@pengutronix.de
2020-12-04 | powerpc/paravirt: Use is_kvm_guest() in vcpu_is_preempted() | Srikar Dronamraju
If it's a shared LPAR but not a KVM guest, then see if the vCPU is related to the calling vCPU. On PowerVM, only cores can be preempted, so if one vCPU is in a non-preempted state, we can infer that all other vCPUs sharing the same core are also in a non-preempted state.

Performance results ($ perf stat -r 5 -a perf bench sched pipe -l 10000000; less time is better):

  powerpc/next
      35,107,951.20 msec cpu-clock          # 255.898 CPUs utilized           ( +- 0.31% )
         23,655,348 context-switches        # 0.674 K/sec                     ( +- 3.72% )
             14,465 cpu-migrations          # 0.000 K/sec                     ( +- 5.37% )
             82,463 page-faults             # 0.002 K/sec                     ( +- 8.40% )
  1,127,182,328,206 cycles                  # 0.032 GHz                       ( +- 1.60% )  (66.67%)
     78,587,300,622 stalled-cycles-frontend # 6.97% frontend cycles idle      ( +- 0.08% )  (50.01%)
    654,124,218,432 stalled-cycles-backend  # 58.03% backend cycles idle      ( +- 1.74% )  (50.01%)
    834,013,059,242 instructions            # 0.74 insn per cycle
                                            # 0.78 stalled cycles per insn    ( +- 0.73% )  (66.67%)
    132,911,454,387 branches                # 3.786 M/sec                     ( +- 0.59% )  (50.00%)
      2,890,882,143 branch-misses           # 2.18% of all branches           ( +- 0.46% )  (50.00%)

            137.195 +- 0.419 seconds time elapsed  ( +- 0.31% )

  powerpc/next + patchset
      29,981,702.64 msec cpu-clock          # 255.881 CPUs utilized           ( +- 1.30% )
         40,162,456 context-switches        # 0.001 M/sec                     ( +- 0.01% )
              1,110 cpu-migrations          # 0.000 K/sec                     ( +- 5.20% )
             62,616 page-faults             # 0.002 K/sec                     ( +- 3.93% )
  1,430,030,626,037 cycles                  # 0.048 GHz                       ( +- 1.41% )  (66.67%)
     83,202,707,288 stalled-cycles-frontend # 5.82% frontend cycles idle      ( +- 0.75% )  (50.01%)
    744,556,088,520 stalled-cycles-backend  # 52.07% backend cycles idle      ( +- 1.39% )  (50.01%)
    940,138,418,674 instructions            # 0.66 insn per cycle
                                            # 0.79 stalled cycles per insn    ( +- 0.51% )  (66.67%)
    146,452,852,283 branches                # 4.885 M/sec                     ( +- 0.80% )  (50.00%)
      3,237,743,996 branch-misses           # 2.21% of all branches           ( +- 1.18% )  (50.01%)

             117.17 +- 1.52 seconds time elapsed  ( +- 1.30% )

This is around a 14.6% improvement in performance.

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Acked-by: Waiman Long <longman@redhat.com> [mpe: Fold in performance results from cover letter] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201202050456.164005-5-srikar@linux.vnet.ibm.com
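A simplified sketch of the check described above (not the verbatim kernel code; the helper names are the real powerpc ones, the function name is illustrative):

  /* On PowerVM, preemption happens at core granularity, so a vCPU on
   * our own core cannot currently be preempted. */
  static inline bool sketch_vcpu_is_preempted(int cpu)
  {
          if (!is_shared_processor())
                  return false;

          if (!is_kvm_guest()) {
                  int first_cpu = cpu_first_thread_sibling(smp_processor_id());

                  if (cpu_first_thread_sibling(cpu) == first_cpu)
                          return false;   /* shares our core */
          }

          /* an odd yield count means the hypervisor preempted the vCPU */
          return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
  }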
2020-12-04 | powerpc: Reintroduce is_kvm_guest() as a fast-path check | Srikar Dronamraju
Introduce a static branch that would be set during boot if the OS happens to be a KVM guest. Subsequent checks to see if we are on KVM will rely on this static branch. This static branch would be used in vcpu_is_preempted() in a subsequent patch. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Acked-by: Waiman Long <longman@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201202050456.164005-4-srikar@linux.vnet.ibm.com
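For reference, a minimal sketch of the static-branch pattern (probe_kvm_guest() is a hypothetical boot hook; DEFINE_STATIC_KEY_FALSE and friends are the real jump-label API):

  #include <linux/jump_label.h>

  DEFINE_STATIC_KEY_FALSE(kvm_guest);

  static inline bool is_kvm_guest(void)
  {
          /* patched to a near-zero-cost branch once the key is set */
          return static_branch_unlikely(&kvm_guest);
  }

  void __init probe_kvm_guest(void)
  {
          if (check_kvm_guest())          /* the slow device-tree check */
                  static_branch_enable(&kvm_guest);
  }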
2020-12-04 | powerpc: Rename is_kvm_guest() to check_kvm_guest() | Srikar Dronamraju
We want to reuse the is_kvm_guest() name in a subsequent patch but with a new body. Hence rename is_kvm_guest() to check_kvm_guest(). No additional changes. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Acked-by: Waiman Long <longman@redhat.com> Signed-off-by: kernel test robot <lkp@intel.com> # int -> bool fix [mpe: Fold in fix from lkp to use true/false not 0/1] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201202050456.164005-3-srikar@linux.vnet.ibm.com
2020-12-04 | powerpc: Refactor is_kvm_guest() declaration to new header | Srikar Dronamraju
Only code/declaration movement, in anticipation of doing a KVM-aware vcpu_is_preempted(). No additional changes. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Acked-by: Waiman Long <longman@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201202050456.164005-2-srikar@linux.vnet.ibm.com
2020-12-04 | powerpc: show registers when unwinding interrupt frames | Nicholas Piggin
It's often useful to know the register state for interrupts in the stack frame. In the below example (with this patch applied), the important information is the state of the page fault. A blatant case like this should probably rather have the page fault regs passed down to the warning, but quite often there are less obvious cases where an interrupt shows up that might give some more clues. The downside is longer and more complex bug output.

  Bug: Write fault blocked by AMR!
  WARNING: CPU: 0 PID: 72 at arch/powerpc/include/asm/book3s/64/kup-radix.h:164 __do_page_fault+0x880/0xa90
  Modules linked in:
  CPU: 0 PID: 72 Comm: systemd-gpt-aut Not tainted
  NIP: c00000000006e2f0 LR: c00000000006e2ec CTR: 0000000000000000
  REGS: c00000000a4f3420 TRAP: 0700 MSR: 8000000000021033 <SF,ME,IR,DR,RI,LE> CR: 28002840 XER: 20040000
  CFAR: c000000000128be0 IRQMASK: 3
  GPR00: c00000000006e2ec c00000000a4f36c0 c0000000014f0700 0000000000000020
  GPR04: 0000000000000001 c000000001290f50 0000000000000001 c000000001290f80
  GPR08: c000000001612b08 0000000000000000 0000000000000000 00000000ffffe0f7
  GPR12: 0000000048002840 c0000000016e0000 c00c000000021c80 c000000000fd6f60
  GPR16: 0000000000000000 c00000000a104698 0000000000000003 c0000000087f0000
  GPR20: 0000000000000100 c0000000070330b8 0000000000000000 0000000000000004
  GPR24: 0000000002000000 0000000000000300 0000000002000000 c00000000a5b0c00
  GPR28: 0000000000000000 000000000a000000 00007fffb2a90038 c00000000a4f3820
  NIP [c00000000006e2f0] __do_page_fault+0x880/0xa90
  LR [c00000000006e2ec] __do_page_fault+0x87c/0xa90
  Call Trace:
  [c00000000a4f36c0] [c00000000006e2ec] __do_page_fault+0x87c/0xa90 (unreliable)
  [c00000000a4f3780] [c000000000e1c034] do_page_fault+0x34/0x90
  [c00000000a4f37b0] [c000000000008908] data_access_common_virt+0x158/0x1b0
  --- interrupt: 300 at __copy_tofrom_user_base+0x9c/0x5a4
  NIP: c00000000009b028 LR: c000000000802978 CTR: 0000000000000800
  REGS: c00000000a4f3820 TRAP: 0300 MSR: 800000000280b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE> CR: 24004840 XER: 00000000
  CFAR: c00000000009aff4 DAR: 00007fffb2a90038 DSISR: 0a000000 IRQMASK: 0
  GPR00: 0000000000000000 c00000000a4f3ac0 c0000000014f0700 00007fffb2a90028
  GPR04: c000000008720010 0000000000010000 0000000000000000 0000000000000000
  GPR08: 0000000000000000 0000000000000000 0000000000000000 0000000000000001
  GPR12: 0000000000004000 c0000000016e0000 c00c000000021c80 c000000000fd6f60
  GPR16: 0000000000000000 c00000000a104698 0000000000000003 c0000000087f0000
  GPR20: 0000000000000100 c0000000070330b8 0000000000000000 0000000000000004
  GPR24: c00000000a4f3c80 c000000008720000 0000000000010000 0000000000000000
  GPR28: 0000000000010000 0000000008720000 0000000000010000 c000000001515b98
  NIP [c00000000009b028] __copy_tofrom_user_base+0x9c/0x5a4
  LR [c000000000802978] copyout+0x68/0xc0
  --- interrupt: 300
  [c00000000a4f3af0] [c0000000008074b8] copy_page_to_iter+0x188/0x540
  [c00000000a4f3b50] [c00000000035c678] generic_file_buffered_read+0x358/0xd80
  [c00000000a4f3c40] [c0000000004c1e90] blkdev_read_iter+0x50/0x80
  [c00000000a4f3c60] [c00000000045733c] new_sync_read+0x12c/0x1c0
  [c00000000a4f3d00] [c00000000045a1f0] vfs_read+0x1d0/0x240
  [c00000000a4f3d50] [c00000000045a7f4] ksys_read+0x84/0x140
  [c00000000a4f3da0] [c000000000033a60] system_call_exception+0x100/0x280
  [c00000000a4f3e10] [c00000000000c508] system_call_common+0xf8/0x2f8
  Instruction dump:
  eae10078 3be0000b 4bfff890 60420000 792917e1 4182ff18 3c82ffab 3884a5e0
  3c62ffab 3863a6e8 480ba891 60000000 <0fe00000> 3be0000b 4bfff860 e93c0938

Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201107023305.2384874-1-npiggin@gmail.com
2020-12-04 | powerpc/perf: Invoke per-CPU variable access with disabled interrupts | Athira Rajeev
The power_pmu_event_init() callback accesses the per-cpu variable (cpu_hw_events) to check for event constraints and Branch Stack (BHRB). The current usage is to disable preemption when accessing the per-cpu variable, but this does not prevent the timer callback from interrupting event_init. Fix this by using local_irq_save/restore to make sure the code path is invoked with interrupts disabled. This change was tested in the mambo simulator to ensure that, if a timer interrupt comes in during the per-cpu access in event_init, it will be soft masked and replayed later. For testing purposes, a udelay() was introduced in power_pmu_event_init() to make sure a timer interrupt arrives while in the per-cpu variable access code between local_irq_save/restore. As expected, the timer interrupt was replayed later during local_irq_restore() called from power_pmu_event_init(). This was confirmed by adding a breakpoint in mambo and checking the backtrace when timer_interrupt was hit. Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1606814880-1720-1-git-send-email-atrajeev@linux.vnet.ibm.com
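The shape of the fix, as a hedged sketch (check_constraints() is an illustrative stand-in for the real per-cpu constraint checks):

  /* Take the per-cpu snapshot with interrupts hard-disabled so the
   * timer interrupt is soft-masked and replayed after we finish. */
  static int sketch_event_init_locking(void)
  {
          struct cpu_hw_events *cpuhw;
          unsigned long irq_flags;
          int err;

          local_irq_save(irq_flags);      /* was: preempt_disable() */
          cpuhw = this_cpu_ptr(&cpu_hw_events);
          err = check_constraints(cpuhw); /* illustrative helper */
          local_irq_restore(irq_flags);   /* was: preempt_enable() */

          return err;
  }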
2020-12-04 | selftests/powerpc: Fix uninitialized variable warning | Harish
This patch fixes an uninitialized variable warning in the bad_accesses test which causes the selftests build to fail on older distributions:

  bad_accesses.c: In function ‘bad_access’:
  bad_accesses.c:52:9: error: ‘x’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
     printf("Bad - no SEGV! (%c)\n", x);
     ^
  cc1: all warnings being treated as errors

Signed-off-by: Harish <harish@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201201092403.238182-1-harish@linux.ibm.com
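The fix itself is a one-liner; sketched here with the surrounding test code elided (the initializer is the point, everything else is assumption):

  /* before, "char x;" left x undefined on the expected-SEGV path */
  char x = 0;
  ...
  printf("Bad - no SEGV! (%c)\n", x);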
2020-12-04 | selftests/powerpc: update .gitignore | Daniel Axtens
I did an in-place build of the selftests and found that it left the tree dirty. Add the missed test binaries to .gitignore. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201201144427.1228745-1-dja@axtens.net
2020-12-04 | powerpc/feature-fixups: use a semicolon rather than a comma | Daniel Axtens
In a bunch of our security flushes, we use a comma rather than a semicolon to 'terminate' an assignment. Nothing breaks, but checkpatch picks it up if you copy it into another flush. Switch to semicolons for ending statements. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201201144344.1228421-1-dja@axtens.net
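The pattern in question, sketched with illustrative values (the comma operator makes the first form legal C, which is why nothing breaks):

  /* before: a comma 'terminates' the first assignment */
  instrs[0] = PPC_RAW_NOP(),
  instrs[1] = PPC_RAW_NOP();

  /* after: each assignment is its own statement */
  instrs[0] = PPC_RAW_NOP();
  instrs[1] = PPC_RAW_NOP();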
2020-12-04 | powerpc/pseries: Define PCI bus speed for Gen4 and Gen5 | Frederic Barrat
Update the bus speed definitions for PCI Gen4 and Gen5. Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201130152949.26467-1-fbarrat@linux.ibm.com
2020-12-04 | powerpc: Allow relative pointers in bug table entries | Jordan Niethe
This enables GENERIC_BUG_RELATIVE_POINTERS on Power so that 32-bit offsets are stored in the bug entries rather than 64-bit pointers. While this doesn't save space for 32-bit machines, use it anyway so there is only one code path. Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201201005203.15210-1-jniethe5@gmail.com
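For context, the generic bug table entry looks roughly like this with relative pointers enabled (simplified from include/asm-generic/bug.h; the real struct also guards the file/line members behind CONFIG_DEBUG_BUGVERBOSE):

  struct bug_entry {
  #ifdef CONFIG_GENERIC_BUG_RELATIVE_POINTERS
          signed int      bug_addr_disp;  /* 32-bit offset from this entry */
          signed int      file_disp;
  #else
          unsigned long   bug_addr;       /* full pointer: 64-bit on ppc64 */
          const char      *file;
  #endif
          unsigned short  line;
          unsigned short  flags;
  };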
2020-12-04 | powerpc/xmon: Fix build failure for 8xx | Ravi Bangoria
With CONFIG_PPC_8xx and CONFIG_XMON set, the kernel build fails with:

  arch/powerpc/xmon/xmon.c:1379:12: error: 'find_free_data_bpt' defined but not used [-Werror=unused-function]

Fix it by enclosing find_free_data_bpt() inside #ifndef CONFIG_PPC_8xx. Fixes: 30df74d67d48 ("powerpc/watchpoint/xmon: Support 2nd DAWR") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201130034406.288047-1-ravi.bangoria@linux.ibm.com
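The change itself, as described (function body elided; only the guard is new):

  #ifndef CONFIG_PPC_8xx
  static int find_free_data_bpt(void)
  {
          /* ... unchanged; this helper is simply unused on 8xx ... */
  }
  #endif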
2020-12-04 | powerpc: Use common STABS_DEBUG and DWARF_DEBUG and ELF_DETAILS macro | Youling Tang
Use the common STABS_DEBUG, DWARF_DEBUG and ELF_DETAILS macro rules for the linker script. Signed-off-by: Youling Tang <tangyouling@loongson.cn> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1606460857-2723-1-git-send-email-tangyouling@loongson.cn
2020-12-04 | powerpc/64: Fix an EMIT_BUG_ENTRY in head_64.S | Jordan Niethe
Commit 63ce271b5e37 ("powerpc/prom: convert PROM_BUG() to standard trap") added an EMIT_BUG_ENTRY for the trap after the branch to start_kernel(). The EMIT_BUG_ENTRY was for the address "0b", however the trap was not labelled with "0". Hence the address used for the bug entry is in relative_toc(), where the previous "0" label is. Label the trap as "0" so the correct address is used. Fixes: 63ce271b5e37 ("powerpc/prom: convert PROM_BUG() to standard trap") Signed-off-by: Jordan Niethe <jniethe5@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201130004404.30953-1-jniethe5@gmail.com
2020-12-04 | powerpc/vdso: Cleanup vdso.h | Christophe Leroy
Rename the guard define to _ASM_POWERPC_VDSO_H, and remove the useless #ifdef __KERNEL__. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9902590d410cd1c2afa48b83b277faf0711f07b2.1601197618.git.christophe.leroy@csgroup.eu
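The resulting header shape (contents elided):

  #ifndef _ASM_POWERPC_VDSO_H
  #define _ASM_POWERPC_VDSO_H

  /* ... declarations, no #ifdef __KERNEL__ wrapper needed ... */

  #endif /* _ASM_POWERPC_VDSO_H */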
2020-12-04 | powerpc/vdso: Remove VDSO32_LBASE and VDSO64_LBASE | Christophe Leroy
VDSO32_LBASE and VDSO64_LBASE are 0. Remove them to simplify code. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6c4d6570d886bbe1cc471e8ca01602e4b4d9beb5.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 | powerpc/vdso: Remove DBG() | Christophe Leroy
DBG() is not used anymore. Remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e11a9b50e709f197bb3aa2ed1d80d2dee8714afc.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 | powerpc/vdso: Remove vdso_ready | Christophe Leroy
There is no way to get out of vdso_init() prematurely anymore. Remove vdso_ready, as it will always be 1. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0e1e18c6329b848aa3edeeba76509b4d76182e7d.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 | powerpc/vdso: Remove vdso_setup() | Christophe Leroy
vdso_fixup_features() cannot fail anymore and that's the only function called by vdso_setup(). vdso_setup() has become trivial and can be removed. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/11522eec6140f510a8c89c63cbb739277d097fdc.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 | powerpc/vdso: Remove lib32_elfinfo and lib64_elfinfo | Christophe Leroy
lib32_elfinfo and lib64_elfinfo are not used anymore, remove them. Also remove vdso32_kbase and vdso64_kbase while removing the last use. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/01ac65abf22f0428f8f764525a7d84459c54d806.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 | powerpc/vdso: Remove symbol section information in struct lib32/64_elfinfo | Christophe Leroy
The members related to the symbol section in struct lib32_elfinfo and struct lib64_elfinfo are not used anymore; remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b779e5b7cc0354e2f87fd407fe5b02f4a8a73825.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 | powerpc/vdso: Remove unused text member in struct lib32/64_elfinfo | Christophe Leroy
The text member in struct lib32_elfinfo and struct lib64_elfinfo is not used, remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f53dcc9bb1946a7854d15b34d03d3d2e2003848c.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 | powerpc/vdso: Remove vdso_patches[] and associated functions | Christophe Leroy
vdso_patches[] is now empty; remove it and all functions that depend on it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/27d75debd6e4ddeaffe1d66ffed1e7526684a004.1601197618.git.christophe.leroy@csgroup.eu
2020-12-04 | powerpc/vdso: Remove runtime generated sigtramp offsets | Christophe Leroy
Signal trampoline offsets are now generated at buildtime. Runtime generated offsets are not used anymore, remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7c192d35a437151837cf4c48aeccb42380d6daac.1601197618.git.christophe.leroy@csgroup.eu