path: root/arch/powerpc/include
Age  Commit message  Author
2021-06-21  powerpc/pseries: Get entry and uaccess flush required bits from H_GET_CPU_CHARACTERISTICS  (Nicholas Piggin)
This allows the hypervisor / firmware to describe these workarounds to the guest. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210503130243.891868-2-npiggin@gmail.com
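For illustration, the guest-side handling amounts to mapping the new behaviour bits onto the kernel's security feature flags. A minimal sketch, assuming the identifiers follow the existing H_CPU_BEHAV_*/SEC_FTR_* conventions (treat the exact names and polarity as assumptions, not the verbatim patch):

    /* Sketch: in the pseries H_GET_CPU_CHARACTERISTICS handling (assumed names) */
    static void init_flush_feature_flags(struct h_cpu_char_result *result)
    {
    	/* firmware says no entry flush is required on this system */
    	if (result->behaviour & H_CPU_BEHAV_NO_L1D_FLUSH_ENTRY)
    		security_ftr_clear(SEC_FTR_L1D_FLUSH_ENTRY);

    	/* likewise for the flush at the uaccess boundary */
    	if (result->behaviour & H_CPU_BEHAV_NO_L1D_FLUSH_UACCESS)
    		security_ftr_clear(SEC_FTR_L1D_FLUSH_UACCESS);
    }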
2021-06-21  KVM: PPC: Book3S HV: Fix TLB management on SMT8 POWER9 and POWER10 processors  (Suraj Jitindar Singh)
The POWER9 vCPU TLB management code assumes all threads in a core share a TLB, and that TLBIEL executed by one thread will invalidate TLBs for all threads. This is not the case for SMT8-capable POWER9 and POWER10 (big core) processors, where the TLB is split between groups of threads. This results in TLB multi-hits, random data corruption, etc. Fix this by introducing cpu_first_tlb_thread_sibling etc., to determine which siblings share TLBs, and use that in the guest TLB flushing code. [npiggin@gmail.com: add changelog and comment] Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Fabiano Rosas <farosas@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210602040441.3984352-1-npiggin@gmail.com
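The helper named in the changelog boils down to picking the first thread of the TLB-sharing group. A sketch along the lines of the asm/cputhreads.h addition (the big-core mask is an assumption based on the even/odd thread split of SMT8 big cores):

    static inline int cpu_first_tlb_thread_sibling(int cpu)
    {
    	/* SMT8 big cores interleave two SMT4 small cores on even/odd
    	 * threads, so the TLB-sharing groups are {0,2,4,6} and {1,3,5,7}. */
    	if (cpu_has_feature(CPU_FTR_ARCH_300) && (threads_per_core == 8))
    		return cpu & ~0x6;
    	/* otherwise all threads in the core share a TLB */
    	return cpu_first_thread_sibling(cpu);
    }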
2021-06-20  powerpc/pseries/vas: Integrate API with open/close windows  (Haren Myneni)
This patch adds VAS window allocation/close with the corresponding hcalls. It also integrates with the existing user space VAS API and provides register/unregister functions to the NX pseries driver. The driver register function is used to create the user space interface (/dev/crypto/nx-gzip) and unregister to remove this entry. The user space process opens this device node and makes an ioctl to allocate a VAS window. The close interface is used to deallocate the window. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e8d956bace3f182c4d2e66e343ff37cb0391d1fd.camel@linux.ibm.com
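The resulting user-space flow, as a hedged sketch (struct and ioctl names are taken from the asm/vas-api.h uapi; error handling is trimmed):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <asm/vas-api.h>

    /* Open an NX GZIP send window; closing the fd deallocates it. */
    int open_nx_gzip_window(void)
    {
    	struct vas_tx_win_open_attr attr;
    	int fd = open("/dev/crypto/nx-gzip", O_RDWR);

    	if (fd < 0)
    		return -1;

    	memset(&attr, 0, sizeof(attr));
    	attr.version = 1;
    	attr.vas_id = -1;		/* let the kernel pick an instance */

    	if (ioctl(fd, VAS_TX_WIN_OPEN, &attr) < 0) {
    		close(fd);
    		return -1;
    	}
    	return fd;			/* the paste area is then mmap()ed */
    }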
2021-06-20  powerpc/vas: Define QoS credit flag to allocate window  (Haren Myneni)
PowerVM introduces two different types of credits: default and Quality of Service (QoS). The total number of default credits available on each LPAR depends on the CPU resources configured. But these credits can be shared or over-committed across LPARs in shared mode, which can result in paste command failure (RMA_busy). To avoid NX HW contention, the hypervisor introduces the QoS credit type, which guarantees access to NX resources. The system admins can assign QoS credits for each LPAR via the HMC. The default credit type is used to allocate a VAS window by default, as in the PowerVM implementation. But the process can pass the VAS_TX_WIN_FLAG_QOS_CREDIT flag with the VAS_TX_WIN_OPEN ioctl to open a QoS-type window. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/aa950b7b8e8077364267720274a7b9ec34e76e73.camel@linux.ibm.com
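Continuing the user-space sketch from the previous entry, requesting a QoS window is a one-line change to the open attributes (assuming the flag is defined in the same uapi header):

    	attr.flags = VAS_TX_WIN_FLAG_QOS_CREDIT;	/* instead of default credits */
    	ret = ioctl(fd, VAS_TX_WIN_OPEN, &attr);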
2021-06-20  powerpc/pseries/vas: Define VAS/NXGZIP hcalls and structs  (Haren Myneni)
This patch adds hcalls and other definitions. Also define structs that are used in VAS implementation on PowerVM. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b4b8c594c27ee4aa6be9dc6dc4ee7331571cbbe8.camel@linux.ibm.com
2021-06-20  powerpc/vas: Define and use common vas_window struct  (Haren Myneni)
Many elements of the vas_window struct are used on both the PowerNV and PowerVM platforms: vas_window is used for both TX and RX windows on PowerNV and for TX windows on PowerVM, but some elements are platform specific. So this patch defines a common vas_window and platform-specific window structs (pnv_vas_window on PowerNV), and adds the corresponding changes in the PowerNV vas code. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1698c35c158dfe52c6d2166667823d3d4a463353.camel@linux.ibm.com
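The shape of the split, as an illustrative sketch (the field selection and the to_pnv_win() helper are assumptions for illustration, not the exact declarations):

    /* common to PowerNV and PowerVM */
    struct vas_window {
    	u32 winid;
    	u32 wcreds_max;			/* max credits */
    	enum vas_cop_type cop;
    	struct vas_user_win_ref task_ref;
    	char *dbgname;
    	struct dentry *dbgdir;
    };

    /* PowerNV-only state wraps the common struct */
    struct pnv_vas_window {
    	struct vas_window vas_win;	/* must stay first for container_of */
    	struct vas_instance *vinst;
    	bool tx_win;			/* PowerNV has both TX and RX windows */
    };

    #define to_pnv_win(w) container_of(w, struct pnv_vas_window, vas_win)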
2021-06-20  powerpc/vas: Move update_csb/dump_crb to common book3s platform  (Haren Myneni)
If a coprocessor encounters an error translating an address, the VAS will cause an interrupt in the host. The kernel processes the fault by updating the CSB. This functionality is the same for both powerNV and pseries, so this patch moves these functions to the common vas-api.c; the actual functionality is not changed. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bf8d5b0770fa1ef5cba88c96580caa08d999d3b5.camel@linux.ibm.com
2021-06-20  powerpc/vas: Create take/drop pid and mm reference functions  (Haren Myneni)
Take pid and mm references when each window is opened and drop them when the window is closed. This functionality is needed for both powerNV and pseries, so this patch turns the existing code into functions in the common book3s platform vas-api.c. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2fa40df962250a737c804e58202924717b39e381.camel@linux.ibm.com
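A sketch of the take/drop pairing (the helper names are hypothetical; the reference-counting calls are the standard kernel ones):

    static int vas_win_take_refs(struct vas_user_win_ref *ref)
    {
    	ref->pid = get_task_pid(current, PIDTYPE_PID);
    	ref->mm = get_task_mm(current);		/* bumps mm_users */
    	if (!ref->mm) {
    		put_pid(ref->pid);
    		ref->pid = NULL;
    		return -EINVAL;
    	}
    	return 0;
    }

    static void vas_win_drop_refs(struct vas_user_win_ref *ref)
    {
    	if (ref->mm)
    		mmput(ref->mm);
    	put_pid(ref->pid);
    }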
2021-06-20  powerpc/vas: Add platform specific user window operations  (Haren Myneni)
PowerNV uses registers to open/close VAS windows and to get the paste address, whereas hypervisor calls are used on PowerVM. This patch adds the platform-specific user space window operations and registers them with the common VAS user space interface. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f85091f4ace67f951ac04d60394d67b21e2f5d3c.camel@linux.ibm.com
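The hook table registered with the common interface would look roughly like this (member names are a guess at the shape, not the exact asm/vas.h declaration):

    struct vas_user_win_ops {
    	struct vas_window *(*open_win)(int vas_id, u64 flags,
    				       enum vas_cop_type cop_type);
    	u64 (*paste_addr)(struct vas_window *win);
    	int (*close_win)(struct vas_window *win);
    };

PowerNV fills these with its register-based implementations and pseries with hcall-based ones; the common ioctl code only ever calls through the ops.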
2021-06-20  powerpc/powernv/vas: Rename register/unregister functions  (Haren Myneni)
The powerNV and pseries drivers register/unregister with their platform-specific VAS separately, and these VAS functions then call the common API with the specific window operations. So rename the powerNV VAS API register/unregister functions. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9db00d58dbdcb7cfc07a1df95f3d2a9e3e5d746a.camel@linux.ibm.com
2021-06-20  powerpc/vas: Move VAS API to book3s common platform  (Haren Myneni)
The pseries platform will share vas and nx code and interfaces with the PowerNV platform, so create the arch/powerpc/platforms/book3s/ directory and move VAS API code there. Functionality is not changed. Signed-off-by: Haren Myneni <haren@linux.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e05c8db17b9eabe3545b902d034238e4c6c08180.camel@linux.ibm.com
2021-06-19  Merge tag 'powerpc-5.13-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds)
Pull powerpc fixes from Michael Ellerman: "Fix initrd corruption caused by our recent change to use relative jump labels. Fix a crash when using perf record on systems without a hardware PMU backend. Rework our 64-bit signal handling slightly to make it more closely match the old behaviour, after the recent change to use unsafe user accessors. Thanks to Anastasia Kovaleva, Athira Rajeev, Christophe Leroy, Daniel Axtens, Greg Kurz, and Roman Bolshakov"
* tag 'powerpc-5.13-6' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
    powerpc/perf: Fix crash in perf_instruction_pointer() when ppmu is not set
    powerpc: Fix initrd corruption with relative jump labels
    powerpc/signal64: Copy siginfo before changing regs->nip
    powerpc/mem: Add back missing header to fix 'no previous prototype' error
2021-06-17  KVM: switch per-VM stats to u64  (Paolo Bonzini)
Make them the same type as vCPU stats. There is no reason to limit the counters to unsigned long. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-17  Merge branch 'topic/ppc-kvm' into next  (Michael Ellerman)
Merge some powerpc KVM patches from our topic branch. In particular this brings in Nick's big series rewriting parts of the guest entry/exit path in C.
Conflicts:
    arch/powerpc/kernel/security.c
    arch/powerpc/kvm/book3s_hv_rmhandlers.S
2021-06-17  powerpc/64: drop redundant definition of spin_until_cond  (Sudeep Holla)
linux/processor.h has exactly the same definition of spin_until_cond. Drop the redundant definition in asm/processor.h. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1fff2054e5dfc00329804dbd3f2a91667c9a8aff.1623438544.git.christophe.leroy@csgroup.eu
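For reference, the generic definition in include/linux/processor.h that makes the powerpc copy redundant:

    #define spin_until_cond(cond)				\
    do {							\
    	if (unlikely(!(cond))) {			\
    		spin_begin();				\
    		do {					\
    			spin_cpu_relax();		\
    		} while (!(cond));			\
    		spin_end();				\
    	}						\
    } while (0)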
2021-06-17  powerpc: Move update_power8_hid0() into its only user  (Christophe Leroy)
update_power8_hid0() is used only by the powernv platform's subcore.c. Move it there. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/37f41d74faa0c66f90b373e243e8b1ee37a1f6fa.1623219019.git.christophe.leroy@csgroup.eu
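The function being moved is tiny; a sketch of it (modulo exact comments and formatting):

    static inline void update_power8_hid0(unsigned long hid0)
    {
    	/* the POWER8 HID0 mtspr must be bracketed by sync/isync */
    	asm volatile("sync; mtspr %0,%1; isync"
    		     : : "i"(SPRN_HID0), "r"(hid0));
    }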
2021-06-17  powerpc: Remove proc_trap()  (Christophe Leroy)
proc_trap() has never been used, remove it. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/827944ea12d470c2f862635f48b5ee6c1520351f.1623217909.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc: Define swapper_pg_dir[] in C  (Christophe Leroy)
Don't duplicate swapper_pg_dir[] in each platform's head.S; define it in mm/pgtable.c. Define MAX_PTRS_PER_PGD because on book3s/64 PTRS_PER_PGD is not a constant. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5e3f1b8a4695c33ccc80aa3870e016bef32b85e1.1623063174.git.christophe.leroy@csgroup.eu
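The C definition replacing the head.S copies amounts to one line (modulo the exact section attributes):

    pgd_t swapper_pg_dir[MAX_PTRS_PER_PGD] __section(".bss..page_aligned") __aligned(PAGE_SIZE);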
2021-06-17  powerpc/nohash: Convert set_context() to C  (Christophe Leroy)
ppc8xx already has set_context() in C; the other platforms have it in assembly. The only thing it does is write the context id into SPRN_PID. Do it in C. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/a5d0759064f3831c6b88af49ef5d3b05ba1c4dad.1622712515.git.christophe.leroy@csgroup.eu
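Per the changelog, the C replacement is essentially a single register write; a minimal sketch (the barrier choice is an assumption, not the verbatim patch):

    void set_context(unsigned long id, pgd_t *pgd)
    {
    	mtspr(SPRN_PID, id);	/* the whole job: publish the context id */
    	isync();		/* context-synchronise the PID update */
    }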
2021-06-17  powerpc/kuap: Force inlining of all first level KUAP helpers.  (Christophe Leroy)
All KUAP helpers defined in asm/kup.h are single-line functions that should be inlined. But in book3s/32 builds, we get many instances of <prevent_write_to_user.constprop.0>. Force inlining of those helpers. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8479a862e165a57a855292d47e24c259a578f5a0.1622711627.git.christophe.leroy@csgroup.eu
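The fix pattern: replace 'static inline' with 'static __always_inline' so GCC cannot emit constant-propagated out-of-line clones like prevent_write_to_user.constprop.0. Sketched on one helper (the body shown is illustrative):

    static __always_inline void prevent_write_to_user(const void __user *to,
    						      unsigned long size)
    {
    	prevent_user_access(KUAP_WRITE);	/* now always expands inline */
    }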
2021-06-17  powerpc/kuap: Remove to/from/size parameters of prevent_user_access()  (Christophe Leroy)
prevent_user_access() no longer uses its to/from/size parameters. Remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/b7113662fd2c26e4c33e9d705de324bd3860822e.1622708530.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/kuap: Remove KUAP_CURRENT_XXX  (Christophe Leroy)
book3s/32 was the only user of the KUAP_CURRENT_XXX defines. After the rework of book3s/32 KUAP, they are not used anymore. Remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/549214ecf6887d965645e664520d4886663c5ffb.1622708530.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/32s: Rework Kernel Userspace Access Protection  (Christophe Leroy)
On book3s/32, KUAP is provided by toggling the Ks bit in segment registers. One segment register addresses 256M of virtual memory. At the time being, KUAP implements a complex logic to apply the unlock/lock on the exact number of segments covering the user range to access, saving the boundaries of the range of segments in a member of the thread struct. But most if not all user accesses are within a single segment. Rework KUAP with a different approach:
- Open only one segment, the one corresponding to the starting address of the range to be accessed.
- If a second segment is involved, it will generate a page fault. The segment will then be opened by the page fault handler.
The kuap member of the thread struct will now contain:
- The start address of the current ongoing user access, which will be used to know which segment to lock at the end of the user access.
- ~0 when no user access is open.
- ~1 when additional segments are opened by a page fault.
Then, at lock time:
- When only one segment is open, close it.
- When several segments are open, close all user segments.
Almost 100% of the time, only one segment will be involved. In interrupts, inline the function that unlocks/locks all segments, because not inlining it implies a lot of register save/restore. With the patch, writing value 128 in userspace in perf_copy_attr() is done with 16 instructions:

    3890:  93 82 04 dc   stw     r28,1244(r2)
    3894:  7d 20 e5 26   mfsrin  r9,r28
    3898:  55 29 00 80   rlwinm  r9,r9,0,2,0
    389c:  7d 20 e1 e4   mtsrin  r9,r28
    38a0:  4c 00 01 2c   isync
    38a4:  39 20 00 80   li      r9,128
    38a8:  91 3c 00 00   stw     r9,0(r28)
    38ac:  81 42 04 dc   lwz     r10,1244(r2)
    38b0:  39 00 ff ff   li      r8,-1
    38b4:  91 02 04 dc   stw     r8,1244(r2)
    38b8:  2c 0a ff fe   cmpwi   r10,-2
    38bc:  41 82 00 88   beq     3944 <perf_copy_attr+0x36c>
    38c0:  7d 20 55 26   mfsrin  r9,r10
    38c4:  65 29 40 00   oris    r9,r9,16384
    38c8:  7d 20 51 e4   mtsrin  r9,r10
    38cc:  4c 00 01 2c   isync
    ...
    3944:  48 00 00 01   bl      3944 <perf_copy_attr+0x36c>
                         3944: R_PPC_REL24  kuap_lock_all_ool

Before the patch it was 118 instructions. In reality only 42 are executed in most cases, but GCC is not able to see that a properly aligned user access cannot involve more than one segment:
    5060:  39 1d 00 04   addi    r8,r29,4
    5064:  3d 20 b0 00   lis     r9,-20480
    5068:  7c 08 48 40   cmplw   r8,r9
    506c:  40 81 00 08   ble     5074 <perf_copy_attr+0x2cc>
    5070:  3d 00 b0 00   lis     r8,-20480
    5074:  39 28 ff ff   addi    r9,r8,-1
    5078:  57 aa 00 06   rlwinm  r10,r29,0,0,3
    507c:  55 29 27 3e   rlwinm  r9,r9,4,28,31
    5080:  39 29 00 01   addi    r9,r9,1
    5084:  7d 29 53 78   or      r9,r9,r10
    5088:  91 22 04 dc   stw     r9,1244(r2)
    508c:  7d 20 ed 26   mfsrin  r9,r29
    5090:  55 29 00 80   rlwinm  r9,r9,0,2,0
    5094:  7c 08 50 40   cmplw   r8,r10
    5098:  40 81 00 c0   ble     5158 <perf_copy_attr+0x3b0>
    509c:  7d 46 50 f8   not     r6,r10
    50a0:  7c c6 42 14   add     r6,r6,r8
    50a4:  54 c6 27 be   rlwinm  r6,r6,4,30,31
    50a8:  7d 20 51 e4   mtsrin  r9,r10
    50ac:  3c ea 10 00   addis   r7,r10,4096
    50b0:  39 29 01 11   addi    r9,r9,273
    50b4:  7f 88 38 40   cmplw   cr7,r8,r7
    50b8:  55 29 02 06   rlwinm  r9,r9,0,8,3
    50bc:  40 9d 00 9c   ble     cr7,5158 <perf_copy_attr+0x3b0>
    50c0:  2f 86 00 00   cmpwi   cr7,r6,0
    50c4:  41 9e 00 4c   beq     cr7,5110 <perf_copy_attr+0x368>
    50c8:  2f 86 00 01   cmpwi   cr7,r6,1
    50cc:  41 9e 00 2c   beq     cr7,50f8 <perf_copy_attr+0x350>
    50d0:  2f 86 00 02   cmpwi   cr7,r6,2
    50d4:  41 9e 00 14   beq     cr7,50e8 <perf_copy_attr+0x340>
    50d8:  7d 20 39 e4   mtsrin  r9,r7
    50dc:  39 29 01 11   addi    r9,r9,273
    50e0:  3c e7 10 00   addis   r7,r7,4096
    50e4:  55 29 02 06   rlwinm  r9,r9,0,8,3
    50e8:  7d 20 39 e4   mtsrin  r9,r7
    50ec:  39 29 01 11   addi    r9,r9,273
    50f0:  3c e7 10 00   addis   r7,r7,4096
    50f4:  55 29 02 06   rlwinm  r9,r9,0,8,3
    50f8:  7d 20 39 e4   mtsrin  r9,r7
    50fc:  3c e7 10 00   addis   r7,r7,4096
    5100:  39 29 01 11   addi    r9,r9,273
    5104:  7f 88 38 40   cmplw   cr7,r8,r7
    5108:  55 29 02 06   rlwinm  r9,r9,0,8,3
    510c:  40 9d 00 4c   ble     cr7,5158 <perf_copy_attr+0x3b0>
    5110:  7d 20 39 e4   mtsrin  r9,r7
    5114:  39 29 01 11   addi    r9,r9,273
    5118:  3c c7 10 00   addis   r6,r7,4096
    511c:  55 29 02 06   rlwinm  r9,r9,0,8,3
    5120:  7d 20 31 e4   mtsrin  r9,r6
    5124:  39 29 01 11   addi    r9,r9,273
    5128:  3c c6 10 00   addis   r6,r6,4096
    512c:  55 29 02 06   rlwinm  r9,r9,0,8,3
    5130:  7d 20 31 e4   mtsrin  r9,r6
    5134:  39 29 01 11   addi    r9,r9,273
    5138:  3c c7 30 00   addis   r6,r7,12288
    513c:  55 29 02 06   rlwinm  r9,r9,0,8,3
    5140:  7d 20 31 e4   mtsrin  r9,r6
    5144:  3c e7 40 00   addis   r7,r7,16384
    5148:  39 29 01 11   addi    r9,r9,273
    514c:  7f 88 38 40   cmplw   cr7,r8,r7
    5150:  55 29 02 06   rlwinm  r9,r9,0,8,3
    5154:  41 9d ff bc   bgt     cr7,5110 <perf_copy_attr+0x368>
    5158:  4c 00 01 2c   isync
    515c:  39 20 00 80   li      r9,128
    5160:  91 3d 00 00   stw     r9,0(r29)
    5164:  38 e0 00 00   li      r7,0
    5168:  90 e2 04 dc   stw     r7,1244(r2)
    516c:  7d 20 ed 26   mfsrin  r9,r29
    5170:  65 29 40 00   oris    r9,r9,16384
    5174:  40 81 00 c0   ble     5234 <perf_copy_attr+0x48c>
    5178:  7d 47 50 f8   not     r7,r10
    517c:  7c e7 42 14   add     r7,r7,r8
    5180:  54 e7 27 be   rlwinm  r7,r7,4,30,31
    5184:  7d 20 51 e4   mtsrin  r9,r10
    5188:  3d 4a 10 00   addis   r10,r10,4096
    518c:  39 29 01 11   addi    r9,r9,273
    5190:  7c 08 50 40   cmplw   r8,r10
    5194:  55 29 02 06   rlwinm  r9,r9,0,8,3
    5198:  40 81 00 9c   ble     5234 <perf_copy_attr+0x48c>
    519c:  2c 07 00 00   cmpwi   r7,0
    51a0:  41 82 00 4c   beq     51ec <perf_copy_attr+0x444>
    51a4:  2c 07 00 01   cmpwi   r7,1
    51a8:  41 82 00 2c   beq     51d4 <perf_copy_attr+0x42c>
    51ac:  2c 07 00 02   cmpwi   r7,2
    51b0:  41 82 00 14   beq     51c4 <perf_copy_attr+0x41c>
    51b4:  7d 20 51 e4   mtsrin  r9,r10
    51b8:  39 29 01 11   addi    r9,r9,273
    51bc:  3d 4a 10 00   addis   r10,r10,4096
    51c0:  55 29 02 06   rlwinm  r9,r9,0,8,3
    51c4:  7d 20 51 e4   mtsrin  r9,r10
    51c8:  39 29 01 11   addi    r9,r9,273
    51cc:  3d 4a 10 00   addis   r10,r10,4096
    51d0:  55 29 02 06   rlwinm  r9,r9,0,8,3
    51d4:  7d 20 51 e4   mtsrin  r9,r10
    51d8:  3d 4a 10 00   addis   r10,r10,4096
    51dc:  39 29 01 11   addi    r9,r9,273
    51e0:  7c 08 50 40   cmplw   r8,r10
    51e4:  55 29 02 06   rlwinm  r9,r9,0,8,3
    51e8:  40 81 00 4c   ble     5234 <perf_copy_attr+0x48c>
    51ec:  7d 20 51 e4   mtsrin  r9,r10
    51f0:  39 29 01 11   addi    r9,r9,273
    51f4:  3c ea 10 00   addis   r7,r10,4096
    51f8:  55 29 02 06   rlwinm  r9,r9,0,8,3
    51fc:  7d 20 39 e4   mtsrin  r9,r7
    5200:  39 29 01 11   addi    r9,r9,273
    5204:  3c e7 10 00   addis   r7,r7,4096
    5208:  55 29 02 06   rlwinm  r9,r9,0,8,3
    520c:  7d 20 39 e4   mtsrin  r9,r7
    5210:  39 29 01 11   addi    r9,r9,273
    5214:  3c ea 30 00   addis   r7,r10,12288
    5218:  55 29 02 06   rlwinm  r9,r9,0,8,3
    521c:  7d 20 39 e4   mtsrin  r9,r7
    5220:  3d 4a 40 00   addis   r10,r10,16384
    5224:  39 29 01 11   addi    r9,r9,273
    5228:  7c 08 50 40   cmplw   r8,r10
    522c:  55 29 02 06   rlwinm  r9,r9,0,8,3
    5230:  41 81 ff bc   bgt     51ec <perf_copy_attr+0x444>
    5234:  4c 00 01 2c   isync

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Export the ool handlers to fix build errors] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d9121f96a7c4302946839a0771f5d1daeeb6968c.1622708530.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/32s: Allow disabling KUAP at boot time  (Christophe Leroy)
PPC64 uses MMU features to enable/disable KUAP at boot time. But feature fixups are applied way too early on PPC32. Now that all KUAP related actions are in C following the conversion of KUAP initial setup and context switch in C, static branches can be used to enable/disable KUAP. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Export disable_kuap_key to fix build errors] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/cd79e8008455fba5395d099f9bb1305c039b931c.1622708530.git.christophe.leroy@csgroup.eu
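With everything in C, the boot-time switch reduces to a static key; a sketch (disable_kuap_key is the key named in the changelog, the surrounding helper is illustrative):

    static DEFINE_STATIC_KEY_FALSE(disable_kuap_key);

    static __always_inline bool kuap_is_disabled(void)
    {
    	return static_branch_unlikely(&disable_kuap_key);
    }

    /* every KUAP entry point can then bail out at near-zero cost
     * once the key has been patched: */
    static __always_inline void kuap_lock_example(void)
    {
    	if (kuap_is_disabled())
    		return;
    	/* ... real lock sequence ... */
    }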
2021-06-17  powerpc/32s: Allow disabling KUEP at boot time  (Christophe Leroy)
PPC64 uses MMU features to enable/disable KUEP at boot time. But feature fixups are applied way too early on PPC32. Now that all KUEP related actions are in C following the conversion of KUEP initial setup and context switch in C, static branches can be used to enable/disable KUEP. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7745a2c3a08ec46302920a3f48d1cb9b5469dbbb.1622708530.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/32s: Simplify calculation of segment register content  (Christophe Leroy)
The segment register has the VSID in bits 8-31. Bits 4-7 are reserved; there is no requirement to set them to 0. VSIDs are calculated from the VSID of SR0 by adding 0x111. Even with the highest possible VSID, which would be 0xFFFFF0, adding 16 times 0x111 results in 0x1001100, so the additions never carry past the reserved bits and there is no need to clear them after each calculation. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ddc1cfd2ec8f3b2395c6a4d7f2b0c1aa1b1e64fb.1622708530.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/32s: Convert switch_mmu_context() to C  (Christophe Leroy)
switch_mmu_context() does things that can easily be done in C. For updating user segments, we have update_user_segments(). As mentioned in commit b5efec00b671 ("powerpc/32s: Move KUEP locking/unlocking in C"), update_user_segments() has the loop unrolled, which is a significant performance gain. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/05c0875ad8220c03452c3a334946e207c6ca04d6.1622708530.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/32s: move CTX_TO_VSID() into mmu-hash.h  (Christophe Leroy)
In order to reuse it in switch_mmu_context(), this patch moves the CTX_TO_VSID() macro into asm/book3s/32/mmu-hash.h. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/26b36ef2939234a04b37baf6ffe50cba81f5d1b7.1622708530.git.christophe.leroy@csgroup.eu
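The macro is roughly the following (a sketch from memory of the 32s mmu context code, so treat the exact constants as an assumption); note how it implements the 0x111-per-segment stepping described two entries above:

    /* Sketch: VSID for segment 'id' (0-15) of context 'c' */
    #define CTX_TO_VSID(c, id)	((((c) * (897 * 16)) + ((id) * 0x111)) & 0xffffff)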
2021-06-17  powerpc/32s: Refactor update of user segment registers  (Christophe Leroy)
KUEP implements the update of user segment registers. Move it into mmu-hash.h in order to use it from other places, and inline kuep_lock() and kuep_unlock(). Inlining kuep_lock() is important for system_call_exception(): otherwise system_call_exception() has to save the system call parameters, which are used just after, onto the stack, and doing that takes more instructions than kuep_lock() itself. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/24591ca480d14a62ef910e38a5273d551262c4a2.1622708530.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/8xx: Allow disabling KUAP at boot time  (Christophe Leroy)
PPC64 uses MMU features to enable/disable KUAP at boot time. But feature fixups are applied way too early on PPC32. But since commit c16728835eec ("powerpc/32: Manage KUAP in C"), all KUAP is in C so it is now possible to use static branches. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3dca510ce555335261a47c4799167da698f569c0.1622782111.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/44x: Implement Kernel Userspace Exec Protection (KUEP)  (Christophe Leroy)
Powerpc 44x has two bits for exec protection in TLBs: one for user (UX) and one for supervisor (SX). Clear SX on user pages in the TLB miss handlers to provide KUEP. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/169310e08152aa1d96c979770291d165ec6896ae.1622616032.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/optprobes: use PPC_RAW_ macros  (Christophe Leroy)
Use PPC_RAW_ macros to simplify the code. And use PPC_LO()/PPC_HI() instead of IMM_L()/IMM_H(), which are for internal use inside ppc-opcode.h. Those macros are self-explanatory, so the comments can go as well. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5a167b8ba4d33a5c09cd504f0c862e25ffe85459.1621516826.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc/inst: Refactor PPC32 and PPC64 versions  (Christophe Leroy)
ppc_inst(), ppc_inst_prefixed() and ppc_inst_swab() can easily be made common to both PPC32 and PPC64. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d54c63dcac6d190e1cc0d2fe3259d6e621928cdf.1621516826.git.christophe.leroy@csgroup.eu
2021-06-17  powerpc: Don't use 'struct ppc_inst' to reference instruction location  (Christophe Leroy)
'struct ppc_inst' is an internal representation of an instruction, but in-memory instructions are and will remain a table of 'u32' forever. Replace all 'struct ppc_inst *' used for locating an instruction in memory by 'u32 *'. This removes a lot of undue casts to 'struct ppc_inst *'. It also helps locate abuse of 'struct ppc_inst' dereferences. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Fix ppc_inst_next(), use u32 instead of unsigned int] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/7062722b087228e42cbd896e39bfdf526d6a340a.1621516826.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/lib/code-patching: Make instr_is_branch_to_addr() static  (Christophe Leroy)
instr_is_branch_to_addr() is only used in code-patching.c. Make it static. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5f6b9c8c83170ed310953eac2f5b14539bfc964a.1621516826.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/inst: Avoid pointer dereferencing in ppc_inst_equal()  (Christophe Leroy)
Avoid casting/dereferencing ppc_inst() as 'u64 *'; check each member of the struct when relevant. And remove the 0xff initialisation of the suffix for non-prefixed instructions. An instruction with 0xff as a suffix might be invalid, but it is still a prefixed instruction and has to be treated as one. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/d8b155e930b7a9708ca110e8ff0ace6713a7af75.1621516826.git.christophe.leroy@csgroup.eu
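The member-wise comparison described above looks roughly like this (a sketch of the prefixed-capable version, built from the existing ppc_inst_* accessors):

    static inline bool ppc_inst_equal(struct ppc_inst x, struct ppc_inst y)
    {
    	if (ppc_inst_val(x) != ppc_inst_val(y))
    		return false;
    	if (!ppc_inst_prefixed(x))
    		return true;
    	/* only compare suffixes when the prefix says there is one */
    	return ppc_inst_suffix(x) == ppc_inst_suffix(y);
    }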
2021-06-16  powerpc/inst: Improve readability of get_user_instr() and friends  (Christophe Leroy)
Remove unneeded line splits. And remove unneeded local variable initialisation. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/fb097fda78cc6852905ef00f8f7bf371b6cc66f7.1621516826.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/inst: Reduce casts in get_user_instr()  (Christophe Leroy)
Declare __gui_ptr as 'u32 *' instead of casting it at each use to 'unsigned int *' (which is an equivalent type). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Use u32 * instead of unsigned int *] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/2c2123998e05535d08ba03a96ea1eea921d06a86.1621516826.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/inst: Fix sparse detection on get_user_instr()  (Christophe Leroy)
get_user_instr() lacks sparse detection for the __user tag, because __gui_ptr is assigned with a cast. Fix that by adding a __chk_user_ptr(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0320e5b41a794fd456ab8c5993bbfadcf9e1d8b4.1621516826.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc: Replace PPC_INST_NOP by PPC_RAW_NOP()  (Christophe Leroy)
On the road to removing all PPC_INST_xx defines in asm/ppc-opcode.h, change PPC_INST_NOP to PPC_RAW_NOP(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ad46c195ca1b8572629ef07ba6bfe247585239a6.1621506159.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/traps: Start using PPC_RAW_xx() macros  (Christophe Leroy)
Start using PPC_RAW_xx() macros where relevant. PPC_INST_SYNC is used both to represent the 'sync' instruction and the family of synchronisation instructions. Keep it for the latter; maybe we'll change the name in the future to avoid confusion. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0945c155d6cb113431185fc1296ac127359fe29b.1621506159.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/lib/feature-fixups: Use PPC_RAW_xxx() macros  (Christophe Leroy)
Use PPC_RAW_xxx() macros instead of open coding assembly opcodes. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Fix bad conversion in do_stf_exit_barrier_fixups()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e79cd8e111ca13bf8c61a384bac365aa7e207647.1621506159.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/ebpf64: Use PPC_RAW_MFLR()  (Christophe Leroy)
Use PPC_RAW_MFLR() instead of open coding with PPC_INST_MFLR. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c1887623e91e8b4da36e669e4c74de86320a5092.1621506159.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/security: Use PPC_RAW_BLR() and PPC_RAW_NOP()  (Christophe Leroy)
On the road to remove all use of PPC_INST_xxx, replace PPC_INST_BLR by PPC_RAW_BLR(). Same for PPC_INST_NOP. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/c04f88d0e53d2122fbbe92226892a01ebc668b6a.1621506159.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/modules: Use PPC_RAW_xx() macros  (Christophe Leroy)
To improve readability, use PPC_RAW_xx() macros instead of open coding. Those macros are self-explanatory so the comments can go as well. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/99d9ee8849d3992beeadb310a665aae01c3abfb1.1621506159.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/signal: Use PPC_RAW_xx() macros  (Christophe Leroy)
To improve readability, use PPC_RAW_xx() macros instead of open coding. Those macros are self-explanatory so the comments can go as well. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/4ca2bfdca2f47a293d05f61eb3c4e487ee170f1f.1621506159.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/lib/code-patching: Use PPC_RAW_() macros  (Christophe Leroy)
Instead of open coding with PPC_INST_ defines, use PPC_RAW_() macros. It improves readability. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8c92f1d9e825ee47c6f88fe43ad42d2a8cc2ab4a.1621506159.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc/opcodes: Add shorter macros for registers for use with PPC_RAW_xx()  (Christophe Leroy)
Today we have __REG_Rx macros. They are mainly meant for internal use by __PPC_RA() and friends, which allow uses like __PPC_RA(R12). When used with PPC_RAW_xx() macros, the result is not very readable. Add shorter _Rx macros in order to improve readability when used with PPC_RAW_xx() macros. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ec34d92b7c2f810622261acfeeed4b0a0f4d01bd.1621506159.git.christophe.leroy@csgroup.eu
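For example, encoding an 'add r3,r4,r5' (an illustrative use of the existing macros, before and after this change):

    ppc_inst(PPC_RAW_ADD(__REG_R3, __REG_R4, __REG_R5));	/* before */
    ppc_inst(PPC_RAW_ADD(_R3, _R4, _R5));			/* after  */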
2021-06-16  powerpc: Rework PPC_RAW_xxx() macros for prefixed instructions  (Christophe Leroy)
At the time being, we have PPC_RAW_PLXVP() and PPC_RAW_PSTXVP(), which provide a 64-bit value that then gets split by open coding to format it into a 'struct ppc_inst' instruction. Instead, define a PPC_RAW_xxx_P() and a PPC_RAW_xxx_S() to be used as is. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/5d146b31b943e7ad674894421db4feef54804b9b.1621506159.git.christophe.leroy@csgroup.eu
2021-06-16  powerpc: Force inlining of csum_add()  (Christophe Leroy)
Commit 328e7e487a46 ("powerpc: force inlining of csum_partial() to avoid multiple csum_partial() with GCC10") inlined csum_partial(). Now that csum_partial() is inlined, GCC outlines csum_add() when called by csum_partial():

    c064fb28 <csum_add>:
    c064fb28:  7c 63 20 14   addc    r3,r3,r4
    c064fb2c:  7c 63 01 94   addze   r3,r3
    c064fb30:  4e 80 00 20   blr

    c0665fb8 <csum_add>:
    c0665fb8:  7c 63 20 14   addc    r3,r3,r4
    c0665fbc:  7c 63 01 94   addze   r3,r3
    c0665fc0:  4e 80 00 20   blr

    c066719c:  7c 9a c0 2e   lwzx    r4,r26,r24
    c06671a0:  38 60 00 00   li      r3,0
    c06671a4:  7f 1a c2 14   add     r24,r26,r24
    c06671a8:  4b ff ee 11   bl      c0665fb8 <csum_add>
    c06671ac:  80 98 00 04   lwz     r4,4(r24)
    c06671b0:  4b ff ee 09   bl      c0665fb8 <csum_add>
    c06671b4:  80 98 00 08   lwz     r4,8(r24)
    c06671b8:  4b ff ee 01   bl      c0665fb8 <csum_add>
    c06671bc:  a0 98 00 0c   lhz     r4,12(r24)
    c06671c0:  4b ff ed f9   bl      c0665fb8 <csum_add>
    c06671c4:  7c 63 18 f8   not     r3,r3
    c06671c8:  81 3f 00 68   lwz     r9,104(r31)
    c06671cc:  81 5f 00 a0   lwz     r10,160(r31)
    c06671d0:  7d 29 18 14   addc    r9,r9,r3
    c06671d4:  7d 29 01 94   addze   r9,r9
    c06671d8:  91 3f 00 68   stw     r9,104(r31)
    c06671dc:  7d 1a 50 50   subf    r8,r26,r10
    c06671e0:  83 01 00 10   lwz     r24,16(r1)
    c06671e4:  83 41 00 18   lwz     r26,24(r1)

The sum with 0 is useless and should have been skipped. And there is even one completely unused instance of csum_add():

    In file included from ./include/net/checksum.h:22,
                     from ./include/linux/skbuff.h:28,
                     from ./include/linux/icmp.h:16,
                     from net/ipv6/ip6_tunnel.c:23:
    ./arch/powerpc/include/asm/checksum.h: In function '__ip6_tnl_rcv':
    ./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
       94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
          |                      ^~~~~~~~
    ./arch/powerpc/include/asm/checksum.h:172:31: note: called from here
      172 |         sum = csum_add(sum, (__force __wsum)*(const u32 *)buff);
          |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
       94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
          |                      ^~~~~~~~
    ./arch/powerpc/include/asm/checksum.h:177:31: note: called from here
      177 |         sum = csum_add(sum, (__force __wsum)
          |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      178 |                        *(const u32 *)(buff + 4));
          |                        ~~~~~~~~~~~~~~~~~~~~~~~~~
    ./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
       94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
          |                      ^~~~~~~~
    ./arch/powerpc/include/asm/checksum.h:183:31: note: called from here
      183 |         sum = csum_add(sum, (__force __wsum)
          |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      184 |                        *(const u32 *)(buff + 8));
          |                        ~~~~~~~~~~~~~~~~~~~~~~~~~
    ./arch/powerpc/include/asm/checksum.h:94:22: warning: inlining failed in call to 'csum_add': call is unlikely and code size would grow [-Winline]
       94 | static inline __wsum csum_add(__wsum csum, __wsum addend)
          |                      ^~~~~~~~
    ./arch/powerpc/include/asm/checksum.h:186:31: note: called from here
      186 |         sum = csum_add(sum, (__force __wsum)
          |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      187 |                        *(const u16 *)(buff + 12));
          |                        ~~~~~~~~~~~~~~~~~~~~~~~~~~

Force inlining of csum_add():

    94c:  80 df 00 a0   lwz     r6,160(r31)
    950:  7d 28 50 2e   lwzx    r9,r8,r10
    954:  7d 48 52 14   add     r10,r8,r10
    958:  80 aa 00 04   lwz     r5,4(r10)
    95c:  80 ff 00 68   lwz     r7,104(r31)
    960:  7d 29 28 14   addc    r9,r9,r5
    964:  7d 29 01 94   addze   r9,r9
    968:  7d 08 30 50   subf    r8,r8,r6
    96c:  80 aa 00 08   lwz     r5,8(r10)
    970:  a1 4a 00 0c   lhz     r10,12(r10)
    974:  7d 29 28 14   addc    r9,r9,r5
    978:  7d 29 01 94   addze   r9,r9
    97c:  7d 29 50 14   addc    r9,r9,r10
    980:  7d 29 01 94   addze   r9,r9
    984:  7d 29 48 f8   not     r9,r9
    988:  7c e7 48 14   addc    r7,r7,r9
    98c:  7c e7 01 94   addze   r7,r7
    990:  90 ff 00 68   stw     r7,104(r31)

In the non-inlined version, the first sum with 0 was performed. Here it is skipped. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f7f4d4e364de6e473da874468b903da6e5d97adc.1620713272.git.christophe.leroy@csgroup.eu
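For reference, the helper in question is the two-instruction carry-propagating add visible in the listings above, and the fix is to tag it __always_inline. The 32-bit variant sketched (modulo the exact asm constraints used in asm/checksum.h):

    static __always_inline __wsum csum_add(__wsum csum, __wsum addend)
    {
    	/* add with carry, then fold the carry bit back in */
    	asm("addc %0,%0,%1\n\t"
    	    "addze %0,%0"
    	    : "+r" (csum) : "r" (addend));
    	return csum;
    }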