path: root/arch/powerpc/lib
2015-03-16  powerpc: Change vsrX register defines to vsX to match gcc and glibc  (Anton Blanchard)
As our various loops (copy, string, crypto etc) get more complicated, we want to share implementations between userspace (eg glibc) and the kernel. We also want to write userspace test harnesses to put in tools/testing/selftest. One gratuitous difference between userspace and the kernel is the VSX register definitions - the kernel uses vsrX whereas gcc uses vsX. Change the kernel to match userspace. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-03-16  powerpc: Change vrX register defines to vX to match gcc and glibc  (Anton Blanchard)
As our various loops (copy, string, crypto etc) get more complicated, we want to share implementations between userspace (eg glibc) and the kernel. We also want to write userspace test harnesses to put in tools/testing/selftest. One gratuitous difference between userspace and the kernel is the VMX register definitions - the kernel uses vrX whereas both gcc and glibc use vX. Change the kernel to match userspace. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
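Both renames above are the same mechanical change to the register defines. A sketch of the before/after, assuming ppc_asm.h-style defines (the file and exact macro style are assumptions, not the actual patch):

    #define v0    0     /* was: #define vr0  0   -- VMX */
    #define v1    1     /* was: #define vr1  1          */
    /* ... one define per VMX register, v0 through v31  */
    #define vs0   0     /* was: #define vsr0 0   -- VSX */
    #define vs1   1     /* was: #define vsr1 1          */
    /* ... one define per VSX register, vs0 through vs63 */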
2015-01-28  powerpc/lib: Makefile, use obj64-y to consolidate 64-bit rules  (Michael Ellerman)
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-01-28  powerpc/lib: Makefile, consolidate obj-y sections  (Michael Ellerman)
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2015-01-23  powerpc: Add 64bit optimised memcmp  (Anton Blanchard)
I noticed ksm spending quite a lot of time in memcmp on a large KVM box. The current memcmp loop is very unoptimised: byte-at-a-time compares with no loop unrolling. We can do much, much better. Optimise the loop in a few ways:

- Unroll the byte-at-a-time loop.
- For large (at least 32 byte) comparisons that are also 8 byte aligned, use an unrolled, modulo-scheduled loop using 8 byte loads. This is similar to our glibc memcmp.

A simple microbenchmark testing 10000000 iterations of an 8192 byte memcmp was used to measure the performance:

  baseline: 29.93 s
  modified:  1.70 s

Just over 17x faster.

v2: Incorporated some suggestions from Segher:
- Use andi. instead of rldicl.
- Convert bdnzt eq, to bdnz. It was just duplicating the earlier compare and was a relic from a previous version.
- Don't use cr5; we have plans to use that CR field for fast local atomics.

Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
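The strategy reads naturally in C. A minimal sketch, assuming 8-byte-aligned inputs as the fast path above requires (illustrative only; the real routine is hand-unrolled assembly):

    #include <stddef.h>
    #include <stdint.h>

    /* Compare doublewords while they agree, then locate the first
     * differing byte with a bytewise tail, which preserves memcmp's
     * byte-order semantics on both endiannesses. */
    static int memcmp_words(const void *a, const void *b, size_t n)
    {
        const uint64_t *wa = a, *wb = b;

        while (n >= 8 && *wa == *wb) {
            wa++;
            wb++;
            n -= 8;
        }

        const unsigned char *ca = (const unsigned char *)wa;
        const unsigned char *cb = (const unsigned char *)wb;

        while (n--) {
            if (*ca != *cb)
                return *ca - *cb;
            ca++;
            cb++;
        }
        return 0;
    }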
2014-12-29  powerpc/lib: Do not include string.o in obj-y twice  (Andreas Ruprecht)
In the Makefile, string.o (which is generated from string.S) is included into the list of objects being built unconditionally (obj-y) in line 12. Additionally, if CONFIG_PPC64 is set, it is included again in line 17. This patch removes the latter unnecessary inclusion. Signed-off-by: Andreas Ruprecht <rupran@einserver.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-12-12  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial  (Linus Torvalds)
Pull trivial tree update from Jiri Kosina: "Usual stuff: documentation updates, printk() fixes, etc"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (24 commits)
  intel_ips: fix a type in error message
  cpufreq: cpufreq-dt: Move newline to end of error message
  ps3rom: fix error return code
  treewide: fix typo in printk and Kconfig
  ARM: dts: bcm63138: change "interupts" to "interrupts"
  Replace mentions of "list_struct" to "list_head"
  kernel: trace: fix printk message
  scsi: mpt2sas: fix ioctl in comment
  zbud, zswap: change module author email
  clocksource: Fix 'clcoksource' typo in comment
  arm: fix wording of "Crotex" in CONFIG_ARCH_EXYNOS3 help
  gpio: msm-v1: make boolean argument more obvious
  usb: Fix typo in usb-serial-simple.c
  PCI: Fix comment typo 'COMFIG_PM_OPS'
  powerpc: Fix comment typo 'CONIFG_8xx'
  powerpc: Fix comment typos 'CONFiG_ALTIVEC'
  clk: st: Spelling s/stucture/structure/
  isci: Spelling s/stucture/structure/
  usb: gadget: zero: Spelling s/infrastucture/infrastructure/
  treewide: Fix company name in module descriptions
  ...
2014-11-20  Merge Linus' tree to be able to apply submitted patches to newer code than current trivial.git base  (Jiri Kosina)
2014-11-19  powerpc: Remove more traces of bootmem  (Michael Ellerman)
Although we are now selecting NO_BOOTMEM, we still have some traces of bootmem lying around. That is because even with NO_BOOTMEM there is still a shim that converts bootmem calls into memblock calls, but ultimately we want to remove all traces of bootmem.

Most of the patch is conversions from alloc_bootmem() to memblock_virt_alloc(). In general a call such as:

  p = (struct foo *)alloc_bootmem(x);

becomes:

  p = memblock_virt_alloc(x, 0);

We don't need the cast because memblock_virt_alloc() returns a void *. The alignment value of zero tells memblock to use the default alignment, which is SMP_CACHE_BYTES, the same value alloc_bootmem() uses.

We remove a number of NULL checks on the result of memblock_virt_alloc(). That is because memblock_virt_alloc() will panic if it can't allocate, in exactly the same way as alloc_bootmem(), so the NULL checks are and always have been redundant.

The memory returned by memblock_virt_alloc() is already zeroed, so we remove several memsets of the result of memblock_virt_alloc().

Finally we convert a few uses of __alloc_bootmem(x, y, MAX_DMA_ADDRESS) to just plain memblock_virt_alloc(). We don't use memblock_alloc_base() because MAX_DMA_ADDRESS is ~0ul on powerpc, so limiting the allocation to that is pointless; 16EB ought to be enough for anyone.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-11-12  powerpc: Fix compilation of emulate_step()  (Paul Mackerras)
Commit be96f63375a1 ("powerpc: Split out instruction analysis part of emulate_step()") added some calls to do_fp_load() and do_fp_store(), which fail to compile on configs with CONFIG_PPC_FPU=n and CONFIG_PPC_EMULATE_SSTEP=y. This fixes the compile by adding #ifdef CONFIG_PPC_FPU around the code that calls these functions. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
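The shape of the fix, as a self-contained toy (the real patch guards the actual do_fp_load()/do_fp_store() call sites in sstep.c; names here are stand-ins):

    /* Toy model: guard both the FP helper and its callers, so that a
     * CONFIG_PPC_FPU=n build neither defines nor references it. */
    #ifdef CONFIG_PPC_FPU
    static int do_fp_load_stub(int reg) { return reg; }   /* stand-in */
    #endif

    static int emulate_fp_load(int reg)
    {
    #ifdef CONFIG_PPC_FPU
        return do_fp_load_stub(reg);
    #else
        return -1;          /* cannot emulate FP without an FPU */
    #endif
    }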
2014-11-10  powerpc: Remove unused devm_ioremap_prot()  (Kyle McMartin)
Added in 2008, but has never had any in-tree users, and no other architectures provide it. Signed-off-by: Kyle McMartin <kyle@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-10-29  powerpc: Fix comment typos 'CONFiG_ALTIVEC'  (Paul Bolle)
Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2014-09-25  powerpc: Implement emulation of string loads and stores  (Paul Mackerras)
The size field of the op.type word is now the total number of bytes to be loaded or stored. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-09-25  powerpc: Emulate icbi, mcrf and conditional-trap instructions  (Paul Mackerras)
This extends the instruction emulation done by analyse_instr() and emulate_step() to handle a few more instructions that are found in the kernel. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-09-25  powerpc: Split out instruction analysis part of emulate_step()  (Paul Mackerras)
This splits out the instruction analysis part of emulate_step() into a separate analyse_instr() function, which decodes the instruction, but doesn't execute any load or store instructions. It does execute integer instructions and branches which can be executed purely by updating register values in the pt_regs struct. For other instructions, it returns the instruction type and other details in a new instruction_op struct. emulate_step() then uses that information to execute loads, stores, cache operations, mfmsr, mtmsr[d], and (on 64-bit) sc instructions. The reason for doing this is so that the KVM code can use it instead of having its own separate instruction emulation code. Possibly the alignment interrupt handler could also use this. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
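A compact sketch of the new division of labour (struct fields and the return convention are simplified guesses; the real definitions live in the kernel headers):

    enum op_type { OP_LOAD, OP_STORE, OP_CACHEOP };

    struct instruction_op {
        enum op_type type;      /* kind of side effect required */
        int reg;                /* register operand */
        unsigned long ea;       /* effective address, if any */
    };

    /* Decode only: register-to-register instructions and branches are
     * executed via pt_regs updates inside; everything touching memory
     * is described in *op. Returns nonzero when fully handled. */
    int analyse_instr(struct instruction_op *op, unsigned int instr);

    int emulate_step_sketch(unsigned int instr)
    {
        struct instruction_op op;

        if (analyse_instr(&op, instr))
            return 1;           /* done: only pt_regs was touched */

        switch (op.type) {      /* now perform the side effect */
        case OP_LOAD:           /* ... load into op.reg from op.ea */
        case OP_STORE:          /* ... store op.reg to op.ea */
        case OP_CACHEOP:        /* ... flush/invalidate at op.ea */
            break;
        }
        return 1;
    }

KVM can then call analyse_instr() alone and perform the described operation its own way.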
2014-09-25  powerpc: Make a bunch of things static  (Anton Blanchard)
Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-09-25  powerpc: Move lib symbol exports into arch/powerpc/lib/ppc_ksyms.c  (Anton Blanchard)
Move the lib symbol exports closer to their function definitions Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2014-08-13  powerpc: Add smp_mb()s to arch_spin_unlock_wait()  (Michael Ellerman)
Similar to the previous commit which described why we need to add a barrier to arch_spin_is_locked(), we have a similar problem with spin_unlock_wait(). We need a barrier on entry to ensure any spinlock we have previously taken is visibly locked prior to the load of lock->slock. It's also not clear if spin_unlock_wait() is intended to have ACQUIRE semantics. For now be conservative and add a barrier on exit to give it ACQUIRE semantics. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
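In kernel-style pseudo-C, the described change has this shape (a sketch only; the real ppc64 implementation also has shared-processor yielding logic):

    static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
    {
        smp_mb();       /* order prior lock acquisitions before the
                         * reads of lock->slock below */
        while (arch_spin_is_locked(lock))
            cpu_relax();
        smp_mb();       /* conservatively give the wait ACQUIRE
                         * semantics on exit */
    }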
2014-07-28  powerpc: Remove power3 from comments  (Michael Ellerman)
There are still a few occurrences where it remains, because it helps to explain something that persists. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-22  powerpc: use _GLOBAL_TOC for memmove  (Li Zhong)
memmove may be called from module code, copy_pages() in btrfs, and it may call memcpy, which may call back to C code, so it needs to use _GLOBAL_TOC to set up r2 correctly. This fixes the following error when I tried to boot an LE guest:

  Vector: 300 (Data Access) at [c000000073f97210]
      pc: c000000000015004: enable_kernel_altivec+0x24/0x80
      lr: c000000000058fbc: enter_vmx_copy+0x3c/0x60
      sp: c000000073f97490
     msr: 8000000002009033
     dar: d000000001d50170
   dsisr: 40000000
    current = 0xc0000000734c0000
    paca    = 0xc00000000fff0000   softe: 0   irq_happened: 0x01
    pid   = 815, comm = mktemp
  enter ? for help
  [c000000073f974f0] c000000000058fbc enter_vmx_copy+0x3c/0x60
  [c000000073f97510] c000000000057d34 memcpy_power7+0x274/0x840
  [c000000073f97610] d000000001c3179c copy_pages+0xfc/0x110 [btrfs]
  [c000000073f97660] d000000001c3c248 memcpy_extent_buffer+0xe8/0x160 [btrfs]
  [c000000073f97700] d000000001be4be8 setup_items_for_insert+0x208/0x4a0 [btrfs]
  [c000000073f97820] d000000001be50b4 btrfs_insert_empty_items+0xf4/0x140 [btrfs]
  [c000000073f97890] d000000001bfed30 insert_with_overflow+0x70/0x180 [btrfs]
  [c000000073f97900] d000000001bff174 btrfs_insert_dir_item+0x114/0x2f0 [btrfs]
  [c000000073f979a0] d000000001c1f92c btrfs_add_link+0x10c/0x370 [btrfs]
  [c000000073f97a40] d000000001c20e94 btrfs_create+0x204/0x270 [btrfs]
  [c000000073f97b00] c00000000026d438 vfs_create+0x178/0x210
  [c000000073f97b50] c000000000270a70 do_last+0x9f0/0xe90
  [c000000073f97c20] c000000000271010 path_openat+0x100/0x810
  [c000000073f97ce0] c000000000272ea8 do_filp_open+0x58/0xd0
  [c000000073f97dc0] c00000000025ade8 do_sys_open+0x1b8/0x300
  [c000000073f97e30] c00000000000a008 syscall_exit+0x0/0x7c

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-07-22  powerpc: Fix bugs in emulate_step()  (Paul Mackerras)
This fixes some bugs in emulate_step(). First, the setting of the carry bit for the arithmetic right-shift instructions was not correct on 64-bit machines because we were masking with a mask of type int rather than unsigned long. Secondly, the sld (shift left doubleword) instruction was using the wrong instruction field for the register containing the shift count. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
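The first bug in miniature (names invented for illustration; the real code emulates sraw[i]/srad[i] in sstep.c):

    /* Carry for an arithmetic right shift: set when the value is
     * negative and any 1 bits are shifted out. Building the mask as
     * int truncated it to 32 bits on 64-bit machines; unsigned long
     * is required. */
    static int sra_carry(long value, int shift)
    {
        unsigned long mask = (1UL << shift) - 1;   /* was: (1 << shift) - 1 */

        return value < 0 && (value & mask) != 0;
    }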
2014-06-11  powerpc: fix typo 'CONFIG_PPC_CPU'  (Paul Bolle)
Commit cd64d1697cf0 ("powerpc: mtmsrd not defined") added a check for CONFIG_PPC_CPU where a check for CONFIG_PPC_FPU was clearly intended. Fixes: cd64d1697cf0 ("powerpc: mtmsrd not defined") Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-06-05  powerpc: Exported functions __clear_user and copy_page use r2 so need _GLOBAL_TOC()  (Anton Blanchard)
__clear_user and copy_page load from the TOC and are also exported to modules. This means we have to use _GLOBAL_TOC() so that we create the global entry point that sets up the TOC. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-05-05  Merge remote-tracking branch 'anton/abiv2' into next  (Benjamin Herrenschmidt)
This series adds support for building the powerpc 64-bit LE kernel using the new ABI v2. We already supported running ABI v2 userspace programs but this adds support for building the kernel itself using the new ABI.
2014-04-30  powerpc: memcpy optimization for 64bit LE  (Philippe Bergheaud)
Unaligned stores take alignment exceptions on POWER7 running in little-endian. This is a dumb little-endian base memcpy that prevents unaligned stores. Once booted the feature fixup code switches over to the VMX copy loops (which are already endian safe). The question is what we do before that switch over. The base 64bit memcpy takes alignment exceptions on POWER7 so we can't use it as is. Fixing the causes of alignment exception would slow it down, because we'd need to ensure all loads and stores are aligned either through rotate tricks or bytewise loads and stores. Either would be bad for all other 64bit platforms. [ I simplified the loop a bit - Anton ] Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
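A sketch of the interim approach in C (the real routine is assembly; this just shows the aligned-store idea, assuming unaligned loads are tolerable while unaligned stores trap):

    #include <stddef.h>
    #include <stdint.h>

    static void *memcpy_aligned_stores(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        uint64_t v;

        /* bytewise until the destination is naturally aligned */
        while (n && ((uintptr_t)d & 7)) {
            *d++ = *s++;
            n--;
        }
        /* doubleword copies: the store side is now always aligned */
        while (n >= 8) {
            __builtin_memcpy(&v, s, 8);    /* load may be unaligned */
            __builtin_memcpy(d, &v, 8);    /* store is aligned */
            d += 8;
            s += 8;
            n -= 8;
        }
        while (n--)
            *d++ = *s++;
        return dst;
    }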
2014-04-23  powerpc: Add _GLOBAL_TOC for ABIv2 assembly functions exported to modules  (Anton Blanchard)
If an assembly function that calls back into C code is exported to modules, we need to ensure r2 is set up correctly. There are only two places crazy enough to do it (both of which are my fault). Signed-off-by: Anton Blanchard <anton@samba.org>
2014-04-23  powerpc: Fix unsafe accesses to parameter area in ELFv2  (Ulrich Weigand)
Some of the assembler files in lib/ make use of the fact that in the ELFv1 ABI, the caller guarantees to provide stack space to save the parameter registers r3 ... r10. This guarantee is no longer present in ELFv2 for functions that have no variable argument list and no more than 8 arguments. Change the affected routines to temporarily store registers in the red zone and/or the top of their own stack frame (in the space provided to save r31 .. r29, which is actually not used in these routines). In opal_query_takeover, simply always allocate a stack frame; the routine is not performance critical. Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org>
2014-04-23  powerpc: Fix ABIv2 issues with stack offsets in assembly code  (Anton Blanchard)
Fix STK_PARAM and use it instead of hardcoding ABIv1 offsets. Signed-off-by: Anton Blanchard <anton@samba.org>
2014-04-23  powerpc: No need to use dot symbols when branching to a function  (Anton Blanchard)
binutils is smart enough to know that a branch to a function descriptor is actually a branch to the function's text address. Alan tells me that binutils has been doing this for 9 years. Signed-off-by: Anton Blanchard <anton@samba.org>
2014-03-07  selftests/powerpc: Import Anton's memcpy / copy_tofrom_user tests  (Michael Ellerman)
Turn Anton's memcpy / copy_tofrom_user test into something that can live in tools/testing/selftests. It requires one turd in arch/powerpc/lib/memcpy_64.S, but it's pretty harmless IMHO. We are sailing very close to the wind with the feature macros. We define them to nothing, which currently means we get a few extra nops and include the unaligned calls. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2014-01-15  powerpc: Add vr save/restore functions  (Andreas Schwab)
GCC 4.8 now generates out-of-line vr save/restore functions when optimizing for size. They are needed for the raid6 altivec support. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-30  Merge branch 'merge' into next  (Benjamin Herrenschmidt)
Merge a pile of fixes that went into the "merge" branch (3.13-rc's) such as Anton's Little Endian fixes. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-12-30  powerpc: Make 64-bit non-VMX __copy_tofrom_user bi-endian  (Paul E. McKenney)
The powerpc 64-bit __copy_tofrom_user() function uses shifts to handle unaligned invocations. However, these shifts were designed for big-endian systems: On little-endian systems, they must shift in the opposite direction. This commit relies on the C preprocessor to insert the correct shifts into the assembly code. [ This is a rare but nasty LE issue. Most of the time we use the POWER7 optimised __copy_tofrom_user_power7 loop, but when it hits an exception we fall back to the base __copy_tofrom_user loop. - Anton ] Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
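The gist, sketched with C macros (the commit does the equivalent in the assembly via the preprocessor; the macro name and exact shift amounts here are illustrative):

    /* Combining two doublewords of a misaligned source: the byte
     * significance flips between endiannesses, so the shift
     * directions must flip too. 'off' is the misalignment in bits. */
    #ifdef __LITTLE_ENDIAN__
    #define MERGE_DWORDS(lo, hi, off) \
        (((lo) >> (off)) | ((hi) << (64 - (off))))
    #else
    #define MERGE_DWORDS(lo, hi, off) \
        (((lo) << (off)) | ((hi) >> (64 - (off))))
    #endif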
2013-12-02  powerpc: Move the patch_exception to a common place  (Kevin Hao)
So that it can be used by other code. No functional change. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-30  powerpc: Add VMX optimised xor for RAID5  (Anton Blanchard)
Add a VMX optimised xor, used primarily for RAID5. On a POWER7 blade this is a decent win:

  32regs  : 17932.800 MB/sec
  altivec : 19724.800 MB/sec

The bigger gain is when the same test is run in SMT4 mode, as it would be if there was a lot of work going on:

  8regs   : 8377.600 MB/sec
  altivec : 15801.600 MB/sec

I tested this against an array created without the patch, and also verified it worked as expected on a little endian kernel. [ Fix !CONFIG_ALTIVEC build -- BenH ] Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
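A minimal standalone flavour of the idea using Altivec intrinsics (a sketch, not the kernel routine; in-kernel use additionally needs enable_kernel_altivec() with preemption disabled):

    #include <altivec.h>

    /* XOR src into dst, 16 bytes per iteration. Assumes 16-byte
     * aligned buffers and a byte count that is a multiple of 16. */
    static void xor_vmx_sketch(unsigned long bytes,
                               unsigned long *dst,
                               const unsigned long *src)
    {
        vector unsigned char *d = (vector unsigned char *)dst;
        const vector unsigned char *s = (const vector unsigned char *)src;
        unsigned long lines = bytes / 16;

        while (lines--) {
            *d = vec_xor(*d, *s);
            d++;
            s++;
        }
    }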
2013-10-30  powerpc: Fix Unaligned LE Floating Point Loads and Stores  (Tom Musta)
This patch addresses unaligned single precision floating point loads and stores in the single-step code. The old implementation improperly treated an 8 byte structure as an array of two 4 byte words, which is a classic little endian bug. Signed-off-by: Tom Musta <tmusta@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-30  powerpc: Fix Unaligned Loads and Stores  (Tom Musta)
This patch modifies the unaligned access routines of the sstep.c module so that they properly reverse the bytes of storage operands in the little endian kernel. This is implemented by breaking an unaligned little endian access into a combination of single byte accesses plus an overall byte reversal operation. Signed-off-by: Tom Musta <tmusta@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
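The scheme in miniature, as self-contained C (the helper name and structure are invented for illustration):

    /* Read an nb-byte unaligned operand as single bytes, then apply
     * one overall byte reversal so the result is correct for the
     * little-endian kernel. */
    static unsigned long read_le_unaligned(const unsigned char *p, int nb)
    {
        unsigned long val = 0, out = 0;
        int i;

        for (i = 0; i < nb; i++)            /* byte-at-a-time fetch */
            val = (val << 8) | p[i];

        for (i = 0; i < nb; i++) {          /* overall byte reversal */
            out = (out << 8) | (val & 0xff);
            val >>= 8;
        }
        return out;
    }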
2013-10-11  powerpc: Use generic memcpy code in little endian  (Anton Blanchard)
We need to fix some endian issues in our memcpy code. For now just enable the generic memcpy routine for little endian builds. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11  powerpc: Use generic checksum code in little endian  (Anton Blanchard)
We need to fix some endian issues in our checksum code. For now just enable the generic checksum routines for little endian builds. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-11  powerpc: Fix endian issues in VMX copy loops  (Anton Blanchard)
Fix the permute loops for little endian. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-03  powerpc: Restore registers on error exit from csum_partial_copy_generic()  (Paul E. McKenney)
The csum_partial_copy_generic() function saves the PowerPC non-volatile r14, r15, and r16 registers for the main checksum-and-copy loop. Unfortunately, it fails to restore them upon error exit from this loop, which results in silent corruption of these registers in the presumably rare event of an access exception within that loop. This commit therefore restores these registers on error exit from the loop. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org> Cc: stable@vger.kernel.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-10-03  powerpc: Fix parameter clobber in csum_partial_copy_generic()  (Paul E. McKenney)
The csum_partial_copy_generic() function uses register r7 to adjust the remaining bytes to process. Unfortunately, r7 also holds a parameter, namely the address of the flag to set in case of access exceptions while reading the source buffer. Lacking a quantum implementation of PowerPC, this commit instead uses register r9 to do the adjusting, leaving r7's pointer uncorrupted. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org> Cc: stable@vger.kernel.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-09-25  powerpc: Remove ksp_limit on ppc64  (Benjamin Herrenschmidt)
We've been keeping that field in thread_struct for a while. It contains the "limit" of the current stack pointer and is meant to be used for detecting stack overflows. It has a few problems however:

- First, it was never actually *used* on 64-bit: set and updated but not actually exploited.
- When switching stacks to/from the irq and softirq stacks, its update is racy unless we hard-disable interrupts, which is costly. This is fine on 32-bit as we don't soft-disable there, but not on 64-bit.

Thus rather than fixing 2 in order to implement 1 in some hypothetical future, let's remove the code completely from 64-bit. In order to avoid a clutter of ifdefs, we remove the updates from C code completely during interrupt stack switching, and instead maintain it from the asm helper that is used to do the stack switching in the first place. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-08-27  powerpc: Unaligned stores and stmw are broken in emulation code  (Tom Musta)
The stmw instruction was incorrectly decoded as an update form instruction and thus the RA register was being clobbered. Also, the utility routine to write memory to unaligned addresses breaks the operation into smaller aligned accesses but was incorrectly incrementing the address by only one; it needs to increment the address by the size of the smaller aligned chunk. Signed-off-by: Tom Musta <tmusta@us.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
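The second fix in miniature (a self-contained sketch; the real routine lives in sstep.c and writes through the MMU):

    /* Split an unaligned nb-byte write into naturally aligned chunks.
     * The bug: the address was advanced by 1 instead of by the chunk
     * size after each partial store. */
    static void write_mem_unaligned(unsigned char *ea,
                                    const unsigned char *src, int nb)
    {
        while (nb > 0) {
            int c = 1;

            /* largest naturally aligned chunk at ea */
            if (!((unsigned long)ea & 7) && nb >= 8)
                c = 8;
            else if (!((unsigned long)ea & 3) && nb >= 4)
                c = 4;
            else if (!((unsigned long)ea & 1) && nb >= 2)
                c = 2;

            __builtin_memcpy(ea, src, c);
            ea += c;        /* the fix: advance by c, not by 1 */
            src += c;
            nb -= c;
        }
    }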
2013-08-14  powerpc: Fix little endian lppaca, slb_shadow and dtl_entry  (Anton Blanchard)
The lppaca, slb_shadow and dtl_entry hypervisor structures are big endian, so we have to byte swap them in little endian builds. LE KVM hosts will also need to be fixed but for now add an #error to remind us. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-06-20  powerpc: Fix single step emulation of 32bit overflowed branches  (Michael Neuling)
Check truncate_if_32bit() on final write to nip. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
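In outline, the fix masks the computed branch target on the final write (a sketch; 'new_nip' is a stand-in for the value the branch emulation computed, and the helper's exact signature is assumed from its use in sstep.c):

    /* 32-bit tasks wrap nip at 4GB, so truncate before committing. */
    regs->nip = truncate_if_32bit(regs->msr, new_nip);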
2013-06-01  powerpc/pseries: Improve stream generation comments in copypage/user  (Michael Neuling)
No code changes, just documenting what's happening a little better. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-01-29  uprobes/powerpc: Add dependency on single step emulation  (Suzuki K. Poulose)
Uprobes uses emulate_step in sstep.c, but we haven't explicitly specified the dependency. On pseries HAVE_HW_BREAKPOINT protects us, but 44x has no such luxury. Consolidate other users that depend on sstep and create a new config option. Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com> Cc: linuxppc-dev@ozlabs.org Cc: stable@vger.kernel.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-01-10  powerpc: Build kernel with -mcmodel=medium  (Anton Blanchard)
Finally remove the two level TOC and build with -mcmodel=medium. Unfortunately we can't build modules with -mcmodel=medium due to the tricks the kernel module loader plays with percpu data:

  # -mcmodel=medium breaks modules because it uses 32bit offsets from
  # the TOC pointer to create pointers where possible. Pointers into the
  # percpu data area are created by this method.
  #
  # The kernel module loader relocates the percpu data section from the
  # original location (starting with 0xd...) to somewhere in the base
  # kernel percpu data space (starting with 0xc...). We need a full
  # 64bit relocation for this to work, hence -mcmodel=large.

On older compilers we fall back to the two level TOC (-mminimal-toc). Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-10-04  powerpc: Fix VMX fix for memcpy case  (Nishanth Aravamudan)
In 2fae7cdb60240e2e2d9b378afbf6d9fcce8a3890 ("powerpc: Fix VMX in interrupt check in POWER7 copy loops"), Anton inadvertently introduced a regression for memcpy on POWER7 machines. copyuser and memcpy diverge slightly in their use of cr1 (copyuser doesn't use it, but memcpy does) and you end up clobbering that register with your fix. That results in (taken from an FC18 kernel):

  [ 18.824604] Unrecoverable VMX/Altivec Unavailable Exception f20 at c000000000052f40
  [ 18.824618] Oops: Unrecoverable VMX/Altivec Unavailable Exception, sig: 6 [#1]
  [ 18.824623] SMP NR_CPUS=1024 NUMA pSeries
  [ 18.824633] Modules linked in: tg3(+) be2net(+) cxgb4(+) ipr(+) sunrpc xts lrw gf128mul dm_crypt dm_round_robin dm_multipath linear raid10 raid456 async_raid6_recov async_memcpy async_pq raid6_pq async_xor xor async_tx raid1 raid0 scsi_dh_rdac scsi_dh_hp_sw scsi_dh_emc scsi_dh_alua squashfs cramfs
  [ 18.824705] NIP: c000000000052f40 LR: c00000000020b874 CTR: 0000000000000512
  [ 18.824709] REGS: c000001f1fef7790 TRAP: 0f20 Not tainted (3.6.0-0.rc6.git0.2.fc18.ppc64)
  [ 18.824713] MSR: 8000000000009032 <SF,EE,ME,IR,DR,RI> CR: 4802802e XER: 20000010
  [ 18.824726] SOFTE: 0
  [ 18.824728] CFAR: 0000000000000f20
  [ 18.824731] TASK = c000000fa7128400[0] 'swapper/24' THREAD: c000000fa7480000 CPU: 24
  GPR00: 00000000ffffffc0 c000001f1fef7a10 c00000000164edc0 c000000f9b9a8120
  GPR04: c000000f9b9a8124 0000000000001438 0000000000000060 03ffffff064657ee
  GPR08: 0000000080000000 0000000000000010 0000000000000020 0000000000000030
  GPR12: 0000000028028022 c00000000ff25400 0000000000000001 0000000000000000
  GPR16: 0000000000000000 7fffffffffffffff c0000000016b2180 c00000000156a500
  GPR20: c000000f968c7a90 c0000000131c31d8 c000001f1fef4000 c000000001561d00
  GPR24: 000000000000000a 0000000000000000 0000000000000001 0000000000000012
  GPR28: c000000fa5c04f80 00000000000008bc c0000000015c0a28 000000000000022e
  [ 18.824792] NIP [c000000000052f40] .memcpy_power7+0x5a0/0x7c4
  [ 18.824797] LR [c00000000020b874] .pcpu_free_area+0x174/0x2d0
  [ 18.824800] Call Trace:
  [ 18.824803] [c000001f1fef7a10] [c000000000052c14] .memcpy_power7+0x274/0x7c4 (unreliable)
  [ 18.824809] [c000001f1fef7b10] [c00000000020b874] .pcpu_free_area+0x174/0x2d0
  [ 18.824813] [c000001f1fef7bb0] [c00000000020ba88] .free_percpu+0xb8/0x1b0
  [ 18.824819] [c000001f1fef7c50] [c00000000043d144] .throtl_pd_exit+0x94/0xd0
  [ 18.824824] [c000001f1fef7cf0] [c00000000043acf8] .blkg_free+0x88/0xe0
  [ 18.824829] [c000001f1fef7d90] [c00000000018c048] .rcu_process_callbacks+0x2e8/0x8a0
  [ 18.824835] [c000001f1fef7e90] [c0000000000a8ce8] .__do_softirq+0x158/0x4d0
  [ 18.824840] [c000001f1fef7f90] [c000000000025ecc] .call_do_softirq+0x14/0x24
  [ 18.824845] [c000000fa7483650] [c000000000010e80] .do_softirq+0x160/0x1a0
  [ 18.824850] [c000000fa74836f0] [c0000000000a94a4] .irq_exit+0xf4/0x120
  [ 18.824854] [c000000fa7483780] [c000000000020c44] .timer_interrupt+0x154/0x4d0
  [ 18.824859] [c000000fa7483830] [c000000000003be0] decrementer_common+0x160/0x180
  [ 18.824866] --- Exception: 901 at .plpar_hcall_norets+0x84/0xd4
  [ 18.824866]     LR = .check_and_cede_processor+0x48/0x80
  [ 18.824871] [c000000fa7483b20] [c00000000007f018] .check_and_cede_processor+0x18/0x80 (unreliable)
  [ 18.824877] [c000000fa7483b90] [c00000000007f104] .dedicated_cede_loop+0x84/0x150
  [ 18.824883] [c000000fa7483c50] [c0000000006bc030] .cpuidle_enter+0x30/0x50
  [ 18.824887] [c000000fa7483cc0] [c0000000006bc9f4] .cpuidle_idle_call+0x104/0x720
  [ 18.824892] [c000000fa7483d80] [c000000000070af8] .pSeries_idle+0x18/0x40
  [ 18.824897] [c000000fa7483df0] [c000000000019084] .cpu_idle+0x1a4/0x380
  [ 18.824902] [c000000fa7483ec0] [c0000000008a4c18] .start_secondary+0x520/0x528
  [ 18.824907] [c000000fa7483f90] [c0000000000093f0] .start_secondary_prolog+0x10/0x14
  [ 18.824911] Instruction dump:
  [ 18.824914] 38840008 90030000 90e30004 38630008 7ca62850 7cc300d0 78c7e102 7cf01120
  [ 18.824923] 78c60660 39200010 39400020 39600030 <7e00200c> 7c0020ce 38840010 409f001c
  [ 18.824935] ---[ end trace 0bb95124affaaa45 ]---
  [ 18.825046] Unrecoverable VMX/Altivec Unavailable Exception f20 at c000000000052d08

I believe the right fix is to make memcpy match usercopy and not use cr1.

Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> CC: <stable@kernel.org> [v3.6]