2018-12-04  powerpc/mm: dump block address translation on book3s/32  (Christophe Leroy)
This patch adds a debugfs file to dump block address translation:

  ~# cat /sys/kernel/debug/powerpc/block_address_translation
  ---[ Instruction Block Address Translations ]---
  0:         -
  1:         -
  2: 0xc0000000-0xcfffffff 0x00000000 Kernel EXEC coherent
  3: 0xd0000000-0xdfffffff 0x10000000 Kernel EXEC coherent
  4:         -
  5:         -
  6:         -
  7:         -

  ---[ Data Block Address Translations ]---
  0:         -
  1:         -
  2: 0xc0000000-0xcfffffff 0x00000000 Kernel RW coherent
  3: 0xd0000000-0xdfffffff 0x10000000 Kernel RW coherent
  4:         -
  5:         -
  6:         -
  7:         -

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
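A minimal sketch of how such a dump file can be wired up through debugfs and seq_file; the names below are illustrative and the real implementation in arch/powerpc/mm differs in detail:

  #include <linux/debugfs.h>
  #include <linux/seq_file.h>
  #include <asm/debugfs.h>		/* powerpc_debugfs_root */

  static int bats_show(struct seq_file *m, void *unused)
  {
  	seq_puts(m, "---[ Instruction Block Address Translations ]---\n");
  	/* ... read the 8 IBAT/DBAT pairs and seq_printf() one line each ... */
  	return 0;
  }
  DEFINE_SHOW_ATTRIBUTE(bats);

  static int __init bats_debugfs_init(void)
  {
  	debugfs_create_file("block_address_translation", 0400,
  			    powerpc_debugfs_root, NULL, &bats_fops);
  	return 0;
  }
  device_initcall(bats_debugfs_init);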
2018-12-04  powerpc/mm: dump segment registers on book3s/32  (Christophe Leroy)
This patch creates a debugfs file to see the content of the segment registers:

  # cat /sys/kernel/debug/segment_registers
  ---[ User Segments ]---
  0x00000000-0x0fffffff Kern key 1 User key 1 VSID 0xade2b0
  0x10000000-0x1fffffff Kern key 1 User key 1 VSID 0xade3c1
  0x20000000-0x2fffffff Kern key 1 User key 1 VSID 0xade4d2
  0x30000000-0x3fffffff Kern key 1 User key 1 VSID 0xade5e3
  0x40000000-0x4fffffff Kern key 1 User key 1 VSID 0xade6f4
  0x50000000-0x5fffffff Kern key 1 User key 1 VSID 0xade805
  0x60000000-0x6fffffff Kern key 1 User key 1 VSID 0xade916
  0x70000000-0x7fffffff Kern key 1 User key 1 VSID 0xadea27
  0x80000000-0x8fffffff Kern key 1 User key 1 VSID 0xadeb38
  0x90000000-0x9fffffff Kern key 1 User key 1 VSID 0xadec49
  0xa0000000-0xafffffff Kern key 1 User key 1 VSID 0xaded5a
  0xb0000000-0xbfffffff Kern key 1 User key 1 VSID 0xadee6b
  ---[ Kernel Segments ]---
  0xc0000000-0xcfffffff Kern key 0 User key 1 VSID 0x000ccc
  0xd0000000-0xdfffffff Kern key 0 User key 1 VSID 0x000ddd
  0xe0000000-0xefffffff Kern key 0 User key 1 VSID 0x000eee
  0xf0000000-0xffffffff Kern key 0 User key 1 VSID 0x000fff

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Move it under /sys/kernel/debug/powerpc, make sr_init() __init]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/math-emu: Update macros from GCC  (Joel Stanley)
The add_ssaaaa, sub_ddmmss, umul_ppmm and udiv_qrnnd macros originate from GCC's longlong.h which in turn was copied from GMP's longlong.h a few decades ago.

This was found when compiling with clang:

  arch/powerpc/math-emu/fnmsub.c:46:2: error: invalid use of a cast in a
  inline asm context requiring an l-value: remove the cast or build with
  -fheinous-gnu-extensions
          FP_ADD_D(R, T, B);
          ^~~~~~~~~~~~~~~~~
  ...
  ./arch/powerpc/include/asm/sfp-machine.h:283:27: note: expanded from
  macro 'sub_ddmmss'
          : "=r" ((USItype)(sh)), \
           ~~~~~~~~~~^~~

Segher points out that this was fixed in GCC over 16 years ago (https://gcc.gnu.org/r56600), and in GMP (where it comes from) presumably before that.

Update the add_ssaaaa, sub_ddmmss, umul_ppmm and udiv_qrnnd macros to the latest GCC version in order to get rid of the invalid casts. These were taken as-is from GCC's longlong.h in order to make future syncs obvious. Other parts of sfp-machine.h were left as-is as the file contains more features than are present in longlong.h.

Link: https://github.com/ClangBuiltLinux/linux/issues/260
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
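To illustrate the shape of the change, here is a simplified sketch of the offending pattern and its replacement; this is not the exact powerpc asm from sfp-machine.h or longlong.h, only the point that the casts disappear from the asm output operands:

  /* Simplified sketch only -- not the exact macros from sfp-machine.h. */
  typedef unsigned int USItype;

  /* Old style: casts on the asm output operands, which clang rejects as
   * non-lvalues. */
  #define sub_ddmmss_old(sh, sl, ah, al, bh, bl)                \
    __asm__ ("subfc %1,%5,%3\n\tsubfe %0,%4,%2"                 \
             : "=r" ((USItype)(sh)), "=&r" ((USItype)(sl))      \
             : "r" (ah), "r" (al), "r" (bh), "r" (bl))

  /* New style (matching current GCC longlong.h): plain lvalues as outputs. */
  #define sub_ddmmss_new(sh, sl, ah, al, bh, bl)                \
    __asm__ ("subfc %1,%5,%3\n\tsubfe %0,%4,%2"                 \
             : "=r" (sh), "=&r" (sl)                            \
             : "r" (ah), "r" (al), "r" (bh), "r" (bl))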
2018-12-04  powerpc/tools/checkpatch: Ignore DT_SPLIT_BINDING_PATCH  (Russell Currey)
From what I've seen, every time this warning comes up it's bogus, so let's ignore it.

Signed-off-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/8xx: regroup TLB handler routines  (Christophe Leroy)
As this is running with the MMU off, the CPU only does speculative fetch for code in the same page. Following the significant size reduction of the TLB handler routines, the side handlers can be brought back close to the main part, i.e. in the same page.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/8xx: don't use r12/SPRN_SPRG_SCRATCH2 in TLB Miss handlers  (Christophe Leroy)
This patch reworks the TLB Miss handlers so that they no longer use the r12 register, hence avoiding having to save it into SPRN_SPRG_SCRATCH2.

In the DAR Fixup code we can now use SPRN_M_TW, freeing SPRN_SPRG_SCRATCH2. SPRN_SPRG_SCRATCH2 may then be used for something else in the future.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/8xx: reintroduce 16K pages with HW assistance  (Christophe Leroy)
Using this HW assistance implies some constraints on the page table structure:
- Regardless of the main page size used (4k or 16k), the level 1 table (PGD) contains 1024 entries and each PGD entry covers a 4 Mbytes area which is managed by a level 2 table (PTE) containing also 1024 entries, each describing a 4k page.
- 16k pages require 4 identical entries in the L2 table.
- 512k page PTEs have to be spread every 128 bytes in the L2 table.
- 8M page PTEs are at the address pointed to by the L1 entry, and each 8M page requires 2 identical entries in the PGD.

In order to use hardware assistance with 16K pages, this patch makes the following modifications:
- Make the PGD size independent of the main page size.
- In 16k pages mode, redefine pte_t as a struct with 4 elements, and populate those 4 elements in __set_pte_at() and pte_update().
- Adapt the size of the hugepage tables.
- Define a PTE_FRAGMENT_NB so that a 16k page contains 4 page tables.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
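As a rough illustration of the "pte_t becomes a struct with 4 elements" point above, here is a minimal sketch; the type and field names are illustrative rather than the exact kernel definitions:

  /* Sketch only: in 16k mode one Linux PTE is backed by four identical
   * 4k hardware entries; in 4k mode it stays a single word. */
  typedef unsigned long pte_basic_t;

  #ifdef CONFIG_PPC_16K_PAGES
  typedef struct { pte_basic_t pte0, pte1, pte2, pte3; } pte_t;

  static inline void pte_fill(pte_t *ptep, pte_basic_t val)
  {
  	/* All four HW entries must carry the same value. */
  	ptep->pte0 = val;
  	ptep->pte1 = val;
  	ptep->pte2 = val;
  	ptep->pte3 = val;
  }
  #else
  typedef struct { pte_basic_t pte; } pte_t;
  #endif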
2018-12-04  powerpc/8xx: Enable 512k hugepage support with HW assistance  (Christophe Leroy)
For using 512k pages with hardware assistance, the PTEs have to be spread every 128 bytes in the L2 table.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/8xx: Enable 8M hugepage support with HW assistance  (Christophe Leroy)
HW assistance naturally supports 8M huge pages without further modifications.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/8xx: Use hardware assistance in TLB handlers  (Christophe Leroy)
Today, on the 8xx the TLB handlers do a software tablewalk by doing all the calculation in asm, in order to match the Linux page table structure.

The 8xx offers hardware assistance which allows a significant size reduction of the TLB handlers, hence also a reduction of the time spent in the handlers.

However, using this HW assistance implies some constraints on the page table structure:
- Regardless of the main page size used (4k or 16k), the level 1 table (PGD) contains 1024 entries and each PGD entry covers a 4 Mbytes area which is managed by a level 2 table (PTE) containing also 1024 entries, each describing a 4k page.
- 16k pages require 4 identical entries in the L2 table.
- 512k page PTEs have to be spread every 128 bytes in the L2 table.
- 8M page PTEs are at the address pointed to by the L1 entry, and each 8M page requires 2 identical entries in the PGD.

This patch modifies the TLB handlers to use HW assistance for 4K pages.

Before this patch, the mean time spent in the TLB miss handlers is:
- ITLB miss: 80 ticks
- DTLB miss: 62 ticks

After this patch, the mean time spent in the TLB miss handlers is:
- ITLB miss: 72 ticks
- DTLB miss: 54 ticks

So the improvement is 10% for ITLB misses and 13% for DTLB misses.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/8xx: Temporarily disable 16k pages and hugepages  (Christophe Leroy)
In preparation for making use of hardware assistance in the TLB handlers, this patch temporarily disables 16K pages and hugepages. The reason is that when using HW assistance in 4K pages mode, the Linux model fits the HW model for 4K pages and 8M pages. However, for 16K pages and 512K pages some additional work is needed to make the Linux model fit the HW model.

The 8M pages will naturally come back when we switch to HW assistance, without any additional handling. In order to keep the following patch smaller, the current special handling for 8M pages is removed here as well.

Therefore the 4K pages mode will be implemented first, without support for 512k hugepages. Then the 512k hugepages will be brought back, and the 16K pages will be implemented in a following step.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/8xx: Move SW perf counters in first 32kb of memory  (Christophe Leroy)
In order to simplify the handling of the 8xx-specific SW perf counters in time-critical exceptions, this patch moves the counters to the beginning of memory. This is possible because .text is readable and the counters are never modified outside of the handlers.

By doing this, we avoid having to set a second register with the upper part of the address of the counters.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/mm: remove unnecessary test in pgtable_cache_init()  (Christophe Leroy)
pgtable_cache_add() gracefully handles the case where a cache of that size already exists, by returning early with the following test:

  if (PGT_CACHE(shift))
  	return; /* Already have a cache of this size */

It is therefore unnecessary to test for the existence of the cache beforehand.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
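A sketch of the caller-side simplification this permits, assuming a pgtable_cache_add() with the early return shown above (argument list abridged, loop and limit illustrative):

  /* Hypothetical caller, for illustration only. */
  static void __init setup_page_table_caches(void)
  {
  	int shift;

  	for (shift = 1; shift <= MAX_PGTABLE_INDEX_SIZE; shift++) {
  		/* Previously guarded by "if (!PGT_CACHE(shift))"; the guard
  		 * is unnecessary because pgtable_cache_add() returns early
  		 * when a cache of this size already exists. */
  		pgtable_cache_add(shift);
  	}
  }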
2018-12-04  powerpc/mm: fix a warning when a cache is common to PGD and hugepages  (Christophe Leroy)
While implementing TLB miss HW assistance on the 8xx, the following warning was encountered:

  [  423.732965] WARNING: CPU: 0 PID: 345 at mm/slub.c:2412 ___slab_alloc.constprop.30+0x26c/0x46c
  [  423.733033] CPU: 0 PID: 345 Comm: mmap Not tainted 4.18.0-rc8-00664-g2dfff9121c55 #671
  [  423.733075] NIP: c0108f90 LR: c0109ad0 CTR: 00000004
  [  423.733121] REGS: c455bba0 TRAP: 0700 Not tainted (4.18.0-rc8-00664-g2dfff9121c55)
  [  423.733147] MSR: 00021032 <ME,IR,DR,RI> CR: 24224848 XER: 20000000
  [  423.733319]
  [  423.733319] GPR00: c0109ad0 c455bc50 c4521910 c60053c0 007080c0 c0011b34 c7fa41e0 c455be30
  [  423.733319] GPR08: 00000001 c00103a0 c7fa41e0 c49afcc4 24282842 10018840 c079b37c 00000040
  [  423.733319] GPR16: 73f00000 00210d00 00000000 00000001 c455a000 00000100 00000200 c455a000
  [  423.733319] GPR24: c60053c0 c0011b34 007080c0 c455a000 c455a000 c7fa41e0 00000000 00009032
  [  423.734190] NIP [c0108f90] ___slab_alloc.constprop.30+0x26c/0x46c
  [  423.734257] LR [c0109ad0] kmem_cache_alloc+0x210/0x23c
  [  423.734283] Call Trace:
  [  423.734326] [c455bc50] [00000100] 0x100 (unreliable)
  [  423.734430] [c455bcc0] [c0109ad0] kmem_cache_alloc+0x210/0x23c
  [  423.734543] [c455bcf0] [c0011b34] huge_pte_alloc+0xc0/0x1dc
  [  423.734633] [c455bd20] [c01044dc] hugetlb_fault+0x408/0x48c
  [  423.734720] [c455bdb0] [c0104b20] follow_hugetlb_page+0x14c/0x44c
  [  423.734826] [c455be10] [c00e8e54] __get_user_pages+0x1c4/0x3dc
  [  423.734919] [c455be80] [c00e9924] __mm_populate+0xac/0x140
  [  423.735020] [c455bec0] [c00db14c] vm_mmap_pgoff+0xb4/0xb8
  [  423.735127] [c455bf00] [c00f27c0] ksys_mmap_pgoff+0xcc/0x1fc
  [  423.735222] [c455bf40] [c000e0f8] ret_from_syscall+0x0/0x38
  [  423.735271] Instruction dump:
  [  423.735321] 7cbf482e 38fd0008 7fa6eb78 7fc4f378 4bfff5dd 7fe3fb78 4bfffe24 81370010
  [  423.735536] 71280004 41a2ff88 4840c571 4bffff80 <0fe00000> 4bfffeb8 81340010 712a0004
  [  423.735757] ---[ end trace e9b222919a470790 ]---

This warning occurs when calling kmem_cache_zalloc() on a cache having a constructor.

In this case it happens because the PGD cache and the 512k hugepte cache are the same size (4k). While a cache with a constructor is created for the PGD, hugepages create their cache without a constructor and use kmem_cache_zalloc(). As both expect a cache of the same size, the hugepages reuse the cache created for the PGD, hence the conflict.

In order to avoid this conflict, this patch:
- modifies pgtable_cache_add() so that a zeroising constructor is added for any cache size.
- replaces calls to kmem_cache_zalloc() with kmem_cache_alloc().

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
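A rough sketch of the resulting pattern, assuming hypothetical names (the real code lives in arch/powerpc/mm; only the alloc-versus-zalloc point is taken from the commit):

  #include <linux/slab.h>
  #include <linux/string.h>

  /* Every pgtable cache now gets a zeroising constructor, whatever its size. */
  static void pgtable_cache_ctor(void *addr)
  {
  	memset(addr, 0, 4096);	/* object size of this (hypothetical) cache */
  }

  static void *alloc_hugepte_dir(struct kmem_cache *cachep)
  {
  	/*
  	 * Since the shared cache has a constructor, callers must use
  	 * kmem_cache_alloc(): kmem_cache_zalloc() on a cache with a
  	 * constructor is exactly what triggered the slub warning above.
  	 */
  	return kmem_cache_alloc(cachep, GFP_KERNEL);
  }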
2018-12-04  powerpc/mm: replace hugetlb_cache by PGT_CACHE(PTE_T_ORDER)  (Christophe Leroy)
Instead of open coding the cache handling for the special case of hugepage tables having a single pte_t element, this patch makes use of the common pgtable_cache helpers.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/mm: enable the use of page table cache of order 0  (Christophe Leroy)
Hugepages use a cache of order 0. Let's allow page tables of order 0 in the common part, in order to avoid open coding in hugetlb.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/mm: Extend pte_fragment functionality to PPC32  (Christophe Leroy)
In order to allow the 8xx to handle pte_fragments, this patch extends the use of pte_fragments to PPC32 platforms.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/mm: add helpers to get/set mm.context->pte_frag  (Christophe Leroy)
In order to handle the pte_fragment functions with a single fragment without adding pte_frag to all mm_context_t, this patch creates two helpers which do nothing on platforms using a single fragment.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
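A minimal sketch of what such a pair of helpers can look like; the config symbol and exact names here are illustrative, not necessarily the ones the patch introduces:

  /* Platforms with several fragments per page keep the current fragment in
   * the mm context; single-fragment platforms have no state to track. */
  #ifdef CONFIG_HAVE_PTE_FRAG		/* hypothetical symbol for illustration */
  static inline void *pte_frag_get(mm_context_t *ctx)
  {
  	return ctx->pte_frag;
  }

  static inline void pte_frag_set(mm_context_t *ctx, void *p)
  {
  	ctx->pte_frag = p;
  }
  #else
  static inline void *pte_frag_get(mm_context_t *ctx)
  {
  	return NULL;
  }

  static inline void pte_frag_set(mm_context_t *ctx, void *p)
  {
  }
  #endif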
2018-12-04  powerpc/mm: Move pgtable_t into platform headers  (Christophe Leroy)
This patch moves pgtable_t into platform headers.

It gets rid of the CONFIG_PPC_64K_PAGES case for PPC64 as nohash/64 doesn't support CONFIG_PPC_64K_PAGES.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/mm: move platform specific mmu-xxx.h in platform directories  (Christophe Leroy)
The purpose of this patch is to move the platform specific mmu-xxx.h files into platform directories, like the pte-xxx.h files.

At the same time, this patch creates common nohash and nohash/32 + nohash/64 mmu.h files for future common parts.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/mm: Avoid useless lock with single page fragments  (Christophe Leroy)
There is no point in taking the page table lock as pte_frag or pmd_frag are always NULL when we have only one fragment.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
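A sketch of the kind of early exit this implies, assuming a get-from-cache helper along the lines of the generic pte_fragment code (details abridged, names illustrative):

  static pte_t *get_pte_from_cache(struct mm_struct *mm)
  {
  	void *ret;

  	/* With a single fragment per page the cached pointer is always
  	 * NULL, so there is nothing to hand out and no need to lock. */
  	if (PTE_FRAG_NR == 1)
  		return NULL;

  	spin_lock(&mm->page_table_lock);
  	ret = pte_frag_get(&mm->context);
  	/* ... advance the cached fragment pointer here ... */
  	spin_unlock(&mm->page_table_lock);

  	return (pte_t *)ret;
  }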
2018-12-04  powerpc/mm: Move pte_fragment_alloc() to a common location  (Christophe Leroy)
In preparation for the next patch, which generalises the use of pte_fragment_alloc() to all subarches, this patch moves the related functions to a place that is common to all of them.

The 8xx will need that for supporting 16k pages, as in that mode page tables still have a size of 4k.

Since pte_fragment with only one fragment is not different from what is done in the general case, we can easily migrate all subarches to pte fragments.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/8xx: Remove PTE_ATOMIC_UPDATES  (Christophe Leroy)
Commit 1bc54c03117b9 ("powerpc: rework 4xx PTE access and TLB miss") introduced non-atomic PTE updates and started the work of removing PTE updates from the TLB miss handlers, but kept PTE_ATOMIC_UPDATES for the 8xx with the following comment:

  /* Until my rework is finished, 8xx still needs atomic PTE updates */

Commit fe11dc3f9628e ("powerpc/8xx: Update TLB asm so it behaves as linux mm expects") removed all PTE updates done in the TLB miss handlers.

Therefore, atomic PTE updates are not needed anymore for the 8xx.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/book3s32: Remove CONFIG_BOOKE dependent code  (Christophe Leroy)
BOOK3S/32 cannot be BOOKE, so remove the useless code.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc: annotate implicit fall throughs  (Stephen Rothwell)
There is a plan to build the kernel with -Wimplicit-fallthrough, and these places in the code produced warnings; because we build arch/powerpc with -Werror, they became errors. Fix them up.

This patch produces no change in behaviour, but should be reviewed in case these are actually bugs and not intentional fall-throughs.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/mm: remove unused function prototype  (Breno Leitao)
Commit f384796c40dc ("powerpc/mm: Add support for handling > 512TB address in SLB miss") removed the function slb_miss_bad_addr(struct pt_regs *regs), but kept its declaration in the prototype file.

This patch simply removes the leftover declaration.

Signed-off-by: Breno Leitao <leitao@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/pseries/cpuidle: Fix preempt warning  (Breno Leitao)
When booting a pseries kernel with PREEMPT enabled, it dumps the following warning:

  BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
  caller is pseries_processor_idle_init+0x5c/0x22c
  CPU: 13 PID: 1 Comm: swapper/0 Not tainted 4.20.0-rc3-00090-g12201a0128bc-dirty #828
  Call Trace:
  [c000000429437ab0] [c0000000009c8878] dump_stack+0xec/0x164 (unreliable)
  [c000000429437b00] [c0000000005f2f24] check_preemption_disabled+0x154/0x160
  [c000000429437b90] [c000000000cab8e8] pseries_processor_idle_init+0x5c/0x22c
  [c000000429437c10] [c000000000010ed4] do_one_initcall+0x64/0x300
  [c000000429437ce0] [c000000000c54500] kernel_init_freeable+0x3f0/0x500
  [c000000429437db0] [c0000000000112dc] kernel_init+0x2c/0x160
  [c000000429437e20] [c00000000000c1d0] ret_from_kernel_thread+0x5c/0x6c

This happens because the code calls get_lppaca(), which calls get_paca(), which checks whether preemption is disabled through check_preemption_disabled().

Preemption should be disabled because the per-CPU variable may make no sense if there is a preemption (and a CPU switch) between reading the per-CPU data and using it.

In this device driver specifically, it is not a problem, because this code just needs access to one lppaca struct, and it does not matter whether it is the current per-CPU lppaca struct or not (i.e. when there is a preemption and a CPU migration).

That said, the most appropriate fix seems to be avoiding the debug_smp_processor_id() call in get_paca(), rather than calling preempt_disable() before get_paca().

Signed-off-by: Breno Leitao <leitao@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-12-04  powerpc/xmon: Fix invocation inside lock region  (Breno Leitao)
Currently xmon needs to take devtree_lock (through rtas_token()) during its invocation at crash time. If there is a crash while devtree_lock is held, then xmon tries to take the lock but spins forever and never gets into the interactive debugger, as in the following case:

  int *ptr = NULL;

  raw_spin_lock_irqsave(&devtree_lock, flags);
  *ptr = 0xdeadbeef;

This patch avoids calling rtas_token(), and thus trying to take the same lock, at crash time. The new mechanism gets the token at initialization time (xmon_init()) and just consumes it at crash time. This allows xmon to be invoked regardless of whether devtree_lock is held.

Signed-off-by: Breno Leitao <leitao@debian.org>
Reviewed-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
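A rough sketch of the cache-at-init pattern described above; the RTAS service name and function names are illustrative, not necessarily what xmon actually uses:

  #include <asm/rtas.h>

  static int xmon_cached_token = RTAS_UNKNOWN_SERVICE;

  /* Called once from xmon setup, while taking devtree_lock is still safe. */
  static void __init xmon_rtas_token_init(void)
  {
  	xmon_cached_token = rtas_token("set-indicator");	/* illustrative service */
  }

  /* Called at crash time: only consume the cached value, never look it up. */
  static void xmon_rtas_use_token(void)
  {
  	if (xmon_cached_token != RTAS_UNKNOWN_SERVICE)
  		rtas_call(xmon_cached_token, 3, 1, NULL, 0, 0, 0);
  }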
2018-11-26  powerpc/kconfig: remove PPC_STD_MMU_32 and PPC_STD_MMU  (Christophe Leroy)
PPC_STD_MMU_32 and PPC_STD_MMU are not used anymore. Remove them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc: change CONFIG_PPC_STD_MMU to CONFIG_PPC_BOOK3S  (Christophe Leroy)
Today we have:

  config PPC_BOOK3S
  	def_bool y
  	depends on PPC_BOOK3S_32 || PPC_BOOK3S_64

  config PPC_STD_MMU
  	def_bool y
  	depends on PPC_BOOK3S

PPC_STD_MMU is therefore redundant with PPC_BOOK3S. Let's remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc: change CONFIG_PPC_STD_MMU_32 to CONFIG_PPC_BOOK3S_32  (Christophe Leroy)
Today we have:

  config PPC_BOOK3S_32
  	bool "512x/52xx/6xx/7xx/74xx/82xx/83xx/86xx"
  	[depends on PPC32 within a choice]

  config PPC_BOOK3S
  	def_bool y
  	depends on PPC_BOOK3S_32 || PPC_BOOK3S_64

  config PPC_STD_MMU
  	def_bool y
  	depends on PPC_BOOK3S

  config PPC_STD_MMU_32
  	def_bool y
  	depends on PPC_STD_MMU && PPC32

PPC_STD_MMU_32 is therefore redundant with PPC_BOOK3S_32.

In order to make the code clearer, let's prefer PPC_BOOK3S_32. This will allow removing CONFIG_PPC_STD_MMU_32 in a later patch.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc/32: Remove #ifdef CONFIG_PPC_STD_MMU_32 in asm/book3s/32/pgtable.h  (Christophe Leroy)
asm/book3s/32/pgtable.h is only included when CONFIG_PPC_BOOK3S_32 is set, and whenever CONFIG_PPC_BOOK3S_32 is set, CONFIG_PPC_STD_MMU_32 is set as well.

This patch removes the useless CONFIG_PPC_STD_MMU_32 #ifdefs.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc/kconfig: remove CONFIG_6xx  (Christophe Leroy)
CONFIG_6xx is not used anymore. Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc: change CONFIG_6xx to CONFIG_PPC_BOOK3S_32  (Christophe Leroy)
Today we have:

  config PPC_BOOK3S_32
  	bool "512x/52xx/6xx/7xx/74xx/82xx/83xx/86xx"
  	[depends on PPC32 within a choice]

  config PPC_BOOK3S
  	def_bool y
  	depends on PPC_BOOK3S_32 || PPC_BOOK3S_64

  config 6xx
  	def_bool y
  	depends on PPC32 && PPC_BOOK3S

6xx is therefore redundant with PPC_BOOK3S_32.

In order to make the code clearer, let's prefer PPC_BOOK3S_32. This will allow removing CONFIG_6xx in a later patch.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  drivers/cpufreq: change CONFIG_6xx to CONFIG_PPC_BOOK3S_32  (Christophe Leroy)
Today, powerpc has three CONFIG labels which mean exactly the same thing:
- CONFIG_6xx
- CONFIG_PPC_BOOK3S_32
- CONFIG_PPC_STD_MMU_32

For consistency with PPC64, CONFIG_PPC_BOOK3S_32 is the preferred one. Using a label which includes _PPC_ also makes it clearer that it is linked to powerpc.

In preparation for the removal of CONFIG_6xx, this patch replaces it with CONFIG_PPC_BOOK3S_32.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  cxl: Use device_type helpers to access the node type  (Rob Herring)
Remove direct accesses to the device_node.type pointer and use the accessors instead. This will eventually allow removing the type pointer.

Signed-off-by: Rob Herring <robh@kernel.org>
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
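As an illustration of the conversion described above, of_node_is_type() is the accessor in question; the surrounding function here is a hypothetical example:

  #include <linux/of.h>
  #include <linux/string.h>

  static bool node_is_pci_bridge(struct device_node *np)
  {
  	/* Before: poking at device_node.type directly. */
  	/* return np->type && !strcmp(np->type, "pci"); */

  	/* After: use the accessor, which also copes with a NULL type. */
  	return of_node_is_type(np, "pci");
  }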
2018-11-26  macintosh: Use device_type helpers to access the node type  (Rob Herring)
Remove direct accesses to the device_node.type pointer and use the accessors instead. This will eventually allow removing the type pointer.

Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  macintosh: windfarm: Another convert to using %pOFn instead of device_node.name  (Rob Herring)
In preparation for removing the node name pointer from struct device_node, convert printf users to use the %pOFn format specifier.

Convert the open-coded iteration through child nodes to for_each_child_of_node() while we're here.

Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
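A minimal sketch of both conversions mentioned above; the function and the message text are made up for illustration, while %pOFn and for_each_child_of_node() are the real APIs:

  #include <linux/of.h>
  #include <linux/printk.h>

  static void list_child_nodes(struct device_node *parent)
  {
  	struct device_node *child;

  	/* %pOFn prints the node name without dereferencing np->name. */
  	for_each_child_of_node(parent, child)
  		pr_info("found child node: %pOFn\n", child);
  }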
2018-11-26  powerpc: Use device_type helpers to access the node type  (Rob Herring)
Remove direct accesses to the device_node.type pointer and use the accessors instead. This will eventually allow removing the type pointer.

Replace the open-coded iteration over child nodes with for_each_child_of_node() while we're here.

Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc: Rework btext_find_display to use of_stdout and device_type helpers  (Rob Herring)
Remove direct accesses to the device_node.type pointer and use the accessors instead. This will eventually allow removing the type pointer.

In the process, the of_stdout pointer can be used instead of finding the stdout node again.

Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc/configs: Add KVM guest defconfig  (Satheesh Rajendran)
This patch adds new defconfig options for powerpc KVM guests via a guest.config fragment with additional config symbols enabled, so that a kernel can be built to boot without an initramfs. It can also serve as a placeholder for guest-specific additional config symbols in the future.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc/configs: Add missing config symbols for ppc64_defconfig  (Satheesh Rajendran)
This patch adds missing config symbols to ppc64_defconfig to enable cgroups, memory hotplug, NUMA balancing and XFS in the core kernel image.

Signed-off-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc/configs: Add CONFIG_NR_CPUS to ppc64_defconfig  (Satheesh Rajendran)
CONFIG_NR_CPUS is not set in ppc64_defconfig, so it gets the default of 32, which is quite small for modern powerpc systems. Instead, set a default of 2048 like the other powerpc defconfigs.

Signed-off-by: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc/configs: Update ppc64_defconfig with savedefconfig  (Michael Ellerman)
Update ppc64_defconfig with savedefconfig. No symbols are added or removed, this is 100% movement.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-26  powerpc/configs: Remove unnecessary ftrace symbols  (Michael Ellerman)
In commit 539df7fcb303 ("powerpc/configs: Enable function trace by default") we added:

  CONFIG_FTRACE=y
  CONFIG_FUNCTION_TRACER=y
  CONFIG_FUNCTION_GRAPH_TRACER=y

to ppc64_defconfig, powernv_defconfig and pseries_defconfig.

But only CONFIG_FUNCTION_TRACER=y is required: CONFIG_FTRACE is default y if DEBUG_KERNEL is enabled, which we have, and CONFIG_FUNCTION_GRAPH_TRACER is default y when CONFIG_FUNCTION_TRACER is enabled.

The extra symbols were already removed from powernv_defconfig in commit 9a018fb1e147 ("powerpc/config: powernv_defconfig updates").

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-25  powerpc/pasemi: Use dma_zalloc_coherent()  (Sabyasachi Gupta)
Replace dma_alloc_coherent() + memset() with dma_zalloc_coherent().

Signed-off-by: Sabyasachi Gupta <sabyasachi.linux@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
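The substitution looks roughly like this; the surrounding function and names are illustrative, while dma_zalloc_coherent() was the in-tree helper at the time:

  #include <linux/dma-mapping.h>

  static void *alloc_zeroed_ring(struct device *dev, size_t size,
  			       dma_addr_t *handle)
  {
  	/* Before:
  	 *	buf = dma_alloc_coherent(dev, size, handle, GFP_KERNEL);
  	 *	if (buf)
  	 *		memset(buf, 0, size);
  	 * After: a single call that returns zeroed memory.
  	 */
  	return dma_zalloc_coherent(dev, size, handle, GFP_KERNEL);
  }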
2018-11-25  powerpc/85xx: Drop pointless static qualifier  (Yue Haibing)
There is no need to have the 'void __iomem *cpld_base' variable static, since a new value is always assigned to it before it is used.

Signed-off-by: Yue Haibing <yuehaibing@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-25  powerpc64/ftrace: Drop pointless static qualifier in is_b_op()  (YueHaibing)
There is no need to have the 'intoffset' variable static, since a new value is always assigned to it before it is used.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-25  powerpc: Typo s/use use/use/  (Geert Uytterhoeven)
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2018-11-25  powerpc/fadump: Change to use DEFINE_SHOW_ATTRIBUTE macro  (Yangtao Li)
Use the DEFINE_SHOW_ATTRIBUTE macro to simplify the code.

Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
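For readers unfamiliar with the macro, here is the kind of boilerplate it replaces; the show function body is a stand-in, but DEFINE_SHOW_ATTRIBUTE() itself is the real seq_file helper and generates the corresponding _fops:

  #include <linux/debugfs.h>
  #include <linux/seq_file.h>

  static int fadump_region_show(struct seq_file *m, void *unused)
  {
  	seq_puts(m, "...\n");	/* real contents elided */
  	return 0;
  }

  /* Before: a hand-written open helper plus a file_operations table.
   *
   *	static int fadump_region_open(struct inode *inode, struct file *file)
   *	{
   *		return single_open(file, fadump_region_show, inode->i_private);
   *	}
   *
   *	static const struct file_operations fadump_region_fops = {
   *		.open    = fadump_region_open,
   *		.read    = seq_read,
   *		.llseek  = seq_lseek,
   *		.release = single_release,
   *	};
   *
   * After: one line that generates an equivalent fadump_region_fops. */
  DEFINE_SHOW_ATTRIBUTE(fadump_region);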