path: root/arch/arm64/mm
Age  Commit message  (Author)
2014-02-05  arm64: simplify pgd_alloc  (Mark Rutland)
Currently pgd_alloc has a redundant NULL check in its return path that can be removed with no ill effects. With that removed it's also possible to return early and eliminate the new_pgd temporary variable. This patch applies said modifications, making the logic of pgd_alloc correspond 1-1 with that of pgd_free. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
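With the check and the temporary gone, the function collapses to a single conditional return. A minimal sketch of the resulting shape (PGD_SIZE and the allocation calls follow the arm64 pgd code of the time; treat the exact body as illustrative rather than the literal diff):

    pgd_t *pgd_alloc(struct mm_struct *mm)
    {
            if (PGD_SIZE == PAGE_SIZE)
                    return (pgd_t *)get_zeroed_page(GFP_KERNEL);
            else
                    return kzalloc(PGD_SIZE, GFP_KERNEL);   /* no new_pgd temporary, no NULL check */
    }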
2014-02-05  arm64: Invalidate the TLB when replacing pmd entries during boot  (Catalin Marinas)
With the 64K page size configuration, __create_page_tables in head.S maps enough memory to get started but using 64K pages rather than 512M sections with a single pgd/pud/pmd entry pointing to a pte table. create_mapping() may override the pgd/pud/pmd table entry with a block (section) one if the RAM size is more than 512MB and aligned correctly. For the end of this block to be accessible, the old TLB entry must be invalidated. Cc: <stable@vger.kernel.org> Reported-by: Mark Salter <msalter@redhat.com> Tested-by: Mark Salter <msalter@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
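The fix, in sketch form (symbol names approximate for the mmu.c of that era): when a section (block) entry replaces a table entry installed by __create_page_tables, the stale walk must be flushed.

    pmd_t old_pmd = *pmd;

    set_pmd(pmd, __pmd(phys | prot_sect_kernel));   /* install the 512MB block entry */
    if (!pmd_none(old_pmd))
            flush_tlb_all();                        /* the old 64K-page table walk may still be cached */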
2014-02-05  arm64: Align CMA sizes to PAGE_SIZE  (Laura Abbott)
dma_alloc_from_contiguous() takes a number of pages rather than a byte size. Round the DMA size passed in up to PAGE_SIZE to avoid truncation and allocation failures for sizes smaller than PAGE_SIZE. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
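A sketch of the kind of rounding involved (assuming the allocation path in arch/arm64/mm/dma-mapping.c; the surrounding code is omitted):

    size = PAGE_ALIGN(size);        /* sub-page sizes would otherwise truncate to zero pages */
    page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT,
                                     get_order(size));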
2014-01-27  arm64: mm: fix the function name in comment of cpu_do_switch_mm  (Jingoo Han)
Fix the function name in the comment for cpu_do_switch_mm; cpu_do_switch_mm is the correct name. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-22  arm64: mm: fix the function name in comment of __flush_dcache_area  (Jingoo Han)
Fix the function name in the comment for __flush_dcache_area; __flush_dcache_area is the correct name. Also add the missing 'size' variable to the comment. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-22  arm64: mm: use ubfm for dcache_line_size  (Jingoo Han)
Use the 'ubfm' bitfield move instruction so that a single instruction, rather than two, is needed to extract the minimum D-cache line size from the CTR_EL0 register. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19  Merge tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp into upstream  (Catalin Marinas)
* tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp:
  arm64: add CPU power management menu/entries
  arm64: kernel: add PM build infrastructure
  arm64: kernel: add CPU idle call
  arm64: enable generic clockevent broadcast
  arm64: kernel: implement HW breakpoints CPU PM notifier
  arm64: kernel: refactor code to install/uninstall breakpoints
  arm: kvm: implement CPU PM notifier
  arm64: kernel: implement fpsimd CPU PM notifier
  arm64: kernel: cpu_{suspend/resume} implementation
  arm64: kernel: suspend/resume registers save/restore
  arm64: kernel: build MPIDR_EL1 hash function data structure
  arm64: kernel: add MPIDR_EL1 accessors macros

Conflicts:
	arch/arm64/Kconfig
2013-12-19  arm64: Enable CMA  (Laura Abbott)
arm64 targets need the features CMA provides. Add the appropriate hooks, header files, and Kconfig entries to allow this to happen. Cc: Will Deacon <will.deacon@arm.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19  arm64: Warn on NULL device structure for dma APIs  (Laura Abbott)
Although parts of the DMA APIs may properly check for NULL devices, there may be some places that don't. Rather than fix up all the possible locations, just require a non-NULL device structure to be used for allocating/freeing. Cc: Will Deacon <will.deacon@arm.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [catalin.marinas@arm.com: s/WARN/WARN_ONCE/] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
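The check amounts to something like the following (a sketch; the exact message text is illustrative, with WARN_ONCE as per Catalin's adjustment):

    if (WARN_ONCE(!dev, "Use an actual device structure for DMA allocation\n"))
            return NULL;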
2013-12-16  arm64: kernel: suspend/resume registers save/restore  (Lorenzo Pieralisi)
Power management software requires the kernel to save and restore CPU registers while going through suspend and resume operations triggered by kernel subsystems like CPU idle and suspend to RAM. This patch implements code that provides a save and restore mechanism for the ARMv8 implementation. Memory for the context is passed as a parameter to both the cpu_do_suspend and cpu_do_resume functions, allowing the callers to implement context allocation as they deem fit. The registers that are saved and restored correspond to the register set actually required by the kernel to be up and running, which represents a subset of the v8 ISA. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2013-12-06  arm64: ensure completion of TLB invalidation  (Mark Rutland)
Currently there is no dsb between the tlbi in __cpu_setup and the write to SCTLR_EL1 which enables the MMU in __turn_mmu_on. This means that the TLB invalidation is not guaranteed to have completed at the point address translation is enabled, leading to a number of possible issues including incorrect translations and TLB conflict faults. This patch moves the tlbi in __cpu_setup above an existing dsb used to synchronise I-cache invalidation, ensuring that the TLBs have been invalidated at the point the MMU is enabled. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-12  Merge tag 'devicetree-for-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux  (Linus Torvalds)
Pull devicetree updates from Rob Herring:
 "DeviceTree updates for 3.13. This is a bit larger pull request than
  usual for this cycle with lots of clean-up.

  - Cross arch clean-up and consolidation of early DT scanning code.
  - Clean-up and removal of arch prom.h headers. Makes arch specific
    prom.h optional on all but Sparc.
  - Addition of interrupts-extended property for devices connected to
    multiple interrupt controllers.
  - Refactoring of DT interrupt parsing code in preparation for
    deferred probe of interrupts.
  - ARM cpu and cpu topology bindings documentation.
  - Various DT vendor binding documentation updates"

* tag 'devicetree-for-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (82 commits)
  powerpc: add missing explicit OF includes for ppc
  dt/irq: add empty of_irq_count for !OF_IRQ
  dt: disable self-tests for !OF_IRQ
  of: irq: Fix interrupt-map entry matching
  MIPS: Netlogic: replace early_init_devtree() call
  of: Add Panasonic Corporation vendor prefix
  of: Add Chunghwa Picture Tubes Ltd. vendor prefix
  of: Add AU Optronics Corporation vendor prefix
  of/irq: Fix potential buffer overflow
  of/irq: Fix bug in interrupt parsing refactor.
  of: set dma_mask to point to coherent_dma_mask
  of: add vendor prefix for PHYTEC Messtechnik GmbH
  DT: sort vendor-prefixes.txt
  of: Add vendor prefix for Cadence
  of: Add empty for_each_available_child_of_node() macro definition
  arm/versatile: Fix versatile irq specifications.
  of/irq: create interrupts-extended property
  microblaze/pci: Drop PowerPC-ism from irq parsing
  of/irq: Create of_irq_parse_and_map_pci() to consolidate arch code.
  of/irq: Use irq_of_parse_and_map()
  ...
2013-11-07  Merge remote-tracking branch 'grant/devicetree/next' into for-next  (Rob Herring)
2013-10-30  arm64: allow ioremap_cache() to use existing RAM mappings  (Mark Salter)
Some drivers (ACPI notably) use ioremap_cache() to map an area which could either be outside of kernel RAM or in an already mapped reserved area of RAM. To avoid aliases with different caching attributes, ioremap() does not allow RAM to be remapped. But for ioremap_cache(), the existing kernel mapping may be used. Signed-off-by: Mark Salter <msalter@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
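A sketch of the described behaviour, close to (but not necessarily identical to) the arm64 implementation: RAM covered by the linear map is returned as-is, everything else goes through the normal ioremap path.

    void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
    {
            /* reuse the kernel's cacheable linear mapping for RAM */
            if (pfn_valid(__phys_to_pfn(phys_addr)))
                    return (void __iomem *)__phys_to_virt(phys_addr);

            return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
                                    __builtin_return_address(0));
    }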
2013-10-25  arm64: big-endian: set correct endianness on kernel entry  (Matthew Leach)
The endianness of memory accesses at EL2 and EL1 are configured by SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted, the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must ensure that they are set before performing any memory accesses. This patch ensures that SCTLR_EL{2,1} are configured appropriately at boot for kernels of either endianness. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> [catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-09  arm64: remove unnecessary prom.h include  (Rob Herring)
Remove unnecessary prom.h include in preparation to make prom.h optional. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Grant Likely <grant.likely@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
2013-10-09  arm64: set initrd_start/initrd_end for fdt scan  (Rob Herring)
In order to unify the initrd scanning for DT across architectures, make arm64 use initrd_start and initrd_end instead of the physical addresses. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org
2013-09-25  arm64: use correct register width when retrieving ASID  (Matthew Leach)
The ASID is represented as an unsigned int in mm_context_t and we currently use the mmid assembler macro to access this element of the struct. This should be accessed with a register of 32-bit width. If the incorrect register width is used the ASID will be returned in bits[32:63] of the register when running under big-endian. Fix a use of the mmid macro in tlb.S to use a 32-bit access. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-09-20  arm64: Make do_bad_area() function static  (Catalin Marinas)
This function is only called from arch/arm64/mm/fault.c. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-09-12  arch: mm: pass userspace fault flag to generic fault handler  (Johannes Weiner)
Unlike global OOM handling, memory cgroup code will invoke the OOM killer in any OOM situation because it has no way of telling faults occurring in kernel context - which could be handled more gracefully - from user-triggered faults. Pass a flag that identifies faults originating in user space from the architecture-specific fault handlers to generic code so that memcg OOM handling can be improved. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Michal Hocko <mhocko@suse.cz> Cc: David Rientjes <rientjes@google.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: azurIt <azurit@pobox.sk> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
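In an arch fault handler such as arm64's do_page_fault() this amounts to something like (a sketch; FAULT_FLAG_USER is the flag introduced by this series):

    unsigned int mm_flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

    if (user_mode(regs))
            mm_flags |= FAULT_FLAG_USER;    /* lets memcg OOM handling distinguish user faults */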
2013-09-12  arch: mm: do not invoke OOM killer on kernel fault OOM  (Johannes Weiner)
Kernel faults are expected to handle OOM conditions gracefully (gup, uaccess etc.), so they should never invoke the OOM killer. Reserve this for faults triggered in user context when it is the only option. Most architectures already do this, fix up the remaining few. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Michal Hocko <mhocko@suse.cz> Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: David Rientjes <rientjes@google.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: azurIt <azurit@pobox.sk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
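The resulting pattern in the arch handlers looks roughly like this (a condensed sketch, not the literal arm64 diff):

    if (fault & VM_FAULT_OOM) {
            if (!user_mode(regs))
                    goto no_context;                /* kernel faults must cope with OOM themselves */
            pagefault_out_of_memory();              /* only user faults may wake the OOM killer */
            return 0;
    }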
2013-09-11  mm: migrate: check movability of hugepage in unmap_and_move_huge_page()  (Naoya Horiguchi)
Currently hugepage migration works well only for pmd-based hugepages (mainly due to lack of testing), so we had better not enable migration of other levels of hugepages until we are ready for it. Some users of hugepage migration (mbind, move_pages, and migrate_pages) do a page table walk and check pud/pmd_huge() there, so they are safe. But the other users (soft offline and memory hot-remove) don't do this, so without this patch they can try to migrate unexpected types of hugepages. To prevent this, we introduce hugepage_migration_support() as an architecture-dependent check of whether hugepages are implemented on a pmd basis or not. On some architectures multiple sizes of hugepages are available, so hugepage_migration_support() also checks the hugepage size. Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Hillf Danton <dhillf@gmail.com> Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Hugh Dickins <hughd@google.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Rik van Riel <riel@redhat.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
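A sketch of the new check (pmd_huge_support() is the per-architecture hook introduced alongside this change, per the description above; treat the exact form as an assumption):

    static inline int hugepage_migration_support(struct hstate *h)
    {
            /* only pmd-sized hugepages are considered migratable */
            return pmd_huge_support() && (huge_page_shift(h) == PMD_SHIFT);
    }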
2013-09-10  Merge tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux  (Linus Torvalds)
Pull device tree core updates from Grant Likely:
 "Generally minor changes. A bunch of bug fixes, particularly for
  initialization and some refactoring. Most notable change is feeding
  the entire flattened tree into the random pool at boot. May not be
  significant, but shouldn't hurt either"

Tim Bird questions whether the boot time cost of the random feeding may be noticeable. And "add_device_randomness()" is definitely not some speed demon of a function.

* tag 'devicetree-for-linus' of git://git.secretlab.ca/git/linux:
  of/platform: add error reporting to of_amba_device_create()
  irq/of: Fix comment typo for irq_of_parse_and_map
  of: Feed entire flattened device tree into the random pool
  of/fdt: Clean up casting in unflattening path
  of/fdt: Remove duplicate memory clearing on FDT unflattening
  gpio: implement gpio-ranges binding document fix
  of: call __of_parse_phandle_with_args from of_parse_phandle
  of: introduce of_parse_phandle_with_fixed_args
  of: move of_parse_phandle()
  of: move documentation of of_parse_phandle_with_args
  of: Fix missing memory initialization on FDT unflattening
  of: consolidate definition of early_init_dt_alloc_memory_arch()
  of: Make of_get_phy_mode() return int i.s.o. const int
  include: dt-binding: input: create a DT header defining key codes.
  of/platform: Staticize of_platform_device_create_pdata()
  of: Specify initrd location using 64-bit
  dt: Typo fix
  OF: make of_property_for_each_{u32|string}() use parameters if OF is not enabled
2013-09-03  arm64: mm: permit use of tagged pointers at EL0  (Will Deacon)
TCR.TBI0 can be used to cause hardware address translation to ignore the top byte of userspace virtual addresses. Whilst not especially useful in standard C programs, this can be used by JITs to `tag' pointers with various pieces of metadata. This patch enables this bit for AArch64 Linux, and adds a new file to Documentation/arm64/ which describes some potential caveats when using tagged virtual addresses. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
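An illustrative userspace fragment, not taken from the patch: with TBI0 set, the MMU ignores VA bits [63:56] for EL0 accesses, so a runtime can stash metadata there.

    /* illustrative only: both pointers reach the same memory */
    void *tagged = (void *)((unsigned long)p | (0x5bUL << 56));
    *(int *)tagged = 42;            /* same cell as *(int *)p */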
2013-09-02  arm64: Remove unused cpu_name ascii in arch/arm64/mm/proc.S  (Catalin Marinas)
This string has been moved to arch/arm64/kernel/cputable.c. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-08-28  arm64: Fix mapping of memory banks not ending on a PMD_SIZE boundary  (Catalin Marinas)
The map_mem() function limits the current memblock limit to PGDIR_SIZE (the initial swapper_pg_dir mapping) to avoid create_mapping() allocating memory from unmapped areas. However, if the first block is within PGDIR_SIZE and does not end on a PMD_SIZE boundary, when the 4K page configuration is enabled, create_mapping() will try to allocate a pte page. Such a page may be returned by memblock_alloc() from the end of such a bank (or any subsequent bank within PGDIR_SIZE) which is not mapped yet. The patch limits the current memblock limit to the aligned end of the first bank and gradually increases it as more memory is mapped. It also ensures that the start of the first bank is aligned to PMD_SIZE to avoid pte page allocation for this mapping. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com> Tested-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
2013-07-24  of: Specify initrd location using 64-bit  (Santosh Shilimkar)
On some PAE architectures, the entire range of physical memory could reside outside the 32-bit limit. These systems need the ability to specify the initrd location using 64-bit numbers. This patch globally modifies the early_init_dt_setup_initrd_arch() function to use 64-bit numbers instead of the current unsigned long. There has been quite a bit of debate about whether to use u64 or phys_addr_t. It was concluded to stick to u64 to be consistent with rest of the device tree code. As summarized by Geert, "The address to load the initrd is decided by the bootloader/user and set at that point later in time. The dtb should not be tied to the kernel you are booting" More details on the discussion can be found here: https://lkml.org/lkml/2013/6/20/690 https://lkml.org/lkml/2012/9/13/544 Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Vineet Gupta <vgupta@synopsys.com> Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Signed-off-by: Grant Likely <grant.likely@linaro.org>
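Combined with the arm64 change above (2013-10-09, 'set initrd_start/initrd_end for fdt scan'), the arch hook ends up looking roughly like this sketch:

    void __init early_init_dt_setup_initrd_arch(u64 start, u64 end)
    {
            initrd_start = (unsigned long)__va(start);
            initrd_end = (unsigned long)__va(end);
    }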
2013-07-19  arm64: mm: don't treat user cache maintenance faults as writes  (Will Deacon)
On arm64, cache maintenance faults appear as data aborts with the CM bit set in the ESR. The WnR bit, usually used to distinguish between faulting loads and stores, always reads as 1 and (slightly confusingly) the instructions are treated as reads by the architecture. This patch fixes our fault handling code to treat cache maintenance faults in the same way as loads. Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
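A sketch of the resulting test in the fault handler (the ESR_WRITE/ESR_CM macro names follow the local definitions in fault.c at the time and are approximate):

    /* WnR alone is not enough: DC ops report WnR=1 but are architecturally reads */
    if ((esr & ESR_WRITE) && !(esr & ESR_CM)) {
            vm_flags = VM_WRITE;
            mm_flags |= FAULT_FLAG_WRITE;
    }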
2013-07-10  mm: remove free_area_cache  (Michel Lespinasse)
Since all architectures have been converted to use vm_unmapped_area(), there is no remaining use for the free_area_cache. Signed-off-by: Michel Lespinasse <walken@google.com> Acked-by: Rik van Riel <riel@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: David Howells <dhowells@redhat.com> Cc: Helge Deller <deller@gmx.de> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Cc: Matt Turner <mattst88@gmail.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Richard Henderson <rth@twiddle.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm/microblaze: prepare for removing num_physpages and simplify mem_init()  (Jiang Liu)
Prepare for removing num_physpages and simplify mem_init(). Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Michal Simek <monstr@monstr.eu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm/ARM64: prepare for removing num_physpages and simplify mem_init()  (Jiang Liu)
Prepare for removing num_physpages and simplify mem_init(). Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm: concentrate modification of totalram_pages into the mm core  (Jiang Liu)
Concentrate code to modify totalram_pages into the mm core, so the arch memory initialized code doesn't need to take care of it. With these changes applied, only following functions from mm core modify global variable totalram_pages: free_bootmem_late(), free_all_bootmem(), free_all_bootmem_node(), adjust_managed_page_count(). With this patch applied, it will be much more easier for us to keep totalram_pages and zone->managed_pages in consistence. Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Acked-by: David Howells <dhowells@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: <sworddragon2@aol.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jianguo Wu <wujianguo@huawei.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Michel Lespinasse <walken@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tang Chen <tangchen@cn.fujitsu.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wen Congyang <wency@cn.fujitsu.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm/ARM64: kill poison_init_mem()  (Jiang Liu)
Use free_reserved_area() to poison initmem memory pages and kill poison_init_mem() on ARM64. Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: <sworddragon2@aol.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: David Howells <dhowells@redhat.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jianguo Wu <wujianguo@huawei.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Michel Lespinasse <walken@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tang Chen <tangchen@cn.fujitsu.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wen Congyang <wency@cn.fujitsu.com> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm: enhance free_reserved_area() to support poisoning memory with zero  (Jiang Liu)
Address more review comments from last round of code review. 1) Enhance free_reserved_area() to support poisoning freed memory with pattern '0'. This could be used to get rid of poison_init_mem() on ARM64. 2) A previous patch has disabled memory poison for initmem on s390 by mistake, so restore to the original behavior. 3) Remove redundant PAGE_ALIGN() when calling free_reserved_area(). Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: <sworddragon2@aol.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: David Howells <dhowells@redhat.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jianguo Wu <wujianguo@huawei.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Michel Lespinasse <walken@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tang Chen <tangchen@cn.fujitsu.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wen Congyang <wency@cn.fujitsu.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-03  mm: change signature of free_reserved_area() to fix building warnings  (Jiang Liu)
Change signature of free_reserved_area() according to Russell King's suggestion to fix following build warnings:

    arch/arm/mm/init.c: In function 'mem_init':
    arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default]
      free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
      ^
    In file included from include/linux/mman.h:4:0,
                     from arch/arm/mm/init.c:15:
    include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *'
     extern unsigned long free_reserved_area(unsigned long start, unsigned long end,

    mm/page_alloc.c: In function 'free_reserved_area':
    >> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default]
    In file included from arch/mips/include/asm/page.h:49:0,
                     from include/linux/mmzone.h:20,
                     from include/linux/gfp.h:4,
                     from include/linux/mm.h:8,
                     from mm/page_alloc.c:18:
    arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int'

    mm/page_alloc.c: In function 'free_area_init_nodes':
    mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds]

Also address some minor code review comments. Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Reported-by: Arnd Bergmann <arnd@arndb.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: <sworddragon2@aol.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Cc: David Howells <dhowells@redhat.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jianguo Wu <wujianguo@huawei.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Michel Lespinasse <walken@google.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Rik van Riel <riel@redhat.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Tang Chen <tangchen@cn.fujitsu.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wen Congyang <wency@cn.fujitsu.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Yinghai Lu <yinghai@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-07-01  Merge branch 'for-next/hugepages' of git://git.linaro.org/people/stevecapper/linux into upstream-hugepages  (Catalin Marinas)
* 'for-next/hugepages' of git://git.linaro.org/people/stevecapper/linux:
  ARM64: mm: THP support.
  ARM64: mm: Raise MAX_ORDER for 64KB pages and THP.
  ARM64: mm: HugeTLB support.
  ARM64: mm: Move PTE_PROT_NONE bit.
  ARM64: mm: Make PAGE_NONE pages read only and no-execute.
  ARM64: mm: Restore memblock limit when map_mem finished.
  mm: thp: Correct the HPAGE_PMD_ORDER check.
  x86: mm: Remove general hugetlb code from x86.
  mm: hugetlb: Copy general hugetlb code from x86 to mm.
  x86: mm: Remove x86 version of huge_pmd_share.
  mm: hugetlb: Copy huge_pmd_share from x86 to mm.

Conflicts:
	arch/arm64/Kconfig
	arch/arm64/include/asm/pgtable-hwdef.h
	arch/arm64/include/asm/pgtable.h
2013-06-14  ARM64: mm: HugeTLB support.  (Steve Capper)
Add huge page support to ARM64; different huge page sizes are supported depending on the size of normal pages:

PAGE_SIZE is 4KB:
   2MB    - (pmds) these can be allocated at any time.
   1024MB - (puds) usually allocated on bootup with the command line with something like: hugepagesz=1G hugepages=6

PAGE_SIZE is 64KB:
   512MB  - (pmds) usually allocated on bootup via command line.

Signed-off-by: Steve Capper <steve.capper@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2013-06-14  ARM64: mm: Restore memblock limit when map_mem finished.  (Steve Capper)
In paging_init the memblock limit is set to restrict any addresses returned by early_alloc to fit within the initial direct kernel mapping in swapper_pg_dir. This allows map_mem to allocate puds, pmds and ptes from the initial direct kernel mapping. The limit stays low after paging_init() though, meaning any bootmem allocations will be from a restricted subset of memory. Gigabyte huge pages, for instance, are normally allocated from bootmem as their order (18) is too large for the default buddy allocator (MAX_ORDER = 11). This patch restores the memblock limit when map_mem has finished, allowing gigabyte huge pages (and other objects) to be allocated from all of bootmem. Signed-off-by: Steve Capper <steve.capper@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
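The mechanism in sketch form (assuming the memblock API of the time; the limit value shown is illustrative):

    /* map_mem(): force early page-table allocations into the already-mapped region */
    memblock_set_current_limit(PHYS_OFFSET + PGDIR_SIZE);

    /* ... build the linear mapping ... */

    /* mapping complete: let bootmem (e.g. 1GB huge pages) allocate from anywhere */
    memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);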
2013-06-07  arm64: Remove __flush_dcache_page()  (Catalin Marinas)
This function is only used in __sync_icache_dcache(), so remove it and call __flush_dcache_area() directly. The flush_icache_user_range() function is not used in the arm64 kernel. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Will Deacon <will.deacon@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
2013-06-07  arm64: Do not flush the D-cache for anonymous pages  (Catalin Marinas)
The D-cache on AArch64 is VIPT non-aliasing, so there is no need to flush it for anonymous pages. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Will Deacon <will.deacon@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
2013-06-07  arm64: Avoid cache flushing in flush_dcache_page()  (Catalin Marinas)
The flush_dcache_page() function is called when the kernel modifies a page cache page. Since the D-cache on AArch64 does not have aliases this function can simply mark the page as dirty for later flushing via set_pte_at()/__sync_icache_dcache() if the page is executable (to ensure I-D cache coherency). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Will Deacon <will.deacon@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
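The resulting function is tiny; a sketch of the behaviour described:

    void flush_dcache_page(struct page *page)
    {
            /* defer any maintenance to __sync_icache_dcache() via set_pte_at() */
            clear_bit(PG_dcache_clean, &page->flags);
    }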
2013-05-24  arm64: Do not report user faults for handled signals  (Catalin Marinas)
Currently user faults (page, undefined instruction) are always reported even though the user may have a signal handler for them. This patch adds unhandled_signal() check together with printk_ratelimit() for these cases. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-05-14  arm64: mm: Fix operands of clz in __flush_dcache_all  (Sukanto Ghosh)
The format of the lower 32-bits of the 64-bit operand to 'dc cisw' is unchanged from ARMv7 architecture and the upper bits are RES0. This implies that the 'way' field of the operand of 'dc cisw' occupies the bit-positions [31 .. (32-A)]. Due to the use of 64-bit extended operands to 'clz', the existing implementation of __flush_dcache_all is incorrectly placing the 'way' field in the bit-positions [63 .. (64-A)]. Signed-off-by: Sukanto Ghosh <sghosh@apm.com> Tested-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: stable@vger.kernel.org
2013-05-13  arm64: debug: clear mdscr_el1 instead of taking the OS lock  (Will Deacon)
During boot, we take the debug OS lock before interrupts are enabled. This is required to prevent clearing of PSTATE.D on the interrupt entry path, which could result in spurious debug exceptions before we've got round to resetting things like the hardware breakpoints registers to a sane state. A problem with this approach is that taking the OS lock prevents an external JTAG debugger from debugging the system, which is especially irritating during boot, where JTAG debugging can be most useful. This patch clears mdscr_el1 rather than taking the lock, clearing the MDE and KDE bits and preventing self-hosted hardware debug exceptions from occurring. Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Cc: stable@vger.kernel.org
2013-05-08  Merge tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64  (Linus Torvalds)
Pull arm64 update from Catalin Marinas:

 - Since drivers/irqchip/irq-gic.c no longer has dependencies on arm32
   specifics (the 'gic' branch merged), it can be enabled on arm64.
 - Enable arm64 support for poweroff/restart (for code under
   drivers/power/reset/).
 - Fixes (dts file, exception handling, bitops)

* tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
  arm64: Treat the bitops index argument as an 'int'
  arm64: Ignore the 'write' ESR flag on cache maintenance faults
  arm64: dts: fix #address-cells for foundation-v8
  arm64: vexpress: Add support for poweroff/restart
  arm64: Enable support for the ARM GIC interrupt controller
2013-05-08  arm64: Ignore the 'write' ESR flag on cache maintenance faults  (Catalin Marinas)
ESR.WnR bit is always set on data cache maintenance faults even though the page is not required to have write permission. If a translation fault (page not yet mapped) happens for read-only user address range, Linux incorrectly assumes a permission fault. This patch adds the check of the ESR.CM bit during the page fault handling to ignore the 'write' flag. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Tim Northover <Tim.Northover@arm.com> Cc: stable@vger.kernel.org
2013-04-30  Merge tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64  (Linus Torvalds)
Pull arm64 update from Catalin Marinas:
 "Main features:

  - Versatile Express SoC (model) support - DT files and Kconfig
    entries (there are no arch/arm64/mach-* directories). The bulk of
    the code has already been moved to drivers/ as part of the ARM SoC
    clean-up.
  - Basic multi-cluster support (CPU logical map initialised from the DT)
  - Simple earlyprintk support for UART 8250/16550 and FastModel
    console output
  - Optimised kernel library bitops and string functions.
  - Automatic initialisation of the irqchip and clocks via DT"

* tag 'arm64-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64: (26 commits)
  arm64: Use acquire/release semantics instead of explicit DMB
  arm64: klib: bitops: fix unpredictable stxr usage
  arm64: vexpress: Enable ARMv8 RTSM model (SoC) support
  arm64: vexpress: Add dts files for the ARMv8 RTSM models
  arm64: Survive invalid cpu enable-methods
  arm64: mm: Correct show_pte behaviour
  arm64: Fix compat types affecting struct compat_stat
  arm64: Execute DSB during thread switching for TLB/cache maintenance
  arm64: compiling issue, need add include/asm/vga.h file
  arm64: smp: honour #address-size when parsing CPU reg property
  arm64: Define cmpxchg64 and cmpxchg64_local for outside use
  arm64: Define readq and writeq for driver module using
  arm64: Fix task tracing
  arm64: add explicit symbols to ESR_EL1 decoding
  arm64: Use irqchip_init() for interrupt controller initialisation
  arm64: psci: Use the MPIDR values from cpu_logical_map for cpu ids.
  arm64: klib: Optimised atomic bitops
  arm64: klib: Optimised string functions
  arm64: klib: Optimised memory functions
  arm64: head: match all affinity levels in the pen of the secondaries
  ...
2013-04-29  sparse-vmemmap: specify vmemmap population range in bytes  (Johannes Weiner)
The sparse code, when asking the architecture to populate the vmemmap, specifies the section range as a starting page and a number of pages. This is an awkward interface, because none of the arch-specific code actually thinks of the range in terms of 'struct page' units and always translates it to bytes first. In addition, later patches mix huge page and regular page backing for the vmemmap. For this, they need to call vmemmap_populate_basepages() on sub-section ranges with PAGE_SIZE and PMD_SIZE in mind. But these are not necessarily multiples of the 'struct page' size and so this unit is too coarse. Just translate the section range into bytes once in the generic sparse code, then pass byte ranges down the stack. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Ben Hutchings <ben@decadent.org.uk> Cc: Bernhard Schmidt <Bernhard.Schmidt@lrz.de> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Acked-by: David S. Miller <davem@davemloft.net> Tested-by: David S. Miller <davem@davemloft.net> Cc: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
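Concretely, the arch hook moves from 'struct page' units to a byte range; a sketch of the interface change (parameter names approximate):

    /* before: section range expressed in struct page units */
    int vmemmap_populate(struct page *start_page, unsigned long nr_pages, int node);

    /* after: plain byte range, so callers can pass PAGE_SIZE/PMD_SIZE-aligned sub-ranges */
    int vmemmap_populate(unsigned long start, unsigned long end, int node);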
2013-04-29  mm/ARM: use common help functions to free reserved pages  (Jiang Liu)
Use common help functions to free reserved pages. Signed-off-by: Jiang Liu <jiang.liu@huawei.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-04-25  arm64: mm: Correct show_pte behaviour  (Steve Capper)
show_pte makes use of the *_none_or_clear_bad style functions. If a pgd, pud or pmd is identified as being bad, it will then be cleared. As show_pte appears to be called from either the user or kernel fault handlers this side effect can lead to unpredictable behaviour; especially as TLB entries are not invalidated. This patch removes the page table sanitisation from show_pte. If a bad pgd, pud or pmd is encountered it is left unmodified. Signed-off-by: Steve Capper <steve.capper@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>