path: root/arch/arm64/kernel
2014-07-18  efi: efistub: Convert into static library  (Ard Biesheuvel)
This patch changes both x86 and arm64 efistub implementations from #including shared .c files under drivers/firmware/efi to building shared code as a static library. The x86 code uses a stub built into the boot executable which uncompresses the kernel at boot time. In this case, the library is linked into the decompressor. In the arm64 case, the stub is part of the kernel proper so the library is linked into the kernel proper as well. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-07  efi: efistub: Refactor stub components  (Ard Biesheuvel)
In order to move from the #include "../../../xxxxx.c" anti-pattern used by both the x86 and arm64 versions of the stub to a static library linked into either the kernel proper (arm64) or a separate boot executable (x86), some preparatory work is required. This patch does the following:
- move forward declarations of functions shared between the arch-specific and the generic parts of the stub to include/linux/efi.h
- move forward declarations of functions shared between various .c files of the generic stub code to a new local header file called "efistub.h"
- add #includes to all .c files which were formerly relying on the #includer to include the correct header files
- remove all static modifiers from functions which will need to be externally visible once we move to a static library
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-07  efi/arm64: efistub: Move shared dependencies to <asm/efi.h>  (Ard Biesheuvel)
This moves definitions depended upon both by code under arch/arm64/boot and under drivers/firmware/efi to <asm/efi.h>. This is in preparation of turning the stub code under drivers/firmware/efi into a static library. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-07  efi/arm64: Avoid EFI_ERROR as a generic return code  (Ard Biesheuvel)
As EFI_ERROR is not a UEFI result code but a local invention only intended to allow get_dram_base() to signal failure, we should not use it elsewhere. Replace with EFI_LOAD_ERROR. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-07  efi/arm64: Preserve FP/SIMD registers on UEFI runtime services calls  (Ard Biesheuvel)
According to the UEFI spec section 2.3.6.4, the use of FP/SIMD instructions is allowed, and should adhere to the AAPCS64 calling convention, which states that 'only the bottom 64 bits of each value stored in registers v8-v15 need to be preserved' (section 5.1.2). This applies equally to UEFI Runtime Services called by the kernel, so make sure the FP/SIMD register file is preserved in this case. We do this by enabling the wrappers for UEFI Runtime Services (CONFIG_EFI_RUNTIME_WRAPPERS) and inserting calls to kernel_neon_begin() and kernel_neon_end() into these wrappers. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
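[Editor's sketch] As a rough illustration of the approach (not the verbatim upstream wrapper), a runtime-service wrapper along these lines brackets the firmware call with the kernel-mode NEON helpers; the macro name and the efi.systab->runtime lookup are shown as assumptions:

    #include <linux/efi.h>

    /* Hedged sketch only: preserve the FP/SIMD register file around a UEFI
     * runtime service call by bracketing it with kernel_neon_begin()/end(). */
    #define efi_call_virt(f, ...)                                   \
    ({                                                              \
            efi_status_t __s;                                       \
                                                                    \
            kernel_neon_begin();    /* save the live FPSIMD state */\
            __s = efi.systab->runtime->f(__VA_ARGS__);              \
            kernel_neon_end();      /* restore it afterwards */     \
            __s;                                                    \
    })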
2014-07-07  efi/arm64: efistub: remove local copy of linux_banner  (Ard Biesheuvel)
The shared efistub code for ARM and arm64 contains a local copy of linux_banner, allowing it to be referenced from separate executables such as the ARM decompressor. However, this introduces a dependency on generated header files, causing unnecessary rebuilds of the stub itself and, in case of arm64, vmlinux which contains it. On arm64, the copy is not actually needed since we can reference the original symbol directly, and as it turns out, there may be better ways to deal with this for ARM as well, so let's remove it from the shared code. If it still needs to be reintroduced for ARM later, it should live under arch/arm anyway and not in shared code. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-06-06  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into next  (Linus Torvalds)
Pull arm64 updates from Catalin Marinas:
- Optimised assembly string/memory routines (based on the AArch64 Cortex Strings library contributed to glibc but re-licensed under GPLv2)
- Optimised crypto algorithms making use of the ARMv8 crypto extensions (together with kernel API for using FPSIMD instructions in interrupt context)
- Ftrace support
- CPU topology parsing from DT
- ESR_EL1 (Exception Syndrome Register) exposed to user space signal handlers for SIGSEGV/SIGBUS (useful to emulation tools like Qemu)
- 1GB section linear mapping if applicable
- Barriers usage clean-up
- Default pgprot clean-up
Conflicts as per Catalin.
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (57 commits)
  arm64: kernel: initialize broadcast hrtimer based clock event device
  arm64: ftrace: Add system call tracepoint
  arm64: ftrace: Add CALLER_ADDRx macros
  arm64: ftrace: Add dynamic ftrace support
  arm64: Add ftrace support
  ftrace: Add arm64 support to recordmcount
  arm64: Add 'notrace' attribute to unwind_frame() for ftrace
  arm64: add __ASSEMBLY__ in asm/insn.h
  arm64: Fix linker script entry point
  arm64: lib: Implement optimized string length routines
  arm64: lib: Implement optimized string compare routines
  arm64: lib: Implement optimized memcmp routine
  arm64: lib: Implement optimized memset routine
  arm64: lib: Implement optimized memmove routine
  arm64: lib: Implement optimized memcpy routine
  arm64: defconfig: enable a few more common/useful options in defconfig
  ftrace: Make CALLER_ADDRx macros more generic
  arm64: Fix deadlock scenario with smp_send_stop()
  arm64: Fix machine_shutdown() definition
  arm64: Support arch_irq_work_raise() via self IPIs
  ...
2014-06-05  Merge branch 'arm64-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into next  (Linus Torvalds)
Pull ARM64 EFI update from Peter Anvin:
"By agreement with the ARM64 EFI maintainers, we have agreed to make -tip the upstream for all EFI patches. That is why this patchset comes from me :) This patchset enables EFI stub support for ARM64, like we already have on x86"
* 'arm64-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  arm64: efi: only attempt efi map setup if booting via EFI
  efi/arm64: ignore dtb= when UEFI SecureBoot is enabled
  doc: arm64: add description of EFI stub support
  arm64: efi: add EFI stub
  doc: arm: add UEFI support documentation
  arm64: add EFI runtime services
  efi: Add shared FDT related functions for ARM/ARM64
  arm64: Add function to create identity mappings
  efi: add helper function to get UEFI params from FDT
  doc: efi-stub.txt updates for ARM
  lib: add fdt_empty_tree.c
2014-06-04  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm into next  (Linus Torvalds)
Pull KVM updates from Paolo Bonzini:
"At over 200 commits, covering almost all supported architectures, this was a pretty active cycle for KVM. Changes include:
- a lot of s390 changes: optimizations, support for migration, GDB support and more
- ARM changes are pretty small: support for the PSCI 0.2 hypercall interface on both the guest and the host (the latter acked by Catalin)
- initial POWER8 and little-endian host support
- support for running u-boot on embedded POWER targets
- pretty large changes to MIPS too, completing the userspace interface and improving the handling of virtualized timer hardware
- for x86, a larger set of changes is scheduled for 3.17. Still, we have a few emulator bugfixes and support for running nested fully-virtualized Xen guests (para-virtualized Xen guests have always worked). And some optimizations too.
The only missing architecture here is ia64. It's not a coincidence that support for KVM on ia64 is scheduled for removal in 3.17"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (203 commits)
  KVM: add missing cleanup_srcu_struct
  KVM: PPC: Book3S PR: Rework SLB switching code
  KVM: PPC: Book3S PR: Use SLB entry 0
  KVM: PPC: Book3S HV: Fix machine check delivery to guest
  KVM: PPC: Book3S HV: Work around POWER8 performance monitor bugs
  KVM: PPC: Book3S HV: Make sure we don't miss dirty pages
  KVM: PPC: Book3S HV: Fix dirty map for hugepages
  KVM: PPC: Book3S HV: Put huge-page HPTEs in rmap chain for base address
  KVM: PPC: Book3S HV: Fix check for running inside guest in global_invalidates()
  KVM: PPC: Book3S: Move KVM_REG_PPC_WORT to an unused register number
  KVM: PPC: Book3S: Add ONE_REG register names that were missed
  KVM: PPC: Add CAP to indicate hcall fixes
  KVM: PPC: MPIC: Reset IRQ source private members
  KVM: PPC: Graciously fail broken LE hypercalls
  PPC: ePAPR: Fix hypercall on LE guest
  KVM: PPC: BOOK3S: Remove open coded make_dsisr in alignment handler
  KVM: PPC: BOOK3S: Always use the saved DAR value
  PPC: KVM: Make NX bit available with magic page
  KVM: PPC: Disable NX for old magic page using guests
  KVM: PPC: BOOK3S: HV: Add mixed page-size support for guest
  ...
2014-06-03  Merge tag 'tty-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty into next  (Linus Torvalds)
Pull tty/serial driver updates from Greg KH:
"Here is the big tty / serial driver pull request for 3.16-rc1. A variety of different serial driver fixes and updates and additions, nothing huge, and no real major core tty changes at all. All have been in linux-next for a while"
* tag 'tty-3.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty: (84 commits)
  Revert "serial: imx: remove the DMA wait queue"
  serial: kgdb_nmi: Improve console integration with KDB I/O
  serial: kgdb_nmi: Switch from tasklets to real timers
  serial: kgdb_nmi: Use container_of() to locate private data
  serial: cpm_uart: No LF conversion in put_poll_char()
  serial: sirf: Fix compilation failure
  console: Remove superfluous readonly check
  console: Use explicit pointer type for vc_uni_pagedir* fields
  vgacon: Fix & cleanup refcounting
  ARM: tty: Move HVC DCC assembly to arch/arm
  tty/hvc/hvc_console: Fix wakeup of HVC thread on hvc_kick()
  drivers/tty/n_hdlc.c: replace kmalloc/memset by kzalloc
  vt: emulate 8- and 24-bit colour codes.
  printk/of_serial: fix serial console cessation part way through boot.
  serial: 8250_dma: check the result of TX buffer mapping
  serial: uart: add hw flow control support configuration
  tty/serial: at91: add interrupts for modem control lines
  tty/serial: at91: use mctrl_gpio helpers
  tty/serial: Add GPIOLIB helpers for controlling modem lines
  ARM: at91: gpio: implement get_direction
  ...
2014-05-30  arm64: kernel: initialize broadcast hrtimer based clock event device  (Lorenzo Pieralisi)
On platforms implementing CPU power management, the CPUidle subsystem can allow CPUs to enter idle states where the local timer logic is lost on power down. To keep the software timers functional, the kernel relies on an always-on broadcast timer being present in the platform to relay the interrupt signalling the timer expiries. For platforms implementing CPU core gating that do not implement an always-on HW timer, or implement it in a broken way, this patch adds code to initialize the kernel hrtimer based clock event device upon boot (which can be chosen as the tick broadcast device by the kernel). It relies on a dynamically chosen CPU being always powered up. This CPU then relays the timer interrupt to CPUs in deep-idle states through its HW local timer device. Having an always-on CPU has implications on power management platform capabilities and makes CPUidle suboptimal, since at least one CPU is always kept in a shallow idle state by the kernel to relay timer interrupts, but it at least leaves the kernel with a functional system with some working power management capabilities. The hrtimer based clock event device is unconditionally registered, but has the lowest possible rating such that any broadcast-capable HW clock event device present will be chosen in preference as the tick broadcast device. Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-05-29  arm64: ftrace: Add system call tracepoint  (AKASHI Takahiro)
This patch allows system call entry or exit to be traced as ftrace events, i.e. sys_enter_*/sys_exit_*, if CONFIG_FTRACE_SYSCALLS is enabled. Those events appear under, and can be controlled from, ${sysfs}/tracing/events/syscalls/. Please note that we can't trace compat system calls here because AArch32 mode does not share the same syscall table with AArch64. Just define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS in order to avoid unexpected results (bogus syscalls reported or even hang-up). Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
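[Editor's sketch] The opt-out mechanism works roughly as sketched below; arch_trace_is_compat_syscall() is the generic ftrace hook that gets consulted, and the body shown here is an illustrative assumption rather than a quote of the patch:

    /* Sketch (asm/ftrace.h): hide compat (AArch32) syscalls from the AArch64
     * syscall tracepoints, since they index a different syscall table. */
    #define ARCH_TRACE_IGNORE_COMPAT_SYSCALLS

    static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
    {
            return is_compat_task();
    }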
2014-05-29  arm64: ftrace: Add CALLER_ADDRx macros  (AKASHI Takahiro)
CALLER_ADDRx returns the caller's address at the specified level in the call stack. These macros are used by several tracers, such as irqsoff and preemptoff. Note, however, that they are referenced even when FTRACE is not enabled. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
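[Editor's sketch] In outline (not the literal patch), the macros resolve to a frame-pointer walk; return_address() here stands in for an arch-provided stack walker and is an assumption of this sketch:

    /* Sketch: CALLER_ADDRn is the return address n levels up the call stack. */
    extern void *return_address(unsigned int level);    /* walks frame records */

    #define ftrace_return_address(n)  return_address(n)

    #define CALLER_ADDR0  ((unsigned long)__builtin_return_address(0))
    #define CALLER_ADDR1  ((unsigned long)ftrace_return_address(1))
    #define CALLER_ADDR2  ((unsigned long)ftrace_return_address(2))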
2014-05-29  arm64: ftrace: Add dynamic ftrace support  (AKASHI Takahiro)
This patch allows "dynamic ftrace" if CONFIG_DYNAMIC_FTRACE is enabled. With it, tracing can be turned on and off dynamically on a per-function basis. On arm64, this is done by patching the single branch instruction to _mcount() inserted by gcc's -pg option. The branch is replaced with a NOP at kernel start up, and later patched back to a branch to ftrace_caller() when tracing is enabled, or back to a NOP when it is disabled. Please note that ftrace_caller() is the counterpart of _mcount() in the case of 'static' ftrace. More details on architecture specific requirements are described in Documentation/trace/ftrace-design.txt. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-05-29  arm64: Add ftrace support  (AKASHI Takahiro)
This patch implements the arm64-specific part needed to support function tracers, such as function (CONFIG_FUNCTION_TRACER), function_graph (CONFIG_FUNCTION_GRAPH_TRACER) and function profiler (CONFIG_FUNCTION_PROFILER). With the 'function' tracer, all the functions in the kernel are traced with timestamps in ${sysfs}/tracing/trace. If the function_graph tracer is specified, a call graph is generated. The kernel must be compiled with the -pg option so that _mcount() is inserted at the beginning of functions. This function is called on every function's entry as long as tracing is enabled. In addition, the function_graph tracer also needs to be able to probe a function's exit. ftrace_graph_caller() & return_to_handler do this by faking the link register's value to intercept the function's return path. More details on architecture specific requirements are described in Documentation/trace/ftrace-design.txt. Reviewed-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-05-29  arm64: Add 'notrace' attribute to unwind_frame() for ftrace  (AKASHI Takahiro)
walk_stackframe() calls unwind_frame(), and if walk_stackframe() is "notrace", unwind_frame() should also be "notrace". Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
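[Editor's sketch] In other words (an illustrative sketch of the rule, not the patch itself), the unwinder must not be a traceable call site:

    #include <asm/stacktrace.h>

    /* Sketch: notrace keeps the tracer from instrumenting the unwinder,
     * which would otherwise recurse back into tracing. */
    int notrace unwind_frame(struct stackframe *frame)
    {
            /* ... load the next fp/pc pair from the saved frame record ... */
            return 0;
    }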
2014-05-23  arm64: Fix linker script entry point  (Geoff Levand)
Change the arm64 linker script ENTRY() command to define _text as the kernel entry point. The arm64 boot protocol specifies that the kernel must be entered at the beginning of the kernel image. The existing ENTRY() command defined the symbol stext as the entry point, which emitted an incorrect entry point, but would not cause a runtime error because the existing entry code immediately jumps to stext. Signed-off-by: Geoff Levand <geoff@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-23  arm64: efi: only attempt efi map setup if booting via EFI  (Leif Lindholm)
Booting a kernel with CONFIG_EFI enabled on a non-EFI system caused an oops with the current UEFI support code. Add the required test to prevent this. Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
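[Editor's sketch] The guard amounts to testing the EFI_BOOT flag before touching any UEFI state; the function name below is an assumption used for illustration:

    #include <linux/efi.h>

    /* Sketch: skip UEFI memmap/runtime setup when not booted via UEFI. */
    static int __init arm64_enter_virtual_mode(void)
    {
            if (!efi_enabled(EFI_BOOT)) {
                    pr_info("EFI services will not be available.\n");
                    return -1;
            }
            /* ... safe to parse the UEFI memory map and set up runtime calls ... */
            return 0;
    }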
2014-05-23  arm64: lib: Implement optimized string length routines  (zhichang.yuan)
This patch, based on Linaro's Cortex Strings library, adds assembly-optimized strlen() and strnlen() functions. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-23  arm64: lib: Implement optimized string compare routines  (zhichang.yuan)
This patch, based on Linaro's Cortex Strings library, adds assembly-optimized strcmp() and strncmp() functions. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-23  arm64: lib: Implement optimized memcmp routine  (zhichang.yuan)
This patch, based on Linaro's Cortex Strings library, adds an assembly optimized memcmp() function. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-16  arm64: Fix deadlock scenario with smp_send_stop()  (Arun KS)
If one process calls sys_reboot and that process then stops other CPUs while those CPUs are within a spin_lock() region, we can potentially encounter a deadlock scenario like the one below.

       CPU 0                       CPU 1
       -----                       -----
                                   spin_lock(my_lock)
       smp_send_stop()
        <send IPI>                 handle_IPI()
                                    disable_preemption/irqs
                                     while(1);
       <PREEMPT>
       spin_lock(my_lock) <--- Waits forever

We shouldn't attempt to run any other tasks after we send a stop IPI to a CPU, so disable preemption so that this task runs to completion. We use local_irq_disable() here for cross-arch consistency with x86. Based-on-work-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Arun KS <getarunks@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
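[Editor's sketch] The fix therefore boils down to cutting off preemption and interrupts before the stop IPI goes out; a sketch of the reboot path with that ordering (illustrative, not the verbatim patch):

    #include <linux/smp.h>
    #include <linux/irqflags.h>

    void machine_restart(char *cmd)
    {
            /* Nothing else may run on this CPU once the stop IPI is sent,
             * otherwise a preempting task can block on a lock held by a
             * CPU that is now spinning with IRQs off. */
            local_irq_disable();
            smp_send_stop();

            /* ... hand over to firmware/PSCI to perform the reset ... */
    }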
2014-05-16  arm64: Fix machine_shutdown() definition  (Arun KS)
This patch ports most of commit 19ab428f4b79 "ARM: 7759/1: decouple CPU offlining from reboot/shutdown" by Stephen Warren from arch/arm to arch/arm64. machine_shutdown() is a hook for kexec. Add a comment saying so, since it isn't obvious from the function name. Halt, power-off, and restart have different requirements regarding stopping secondary CPUs than kexec has. The former simply require the secondary CPUs to be quiesced somehow, whereas kexec requires them to be completely non-operational, so that no matter where the kexec target images are written in RAM, they won't influence operation of the secondary CPUs, which could happen if the CPUs were still executing some kind of pin loop. To this end, modify machine_halt, power_off, and restart to call smp_send_stop() directly, rather than calling machine_shutdown(). In machine_shutdown(), replace the call to smp_send_stop() with a call to disable_nonboot_cpus(). This completely disables all but one CPU, thus satisfying the kexec requirements a couple paragraphs above. Signed-off-by: Arun KS <getarunks@gmail.com> Acked-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-16  arm64: Support arch_irq_work_raise() via self IPIs  (Larry Bassel)
Support for arch_irq_work_raise() was missing from arm64 (a prerequisite for FULL_NOHZ). This patch is based on the arm32 patch ARM 7872/1:

  commit bf18525fd793101df42a1344ecc48b49b62e48c9
  Author: Stephen Boyd <sboyd@codeaurora.org>
  Date:   Tue Oct 29 20:32:56 2013 +0100

      ARM: 7872/1: Support arch_irq_work_raise() via self IPIs

      By default, IRQ work is run from the tick interrupt (see
      irq_work_run() in update_process_times()). When we're in full NOHZ
      mode, restarting the tick requires the use of IRQ work and if the
      only place we run IRQ work is in the tick interrupt we have an
      unbreakable cycle. Implement arch_irq_work_raise() via self IPIs to
      break this cycle and get the tick started again. Note that we
      implement this via IPIs which are only available on SMP builds.
      This shouldn't be a problem because full NOHZ is only supported on
      SMP builds anyway.

      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Reviewed-by: Kevin Hilman <khilman@linaro.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
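[Editor's sketch] The arm64 side reduces to raising an IPI targeted at the current CPU; the smp_cross_call()/IPI_IRQ_WORK names below reflect the arm64 IPI plumbing and are shown as assumptions of this sketch:

    #include <linux/smp.h>

    #ifdef CONFIG_IRQ_WORK
    /* Sketch: kick ourselves so pending irq_work runs from the IPI handler
     * even when the tick is stopped. */
    void arch_irq_work_raise(void)
    {
            smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
    }
    #endif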
2014-05-16  arm64: topology: Add support for topology DT bindings  (Mark Brown)
Add support for parsing the explicit topology bindings to discover the topology of the system. Since it is not currently clear how to map multi-level clusters for the scheduler, all leaf clusters are presented to the scheduler at the same level. This should be enough to provide good support for current systems. Signed-off-by: Mark Brown <broonie@linaro.org> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-16  arm64: topology: Initialise default topology state immediately  (Mark Brown)
As a legacy of the way 32-bit ARM did things, the topology code uses a null topology map by default and then overwrites it later by mapping cores with no information to a cluster by themselves. In order to make it simpler to reset things as part of recovering from parse failures in firmware information, set this default configuration directly on init. A core will always be its own sibling, so there should be no risk of confusion with firmware-provided information. Signed-off-by: Mark Brown <broonie@linaro.org> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
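[Editor's sketch] Concretely, the default state this describes amounts to a reset loop along these lines (the field and array names follow the arm64 topology code but are assumptions here):

    #include <linux/cpumask.h>

    static void __init reset_cpu_topology(void)
    {
            unsigned int cpu;

            for_each_possible_cpu(cpu) {
                    struct cpu_topology *cpu_topo = &cpu_topology[cpu];

                    cpu_topo->thread_id = -1;
                    cpu_topo->core_id = 0;
                    cpu_topo->cluster_id = -1;

                    /* a core is always its own sibling */
                    cpumask_clear(&cpu_topo->core_sibling);
                    cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
                    cpumask_clear(&cpu_topo->thread_sibling);
                    cpumask_set_cpu(cpu, &cpu_topo->thread_sibling);
            }
    }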
2014-05-16  Merge tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm into upstream  (Catalin Marinas)
FPSIMD register bank context switching and crypto algorithms optimisations for arm64 from Ard Biesheuvel.
* tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm:
  arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions
  arm64: pull in <asm/simd.h> from asm-generic
  arm64/crypto: AES in CCM mode using ARMv8 Crypto Extensions
  arm64/crypto: AES using ARMv8 Crypto Extensions
  arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions
  arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions
  arm64/crypto: SHA-1 using ARMv8 Crypto Extensions
  arm64: add support for kernel mode NEON in interrupt context
  arm64: defer reloading a task's FPSIMD state to userland resume
  arm64: add abstractions for FPSIMD state manipulation
  asm-generic: allow generic unaligned access if the arch supports it
Conflicts:
  arch/arm64/include/asm/thread_info.h
2014-05-15  ARM: Check if a CPU has gone offline  (Ashwin Chaugule)
PSCIv0.2 adds a new function called AFFINITY_INFO, which can be used to query if a specified CPU has actually gone offline. Calling this function via cpu_kill ensures that a CPU has quiesced after a call to cpu_die. This helps prevent the CPU from doing arbitrary bad things when data or instructions are clobbered (as happens with kexec) in the window between a CPU announcing that it is dead and said CPU leaving the kernel. Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Rob Herring <robh@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-15  PSCI: Add initial support for PSCIv0.2 functions  (Ashwin Chaugule)
The PSCIv0.2 spec defines standard values of function IDs and introduces a few new functions. Detect version of PSCI and appropriately select the right PSCI functions. Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org> Reviewed-by: Rob Herring <robh@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-12  arm64: is_compat_task is defined both in asm/compat.h and linux/compat.h  (AKASHI Takahiro)
Some kernel files may include both linux/compat.h and asm/compat.h directly or indirectly. Since both header files contain is_compat_task() under !CONFIG_COMPAT, compiling them with !CONFIG_COMPAT will eventually fail. Such files include kernel/auditsc.c, kernel/seccomp.c and init/do_mounts.c (do_mounts.c may read asm/compat.h via asm/ftrace.h once ftrace is implemented). So this patch proactively
1) removes is_compat_task() under !CONFIG_COMPAT from asm/compat.h, and
2) replaces asm/compat.h with linux/compat.h in kernel/*.c; asm/compat.h is still necessary in ptrace.c and process.c because they use is_compat_thread().
Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
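[Editor's sketch] The resulting split looks roughly like this (a sketch; the TIF_32BIT test is the arm64 convention and is an assumption here):

    /* arch/arm64/include/asm/compat.h -- only the CONFIG_COMPAT variant stays */
    #ifdef CONFIG_COMPAT
    static inline int is_compat_task(void)
    {
            return test_thread_flag(TIF_32BIT);
    }
    #endif

    /* !CONFIG_COMPAT callers now get the generic stub from <linux/compat.h>,
     * which simply returns 0. */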
2014-05-12  arm64: split syscall_trace() into separate functions for enter/exit  (AKASHI Takahiro)
As done in arm, this change makes it easy to confirm that we invoke syscall-related hooks, including the syscall tracepoint, audit and seccomp (which would be implemented later), in the correct order; that is, undoing operations on exit in the opposite order to the one in which they were done on entry. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
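[Editor's sketch] The shape of the split is roughly as follows (illustrative only; the tracehook calls are the generic ptrace hooks, and the other hooks mentioned above would slot in around them):

    #include <linux/tracehook.h>

    asmlinkage int syscall_trace_enter(struct pt_regs *regs)
    {
            /* entry-side hooks run first: ptrace, then (later) seccomp,
             * audit and the syscall tracepoint */
            if (test_thread_flag(TIF_SYSCALL_TRACE))
                    tracehook_report_syscall_entry(regs);

            return regs->syscallno;
    }

    asmlinkage void syscall_trace_exit(struct pt_regs *regs)
    {
            /* ... and are undone in the opposite order on the way out */
            if (test_thread_flag(TIF_SYSCALL_TRACE))
                    tracehook_report_syscall_exit(regs, 0);
    }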
2014-05-12  arm64: make a single hook to syscall_trace() for all syscall features  (AKASHI Takahiro)
Currently syscall_trace() is called only for ptrace. With additional TIF_xx flags defined, it is now called in all the cases of audit, ftrace and seccomp in addition to ptrace. Acked-by: Richard Guy Briggs <rgb@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-12  arm64: debug: avoid accessing mdscr_el1 on fault paths where possible  (Will Deacon)
Since mdscr_el1 is part of the debug register group, it is highly likely to be trapped by a hypervisor to prevent virtual machines from debugging (buggering?) each other. Unfortunately, this absolutely destroys our performance, since we access the register on many of our low-level fault handling paths to keep track of the various debug state machines. This patch removes our dependency on mdscr_el1 in the case that debugging is not being used. More specifically we:
- Use TIF_SINGLESTEP to indicate that a task is stepping at EL0 and avoid disabling step in the MDSCR when we don't need to. MDSCR_EL1.SS handling is moved to kernel_entry, when trapping from userspace.
- Ensure debug exceptions are re-enabled on *all* exception entry paths, even the debug exception handling path (where we re-enable exceptions after invoking the handler). Since we can now rely on MDSCR_EL1.SS being cleared by the entry code, exception handlers can usually enable debug immediately before enabling interrupts.
- Remove all debug exception unmasking from ret_to_user and el1_preempt, since we will never get here with debug exceptions masked.
This results in a slight change to kernel debug behaviour, where we now step into interrupt handlers and data aborts from EL1 when debugging the kernel, which is actually a useful thing to do. A side-effect of this is that it *does* potentially prevent stepping off {break,watch}points when there is a high-frequency interrupt source (e.g. a timer), so a debugger would need to use either breakpoints or manually disable interrupts to get around this issue. With this patch applied, guest performance is restored under KVM when debug register accesses are trapped (and we get a measurable performance increase on the host on Cortex-A57 too). Cc: Ian Campbell <ian.campbell@citrix.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-12  arm64: use cpu_online_mask when using forced irq_set_affinity  (Sudeep Holla)
Commit 01f8fa4f01d8 ("genirq: Allow forcing cpu affinity of interrupts") enabled the forced irq_set_affinity which previously refused to route an interrupt to an offline cpu. Commit ffde1de64012 ("irqchip: Gic: Support forced affinity setting") implements this force logic and disables the cpu online check for the GIC interrupt controller. When __cpu_disable calls migrate_irqs, it disables the current cpu in cpu_online_mask and uses forced irq_set_affinity to migrate the IRQs away from the cpu, but passes an affinity mask which still includes the cpu being offlined. When calling irq_set_affinity with force == true in a cpu hotplug path, the caller must ensure that the cpu being offlined is not present in the affinity mask, or it may be selected as the target CPU, leading to the interrupt not being migrated. This patch uses cpu_online_mask when using forced irq_set_affinity so that the IRQs are properly migrated away. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
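[Editor's sketch] A stripped-down sketch of the migration step after the fix (illustrative only; the real migrate_irqs()/migrate_one_irq() path does more bookkeeping):

    #include <linux/irq.h>

    static void migrate_one_irq(struct irq_desc *desc)
    {
            struct irq_data *d = irq_desc_get_irq_data(desc);
            struct irq_chip *c = irq_data_get_irq_chip(d);

            /*
             * With force == true the core no longer filters out offline CPUs,
             * so the mask passed down must already exclude the CPU being
             * unplugged -- cpu_online_mask does exactly that at this point.
             */
            if (c->irq_set_affinity)
                    c->irq_set_affinity(d, cpu_online_mask, true);
    }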
2014-05-09  arm64: head: fix cache flushing and barriers in set_cpu_boot_mode_flag  (Will Deacon)
set_cpu_boot_mode_flag is used to identify which exception levels are encountered across the system by CPUs trying to enter the kernel. The basic algorithm is: if a CPU is booting at EL2, it will set a flag at an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable. Otherwise, a flag is set at an offset of zero into the same cacheline. This enables us to check that all CPUs booted at the same exception level. This cacheline is written with the stage-1 MMU off (that is, via a strongly-ordered mapping) and will bypass any clean lines in the cache, leading to potential coherence problems when the variable is later checked via the normal, cacheable mapping of the kernel image. This patch reworks the broken flushing code so that we:
(1) Use a DMB to order the strongly-ordered write of the cacheline against the subsequent cache-maintenance operation (by-VA operations only hazard against normal, cacheable accesses).
(2) Use a single dc ivac instruction to invalidate any clean lines containing a stale copy of the line after it has been updated.
Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: barriers: make use of barrier options with explicit barriers  (Will Deacon)
When calling our low-level barrier macros directly, we can often make do with more relaxed behaviour than the default "all accesses, full system" option. This patch updates the users of dsb() to specify the option which they actually require. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: Clean up the default pgprot setting  (Catalin Marinas)
The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, it is unlikely to see UP ARMv8 hardware and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses. A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it since Device accesses are always Outer Shareable. Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
2014-05-09  arm64: Expose ESR_EL1 information to user when SIGSEGV/SIGBUS  (Catalin Marinas)
This information is useful for instruction emulators to detect read/write and access size without having to decode the faulting instruction. The current patch exports it via sigcontext (struct esr_context) and is only valid for SIGSEGV and SIGBUS. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
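[Editor's sketch] The record delivered on the signal stack is small; roughly (the struct name comes from the commit message, the exact layout shown here is a sketch rather than a quote of the uapi header):

    /* Sketch: an ESR record appended to the rt signal frame. */
    struct esr_context {
            struct _aarch64_ctx head;   /* .magic = ESR_MAGIC, .size = sizeof(struct esr_context) */
            __u64 esr;                  /* ESR_EL1 value captured at fault time */
    };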
2014-05-09  arm64: Remove the aux_context structure  (Catalin Marinas)
This patch removes the aux_context structure (and the containing file) to allow the placement of the _aarch64_ctx end magic based on the context stored on the signal stack. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: Provide read/write fault information in compat signal handlers  (Catalin Marinas)
For AArch32, bit 11 (WnR) of the FSR/ESR register is set when the fault was caused by a write access and applications like Qemu rely on such information being provided in sigcontext. This patch introduces the ESR_EL1 tracking for the arm64 kernel faults and sets bit 11 accordingly in compat sigcontext. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: Remove boot thread synchronisation for spin-table release method  (Catalin Marinas)
The synchronisation with the boot thread already happens in __cpu_up() via wait_for_completion_timeout(). In addition, __cpu_up() calls are protected by the cpu_add_remove_lock mutex and already serialised. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-05-09  arm64: Implement cache_line_size() based on CTR_EL0.CWG  (Catalin Marinas)
The hardware provides the maximum cache line size in the system via the CTR_EL0.CWG bits. This patch implements the cache_line_size() function to read such information, together with a sanity check if the statically defined L1_CACHE_BYTES is smaller than the hardware value. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
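[Editor's sketch] The computation this describes is compact; a sketch (CWG sits in CTR_EL0 bits [27:24], and the fallback behaviour matches the description above):

    #include <asm/cputype.h>

    /* Sketch: CTR_EL0.CWG encodes the maximum cache writeback granule as
     * 4 << CWG bytes; a value of 0 means "not reported". */
    static inline u32 cache_type_cwg(void)
    {
            return (read_cpuid_cachetype() >> 24) & 0xf;
    }

    int cache_line_size(void)
    {
            u32 cwg = cache_type_cwg();

            /* fall back to the build-time constant when CWG is not reported */
            return cwg ? 4 << cwg : L1_CACHE_BYTES;
    }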
2014-05-08  arm64: add support for kernel mode NEON in interrupt context  (Ard Biesheuvel)
This patch modifies kernel_neon_begin() and kernel_neon_end(), so they may be called from any context. To address the case where only a couple of registers are needed, kernel_neon_begin_partial(u32) is introduced which takes as a parameter the number of bottom 'n' NEON q-registers required. To mark the end of such a partial section, the regular kernel_neon_end() should be used. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
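[Editor's sketch] Typical usage of the partial variant might look like this (a hypothetical caller; only the kernel_neon_begin_partial()/kernel_neon_end() pairing is the point):

    #include <asm/neon.h>

    /* Sketch: a caller that only clobbers q0-q3 preserves just those four
     * registers, keeping the begin/end pair cheap enough for IRQ context. */
    static void small_neon_helper(void *dst, const void *src, size_t len)
    {
            kernel_neon_begin_partial(4);
            /* ... NEON inline asm or intrinsics touching q0-q3 only ... */
            kernel_neon_end();
    }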
2014-05-08  arm64: defer reloading a task's FPSIMD state to userland resume  (Ard Biesheuvel)
If a task gets scheduled out and back in again and nothing has touched its FPSIMD state in the mean time, there is really no reason to reload it from memory. Similarly, repeated calls to kernel_neon_begin() and kernel_neon_end() will preserve and restore the FPSIMD state every time. This patch defers the FPSIMD state restore to the last possible moment, i.e., right before the task returns to userland. If a task does not return to userland at all (for any reason), the existing FPSIMD state is preserved and may be reused by the owning task if it gets scheduled in again on the same CPU. This patch adds two more functions to abstract away from straight FPSIMD register file saves and restores:
- fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
- fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2014-05-08  arm64: add abstractions for FPSIMD state manipulation  (Ard Biesheuvel)
There are two tacit assumptions in the FPSIMD handling code that will no longer hold after the next patch that optimizes away some FPSIMD state restores:
- the FPSIMD registers of this CPU contain the userland FPSIMD state of task 'current';
- when switching to a task, its FPSIMD state will always be restored from memory.
This patch adds the following functions to abstract away from straight FPSIMD register file saves and restores:
- fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
- fpsimd_update_current_state -> replace current's FPSIMD state
Where necessary, the signal handling and fork code are updated to use the above wrappers instead of poking into the FPSIMD registers directly. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2014-05-03  arm64: Use bus notifiers to set per-device coherent DMA ops  (Catalin Marinas)
Recently, the default DMA ops have been changed to non-coherent for alignment with 32-bit ARM platforms (and DT files). This patch adds bus notifiers to be able to set the coherent DMA ops (with no cache maintenance) for devices explicitly marked as coherent via the "dma-coherent" DT property. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
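[Editor's sketch] The mechanism this describes is a bus notifier that inspects the DT node at device-add time; a sketch, under the assumption that a coherent_swiotlb_dma_ops structure is what gets installed:

    #include <linux/of.h>
    #include <linux/dma-mapping.h>

    /* Sketch: pick coherent DMA ops for devices marked "dma-coherent" in DT. */
    static int dma_bus_notifier(struct notifier_block *nb,
                                unsigned long event, void *_dev)
    {
            struct device *dev = _dev;

            if (event != BUS_NOTIFY_ADD_DEVICE)
                    return NOTIFY_DONE;

            if (of_property_read_bool(dev->of_node, "dma-coherent"))
                    set_dma_ops(dev, &coherent_swiotlb_dma_ops);

            return NOTIFY_OK;
    }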
2014-05-03  arm64: fixmap: fix missing sub-page offset for earlyprintk  (Marc Zyngier)
Commit d57c33c5daa4 (add generic fixmap.h) added (among other similar things) set_fixmap_io to deal with early ioremap of devices. More recently, commit bf4b558eba92 (arm64: add early_ioremap support) converted the arm64 earlyprintk to use set_fixmap_io. A side effect of this conversion is that my virtual machines have stopped booting when I pass "earlyprintk=uart8250-8bit,0x3f8" to the guest kernel. Turns out that the new earlyprintk code doesn't care at all about sub-page offsets, and just assumes that the earlyprintk device will be page-aligned. Obviously, that doesn't play well with the above example. Further investigation shows that set_fixmap_io uses __set_fixmap instead of __set_fixmap_offset. A fix is to introduce a set_fixmap_offset_io that uses the latter, and to remove the superfluous call to fix_to_virt (which only returns the value that set_fixmap_io has already given us). With this applied, my VMs are back in business. Tested on a Cortex-A57 platform with kvmtool as platform emulation. Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Salter <msalter@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-30  arm64: efi: add EFI stub  (Mark Salter)
This patch adds PE/COFF header fields to the start of the kernel Image so that it appears as an EFI application to UEFI firmware. An EFI stub is included to allow direct booting of the kernel Image. Signed-off-by: Mark Salter <msalter@redhat.com> [Add support in PE/COFF header for signed images] Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-04-30  arm64: add EFI runtime services  (Mark Salter)
This patch adds EFI runtime support for arm64. This runtime support allows the kernel to access various EFI runtime services provided by EFI firmware. Things like reboot, real time clock, EFI boot variables, and others. This functionality is supported for little endian kernels only. The UEFI firmware standard specifies that the firmware be little endian. A future patch is expected to add support for big endian kernels running with little endian firmware. Signed-off-by: Mark Salter <msalter@redhat.com> [ Remove unnecessary cache/tlb maintenance. ] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-04-27  Merge 3.15-rc3 into tty-next  (Greg Kroah-Hartman)