|
ARMv6 and greater introduced a new instruction ("bx") which can be used
to return from function calls. Recent CPUs perform better when the
"bx lr" instruction is used rather than "mov pc, lr", and the ARM
architecture manual strongly recommends its use (section A.4.1.1).
We provide a new macro "ret", with variants for each condition code,
which resolves to the appropriate instruction.
Rather than doing this piecemeal and missing some instances, change all
the "mov pc" instances to use the new macro, with the exception of
the "movs" instruction and the kprobes code. This allows us to detect
the "mov pc, lr" case and fix it up, and also gives us the possibility
of deploying this for other registers depending on the CPU selection.
Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
|
|
The Undef abort handler in the kernel reads the undefined instruction
from user space. If the page table was modified by another CPU, the
user access could fail and do_page_fault() would be executed with
interrupts disabled. This can potentially deadlock on ARM11MPCore or on
Cortex-A15 with erratum 798181 workaround enabled (both implying IPI for
TLB maintenance with page table lock held).
This patch enables the IRQs in __und_usr before attempting to read the
instruction from user space.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Cc: Hartley Sweeten <hsweeten@visionengravers.com>
Cc: Ryan Mallon <rmallon@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The emulation for single and double precision multiply accumulate
instructions correctly normalised any denormal values in the operand
registers, but failed to normalise the destination (accumulator)
register.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=70501
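For illustration, the kind of change this implies for the single-precision
case might look like the following sketch, using the unpack/normalise
helpers found in arch/arm/vfp/vfpsingle.c (an approximation of the idea,
not the literal patch):

	/*
	 * Inside vfp_single_multiply_accumulate(): unpack the accumulator
	 * operand and, as is already done for the multiplicands, normalise
	 * it if it is a denormal before using it.
	 */
	v = vfp_get_float(sd);
	vfp_single_unpack(&vsn, v);
	if (vsn.exponent == 0 && vsn.significand)
		vfp_single_normalise_denormal(&vsn);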
Signed-off-by: Jay Foad <jay.foad@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The patch adds asm macros for inc_preempt_count and dec_preempt_count_ti
(which also gets the current thread_info) instead of open-coding them in
arch/arm/vfp/*.S files.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
asm/assembler.h is a better place for this macro, since it is used by
asm files outside arch/arm/kernel/.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The CPU_DYING notifier is called by the cpu stopper task, which
does not own the context held in the VFP hardware, so calling
vfp_force_reload() has no effect.
Replace it by clearing vfp_current_hw_state instead.
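A sketch of how the hotplug notifier ends up looking after this change
(hedged: based on the description above; the CPU_STARTING branch already
existed to re-enable VFP access):

	static int vfp_hotplug(struct notifier_block *b, unsigned long action,
			       void *hcpu)
	{
		if (action == CPU_DYING || action == CPU_DYING_FROZEN)
			/* The dying CPU cannot own any thread's VFP context. */
			vfp_current_hw_state[(long)hcpu] = NULL;
		else if (action == CPU_STARTING || action == CPU_STARTING_FROZEN)
			vfp_enable(NULL);
		return NOTIFY_OK;
	}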
Signed-off-by: Yuanyuan Zhong <zyy@motorola.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
In order to safely support the use of NEON instructions in
kernel mode, some precautions need to be taken:
- the userland context that may be present in the registers (even
if the NEON/VFP is currently disabled) must be stored under the
correct task (which may not be 'current' in the UP case),
- to avoid having to keep track of additional vfpstates for the
kernel side, disallow the use of NEON in interrupt context
and run with preemption disabled,
- after use, re-enable preemption and re-enable the lazy restore
machinery by disabling the NEON/VFP unit.
This patch adds the functions kernel_neon_begin() and
kernel_neon_end() which take care of the above. It also adds
the Kconfig symbol KERNEL_MODE_NEON to enable it.
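A minimal usage sketch of the new interface (do_neon_xor() and
do_scalar_xor() are hypothetical caller routines, not part of the patch):

	#include <linux/hardirq.h>
	#include <asm/neon.h>

	extern void do_neon_xor(unsigned long *dst, unsigned long *src,
				unsigned int words);
	extern void do_scalar_xor(unsigned long *dst, unsigned long *src,
				  unsigned int words);

	void xor_blocks_example(unsigned long *dst, unsigned long *src,
				unsigned int words)
	{
		if (in_interrupt()) {
			/* NEON may not be used in interrupt context. */
			do_scalar_xor(dst, src, words);
			return;
		}

		kernel_neon_begin();	/* saves user NEON/VFP state, disables preemption */
		do_neon_xor(dst, src, words);
		kernel_neon_end();	/* disables NEON/VFP again, re-enables preemption */
	}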
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
|
|
The support code in vfp_support_entry does not care whether the
exception that caused it to be invoked occurred in kernel mode or
in user mode. However, neither condition that could trigger this
exception (lazy restore and VFP bounce to support code) is
currently allowable in kernel mode.
In either case, print a message describing the condition before
letting the undefined instruction handler run its course and trigger
an oops.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
|
|
In order to use the NEON unit in the kernel, we should
initialize it a bit earlier in the boot process so NEON users
that like to do a quick benchmark at load time (like the
xor_blocks or RAID-6 code) find the NEON/VFP unit already
enabled.
Replace late_initcall() with core_initcall().
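In vfpmodule.c this amounts to registering the existing vfp_init()
entry point at an earlier initcall level, roughly:

	core_initcall(vfp_init);	/* was: late_initcall(vfp_init); */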
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
|
|
Commit d3f79584a8b5 ("ARM: cleanup undefined instruction entry code")
improved the register scheduling when handling undefined instructions.
A side effect of this is that r5 is now used as a temporary, whilst the
VFP probing code relies on r5 containing a non-zero value when VFP is
not supported.
This patch fixes the VFP detection code so that we don't rely on the
contents of r5. Without this patch, Linux dies loudly on CPUs without
VFP support.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Commit 0cc41e4a21d43 (arch: remove direct definitions of KERN_<LEVEL>
uses) is broken - not enough thought was put into changing:
.asciz "string"
to
.asciz "string1" "string2"
The problem is that each string gets _separately_ NUL terminated, so
the result is a string containing:
"string1\0string2\0"
rather than:
"string1string2\0"
With our new printk levels, this ends up as (e.g. for KERN_DEBUG "string"):
0x01 0x00 0x07 0x00 "string" 0x00
which produces lots of \x01 in the kernel log.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Martin Storsjö reports that the sequence:
ee312ac1 vsub.f32 s4, s3, s2
ee702ac0 vsub.f32 s5, s1, s0
e59f0028 ldr r0, [pc, #40]
ee111a90 vmov r1, s3
on Raspberry Pi (implementor 41 architecture 1 part 20 variant b rev 5)
where s3 is a denormal and s2 is zero results in incorrect behaviour -
the instruction "vsub.f32 s5, s1, s0" is not executed:
VFP: bounce: trigger ee111a90 fpexc d0000780
VFP: emulate: INST=0xee312ac1 SCR=0x00000000
...
As we can see, the instruction triggering the exception is the "vmov"
instruction, and we emulate the "vsub.f32 s4, s3, s2" but fail to
properly take account of the FPEXC_FP2V flag in FPEXC. This is because
the test for the second instruction register being valid is bogus, and
will always skip emulation of the second instruction.
Cc: <stable@vger.kernel.org>
Reported-by: Martin Storsjö <martin@martin.st>
Tested-by: Martin Storsjö <martin@martin.st>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Patrik Kluba reports that the preempt count becomes invalid due
to the preempt_enable() call being unbalanced with a
preempt_disable() call in the vfp assembly routines. This happens
because preempt_enable() and preempt_disable() update preempt
counts under PREEMPT_COUNT=y but the vfp assembly routines do so
under PREEMPT=y. In a configuration where PREEMPT=n and
DEBUG_ATOMIC_SLEEP=y, PREEMPT_COUNT=y and so the preempt_enable()
call in VFP_bounce() keeps subtracting from the preempt count
until it goes negative.
Fix this by always using PREEMPT_COUNT to decide when to update
preempt counts in the ARM assembly code.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Reported-by: Patrik Kluba <pkluba@dension.com>
Tested-by: Patrik Kluba <pkluba@dension.com>
Cc: <stable@vger.kernel.org> # 2.6.30
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
After commit 846a136881b8f73c1f74250bf6acfaa309cab1f2 ("ARM: vfp: fix
saving d16-d31 vfp registers on v6+ kernels"), the OMAP 2430SDP board
started crashing during boot with omap2plus_defconfig:
[ 3.875122] mmcblk0: mmc0:e624 SD04G 3.69 GiB
[ 3.915954] mmcblk0: p1
[ 4.086639] Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
[ 4.093719] Modules linked in:
[ 4.096954] CPU: 0 Not tainted (3.6.0-02232-g759e00b #570)
[ 4.103149] PC is at vfp_reload_hw+0x1c/0x44
[ 4.107666] LR is at __und_usr_fault_32+0x0/0x8
It turns out that the context save/restore fix unmasked a latent bug
in commit 5aaf254409f8d58229107b59507a8235b715a960 ("ARM: 6203/1: Make
VFPv3 usable on ARMv6"). When CONFIG_VFPv3 is set, but the kernel is
booted on a pre-VFPv3 core, the code attempts to save and restore the
d16-d31 VFP registers. These are only present on non-D16 VFPv3+, so
this results in an undefined instruction exception. The code didn't
crash before commit 846a136 because the save and restore code only
touched d0-d15, which are present on all VFP implementations.
Fix by implementing a request from Russell King to add a new HWCAP
flag that affirmatively indicates the presence of the d16-d31
registers:
http://marc.info/?l=linux-arm-kernel&m=135013547905283&w=2
and some feedback from Måns to clarify the name of the HWCAP flag.
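For illustration, user space can then test the new capability bit via the
ELF auxiliary vector; a minimal sketch, assuming the flag ended up being
named HWCAP_VFPD32 in asm/hwcap.h:

	#include <stdio.h>
	#include <sys/auxv.h>
	#include <asm/hwcap.h>

	int main(void)
	{
		unsigned long hwcaps = getauxval(AT_HWCAP);

		/* Only touch d16-d31 if the kernel says they exist. */
		if (hwcaps & HWCAP_VFPD32)
			printf("d16-d31 register bank present\n");
		else
			printf("D16-only VFP (or no VFP at all)\n");
		return 0;
	}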
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@linaro.org>
Cc: Måns Rullgård <mans.rullgard@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
VFPv4 support depends on the VFPv3 context save/restore code, so only
advertise support in the hwcaps if the kernel can actually handle it.
Cc: <stable@vger.kernel.org> # 3.1+
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Pull ARM fixes from Russell King:
"This fixes various issues found during July"
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7479/1: mm: avoid NULL dereference when flushing gate_vma with VIVT caches
ARM: Fix undefined instruction exception handling
ARM: 7480/1: only call smp_send_stop() on SMP
ARM: 7478/1: errata: extend workaround for erratum #720789
ARM: 7477/1: vfp: Always save VFP state in vfp_pm_suspend on UP
ARM: 7476/1: vfp: only clear vfp state for current cpu in vfp_pm_suspend
ARM: 7468/1: ftrace: Trace function entry before updating index
ARM: 7467/1: mutex: use generic xchg-based implementation for ARMv6+
ARM: 7466/1: disable interrupt before spinning endlessly
ARM: 7465/1: Handle >4GB memory sizes in device tree and mem=size@start option
|
|
While trying to get a v3.5 kernel booted on the cubox, I noticed that
VFP does not work correctly with VFP bounce handling. This is because
of the confusion over 16-bit vs 32-bit instructions, and where PC is
supposed to point to.
The rule is that FP handlers are entered with regs->ARM_pc pointing at
the _next_ instruction to be executed. However, if the exception is
not handled, regs->ARM_pc points at the faulting instruction.
This is easy for ARM mode, because we know that the next and previous
instructions are separated by four bytes. This is not true of Thumb2,
though. However, since all FP instructions in Thumb2 are 32-bit, the
adjustment is still a fixed size.
We just need to select the appropriate adjustment. Do this by moving
the adjustment out of do_undefinstr() into the assembly code, as only
the assembly code knows whether it's dealing with a 32-bit or 16-bit
instruction.
Cc: <stable@vger.kernel.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
vfp_pm_suspend should save the VFP state in suspend after
any lazy context switch. If it only saves when the VFP is enabled,
the state can get lost when, on a UP system:
Thread 1 uses the VFP
Context switch occurs to thread 2, VFP is disabled but the
VFP context is not saved
Thread 2 initiates suspend
vfp_pm_suspend is called with the VFP disabled, and the unsaved
VFP context of Thread 1 in the registers
Modify vfp_pm_suspend to save the VFP context whenever
vfp_current_hw_state is not NULL.
Includes a fix from Ido Yariv <ido@wizery.com>, who pointed out that on
SMP systems, the state pointer can be pointing to a freed task struct if
a task exited on another cpu; this is fixed by using #ifndef CONFIG_SMP
in the new if clause.
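Putting the above together, the suspend path ends up shaped roughly like
this (a hedged sketch rather than the verbatim patch):

	static int vfp_pm_suspend(void)
	{
		struct thread_info *ti = current_thread_info();
		u32 fpexc = fmrx(FPEXC);

		if (fpexc & FPEXC_EN) {
			/* VFP is live: save its owner's state and disable it. */
			vfp_save_state(&ti->vfpstate, fpexc);
			fmxr(FPEXC, fpexc & ~FPEXC_EN);
		} else if (vfp_current_hw_state[ti->cpu]) {
	#ifndef CONFIG_SMP
			/*
			 * On UP the hardware still holds the last owner's state
			 * even though VFP is disabled; enable it briefly to save.
			 * On SMP the pointer may refer to an exited task, so skip.
			 */
			fmxr(FPEXC, fpexc | FPEXC_EN);
			vfp_save_state(vfp_current_hw_state[ti->cpu], fpexc);
			fmxr(FPEXC, fpexc);
	#endif
		}

		/* The hardware context is lost across suspend. */
		vfp_current_hw_state[ti->cpu] = NULL;

		return 0;
	}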
Cc: Barry Song <bs14@csr.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ido Yariv <ido@wizery.com>
Cc: Daniel Drake <dsd@laptop.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
vfp_pm_suspend runs on each cpu, so only clear the hardware state
pointer for the current cpu. This prevents a possible crash if one
cpu clears the hw state pointer when another cpu has already
checked that it is valid.
Cc: stable@vger.kernel.org
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Add #include <linux/kern_levels.h> so that the #define KERN_<LEVEL> macros
don't have to be duplicated.
Signed-off-by: Joe Perches <joe@perches.com>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Kay Sievers <kay@vrfy.org>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Conflicts:
arch/arm/kernel/ptrace.c
|
|
Commit ff9a184c ("ARM: 7400/1: vfp: clear fpscr length and stride bits
on entry to sig handler") flushes the VFP state prior to entering a
signal handler so that a VFP operation inside the handler will trap and
force a restore of ABI-compliant registers. Reflushing and disabling VFP
on the sigreturn path is predicated on the saved thread state indicating
that VFP was used by the handler -- however for SMP platforms this is
only set on context-switch, making the check unreliable and causing VFP
register corruption in userspace since the register values are not
necessarily those restored from the sigframe.
This patch unconditionally flushes the VFP state after a signal handler.
Since we already perform the flush before the handler and the flushing
itself happens lazily, the redundant flush when VFP is not used by the
handler is essentially a nop.
Reported-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The vfp_enable function enables access to the VFP co-processor register
space (cp10 and cp11) on the current CPU and must be called with
preemption disabled. Unfortunately, the vfp_init late initcall does not
disable preemption and can lead to an oops during boot if thread
migration occurs at the wrong time and we end up attempting to access
the FPSID on a CPU with VFP access disabled.
This patch fixes the initcall to call vfp_enable from a non-preemptible
context on each CPU and adds a BUG_ON(preemptible) to ensure that any
similar problems are easily spotted in the future.
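The shape of the fix, as a sketch (the probing work of the real vfp_init()
is folded into a hypothetical vfp_probe() helper here):

	static void vfp_enable(void *unused);	/* enables cp10/cp11 access; BUG_ON(preemptible()) */
	static int vfp_probe(void);		/* hypothetical: the rest of the original init */

	static int __init vfp_init(void)
	{
		/*
		 * Run vfp_enable() on every CPU from a non-preemptible context
		 * instead of calling it directly from this (preemptible) initcall.
		 */
		on_each_cpu(vfp_enable, NULL, 1);

		/* Reading FPSID etc. is now safe wherever this task is scheduled. */
		return vfp_probe();
	}
	late_initcall(vfp_init);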
Cc: stable@vger.kernel.org
Reported-by: Hyungwoo Yang <hwoo.yang@gmail.com>
Signed-off-by: Hyungwoo Yang <hyungwooy@nvidia.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This is mainly to get rid of the "vfp_pm_suspend: saving vfp state"
message flooding the kernel message ring by default.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The ARM PCS mandates that the length and stride bits of the fpscr are
cleared on entry to and return from a public interface. Although signal
handlers run asynchronously with respect to the interrupted function,
the handler itself expects to run as though it has been called like a
normal function.
This patch updates the state mirroring the VFP hardware before entry to
a signal handler so that it adheres to the PCS. Furthermore, we disable
VFP to ensure that we trap on any floating point operation performed by
the signal handler and synchronise the hardware appropriately. A check
is inserted after the signal handler to avoid redundant flushing if VFP
was not used.
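A minimal sketch of the FPSCR fix-up described above (the mask values
follow the architectural FPSCR layout; the helper name is illustrative):

	#include <asm/fpstate.h>

	/* FPSCR vector-mode fields: LEN = bits [18:16], STRIDE = bits [21:20]. */
	#define FPSCR_LENGTH_MASK	(7 << 16)
	#define FPSCR_STRIDE_MASK	(3 << 20)

	/*
	 * The PCS requires LEN and STRIDE to be zero on entry to a public
	 * interface, so clear them in the state that will be live when the
	 * signal handler starts.
	 */
	static void vfp_fpscr_to_pcs(struct vfp_hard_struct *hwstate)
	{
		hwstate->fpscr &= ~(FPSCR_LENGTH_MASK | FPSCR_STRIDE_MASK);
	}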
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The user VFP state must be preserved (subject to ucontext modifications)
across invocation of a signal handler and this is currently handled by
vfp_{preserve,restore}_context in signal.c.
Since this code requires intimate low-level knowledge of the VFP state,
this patch moves it into vfpmodule.c.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Disintegrate asm/system.h for ARM.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Russell King <linux@arm.linux.org.uk>
cc: linux-arm-kernel@lists.infradead.org
|
|
Avoid namespace conflicts with drivers over the CP15 definitions by
moving CP15 related prototypes and definitions to a private header
file.
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Building these files does not reveal a hidden need for
any of these. Since module.h brings in the whole kitchen
sink, it just needlessly adds 30k+ lines to the cpp burden.
There are probably lots more, but ARM files of mach-* and plat-*
don't get coverage via a simple yesconfig build. They will have
to be cleaned up and tested via using their respective configs.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
|
|
http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm
* 'devel-stable' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm: (178 commits)
ARM: 7139/1: fix compilation with CONFIG_ARM_ATAG_DTB_COMPAT and large TEXT_OFFSET
ARM: gic, local timers: use the request_percpu_irq() interface
ARM: gic: consolidate PPI handling
ARM: switch from NO_MACH_MEMORY_H to NEED_MACH_MEMORY_H
ARM: mach-s5p64x0: remove mach/memory.h
ARM: mach-s3c64xx: remove mach/memory.h
ARM: plat-mxc: remove mach/memory.h
ARM: mach-prima2: remove mach/memory.h
ARM: mach-zynq: remove mach/memory.h
ARM: mach-bcmring: remove mach/memory.h
ARM: mach-davinci: remove mach/memory.h
ARM: mach-pxa: remove mach/memory.h
ARM: mach-ixp4xx: remove mach/memory.h
ARM: mach-h720x: remove mach/memory.h
ARM: mach-vt8500: remove mach/memory.h
ARM: mach-s5pc100: remove mach/memory.h
ARM: mach-tegra: remove mach/memory.h
ARM: plat-tcc: remove mach/memory.h
ARM: mach-mmp: remove mach/memory.h
ARM: mach-cns3xxx: remove mach/memory.h
...
Fix up mostly pretty trivial conflicts in:
- arch/arm/Kconfig
- arch/arm/include/asm/localtimer.h
- arch/arm/kernel/Makefile
- arch/arm/mach-shmobile/board-ap4evb.c
- arch/arm/mach-u300/core.c
- arch/arm/mm/dma-mapping.c
- arch/arm/mm/proc-v7.S
- arch/arm/plat-omap/Kconfig
largely due to some CONFIG option renaming (ie CONFIG_PM_SLEEP ->
CONFIG_ARM_CPU_SUSPEND for the arm-specific suspend code etc) and
addition of NEED_MACH_MEMORY_H next to HAVE_IDE.
|
|
Distros are starting to ship with toolchains defaulting to
hardfloat. Using such a compiler to build the kernel fails
in the VFP directory with
arch/arm/vfp/entry.S:1:0: sorry, unimplemented: -mfloat-abi=hard and VFP
Adding -mfloat-abi=soft to the gcc command line fixes this.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
Function vfp_force_reload() clears vfp_current_hw_state, so
update the comment accordingly.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
|
|
When the cpu is powered down in a low power mode, the vfp
registers may be reset.
This patch uses CPU_PM_ENTER and CPU_PM_EXIT notifiers to save
and restore the cpu's vfp registers.
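A hedged sketch of the wiring this describes, reusing the driver's
existing suspend/resume helpers (the notifier body in the actual patch
may differ in detail):

	#include <linux/cpu_pm.h>

	static int vfp_cpu_pm_notifier(struct notifier_block *self,
				       unsigned long cmd, void *v)
	{
		switch (cmd) {
		case CPU_PM_ENTER:
			vfp_pm_suspend();	/* registers may be lost in low power */
			break;
		case CPU_PM_ENTER_FAILED:
		case CPU_PM_EXIT:
			vfp_pm_resume();	/* re-enable access; reload is lazy */
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block vfp_cpu_pm_notifier_block = {
		.notifier_call = vfp_cpu_pm_notifier,
	};

	/* registered during init via cpu_pm_register_notifier(&vfp_cpu_pm_notifier_block) */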
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-and-Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Vishwanath BS <vishwanath.bs@ti.com>
|
|
Conflicts:
arch/arm/kernel/entry-armv.S
|
|
Prevent a preemption event from causing the initialized VFP state to be
overwritten by ensuring that VFP hardware access is disabled prior to
starting initialization. We can then do this safely while still
allowing preemption to occur.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Fix a hole in the VFP thread migration. Let's define two threads.
Thread 1, we'll call 'interesting_thread' which is a thread which is
running on CPU0, using VFP (so vfp_current_hw_state[0] =
&interesting_thread->vfpstate) and gets migrated off to CPU1, where
it continues execution of VFP instructions.
Thread 2, we'll call 'new_cpu0_thread' which is the thread which takes
over on CPU0. This has also been using VFP, and last used VFP on CPU0,
but doesn't use it again.
The following code will be executed twice:
	cpu = thread->cpu;
	/*
	 * On SMP, if VFP is enabled, save the old state in
	 * case the thread migrates to a different CPU. The
	 * restoring is done lazily.
	 */
	if ((fpexc & FPEXC_EN) && vfp_current_hw_state[cpu]) {
		vfp_save_state(vfp_current_hw_state[cpu], fpexc);
		vfp_current_hw_state[cpu]->hard.cpu = cpu;
	}
	/*
	 * Thread migration, just force the reloading of the
	 * state on the new CPU in case the VFP registers
	 * contain stale data.
	 */
	if (thread->vfpstate.hard.cpu != cpu)
		vfp_current_hw_state[cpu] = NULL;
The first execution will be on CPU0 to switch away from 'interesting_thread'.
interesting_thread->cpu will be 0.
So, vfp_current_hw_state[0] points at interesting_thread->vfpstate.
The hardware state will be saved, along with the CPU number (0) that
it was executing on.
'thread' will be 'new_cpu0_thread' with new_cpu0_thread->cpu = 0.
Also, because it was executing on CPU0, new_cpu0_thread->vfpstate.hard.cpu = 0,
and so the thread migration check is not triggered.
This means that vfp_current_hw_state[0] remains pointing at interesting_thread.
The second execution will be on CPU1 to switch _to_ 'interesting_thread'.
So, 'thread' will be 'interesting_thread' and interesting_thread->cpu now
will be 1. The previous thread executing on CPU1 is not relevant to this
so we shall ignore that.
We get to the thread migration check. Here, we discover that
interesting_thread->vfpstate.hard.cpu = 0, yet interesting_thread->cpu is
now 1, indicating thread migration. We set vfp_current_hw_state[1] to
NULL.
So, at this point vfp_current_hw_state[] contains the following:
[0] = &interesting_thread->vfpstate
[1] = NULL
Our interesting thread now executes a VFP instruction, takes a fault
which loads the state into the VFP hardware. Through the assembly code,
we now have:
[0] = &interesting_thread->vfpstate
[1] = &interesting_thread->vfpstate
CPU1 stops due to ptrace (and so saves its VFP state using the thread
switch code above), and CPU0 calls vfp_sync_hwstate():
	if (vfp_current_hw_state[cpu] == &thread->vfpstate) {
		vfp_save_state(&thread->vfpstate, fpexc | FPEXC_EN);
BANG, we corrupt interesting_thread's VFP state by overwriting the
more up-to-date state saved by CPU1 with the old VFP state from CPU0.
Fix this by ensuring that we have sane semantics for the various state
describing variables:
1. vfp_current_hw_state[] points to the current owner of the context
information stored in each CPU's hardware, or NULL if that state
information is invalid.
2. thread->vfpstate.hard.cpu always contains the number of the CPU which
the state was most recently loaded into, or NR_CPUS if no CPU owns
the state.
So, for a particular CPU to be a valid owner of the VFP state for a
particular thread t, two things must be true:
vfp_current_hw_state[cpu] == &t->vfpstate && t->vfpstate.hard.cpu == cpu.
and that is valid from the moment a CPU loads the saved VFP context
into the hardware. This gives clear and consistent semantics to
interpreting these variables.
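Expressed as a predicate (a sketch of the helper this suggests; the name
is illustrative):

	static bool vfp_state_in_hw(unsigned int cpu, struct thread_info *thread)
	{
	#ifdef CONFIG_SMP
		if (thread->vfpstate.hard.cpu != cpu)
			return false;
	#endif
		return vfp_current_hw_state[cpu] == &thread->vfpstate;
	}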
This patch also fixes thread copying, ensuring that t->vfpstate.hard.cpu
is invalidated, otherwise CPU0 may believe it was the last owner. The
hole can happen thus:
- thread1 runs on CPU2 using VFP, migrates to CPU3, exits and thread_info
freed.
- New thread allocated from a previously running thread on CPU2, reusing
memory for thread1 and copying vfp.hard.cpu.
At this point, the following are true:
new_thread1->vfpstate.hard.cpu == 2
&new_thread1->vfpstate == vfp_current_hw_state[2]
Lastly, this also addresses thread flushing in a similar way to thread
copying. The hole is:
- thread runs on CPU0, using VFP, migrates to CPU1 but does not use VFP.
- thread calls execve(), so thread flush happens, leaving
vfp_current_hw_state[0] intact. This vfpstate is memset to 0 causing
thread->vfpstate.hard.cpu = 0.
- thread migrates back to CPU0 before using VFP.
At this point, the following are true:
thread->vfpstate.hard.cpu == 0
&thread->vfpstate == vfp_current_hw_state[0]
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Rename this branch to more accurately reflect why it's taken, rather
than what the following code does. It is the only caller of this code.
This helps to clarify the following changes, yet this change results in
no actual code change.
Document the VFP hardware state at the target of this branch.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Rename the slightly confusing 'last_VFP_context' variable to be more
descriptive of what it actually is. This variable stores a pointer
to the current owner's vfpstate structure for the context held in the
VFP hardware.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The presence of VFPv4 cannot be detected simply by looking at the FPSID
subarchitecture field, as a value >= 2 signifies the architecture as
VFPv3 or later.
This patch reads from MVFR1 to check whether or not the fused multiply
accumulate instructions are supported. Since these are introduced with
VFPv4, this tells us what we need to know.
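A sketch of the detection this describes, assuming the architectural
MVFR1 layout in which the fused multiply-accumulate field occupies the
top nibble (value 1 meaning the fused MAC instructions are implemented):

	/*
	 * FPSID subarch >= 2 only tells us "VFPv3 or later"; ask MVFR1
	 * whether fused multiply-accumulate is implemented to spot VFPv4.
	 */
	if ((fmrx(MVFR1) & 0xf0000000) == 0x10000000)
		elf_hwcap |= HWCAP_VFPv4;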
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Convert some ARM architecture's common code to using
struct syscore_ops objects for power management instead of sysdev
classes and sysdevs.
This simplifies the code and reduces the kernel's memory footprint.
It also is necessary for removing sysdevs from the kernel entirely in
the future.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
VFP registers d16-d31 are callee saved registers and must be preserved
during function calls, including fork(). The VFP configuration should
also be preserved. The patch copies the full VFP state to the child
process.
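A hedged sketch of the copy path this adds (the helper name follows
vfpmodule.c; treat the body as an approximation):

	static void vfp_thread_copy(struct thread_info *thread)
	{
		struct thread_info *parent = current_thread_info();

		/*
		 * Write the parent's live hardware state back first, then
		 * give the child a full, up-to-date copy of the parent's
		 * VFP registers and configuration.
		 */
		vfp_sync_hwstate(parent);
		thread->vfpstate = parent->vfpstate;
	}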
Reported-by: Paul Wright <paul.wright@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This patch adds THREAD_NOTIFY_COPY for calling registered handlers
during the copy_thread() function call. It also changes the VFP handler
to use a switch statement rather than if..else and to ignore this event.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild-2.6
* 'trivial' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild-2.6: (25 commits)
video: change to new flag variable
scsi: change to new flag variable
rtc: change to new flag variable
rapidio: change to new flag variable
pps: change to new flag variable
net: change to new flag variable
misc: change to new flag variable
message: change to new flag variable
memstick: change to new flag variable
isdn: change to new flag variable
ieee802154: change to new flag variable
ide: change to new flag variable
hwmon: change to new flag variable
dma: change to new flag variable
char: change to new flag variable
fs: change to new flag variable
xtensa: change to new flag variable
um: change to new flag variables
s390: change to new flag variable
mips: change to new flag variable
...
Fix up trivial conflict in drivers/hwmon/Makefile
|
|
Replace EXTRA_CFLAGS with ccflags-y and EXTRA_AFLAGS with asflags-y.
Signed-off-by: matt mooney <mfm@muteddisk.com>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>
|
|
Improve the documentation for the VFP hotplug notifier handler, so
that people better understand what's going on there and what has
been done for them.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
arch/arm/kernel/return_address.c:37:6: warning: symbol 'return_address' was not declared. Should it be static?
arch/arm/kernel/setup.c:76:14: warning: symbol 'processor_id' was not declared. Should it be static?
arch/arm/kernel/traps.c:259:1: warning: symbol 'die_lock' was not declared. Should it be static?
arch/arm/vfp/vfpmodule.c:156:6: warning: symbol 'vfp_raise_sigfpe' was not declared. Should it be static?
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Conflicts:
arch/arm/Kconfig
arch/arm/common/Makefile
arch/arm/kernel/Makefile
arch/arm/kernel/smp.c
|
|
We cannot guarantee that VFP will be enabled when CPU hotplug brings
a CPU back online from a reset state. Add a hotplug CPU notifier to
ensure that the VFP coprocessor access is enabled whenever a CPU comes
back online.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Directives such as .long and .word do not magically cause the
assembler location counter to become aligned in gas. As a result,
using these directives in code sections can result in misaligned
data words when building a Thumb-2 kernel (CONFIG_THUMB2_KERNEL).
This is a Bad Thing, since the ABI permits the compiler to assume
that fundamental types of word size or above are word- aligned when
accessing them from C. If the data is not really word-aligned,
this can cause impaired performance and stray alignment faults in
some circumstances.
In general, the following rules should be applied when using data
word declaration directives inside code sections:
* .quad and .double:
.align 3
* .long, .word, .single, .float:
.align (or .align 2)
* .short:
No explicit alignment required, since Thumb-2
instructions are always 2 or 4 bytes in size, so the location
counter is always at least 2-byte aligned immediately after an
instruction.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|