Age | Commit message | Author |
|
This patch updates the license header to use an SPDX-License-Identifier
tag instead of the verbose license text.
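For an assembly source such as headsmp-scu.S the converted header would
look roughly as follows (GPL-2.0 is assumed here; the exact identifier
depends on the license text being replaced):

  /* SPDX-License-Identifier: GPL-2.0
   *
   * (the file's descriptive comment stays; only the multi-paragraph
   *  GPL boilerplate is replaced by the one-line tag above)
   */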
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
Merge tag 'renesas-cleanup2-for-v4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas into next/cleanup
Merge "Second Round of Renesas ARM Based SoC Cleanup for v4.6" from Simon Horman:
* Remove stale comment from Kconfig
* Consolidate SCU mapping code
* tag 'renesas-cleanup2-for-v4.6' of git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas:
ARM: shmobile: Kconfig: Get rid of old comment
ARM: shmobile: Consolidate SCU mapping code
|
|
shmobile_scu_base is being written to, so it doesn't belong in the .text
section. Fix this by moving it from asm .text to C .bss, as it's no
longer used from asm code since commit 4f6da36f7edd5790 ("ARM: shmobile:
Remove old SCU boot code").
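A sketch of the move, assuming the old assembly-side definition followed
the usual pattern (the exact code may differ):

  @ Before: writable storage emitted into .text from the assembly side
        .text
        .globl  shmobile_scu_base
  shmobile_scu_base:
        .space  4                       @ written at runtime by C code
  @ After: the assembly definition is dropped and a zero-initialised C
  @ definition such as "void __iomem *shmobile_scu_base;" is used
  @ instead, which the toolchain places in .bss.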
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
The ARM Multiprocessor Affinity Register is called "MPIDR".
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
All ARMv5 and older CPUs invalidate their caches in the early assembly
setup function, prior to enabling the MMU. This is because the L1
cache should not contain any data relevant to the execution of the
kernel at this point; all data should have been flushed out to memory.
This requirement should also be true for ARMv6 and ARMv7 CPUs - indeed,
these typically do not search their caches when caching is disabled (as
it needs to be when the MMU is disabled) so this change should be safe.
ARMv7 allows there to be CPUs which search their caches while caching is
disabled, and it's permitted that the cache is uninitialised at boot;
for these, the architecture reference manual requires that an
implementation specific code sequence is used immediately after reset
to ensure that the cache is placed into a sane state. Such
functionality is definitely outside the remit of the Linux kernel, and
must be done by the SoC's firmware before _any_ CPU gets to the Linux
kernel.
Changing the data cache clean+invalidate to a mere invalidate allows us
to get rid of a lot of platform-specific hacks around this issue in
secondary CPU bringup paths - some of which were buggy.
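For reference, clean+invalidate and plain invalidate by set/way differ
only in the CP15 operation issued in the cache-walking loop; the
fragment below is purely illustrative and not the actual patch hunk
(r11 is assumed to hold the set/way/level encoding):

        mcr     p15, 0, r11, c7, c14, 2 @ DCCISW: clean+invalidate by set/way
        mcr     p15, 0, r11, c7, c6, 2  @ DCISW: invalidate only by set/way;
                                        @ enough at early boot, since the cache
                                        @ holds no data that must reach memory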
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The information is already included in the COPYING file in the root
directory of the kernel sources, we don't want to have to modify every
source file if the FSF ever moves to a new address, and I'm tired of
seeing the related checkpatch.pl warnings.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
Merge tag 'fixes-non-3.12' of git://git.infradead.org/linux-mvebu into next/fixes-non-critical
From Jason Cooper:
mvebu fixes-non-critical for v3.12
- dove
- fix section mismatch (all callers are already __init, so it's just a space
issue)
* tag 'fixes-non-3.12' of git://git.infradead.org/linux-mvebu:
ARM: dove: fix missing __init section of dove_mpp_gpio_mode
+ Linux 3.11-rc2
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
In Thumb-2 mode, instructions are not aligned to 4 bytes. This patch
inserts align directives before emitting 4-byte data.
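A minimal illustration of the pattern being fixed (labels and the
literal value are placeholders):

        ldr     r0, 1f          @ PC-relative load of the literal below
        bx      lr
        .align  2               @ Thumb-2 code may end on a 2-byte boundary,
                                @ so force 4-byte alignment before the data
  1:    .long   0xf0000000      @ example literal (e.g. an SCU base address)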
Signed-off-by: Tetsuyuki Kobayashi <koba@kmckk.co.jp>
Acked-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
On the KZM-A9-GT board (SMP), the build fails when CONFIG_THUMB2_KERNEL=y:
AS arch/arm/mach-shmobile/headsmp-scu.o
/proj/koba/kernel/arm-soc/arch/arm/mach-shmobile/headsmp-scu.S: Assembler messages:
/proj/koba/kernel/arm-soc/arch/arm/mach-shmobile/headsmp-scu.S:41: Error: shift must be constant -- `bic r2,r2,r3,lsl r1'
make[2]: *** [arch/arm/mach-shmobile/headsmp-scu.o] Error 1
make[1]: *** [arch/arm/mach-shmobile] Error 2
make: *** [sub-make] Error 2
The instruction `bic r2,r2,r3,lsl r1' is not supported in Thumb mode,
because Thumb-2 data-processing instructions cannot use a
register-specified shift in the second operand. This patch splits it
into two instructions.
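The split looks roughly like this (a sketch; note that the Thumb-2
friendly form clobbers the shifted source register, which must be a
throw-away value such as the mask held in r3 here):

        bic     r2, r2, r3, lsl r1      @ ARM only: register-specified shift
                                        @ inside the flexible second operand
        lsl     r3, r3, r1              @ Thumb-2 compatible: shift first...
        bic     r2, r2, r3              @ ...then clear the bits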
Signed-off-by: Tetsuyuki Kobayashi <koba@kmckk.co.jp>
Acked-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code. The assembly code also had two
".previous" section statements that were paired with __CPUINIT
(aka .section ".cpuinit.text"); those are removed here as well.
[1] https://lkml.org/lkml/2013/5/20/589
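A representative assembly-side hunk from this kind of conversion
(whether plain .text or some other section is the right replacement
depends on the individual file):

  @ before:
        __CPUINIT               @ i.e. .section ".cpuinit.text"
  @ after:
        .text                   @ the code simply lives in the normal text section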
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
|
|
Remove shmobile_secondary_vector_scu now that all SCU-enabled SMP
platforms make use of shmobile_boot_scu instead. This removes two
inline virtual-to-physical address conversions.
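Such an inline conversion typically takes the form of a literal
computed at link time, e.g. (an illustrative pattern, not a quote of
the removed code; PAGE_OFFSET and PLAT_PHYS_OFFSET come from
<asm/memory.h>):

        ldr     r0, 2f          @ physical address of shmobile_scu_base,
                                @ usable before the MMU is enabled
        ldr     r0, [r0]        @ fetch the stored SCU base
        bx      lr
  2:    .long   shmobile_scu_base - PAGE_OFFSET + PLAT_PHYS_OFFSET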
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
Add a shmobile_boot_scu function that assumes that the base address
of the SCU is passed in r0. This code is free from inline
virtual-to-physical address conversions.
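A sketch of such a routine, assuming the Cortex-A9 SCU layout with the
Power Status Register at offset 0x08 and one power-mode field per CPU
byte, and using the kernel's ENTRY/ENDPROC macros (the actual
implementation may differ in detail):

  ENTRY(shmobile_boot_scu)
        @ r0 = SCU base address, passed in by the caller
        mrc     p15, 0, r1, c0, c0, 5   @ read MPIDR
        and     r1, r1, #3              @ CPU number within the cluster
        lsl     r1, r1, #3              @ byte offset of this CPU's field
        ldr     r2, [r0, #8]            @ SCU Power Status Register
        mov     r3, #3
        lsl     r3, r3, r1              @ mask for this CPU's power-mode bits
        bic     r2, r2, r3              @ 0 = normal mode
        str     r2, [r0, #8]
        bx      lr
  ENDPROC(shmobile_boot_scu)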
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
Rework the early SCU setup code in headsmp-scu.S to read the SCU base
address in the same way as the address of the invalidation function is
fetched.
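In other words, the SCU base is picked up through a PC-relative
literal, mirroring the existing fetch of the invalidation function
address; a simplified sketch (symbol names are hypothetical and
physical/virtual address handling is omitted):

        ldr     r0, 2f          @ SCU base address, via a literal...
        ldr     r1, 3f          @ ...just like the invalidation function address
        bx      r1              @ jump to the invalidation code, r0 preserved
  2:    .long   0xf0000000      @ SCU base
  3:    .long   dummy_invalidate_fn     @ hypothetical symbol name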
Reported-by: Bastian Hecht <hechtb@gmail.com>
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
Update the code in headsmp-scu.S to use a global
shmobile_scu_base variable both for convenient SCU
base address storage and for the early SCU setup
code in shmobile_secondary_vector_scu.
With this patch applied, r8a7779, sh73a0 and EMEV2
all make use of the global shmobile_scu_base
variable. However, only sh73a0 makes use of the SCU
bring-up code in shmobile_secondary_vector_scu.
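A sketch of the dual role, assuming the storage itself still lives in
the assembly file at this point (address translation details omitted):

        .globl  shmobile_scu_base
  shmobile_scu_base:
        .space  4                       @ SoC SMP code stores the SCU base here

        @ ...and shmobile_secondary_vector_scu reads it back for early setup:
        ldr     r0, =shmobile_scu_base
        ldr     r0, [r0]                @ SCU base address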
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|
|
Rename headsmp-sh73a0.S to headsmp-scu.S and
introduce shmobile_secondary_vector_scu().
The goal is to be able to share the function
above between all mach-shmobile SoCs that use
the SCU for SMP. So far only sh73a0 uses it.
At this time the SCU base address is still
hard-coded in headsmp-scu.S to 0xf0000000, but
this will be changed in the future.
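The hard-coded form is essentially the following (one plausible
rendering; the real code may load the value differently):

        ldr     r2, =0xf0000000         @ sh73a0 SCU base, hard-coded for now;
                                        @ later patches replace this with a value
                                        @ read from shmobile_scu_base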
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
|