A MIPS-specific csum_and_copy_from_user function is necessary because
the generic one from include/net/checksum.h will not work for EVA:
the generic version links to symbols from lib/checksum.c which are not
EVA-aware.
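As a rough sketch of the intended shape (the backend symbol name below is
illustrative; it stands for whatever EVA-aware assembler routine the MIPS
checksum code provides), the arch-specific inline would look something like:

    static inline __wsum
    csum_and_copy_from_user(const void __user *src, void *dst,
                            int len, __wsum sum, int *err_ptr)
    {
            if (access_ok(VERIFY_READ, src, len))
                    /* EVA-aware assembler backend, illustrative name */
                    return __csum_partial_copy_from_user(src, dst, len,
                                                         sum, err_ptr);
            if (len)
                    *err_ptr = -EFAULT;
            return sum;
    }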
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
In EVA mode, different instructions need to be used to read and write
kernel and userland memory. In non-EVA mode, there is no functional
difference. The current address limit is checked to decide which type
of access to perform.
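A minimal sketch of that selection, with hypothetical helper names for the
two backends:

    static inline unsigned long
    eva_copy_from_user(void *to, const void __user *from, unsigned long n)
    {
            /* KERNEL_DS limit: target is kernel memory, classic lw/sw path */
            if (segment_eq(get_fs(), get_ds()))
                    return __copy_in_kernel(to, from, n);
            /* User limit: must go through the EVA lwe/swe based routine */
            return __copy_from_user_eva(to, from, n);
    }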
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Use EVA-specific functions to read and write data to the
user address space.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
In preparation for EVA support, we use a macro to build the
__csum_partial_copy_user main code so it can be shared across
multiple implementations. EVA uses the same code but replaces
the load/store/prefetch instructions with the EVA-specific ones,
so using a macro avoids unnecessary code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Each load/store macro always adds an entry to the __ex_table
using the EXC macro. There are cases where a load instruction can
never fail, such as when we are sure the load happens in the kernel
address space. Therefore, we merge the EXC and LOADX/STOREX
macros into a single one. We also expand the argument list of the EXC
macro to make it more flexible. The extra 'type' argument is not
used by this commit, but it will be used when EVA support is added to
memcpy.
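Roughly, the merged macro takes the shape below (fault-handler labels
simplified and wrapper names illustrative; LD_INSN/ST_INSN are just tags
that are ignored at this stage):

    #define EXC(insn, type, reg, addr, handler)             \
    9:      insn    reg, addr;                              \
            .section __ex_table, "a";                       \
            PTR     9b, handler;                            \
            .previous

    #define LOAD(reg, addr, handler)   EXC(lw, LD_INSN, reg, addr, handler)
    #define STORE(reg, addr, handler)  EXC(sw, ST_INSN, reg, addr, handler)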
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
The 'copy_user' symbol is used to copy both from and to
userland, so we will use two different symbols for these
operations. This makes no difference in the existing code,
but when the core is operating in EVA mode, different instructions
need to be used to read from and write to the userland address space.
The old function has also been renamed to 'copy_kernel' to denote
that it is suitable for copying data to and from kernel space.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Handle unaligned accesses when we access userspace memory
in EVA mode.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Use the load/store instruction wrappers from asm/asm.h to
perform the userspace memory operations when operating in EVA mode.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
The str*_user functions are used to safely access NULL-terminated
strings from userland, so it is necessary to use the appropriate
EVA functions. However, if the string is in kernel space, the normal
instructions are used to access it. The __str*_kernel_asm and
__str*_user_asm symbols are the same in non-EVA mode, so there is no
functional change for non-EVA kernels.
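The selection itself follows the usual address-limit check; a simplified
sketch (the real code invokes the assembler helpers through inline asm
with a fixed register convention):

    static inline long
    eva_strnlen_user(const char __user *s, long n)
    {
            if (segment_eq(get_fs(), get_ds()))
                    return __strnlen_kernel_asm(s, n);  /* classic loads */
            return __strnlen_user_asm(s, n);            /* EVA loads */
    }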
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Use the EVA-specific functions from memcpy.S to perform
userspace operations. When get_fs() == get_ds(), the usual load/store
instructions are used because the destination address is located in
the kernel address space region. Otherwise, the EVA-specific load/store
instructions are used, which go through the TLB to perform the virtual
to physical translation for the userspace address.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
The {get,put}_user_asm functions can be used to access data in either
the kernel or the user address space, so rename them to avoid
confusion.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Use the EVA instruction wrappers from asm.h to perform
read/write operations on userland memory.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
ulb, ulh and ulw are macros which emulate unaligned accesses on MIPS.
However, no such macros exist for EVA mode, so the only way to perform
EVA unaligned accesses is in the ADE exception handler. As a result,
these macros are disabled for EVA.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Similar to __get_user_* functions, move common code to
__put_user_*_common so it can be shared among similar users.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
In preparation for EVA support, an instruction argument is needed
for the __get_user_asm{,_ll32} functions to allow instruction overrides
in EVA mode. Even though EVA only applies to 32-bit MIPS, both codepaths
(32-bit and 64-bit) are changed for consistency.
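A greatly simplified sketch of the idea (no .fixup/__ex_table plumbing,
and the real code uses a more constrained memory operand so that EVA's
narrower load offsets stay valid):

    #define __illustrative_get_user_asm(val, insn, addr)    \
            __asm__ __volatile__(                           \
                    "1:     " insn "        %0, %1\n"       \
                    : "=r" (val)                            \
                    : "m" (*(addr)))

    /* non-EVA: __illustrative_get_user_asm(x, "lw", uptr);
     * EVA:     __illustrative_get_user_asm(x, "lwe", uptr); */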
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Build the __bzero function using the EVA load/store instructions
when operating in EVA mode. This function is only used for userspace
accesses, so there is no need to build two distinct symbols for user
and kernel operations.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Build the __bzero symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations, so
it is better to use a macro and avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Add copy_{to,from,in}_user variants for when the CPU operates in EVA
mode. This is necessary so the EVA-specific instructions can be used
to perform the virtual to physical translation for user space
addresses. The non-EVA functions are still used to read from the
kernel when needed.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
The code can be shared between EVA and non-EVA configurations,
therefore use a macro to build it and avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
In preparation for EVA support, the PREF macro is split into two
separate macros, PREFS and PREFD, for source and destination data
prefetching respectively.
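At this stage both still expand to a plain PREF; the split only gives the
EVA patches a hook to substitute 'prefe' for whichever stream targets user
memory. Illustrative shape:

    #define PREFS(hint, addr)       PREF(hint, addr)   /* source data */
    #define PREFD(hint, addr)       PREF(hint, addr)   /* destination data */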
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Each load/store macro always adds an entry to the __ex_table
using the EXC macro. Therefore, these load/store macros are now merged
with the EXC one. The argument list is also expanded to make
the macro more flexible. The extra 'type' argument is not used by this
commit, but it will be used when EVA support is added to memcpy.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
In non-EVA mode, strncpy_from_user* aliases are used for the
strncpy_from_kernel* symbols since the code is identical. In EVA
mode, new strncpy_from_user* symbols are used which use the EVA-specific
instructions to load values from userspace.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Build the __strncpy_from_user symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations, so
it is better to use a macro and avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
In non-EVA mode, strlen_user* aliases are used for the
strlen_kernel* symbols since the code is identical. In EVA
mode, new strlen_user* symbols are used which use the EVA
specific instructions to load values from userspace.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Build the __strlen_user symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations, so
it is better to use a macro and avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
In non-EVA mode, a strnlen_user* alias is used for the
strnlen_kernel* symbols since the code is identical. In EVA
mode, a new strnlen_user* symbol is used which uses the EVA-specific
instructions to load values from userspace.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Build the __strnlen_user symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations, so
it is better to use a macro and avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
When a breakpoint or trap happens while operating in kernel mode but
on a user's behalf (e.g. during a syscall), it is necessary to change
the address limit to KERNEL_DS so that address checking is bypassed
and the correct stack trace is printed.
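The pattern is the usual temporary widening of the address limit; a sketch
with the surrounding dump code elided:

    mm_segment_t old_fs = get_fs();

    if (!user_mode(regs))
            /* Necessary for getting the correct stack content */
            set_fs(KERNEL_DS);
    show_stacktrace(current, regs);
    set_fs(old_fs);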
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Arguments 4-8 are stored on the user's stack, so use the EVA
instructions to fetch them if EVA is enabled.
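A sketch of the fetch (o32: register $29 is the user stack pointer; the
helper name is illustrative). get_user() resolves to an EVA load when EVA
is enabled, so the stack words are read through the user mapping:

    static int get_stack_arg(unsigned long *arg, struct pt_regs *regs,
                             unsigned int n)
    {
            unsigned long __user *usp =
                    (unsigned long __user *)regs->regs[29];

            return get_user(*arg, usp + n);
    }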
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Use the LLE/SCE instructions, which perform the address translation
through the user mapping, for userspace accesses when EVA is enabled.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
EVA uses specific instructions for accessing user memory.
Instead of polluting the kernel with numerous #ifdef CONFIG_EVA
blocks, we add wrappers for all the instructions that need special
handling when EVA is enabled.
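The wrappers boil down to a single name with two expansions; a trimmed,
illustrative sketch (the real header covers many more instructions and
also provides kernel_* counterparts and raw-assembler variants):

    #ifdef CONFIG_EVA
    #define user_lw(reg, addr)      "lwe " reg ", " addr
    #define user_sw(reg, addr)      "swe " reg ", " addr
    #define user_ll(reg, addr)      "lle " reg ", " addr
    #define user_sc(reg, addr)      "sce " reg ", " addr
    #else
    #define user_lw(reg, addr)      "lw " reg ", " addr
    #define user_sw(reg, addr)      "sw " reg ", " addr
    #define user_ll(reg, addr)      "ll " reg ", " addr
    #define user_sc(reg, addr)      "sc " reg ", " addr
    #endif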
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
EVA can use the PREFE instruction to perform the virtual address
translation using the user mapping of the address rather than the
kernel mapping.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Add basic Kconfig support for EVA. Not selectable by any platform
at this point.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
|
|
Add a CPU_P5600 cpu type case in oprofile_arch_init() to use the MIPS
model, and in mipsxx_init() to set the cpu_type string to "mips/P5600".
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Robert Richter <rric@kernel.org>
Cc: oprofile-list@lists.sf.net
Patchwork: https://patchwork.linux-mips.org/patch/6410/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Allow FTLB to be turned on or off for CPU_P5600 as well as CPU_PROAPTIV.
The existing if statement is converted into a switch to allow for future
expansion.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6411/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Add a case in cpu_probe_mips for the MIPS P5600 processor ID, which sets
the CPU type to the new CPU_P5600.
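The probe case itself is small; roughly (the PRID macro name follows the
existing PRID_IMP_* convention and the string is illustrative):

    case PRID_IMP_P5600:
            c->cputype = CPU_P5600;
            __cpu_name[cpu] = "MIPS P5600";
            break;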
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6409/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Add a CPU_P5600 case to various switch statements, doing the same thing
as for CPU_PROAPTIV.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6408/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Add a Processor ID and CPU type for the MIPS P5600 core.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6407/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This patch extends sigcontext in order to hold the most significant 64
bits of each vector register in addition to the MSA control & status
register. The least significant 64 bits are already saved as the scalar
FP context. This makes things a little awkward since the least & most
significant 64 bits of each vector register are not contiguous in
memory. Thus the copy_u & insert instructions are used to transfer the
values of the most significant 64 bits via GP registers.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6533/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
No current systems implementing MSA include support for vector register
partitioning which makes it somewhat difficult to implement support for
it in the kernel. Thus for the moment the kernel includes no such
support. However if the kernel were to be run on a system which
implemented register partitioning then it would not function correctly,
mishandling MSA disabled exceptions. Therefore, print a warning when
running on a system that implements vector register partitioning, so
that the problem is flagged should it occur.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6494/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This patch adds a simple handler for MSA FP exceptions which delivers a
SIGFPE to the running task. In the future it should probably be extended
to re-execute the instruction with the MSACSR.NX bit set in order to
generate results for any elements which did not cause an exception
before delivering the SIGFPE signal.
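A sketch of such a minimal handler (the exception entry/exit plumbing and
the message text are illustrative):

    asmlinkage void do_msa_fpe(struct pt_regs *regs)
    {
            die_if_kernel("MSA FP exception in kernel", regs);
            force_sig(SIGFPE, current);
    }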
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6432/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This patch adds support for context switching the MSA vector registers.
These 128 bit vector registers are aliased with the FP registers - an
FP register accesses the least significant bits of the vector register
with which it is aliased (ie. the register with the same index). Due to
both this & the requirement that the scalar FPU must be 64-bit (FR=1)
if enabled at the same time as MSA, the kernel will enable MSA & scalar
FP at the same time for tasks which use MSA. If we restore the MSA vector
context then we might as well enable the scalar FPU since the reason it
was left disabled was to allow for lazy FP context restoring - but we
just restored the FP context as it's a subset of the vector context. If
we restore the FP context and have previously used MSA then we have to
restore the whole vector context anyway (see comment in
enable_restore_fp_context for details) so similarly we might as well
enable MSA.
Thus if a task does not use MSA then it will continue to behave as
without this patch - the scalar FP context will be saved & restored as
usual. But if a task executes an MSA instruction then it will save &
restore the vector context forever more.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6431/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This patch adds support for probing the MSAP bit within the Config3
register in order to detect the presence of the MSA ASE. Presence of the
ASE will be indicated in /proc/cpuinfo. The value of the MSA
implementation register will be displayed at boot to aid debugging and
verification of a correct setup, as is done for the FPU.
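The probe itself amounts to checking one bit (MIPS_CONF3_MSA being the
kernel's name for the MSAP field); a sketch of the cpu-probe hunk:

    unsigned int config3 = read_c0_config3();

    if (config3 & MIPS_CONF3_MSA)
            c->ases |= MIPS_ASE_MSA;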
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6430/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This patch introduces definitions for the MSA control registers and
functions which allow access to both the control & vector registers. If
the toolchain being used to build the kernel includes support for MSA
then this patch will make use of that support & use MSA instructions
directly. However toolchain support for MSA is very new & far from a
point where it can be reasonably expected that everyone building the
kernel uses a toolchain with support. Thus fallbacks using .word
assembler directives are also provided for now as a temporary measure.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6429/
Patchwork: https://patchwork.linux-mips.org/patch/6607/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
When saving or restoring scalar FP context we want to access the least
significant 64 bits of each FP register. When the FP registers are 64
bits wide, that is trivially the start of the register's value in memory.
However when the FP registers are wider this equivalence will no longer
be true for big endian systems. Define a new set of offset macros for
the least significant 64 bits of each saved FP register within thread
context, and make use of them when saving and restoring scalar FP
context.
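Illustratively (macro and base names hypothetical), with 128-bit register
slots the least significant 64 bits of slot i sit 8 bytes into the slot on
big endian but at its start on little endian:

    #ifdef __MIPSEB__
    #define THREAD_FPR_LS64(i)      (THREAD_FPR_BASE + (i) * 16 + 8)
    #else
    #define THREAD_FPR_LS64(i)      (THREAD_FPR_BASE + (i) * 16)
    #endif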
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6428/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
When we want to access 64-bit FP register values we can only treat
consecutive registers as being consecutive in memory when the width of
an FP register equals 64 bits. This assumption will not remain true once
MSA support is introduced, so provide a code path which copies each 64
bit FP register value in turn when the width of an FP register differs
from 64 bits.
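A sketch of that code path (accessor name hypothetical; 'data' is a
u64 __user * destination such as a ptrace buffer):

    for (i = 0; i < NUM_FPU_REGS; i++) {
            u64 v = get_fpr_low64(&fpu->fpr[i]);    /* low 64 bits only */

            err |= __put_user(v, data + i);
    }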
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6427/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|