Age | Commit message | Author |
|
no callers, no consistent semantics, no sane way to use it...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull uaccess unification updates from Al Viro:
"This is the uaccess unification pile. It's _not_ the end of uaccess
work, but the next batch of that will go into the next cycle. This one
mostly takes copy_from_user() and friends out of arch/* and gets the
zero-padding behaviour in sync for all architectures.
Dealing with the nocache/writethrough mess is for the next cycle;
fortunately, that's x86-only. Same for cleanups in iov_iter.c (I am
sold on access_ok() in there, BTW; just not in this pile), same for
reducing __copy_... callsites, strn*... stuff, etc. - there will be a
pile about as large as this one in the next merge window.
This one sat in -next for weeks. -3KLoC"
* 'work.uaccess' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (96 commits)
HAVE_ARCH_HARDENED_USERCOPY is unconditional now
CONFIG_ARCH_HAS_RAW_COPY_USER is unconditional now
m32r: switch to RAW_COPY_USER
hexagon: switch to RAW_COPY_USER
microblaze: switch to RAW_COPY_USER
get rid of padding, switch to RAW_COPY_USER
ia64: get rid of copy_in_user()
ia64: sanitize __access_ok()
ia64: get rid of 'segment' argument of __do_{get,put}_user()
ia64: get rid of 'segment' argument of __{get,put}_user_check()
ia64: add extable.h
powerpc: get rid of zeroing, switch to RAW_COPY_USER
esas2r: don't open-code memdup_user()
alpha: fix stack smashing in old_adjtimex(2)
don't open-code kernel_setsockopt()
mips: switch to RAW_COPY_USER
mips: get rid of tail-zeroing in primitives
mips: make copy_from_user() zero tail explicitly
mips: clean and reorder the forest of macros...
mips: consolidate __invoke_... wrappers
...
|
|
The ia64 build generates many warnings like this:
WARNING: EXPORT symbol "empty_zero_page" [vmlinux] version generation failed, symbol will not be versioned.
Besides adding the necessary header, this also requires fiddling with
some explicit .S -> .o rules.
Cc: IA64-ML <linux-ia64@vger.kernel.org>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Merced is fucked, so what else is new?
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
This was entirely automated, using the script by Al:
PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
$(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)
to do the replacement at the end of the merge window.
Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Here we have another kind of deviation from the default case -
a difference between exporting functions and non-functions.
EXPORT_DATA_SYMBOL... is really different from EXPORT_SYMBOL...
on ia64, and we need to use the right one when moving exports
from *.c, where the C compiler has the required information, to
*.S, where we need to supply it manually.
parisc64 will be another one like that.
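For reference, a minimal sketch of the distinction (the symbol names and files
here are purely illustrative, not taken from the patch):

    /* foo.c - the compiler knows what kind of symbol foo_table is, so the
     * usual macro covers both functions and data: */
    #include <linux/export.h>
    unsigned long foo_table[64];
    EXPORT_SYMBOL(foo_table);

    /* foo.S - the information has to be supplied by hand; on ia64 a function
     * export goes through a function descriptor, so data needs its own macro: */
    #include <asm/export.h>
    EXPORT_SYMBOL(foo_func)             /* function */
    EXPORT_DATA_SYMBOL(foo_table)       /* data     */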
Tested-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Collect the symbols exported by anything that goes into lib.a and
add an empty object (lib-exports.o) with explicit undefs for each
of those to obj-y.
That allows us to relax the rules regarding the use of exports in
lib-* objects - right now an object with exports can be in lib-*
only if we are guaranteed that there will always be users in
built-in parts of the tree; otherwise it needs to be in obj-*.
As a result, we have an unholy mix of lib- and obj- in lib/Makefile
and (especially) in arch/*/lib/Makefile. Moreover, a change in the
generic part of the kernel can lead to mysteriously missing exports
in some configs. With this change we don't have to worry about
that anymore.
One side effect is that built-in.o now pulls everything with exports
from the corresponding lib.a (if one exists). That's exactly what
we want for linking vmlinux, and fortunately it's almost the only thing
built-in.o is used for. arch/ia64/hp/sim/boot/bootloader is the only
exception and it's easy to get rid of now - just turn everything in
arch/ia64/lib into lib-* and don't bother with arch/ia64/lib/built-in.o
anymore.
[AV: stylistic fix from Michal folded in]
Acked-by: Michal Marek <mmarek@suse.cz>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Sync with Linus' tree so that patches against the newer codebase can be applied.
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
The URL to the book "IA-64 and Elementary Functions" in idiv32.S and
idiv64.S led to a 404 page, so I updated both files with a known good link
that others can reference.
Signed-off-by: Sina Hamedian <shamedian@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
This patch updates all instances of csum_tcpudp_magic and
csum_tcpudp_nofold to reflect the types that are usually used as the source
inputs. For example, the protocol field is populated from nexthdr, which
is actually an unsigned 8-bit value. The length is usually populated from
skb->len, which is an unsigned integer.
This addresses an issue in which the IPv6 function csum_ipv6_magic was
generating a checksum using the full 32 bits of skb->len while
csum_tcpudp_magic was only using the lower 16 bits. As a result we could
run into issues when attempting to adjust the checksum, as there was no
protocol-agnostic way to update it.
With this change the value is still truncated, as many architectures use
"(len + proto) << 8"; however, this truncation only occurs for lengths
greater than 16776960 and as such is unlikely, since we stop
the inner headers at ~64K in size.
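As a sanity check on that bound, a stand-alone illustration of the
"(len + proto) << 8" folding (plain C arithmetic only, not the kernel code):

    #include <stdint.h>
    #include <stdio.h>

    /* With 32-bit arithmetic the shifted value stays exact only while
     * len + proto fits in 24 bits, i.e. len <= 16776960 (0xFFFF00) for any
     * 8-bit protocol value. */
    static uint32_t fold(uint32_t len, uint8_t proto)
    {
            return (len + proto) << 8;
    }

    int main(void)
    {
            printf("%#x\n", fold(16776960, 255)); /* 0xffffff00 - still exact */
            printf("%#x\n", fold(16777216, 6));   /* 0x600 - upper bits lost  */
            return 0;
    }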
I did have to make a few minor changes in the arm, mn10300, nios2, and
score versions of the function in order to support these changes, as they
were either using an OR to combine the protocol and length, or using
ntohs to convert the length, which would have truncated the value.
I also updated a few spots in terms of whitespace and type differences for
the addresses. Most of this was just to make sure all of the definitions
were in sync going forward.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Fixes generated by 'codespell' and manually reviewed.
Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
|
|
The 32-bit parameters (len and csum) of csum_ipv6_magic() are passed in 64-bit
registers in2 and in4. The high order 32 bits of the registers were never
cleared, and garbage was sometimes calculated into the checksum.
Fix this by clearing the high order 32 bits of these registers.
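A tiny user-space illustration of the failure mode (the garbage value is made
up; the real fix is in the ia64 assembly):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* a 32-bit len handed over in a 64-bit register with stale upper bits */
            uint64_t reg = 0xdeadbeef00000028ull;      /* caller meant len = 0x28 */

            uint64_t bad  = 0 + reg;                   /* garbage enters the sum */
            uint64_t good = 0 + (uint32_t)reg;         /* upper 32 bits cleared  */

            printf("%#llx vs %#llx\n",
                   (unsigned long long)bad, (unsigned long long)good);
            return 0;
    }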
Signed-off-by: Jiri Bohac <jbohac@suse.cz>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
This patch contains the Intel IOMMU IA64-specific code. It defines a new
machvec, dig_vtd, plus hooks for the IOMMU, DMAR table detection, a cache
line flush function, etc.
For a generic kernel with CONFIG_DMAR=y, if an Intel IOMMU is detected,
dig_vtd is used as the machine vector. Otherwise, the kernel falls back to
the dig machine vector. The kernel parameters "machvec=dig" or "intel_iommu=off"
can be used to force the kernel to boot with the dig machine vector.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
With the unionfs patch applied I get
ERROR: "copy_page" [fs/unionfs/unionfs.ko] undefined!
The other architectures (some, at least) export copy_page(), so I guess ia64
should also do so.
To do this we need to move the copy_page() functions out of lib.a and into
built-in.o and add the EXPORT_SYMBOL().
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Fixes for various arch compilation problems:
(*) Missing module exports.
(*) Variable name collision when rxkad and af_rxrpc both built in
(rxrpc_debug).
(*) Large constant representation problem (AFS_UUID_TO_UNIX_TIME).
(*) Configuration dependencies.
(*) printk() format warnings.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Always build ia64 xor.o because multiple config options now depend on it.
Necessary to build .20-mm* on ia64 when, e.g., CONFIG_ASYNC_TX_DMA is
defined. Don't know if ASYNC_TX_DMA makes sense on ia64. If not, maybe
Kconfig should preclude it.
Could have defined a Kconfig option that defaults to true if MD_RAID456 ||
ASYNC_TX_DMA to control building of xor.o, but xor.o is only 848 bytes and
this IS ia64...
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Bob Picco <bob.picco@hp.com>
Cc: Eric Whitney <eric.whitney@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
While working on implementing csum_ipv6_magic, I noticed that the current
version of ip_fast_csum can potentially return a value with bits set above
the unsigned short range. No harm is done right now because all call
sites chop off the upper bits when they use the return value. However,
this is still dangerous and buggy. Here is a patch to enforce that the
function really returns an unsigned short in the native register format.
The fix is free, as there are plenty of open slots to add one more asm instruction.
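For context, the usual fold-and-return shape the asm has to honour, as a
generic C sketch (not the ia64 implementation):

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t csum_fold(uint32_t sum)
    {
            sum = (sum & 0xffff) + (sum >> 16); /* fold carries into low 16 bits */
            sum = (sum & 0xffff) + (sum >> 16); /* second pass catches the last carry */
            return (uint16_t)~sum;              /* returned value is already 16-bit */
    }

    int main(void)
    {
            printf("%#x\n", csum_fold(0x1fffe)); /* prints 0 - no stray high bits */
            return 0;
    }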
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
The asm version is 4.4 times faster than the generic C version and
10X smaller in code size.
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
* sanitize prototypes, annotate
* ntohs -> shift in checksum calculations
* kill access_ok() in csum_partial_copy_from_user
* collapse do_csum_partial_copy_from_user
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
CONFIG_MD_RAID5 became CONFIG_MD_RAID456 in drivers/md/Kconfig. Make
the same change in arch/ia64.
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Aron Griffis <aron@hp.com>
Acked-by: Jes Sorenson <jes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
|
|
Bob Picco noted that 6edfba1b33c701108717f4e036320fc39abe1912
dropped the -ffreestanding compiler flag from the top level
Makefile, which allows the compiler to substitute memcpy() in
places where strcpy() is used with a known size source string.
But the ia64 memcpy() returns 0 for success, and "bytes copied"
for failure.
Fix it to return the address of the destination (like the standard C
library version and other architectures). There are no
places where ia64-specific code makes use of the non-standard
return value.
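A small user-space illustration of the return-value contract involved, and one
way the non-standard return value can bite (standard C semantics, not the ia64
asm):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char dst[16];

            /* strcpy() evaluates to its destination; once -ffreestanding is gone,
             * the compiler may emit the call as memcpy(dst, "abc", 4) and rely on
             * memcpy() returning dst as well. */
            char *p = strcpy(dst, "abc");
            printf("%d\n", p == dst);                             /* 1 */
            printf("%d\n", memcpy(dst, "xyz", 4) == (void *)dst); /* 1 */
            return 0;
    }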
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
- remove generic_fls64()
- remove find_{next,first}{,_zero}_bit()
- remove ext2_{set,clear,test,find_first_zero,find_next_zero}_bit()
- remove minix_{test,set,test_and_clear,test,find_first_zero}_bit()
- remove sched_find_first_bit()
Signed-off-by: Akinobu Mita <mita@miraclelinux.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
Carry over the changes to swiotlb.c made in commit 281dd25cdc0d6903929b79183816d151ea626341,
since this file has been moved from arch/ia64/lib/swiotlb.c to
lib/swiotlb.c
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
This introduces a limit parameter to the core bootmem allocator; the new
parameter indicates that physical memory allocated by the bootmem
allocator should be within the requested limit.
We also introduce alloc_bootmem_low_pages_limit, alloc_bootmem_node_limit and
alloc_bootmem_low_pages_node_limit APIs, but alloc_bootmem_low_pages_limit
is the only API used for swiotlb.
The existing alloc_bootmem_low_pages() API could instead have been
changed and made to pass the right limit to the core allocator. But that
would make the patch more intrusive for 2.6.14, as other arches use
alloc_bootmem_low_pages(). We may do that post-2.6.14 as a
cleanup.
With this, swiotlb gets memory within 4G for both the x86_64 and ia64
arches.
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Ravikiran G Thirumalai <kiran@scalex86.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The swiotlb implementation is shared by both IA-64 and EM64T. However,
the source itself lives under arch/ia64. This patch moves swiotlb.c
from arch/ia64/lib to lib/ and fixes up the appropriate Makefile and
Kconfig files. No actual changes are made to swiotlb.c.
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Several implementations were essentially a common piece of C code using
the cmpxchg() macro. Put the implementation in one spot that everyone
can share, and convert sparc64 over to using this.
Alpha is the lone arch-specific implementation, which codes up a
special fast path for the common case in order to avoid GP reloading
which a pure C version would require.
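A hedged sketch of a typical cmpxchg()-style update loop of the sort being
consolidated, written as stand-alone C with GCC __atomic builtins standing in
for the kernel's cmpxchg() (names and details are illustrative):

    #include <stdio.h>

    /* Decrement *v unless that would reach zero; return 1 when the caller must
     * take the slow path instead (e.g. grab a lock and re-test). */
    static int dec_unless_would_hit_zero(int *v)
    {
            int old = __atomic_load_n(v, __ATOMIC_RELAXED);

            while (old > 1) {
                    /* on failure, 'old' is refreshed with the current value and
                     * the loop retries */
                    if (__atomic_compare_exchange_n(v, &old, old - 1, 0,
                                                    __ATOMIC_SEQ_CST,
                                                    __ATOMIC_SEQ_CST))
                            return 0;
            }
            return 1;
    }

    int main(void)
    {
            int counter = 3;
            printf("%d %d\n", dec_unless_would_hit_zero(&counter), counter); /* 0 2 */
            return 0;
    }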
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Machine vector selection has always been a bit of a hack given how
early in system boot it needs to be done. Services like the ACPI namespace
are not available, and there are non-trivial problems with moving them to
early boot. However, there's no reason we can't change to a different
machvec later in boot when the services we need are available. By
adding an entry point for later initialization of the swiotlb, we can add
an error path for the hpzx1 machvec initialization and fall back to the
DIG machine vector if IOMMU hardware isn't found in the system. Since
ia64 uses 4GB for zone DMA (no ISA support), it's trivial to allocate a
contiguous range from the slab for bounce buffer usage.
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
arch/ia64/Kconfig
arch/ia64/kernel/acpi.c
include/asm-ia64/irq.h
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
This patch contains the ia64 architecture-specific changes to prevent the
possible race conditions.
Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
|
|
The exception handler in copy_user always expects faults to occur only on
user-space addresses, and the fall-back recovery code is written with that
very assumption in mind. Recent source code inspection revealed that
while it worked splendidly and as expected under normal circumstances,
it broke down under an unexpected condition where some address calculation
might go outside the legal address range the original copy_user was
called for. This patch makes the copy_user exception handler more robust
and prevents potential memory corruption.
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
|
|
copy_page.o appeared twice in arch/ia64/lib/Makefile. The
one in the global lib-y is wrong; it should be listed only in
lib-$(CONFIG_ITANIUM).
On an Itanium 2 build, both copy_page.o and copy_page_mck.o get built,
and the link order will pick up the lower-performing copy_page function
(originally written for the Itanium processor). In this case, we really
want copy_page_mck.o, the optimized version.
Signed-off-by: Kenneth Chen <kenneth.w.chen@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Fix swiotlb sizing to match what the comments and the kernel
parameters documentation indicate. Given a default 16k page size kernel
(ia64) and a 2k swiotlb page size, we're off by a factor of 8 when trying
to size the swiotlb. When specified on the boot line, the swiotlb is
made 8x bigger than requested. When left to the default value, it's 8x
smaller than the comments indicate. For x86_64 the multiplier would be
2x. The patch below fixes this. Now, what's a good default swiotlb
size? Apparently we don't really need 64MB.
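The arithmetic behind the 8x/2x figures, as a stand-alone sketch (the 2k slab
size comes from the message; IO_TLB_SHIFT = 11 is an assumption about the
swiotlb code of that era):

    #include <stdio.h>

    #define IO_TLB_SHIFT       11   /* 2k swiotlb slabs (assumed) */
    #define IA64_PAGE_SHIFT    14   /* 16k pages */
    #define X86_64_PAGE_SHIFT  12   /* 4k pages  */

    int main(void)
    {
            /* sizing in "pages" but allocating in slabs scales the table by
             * PAGE_SIZE / slab size */
            printf("ia64 factor:   %d\n", 1 << (IA64_PAGE_SHIFT - IO_TLB_SHIFT));   /* 8 */
            printf("x86_64 factor: %d\n", 1 << (X86_64_PAGE_SHIFT - IO_TLB_SHIFT)); /* 2 */
            return 0;
    }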
Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Check with PAL to see what the i-cache line size is for
each level of the cache, and so use the correct stride
when flushing the cache.
Acked-by: David Mosberger
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
This is a small patch to switch flush_icache_range() to use fc.i
instead of fc. This would save time on processors which can establish
i-cache coherency without flushing the cache-line out to memory (not
that any current processors do). On existing processors, fc.i behaves
like fc. The only caveat is that very old assemblers may not know
about fc.i yet.
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Patch below fixes 3 trivial typos which are caught by the new
assembler (v2.169.90). Please apply.
[Note: fix to memcpy that was also part of this patch was separately
applied from patches by H.J. and Andreas ... so the delta here only
has the other two fixes. -Tony]
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
The current ia64 assembler complains about mismatching .proc/.endp pairs.
(Same patch also sent by H.J. Lu)
Signed-off-by: Andreas Schwab <schwab@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
memcpy_mck.S::__copy_user breaks in the prefetch code under these conditions:
* src is unaligned and
* dst is near the end of a page and
* the page after dst is unmapped.
Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!
|