This directive was put in the kernel source before the "pragma
unroll" support for tilegx gcc was upstreamed. Remove it for
now, and we can put it back later if/when the compiler support
is upstreamed. This avoids a warning when building the kernel.
This routine is not on a hot path in any case, so the extra
optimization here was mostly just for its own sake.
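
For context, a minimal illustration of the kind of directive involved (the function and loop here are hypothetical, not the actual kernel source; the pragma spelling follows the commit text):

/* A loop-unrolling hint of this style is only understood by tilegx
 * gcc builds carrying the not-yet-upstreamed support; other compilers
 * warn about the unknown pragma. */
unsigned int sum_bytes(const unsigned char *p)
{
	unsigned int total = 0;
	int i;

#pragma unroll
	for (i = 0; i < 8; i++)
		total += p[i];
	return total;
}
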
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
---
Pull tile architecture updates from Chris Metcalf:
"A few stray changes"
* git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
tile: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO
tile: support gcc 7 optimization to use __multi3
tile 32-bit big-endian: fix bugs in syscall argument order
tile: allow disabling CONFIG_EARLY_PRINTK
---
The tip of gcc (the upcoming gcc 7) includes an optimization that
converts 64-bit divides by constants into 128-bit multiplies using __multi3.
Export the symbol so that modules can find it. We just
export unconditionally without worrying about the gcc
version, since the symbol has been in libgcc forever and
the function is less than 300 bytes.
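
As a sketch of the optimization itself (the function name is invented; the constant is the standard reciprocal for dividing by 10):

#include <stdint.h>

/* gcc rewrites a 64-bit divide by a constant as a 128-bit multiply by
 * a precomputed reciprocal; the wide multiply is what lowers to a
 * __multi3 libgcc call on targets without a native 64x64->128
 * multiply.  0xCCCCCCCCCCCCCCCD == ceil(2^67 / 10). */
uint64_t div10(uint64_t x)
{
	return (uint64_t)(((unsigned __int128)x * 0xCCCCCCCCCCCCCCCDULL) >> 67);
}
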
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
---
It seems the tilepro change was never compiled (the 0day build bot
also doesn't have a toolchain for it).
Make it work.
The thing that makes the patch bigger than desired is namespace
collision with the C11 __atomic builtin functions. So rename the
tilepro functions to __atomic32.
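
A minimal sketch of the rename, with a hypothetical helper body (the real tilepro routines go through the arch's atomic-locking machinery):

/* __atomic_fetch_add would collide with the C11 builtin of the same
 * name (which takes an extra memory-order argument); __atomic32_fetch_add
 * stays out of the reserved namespace. */
int __atomic32_fetch_add(int *p, int i)
{
	int old = *p;	/* the real helper does this under the atomic lock */
	*p = old + i;
	return old;
}
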
Reported-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 1af5de9af138 ("locking/atomic, arch/tile: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()")
Link: http://lkml.kernel.org/r/20160622091649.GB30154@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
Implement FETCH-OP atomic primitives; these are very similar to the
existing OP-RETURN primitives we already have, except they return the
value of the atomic variable _before_ modification.
This is especially useful for irreversible operations -- such as
bitops (because it becomes impossible to reconstruct the state prior
to modification).
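
A userspace C11 analog of the distinction (not the kernel API itself):

#include <stdatomic.h>
#include <stdio.h>

int main(void)
{
	atomic_int v = 1;

	/* fetch-op returns the value *before* modification, so we can
	 * still tell whether bit 0 was already set -- information an
	 * op-return primitive (which returns the new value) destroys. */
	int old = atomic_fetch_or(&v, 1);
	printf("old=%d new=%d\n", old, atomic_load(&v));
	return 0;
}
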
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
This patch updates/fixes all spin_unlock_wait() implementations.
The update is in semantics; where it previously was only a control
dependency, we now upgrade to a full load-acquire to match the
store-release from the spin_unlock() we waited on. This ensures that
when spin_unlock_wait() returns, we're guaranteed to observe the full
critical section we waited on.
This fixes a number of spin_unlock_wait() users that (not
unreasonably) rely on this.
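
A userspace C11 analog of the strengthened semantics (a sketch, not the kernel code):

#include <stdatomic.h>

typedef struct { atomic_uint locked; } sketch_spinlock_t;

void spin_unlock_wait_sketch(sketch_spinlock_t *l)
{
	/* a plain spin gives only a control dependency ... */
	while (atomic_load_explicit(&l->locked, memory_order_relaxed))
		;
	/* ... the acquire fence pairs with the unlock's store-release,
	 * so the whole critical section we waited on is visible */
	atomic_thread_fence(memory_order_acquire);
}
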
I also fixed a number of ticket lock versions to only wait on the
current lock holder, instead of for a full unlock, as this is
sufficient.
Furthermore, again for ticket locks, I added an smp_rmb() in between
the initial ticket load and the spin loop testing the current value
because I could not convince myself the address dependency is
sufficient, esp. if the loads are of different sizes.
I'm more than happy to remove this smp_rmb() again if people are
certain the address dependency does indeed work as expected.
Note: PPC32 will be fixed independently
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: chris@zankel.net
Cc: cmetcalf@mellanox.com
Cc: davem@davemloft.net
Cc: dhowells@redhat.com
Cc: james.hogan@imgtec.com
Cc: jejb@parisc-linux.org
Cc: linux@armlinux.org.uk
Cc: mpe@ellerman.id.au
Cc: ralf@linux-mips.org
Cc: realmz6@gmail.com
Cc: rkuo@codeaurora.org
Cc: rth@twiddle.net
Cc: schwidefsky@de.ibm.com
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Cc: ysato@users.sourceforge.jp
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
Implement atomic logic ops -- atomic_{or,xor,and}.
For tilegx, these are relatively straightforward; the architecture
provides atomic "or" and "and", both 32-bit and 64-bit. To support
xor we provide a loop using "cmpexch".
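The cmpexch-based xor follows the usual compare-exchange retry pattern; a C11 analog:

#include <stdatomic.h>

int atomic_fetch_xor_sketch(atomic_int *v, int mask)
{
	int old = atomic_load_explicit(v, memory_order_relaxed);

	/* retry until we install old ^ mask; 'old' is refreshed with
	 * the current value on each failed attempt */
	while (!atomic_compare_exchange_weak(v, &old, old ^ mask))
		;
	return old;
}
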
For the older 32-bit tilepro architecture, we have to extend
the set of low-level assembly routines to include 32-bit "and",
as well as all three 64-bit routines. Somewhat confusingly,
some 32-bit versions are already used by the bitops inlines, with
parameter types appropriate for bitops, so we have to do a bit of
casting to match "int" to "unsigned long".
Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1436474297-32187-1-git-send-email-cmetcalf@ezchip.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
The tilegx and tilepro compilers use .coldtext as the name of their
unlikely-executed text section, so an __attribute__((cold)) function
will (when compiled with higher optimization levels) land in
the .coldtext section.
Modify modpost to add .coldtext to the set of OTHER_TEXT_SECTIONS
so we don't get warnings about referencing such a section in an
__ex_table block, and then also modify arch/tile/lib/memcpy_user_64.c
so that it uses plain ".coldtext" instead of ".coldtext.memcpy".
The latter naming is a relic of an earlier use of -ffunction-sections,
which we no longer use by default.
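
For illustration (hypothetical function), the attribute in question:

/* With the tile toolchains at higher optimization levels, a function
 * marked cold is emitted into .coldtext instead of .text. */
static __attribute__((cold, unused)) void rare_error_path(void)
{
	/* rarely executed code */
}
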
Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
---
This commit fixes a number of issues with the tile backtrace code.
- Don't try to identify userspace shared object or executable paths
if we are doing a backtrace from an interrupt; it's not legal,
and also unlikely to be interesting. Likewise, don't try to do
it for other address spaces, since d_path() assumes it is being
called in "current" context.
- Move "in_backtrace" from thread_struct to thread_info.
This way we can access it even if our stack thread_info has been
clobbered, which makes backtracing more robust.
- Avoid using "current" directly when testing for is_sigreturn().
Since "current" may be corrupt, we're better off using kbt->task
explicitly to look up the vdso_base for the current task.
Conveniently, this simplifies the internal APIs (we only need
one is_sigreturn() function now).
- Avoid bogus "Odd fault" warning when pc/sp/ex1 are all zero,
as is true for kernel threads above the last frame.
- Hook into Tejun Heo's dump_stack() framework in lib/dump_stack.c.
- Write last entry in save_stack_trace() as ULONG_MAX, not zero,
since ftrace (at least) relies on finding that marker.
- Implement save_stack_trace_regs() and save_stack_trace_user(),
and set CONFIG_USER_STACKTRACE_SUPPORT.
Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
---
This change enables the generic strncpy_from_user() and strnlen_user()
using word-at-a-time.h. The tile implementation is trivial since
both tilepro and tilegx have SIMD operations that do byte-wise
comparisons against immediate zero for each byte, and return an
0x01 byte in each position where there is a 0x00 byte.
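
A portable C sketch of the byte-wise zero test those SIMD operations perform in a single instruction (this is the classic bit trick, not the tile intrinsic; it is exact only up to the first zero byte when scanning from the low end on a little-endian machine):

#include <stdint.h>

uint64_t bytes_equal_zero(uint64_t w)
{
	/* high bit set in each byte that is 0x00 ... */
	uint64_t t = (w - 0x0101010101010101ULL) & ~w & 0x8080808080808080ULL;
	/* ... normalized to the 0x01-per-zero-byte form described above */
	return t >> 7;
}
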
Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
---
Rather than trying to wait until all possible lockers have
unlocked the lock, we now only wait until the current locker
(if any) has released the lock.
The old code was correct, but the new code works more like the x86
code and thus hopefully is more appropriate under contention.
See commit 78bff1c8684f ("x86/ticketlock: Fix spin_unlock_wait()
livelock") for x86.
Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
---
Use 'long long' instead of 'u64' for atomic64_t and its related
functions.
atomic64_t holds a signed value, and the atomic64_* functions must
likewise take and return signed values (both parameters and return
values), so use 'long long' instead of 'u64'.
This also fixes a bug in atomic64_add_negative(): a u64 is never
less than 0.
The modifications are:
in vim, use the "1,% s/\<u64\>/long long/g" command.
remove the now-redundant '__aligned(8)'.
keep within the 80-column (and macro '\') limit after replacement.
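
A standalone demonstration of the atomic64_add_negative() bug mentioned above (a sketch, not kernel code):

#include <stdio.h>

typedef unsigned long long u64;

int main(void)
{
	u64 r = (u64)-5;		  /* e.g. an add_return() result of -5 */
	printf("%d\n", r < 0);		  /* 0: a u64 is never negative */
	printf("%d\n", (long long)r < 0); /* 1: the signed compare works */
	return 0;
}
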
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com> [re-instated const cast]
---
The macrology in cmpxchg.h was designed to allow arbitrary pointer
and integer values to be passed through the routines. To support
cmpxchg() on 64-bit values on the 32-bit tilepro architecture, we
used the idiom "(typeof(val))(typeof(val-val))". This way, in the
"size 8" branch of the switch, when the underlying cmpxchg routine
returns a 64-bit quantity, we cast it first to a typeof(val-val)
quantity (i.e. size_t if "val" is a pointer) with no warnings about
casting between pointers and integers of different sizes, then cast
onwards to typeof(val), again with no warnings. If val is not a
pointer type, the additional cast is a no-op. We can't replace the
typeof(val-val) cast with (for example) unsigned long, since then if
"val" is really a 64-bit type, we cast away the high bits.
HOWEVER, this fails with current gcc (through 4.7 at least) if "val"
is a pointer to an incomplete type. Unfortunately gcc isn't smart
enough to realize that "val - val" will always be a size_t type
even if it's an incomplete type pointer.
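
A standalone sketch of the old idiom and its failure mode (the macro and function names are invented for illustration):

#include <stdint.h>

/* For a pointer, (val - val) has type ptrdiff_t, so the inner cast
 * shrinks a 64-bit result to pointer width without a mixed-size
 * warning and the outer cast restores the original type. */
#define LAUNDER(val, raw) ((__typeof__(val))(__typeof__((val) - (val)))(raw))

struct opaque;	/* incomplete type */

int *fine(uint64_t raw, int *val)
{
	return LAUNDER(val, raw);	/* int* - int* is valid */
}

#if 0	/* fails to compile: arithmetic on a pointer to an incomplete type */
struct opaque *broken(uint64_t raw, struct opaque *val)
{
	return LAUNDER(val, raw);
}
#endif
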
Accordingly, I've reworked the way we handle the casting. We have
given up the ability to use cmpxchg() on 64-bit values on tilepro,
which is OK in the kernel since we should use cmpxchg64() explicitly
on such values anyway. As a result, I can just use simple "unsigned
long" casts internally.
As I reworked it, I realized it would be cleaner to move the
architecture-specific conditionals for cmpxchg and xchg out of the
atomic.h headers and into cmpxchg, and then use the cmpxchg() and
xchg() primitives directly in atomic.h and elsewhere. This allowed
the cmpxchg.h header to stand on its own without relying on the
implicit include of it that is performed by <asm/atomic.h>.
It also allowed collapsing the atomic_xchg/atomic_cmpxchg routines
from atomic_{32,64}.h into atomic.h.
I improved the tests that guard the allowed size of the arguments
to the routines to use a __compiletime_error() test. (By avoiding
the use of BUILD_BUG, I could include cmpxchg.h into bitops.h as
well and use the macros there, which is otherwise impossible due
to include order dependency issues.)
The tilepro _atomic_xxx internal methods were previously set up to
take atomic_t and atomic64_t arguments, which isn't as convenient
with the new model, so I modified them to take int or u64 arguments,
which is consistent with how they used the arguments internally
anyway; this provided some nice simplification there too.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This chip is no longer being actively developed for (it was superseded
by the TILEPro64 in 2008), and in any case the existing compiler and
toolchain in the community do not support it. It's unlikely that the
kernel works with TILE64 at this point as the configuration has not been
tested in years. The support is also awkward as it requires maintaining
a significant number of ifdefs. So, just remove it altogether.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
Allow enabling frame pointer support; this makes it easier to hook
into the various kernel features that claim they require it without
having to add Kconfig conditionals everywhere (a la mips, ppc, s390,
and microblaze). When enabled, it basically eliminates leaf functions
as such, and stops optimizing tail and sibling calls. It adds around
3% to the size of the kernel when enabled.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
In strncpy_from_user_asm, when the destination buffer length was the
same as the actual string length, we were returning the size of the
destination buffer. But since it's a NUL terminated string, we should
return the length of the string instead.
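
A userspace analog of the fixed contract (names invented):

#include <stddef.h>

/* Return the string length when the whole NUL-terminated string fits
 * in 'count' bytes; only return 'count' on truncation. */
long strncpy_sketch(char *dst, const char *src, size_t count)
{
	size_t i;

	for (i = 0; i < count; i++) {
		dst[i] = src[i];
		if (src[i] == '\0')
			return i;	/* string length, not buffer size */
	}
	return count;			/* truncated */
}
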
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This may fix a reported bug where an R_TILEGX_64 in a module was not
pointing to an aligned address.
Reported-by: Simon Marchi <simon.marchi@polymtl.ca>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This commit adds support for static ftrace, function graph support,
and dynamic tracer support.
Signed-off-by: Tony Lu <zlu@tilera.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This change adds support for CONFIG_PREEMPT (full kernel preemption).
In addition to the core support, this change includes a number
of places where we fix up uses of smp_processor_id() and per-cpu
variables. I also eliminate the PAGE_HOME_HERE and PAGE_HOME_UNKNOWN
values for page homing, as it turns out they weren't being used.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
Using strlen as a model, add length checking to create strnlen.
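
The semantics being added, as a portable sketch:

#include <stddef.h>

/* Like strlen, but never examine more than 'count' bytes. */
size_t strnlen_sketch(const char *s, size_t count)
{
	const char *p = s;

	while (count-- && *p)
		p++;
	return p - s;
}
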
Signed-off-by: Ken Steele <ken@tilera.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This change cleans up the string code in a number of ways:
- For memcpy(), fix bug in prefetch and increase distance to 3 lines;
optimize for unaligned data; do all loads before wh64 to make memcpy
safe for forward-overlapping calls; etc. Performance is improved.
- Use a new copy_byte() function on tilegx to spread a single byte value
out into a full word using the shufflebytes instruction (see the sketch
below).
- Clean up header include ordering to be more canonical, and remove
spurious #undefs of function names.
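
A portable analog of what copy_byte() does in a single shufflebytes instruction:

#include <stdint.h>

uint64_t spread_byte(uint8_t b)
{
	/* replicate the byte into all eight lanes of a 64-bit word */
	return (uint64_t)b * 0x0101010101010101ULL;
}
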
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
The "inv" (invalidate) instruction is generally less safe than "finv"
(flush and invalidate), as it will drop dirty data from the cache.
It turns out we have almost no need for "inv" (other than for the
older 32-bit architecture in some limited cases), so convert to
"finv" where possible and delete the extra "inv" infrastructure.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
gcc 4.7.x is emitting calls to __ffsdi2 where previously
it used to inline the appropriate ctz instructions.
While this needs to be fixed in gcc, it's also easy to avoid
having it cause build failures when building with those
compilers by exporting __ffsdi2 to modules.
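
What __ffsdi2 computes, expressed with the builtin gcc would normally inline:

#include <stdint.h>

/* find-first-set: 1 + index of the least significant set bit,
 * or 0 if no bit is set */
int ffsdi2_sketch(int64_t v)
{
	return v ? 1 + __builtin_ctzll(v) : 0;
}
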
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: stable@kernel.org
---
Merge branch 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile
Pull tile update from Chris Metcalf:
"The interesting bug fix is support for the upcoming "4.2" release of
the Tilera hypervisor, which by default launches Linux at privilege
level 2 instead of 1. The fix lets new and old hypervisors and
Linuxes interoperate more smoothly, so I've tagged it for
stable@kernel.org so that older Linuxes will be able to boot under the
newer hypervisor."
* 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
usb: tilegx: fix memleak when create hcd fail
arch/tile: remove inline marking of EXPORT_SYMBOL functions
rtc: rtc-tile: add missing platform_device_unregister() when module exit
tile: support new Tilera hypervisor
---
EXPORT_SYMBOL and inline directives are contradictory to each other.
The patch fixes this inconsistency.
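
The shape of the inconsistency, sketched in kernel context (hypothetical function name):

#include <linux/module.h>

/* 'inline' permits the compiler to emit no out-of-line symbol at all,
 * while EXPORT_SYMBOL requires one for modules to link against; the
 * fix is simply to drop the 'inline'. */
inline void tile_helper(void) { }	/* may produce no symbol ... */
EXPORT_SYMBOL(tile_helper);		/* ... yet this demands one */
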
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Denis Efremov <yefremov.denis@gmail.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
The help text for this config is duplicated across the x86, parisc, and
s390 Kconfig.debug files. Arnd Bergmann noted that the help text was
slightly misleading and should be fixed to state that enabling this
option isn't a problem when using pre-4.4 gcc.
To simplify the rewording, consolidate the text into lib/Kconfig.debug
and modify it there to be more explicit about when you should say N to
this config.
Also, make the text a bit more generic by stating that this option
enables compile time checks so we can cover architectures which emit
warnings vs. ones which emit errors. The details of how an
architecture decided to implement the checks aren't as important as
the concept of compile time checking of copy_from_user() calls.
While we're doing this, remove all the copy_from_user_overflow() code
that's duplicated many times and place it into lib/ so that any
architecture supporting this option can get the function for free.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Helge Deller <deller@gmx.de>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
This showed up when running with "allmodconfig". I used
EXPORT_SYMBOL() to match existing conventions in files that
were already exporting symbols, or that were exported that way
by other architectures, and otherwise EXPORT_SYMBOL_GPL().
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This makes it available to the tilegx network driver.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This change introduces new flags for the hv_install_context()
API that passes a page table pointer to the hypervisor. Clients
can explicitly request 4K, 16K, or 64K small pages when they
install a new context. In practice, the page size is fixed at
kernel compile time and the same size is always requested every
time a new page table is installed.
The <hv/hypervisor.h> header changes so that it provides more abstract
macros for managing "page" things like PFNs and page tables. For
example there is now a HV_DEFAULT_PAGE_SIZE_SMALL instead of the old
HV_PAGE_SIZE_SMALL. The various PFN routines have been eliminated and
only PA- or PTFN-based ones remain (since PTFNs are always expressed
in a fixed 2KB "page" size). The page-table management macros are
renamed with a leading underscore and take page-size arguments with
the presumption that clients will use those macros in some single
place to provide the "real" macros they will use themselves.
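
A sketch of the fixed PTFN unit (the macro and function names here are invented; only the 2KB granularity comes from the text above):

/* PTFNs count fixed 2 KB units regardless of the small-page size the
 * kernel selects, so PA <-> PTFN conversion never changes. */
#define LOG2_PTFN_SIZE	11	/* 2 KB */

static inline unsigned long pa_to_ptfn(unsigned long long pa)
{
	return (unsigned long)(pa >> LOG2_PTFN_SIZE);
}
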
I happened to notice the old hv_set_caching() API was totally broken
(it assumed 4KB pages) so I changed it so it would nominally work
correctly with other page sizes.
Tag modules with the page size so you can't load a module built with
a conflicting page size. (And add a test for SMP while we're at it.)
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
Use direct load/store for the get_user/put_user.
Previously, we would call out to a helper routine that would do the
appropriate thing and then return, handling the possible exception
internally. Now we inline the load or store, along with a "we succeeded"
indication in a register; if the load or store faults, we write a
"we failed" indication into the same register and then return to the
following instruction. This is more efficient and gives us more compact
code, as well as being more in line with what other architectures do.
The special futex assembly source file for TILE-Gx also disappears in
this change; we just use the same inlining idiom there as well, putting
the appropriate atomic operations directly into futex_atomic_op_inuser()
(and thus into the FUTEX_WAIT function).
The underlying atomic copy_from_user, copy_to_user functions were
renamed using the (cryptic) x86 convention as copy_from_user_ll and
copy_to_user_ll.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
The toolchain supports big-endian mode now, so add support for building
the kernel to run big-endian as well.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
There were some correctness issues with this code that are now fixed
with this change. The change is likely less performant than it could
be, but it should no longer be vulnerable to any races with memory
operations on the memory network while invalidating a range of memory.
This code is run infrequently so performance isn't critical, but
correctness definitely is.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
Pragmatically it couldn't be wrong to cast pointers to long to compare
them (since all kernel addresses are in the top half of VA space),
but it's more correct to cast to unsigned long.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
We were carefully computing a value to use for the number of loops
to spin for, and then ignoring it.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
Add a comment explaining why this is important, and add a CFLAGS_REMOVE
clause to the Makefile to make sure it happens.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
Parentheses were missing.
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
The empty_zero_page[] export is required for ZERO_PAGE() module references.
The #includes are due to changes in implicit inclusion, and should of
course have been in the sources from the beginning.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
An earlier Tilera compiler generated calls to an "__ll_mul"
function for long long multiplication. Our libgcc supported that
as an alias for the normal __muldi3 routine, so we made it available
to kernel modules as well. However, for a while now the compiler
has internally been generating only the standard __muldi3 symbol,
and the version we are giving back to the community does not have
the __ll_mul alias, so we are removing it from the kernel too.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
The 32-bit TILEPro support uses some #defines in <asm/atomic_32.h>
for atomic support routines in assembly. To make this more explicit,
the assembly code now includes <asm/atomic_32.h> directly, which
should hopefully make it clear that those #defines shouldn't be
bombed into <linux/atomic.h> in any cleanups.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>
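
The helper being consolidated, as a C11 analog (the kernel version is built on atomic_cmpxchg()):

#include <stdatomic.h>
#include <stdbool.h>

bool atomic_inc_not_zero_sketch(atomic_int *v)
{
	int old = atomic_load_explicit(v, memory_order_relaxed);

	while (old != 0) {
		/* on failure, 'old' is reloaded with the current value */
		if (atomic_compare_exchange_weak(v, &old, old + 1))
			return true;
	}
	return false;
}
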
Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
This support was partially present in the existing code (look for
"__tilegx__" ifdefs) but with this change you can build a working
kernel using the TILE-Gx toolchain and ARCH=tilegx.
Most of these files are new, generally adding a foo_64.c file
where previously there was just a foo_32.c file.
The ARCH=tilegx directive redirects to arch/tile, not arch/tilegx,
using the existing SRCARCH mechanism in the top-level Makefile.
Changes to existing files:
- <asm/bitops.h> and <asm/bitops_32.h> changed to factor the
include of <asm-generic/bitops/non-atomic.h> in the common header.
- <asm/compat.h> and arch/tile/kernel/compat.c changed to remove
the "const" markers I had put on compat_sys_execve() when trying
to match some recent similar changes to the non-compat execve.
It turns out the compat version wasn't "upgraded" to use const.
- <asm/opcode-tile_64.h> and <asm/opcode_constants_64.h> were
previously included accidentally, with the 32-bit contents. Now
they have the proper 64-bit contents.
Finally, I had to hack the existing hacky drivers/input/input-compat.h
to add yet another "#ifdef" for INPUT_COMPAT_TEST (same as x86_64).
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> [drivers/input]
---
Otherwise, it's possible to end up with the prefetcher pulling
data into cache that the code believes has been flushed.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This semantic was already true for atomic operations within the kernel,
and this change makes it true for the fast atomic syscalls (__NR_cmpxchg
and __NR_atomic_update) as well. Previously, user-space had to use
the fast atomic syscalls exclusively to update memory, since raw stores
could lose a race with the atomic update code even when the atomic update
hadn't actually modified the value.
With this change, we no longer write back the value to memory if it
hasn't changed. This allows certain types of idioms in user space to
work as expected, e.g. "atomic exchange" to acquire a spinlock, followed
by a raw store of zero to release the lock.
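
The user-space idiom this enables, in C11 form (an analog, not the tilepro syscall interface):

#include <stdatomic.h>

static atomic_int lock;

void lock_acquire_sketch(void)
{
	/* "atomic exchange" to take the lock ... */
	while (atomic_exchange_explicit(&lock, 1, memory_order_acquire))
		;	/* spin */
}

void lock_release_sketch(void)
{
	/* ... followed by a raw store of zero to release it, which is
	 * only safe once the atomic path stops writing back unchanged
	 * values */
	atomic_store_explicit(&lock, 0, memory_order_release);
}
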
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
Commit 8d7718aa082aaf30a0b4989e1f04858952f941bc changed "int"
to "u32" in the prototypes but not the definition.
I missed this when I saw the patch go by on LKML.
We cast "u32 *" to "int *" since we are tying into the underlying
atomics framework, and atomic_t uses int as its value type.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Reviewed-by: Michel Lespinasse <walken@google.com>
---
The first issue fixed in this patch is that pending rwlock write locks
could lock out new readers; this could cause a deadlock if a read lock was
held on cpu 1, a write lock was then attempted on cpu 2 and was pending,
and cpu 1 was interrupted and attempted to re-acquire a read lock.
The write lock code was modified to not lock out new readers.
The second issue fixed is that there was a narrow race window where a tns
instruction had been issued (setting the lock value to "1") and the store
instruction to reset the lock value correctly had not yet been issued.
In this case, if an interrupt occurred and the same cpu then tried to
manipulate the lock, it would find the lock value set to "1" and spin
forever, assuming some other cpu was partway through updating it. The fix
is to enforce an interrupt critical section around the tns/store pair.
In addition, this change now arranges to always validate that after
a readlock we have not wrapped around the count of readers, which
is only eight bits.
Since these changes make the rwlock "fast path" code heavier weight,
I decided to move all the rwlock code out of line, leaving only the
conventional spinlock code with fastpath inlines. Since the read_lock
and read_trylock implementations ended up very similar, I just expressed
read_lock in terms of read_trylock.
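
The resulting shape of read_lock, sketched with kernel-style names (illustrative only):

/* with the heavyweight handling out of line, read_lock reduces to
 * spinning on read_trylock */
static void arch_read_lock_sketch(arch_rwlock_t *rwlock)
{
	while (!arch_read_trylock(rwlock))
		cpu_relax();
}
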
As part of this change I also eliminate support for the now-obsolete
tns_atomic mode.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
The Tilera architecture traditionally supports 64KB page sizes
to improve TLB utilization and improve performance when the
hardware is being used primarily to run a single application.
For more generic server scenarios, it can be beneficial to run
with 4KB page sizes, so this commit allows that to be specified
(by modifying the arch/tile/include/hv/pagesize.h header).
As part of this change, we also re-worked the PTE management
slightly so that PTE writes all go through a __set_pte() function
where we can do some additional validation. The set_pte_order()
function was eliminated since the "order" argument wasn't being used.
One bug uncovered was in the PCI DMA code, which wasn't properly
flushing the specified range. This was benign with 64KB pages,
but with 4KB pages we were getting some larger flushes wrong.
The per-cpu memory reservation code also needed updating to
conform with the newer percpu stuff; before it always chose 64KB,
and that was always correct, but with 4KB granularity we now have
to pay closer attention and reserve the amount of memory that will
be requested when the percpu code starts allocating.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This is a grab bag of changes with no actual change to generated code.
This includes whitespace and comment typos, plus a couple of stale
comments being removed.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
This adds a grab bag of symbols that have been missing for
various modules.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
It now takes an additional argument so it can be used to
flush-and-invalidate pages that are cached using hash-for-home
as well as those that are cached with coherence point on a single cpu.
This allows it to be used more widely for changing the coherence
point of arbitrary pages when necessary.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>