| author | Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 2014-09-05 11:14:48 -0700 |
|---|---|---|
| committer | Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 2015-01-06 11:01:09 -0800 |
| commit | 536fa402221f09633e7c5801b327055ab716a363 (patch) | |
| tree | 1f52ab7292d280947e73eeed571c941f009a5d99 /kernel/test_kprobes.c | |
| parent | 924df8a0117aa2725750f1ec4d282867534d9a89 (diff) | |
compiler: Allow 1- and 2-byte smp_load_acquire() and smp_store_release()
CPUs without single-byte and double-byte loads and stores place some
"interesting" requirements on concurrent code. For example (adapted
from Peter Hurley's test code), suppose we have the following structure:
	struct foo {
		spinlock_t lock1;
		spinlock_t lock2;
		char a; /* Protected by lock1. */
		char b; /* Protected by lock2. */
	};
	struct foo *foop;
Of course, it is common (and good) practice to place data protected
by different locks in separate cache lines. However, if the locks are
rarely acquired (for example, only in rare error cases), and there are
a great many instances of the data structure, then memory footprint can
trump false-sharing concerns, so that it can be better to place them in
the same cache line as above.
But if the CPU does not support single-byte loads and stores, a store
to foop->a will do a non-atomic read-modify-write operation on foop->b,
which will come as a nasty surprise to someone holding foop->lock2. So we
now require CPUs to support single-byte and double-byte loads and stores.
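To make the failure mode concrete, here is an illustrative sketch (not part
of this patch; the word size, byte offset, and helper name are assumptions)
of what a compiler for such a CPU effectively has to emit for a plain store
to foop->a:

	/* Hypothetical expansion of "foop->a = 1;" on a CPU lacking byte stores. */
	void store_a(struct foo *p)
	{
		/* Round &p->a down to the containing machine word. */
		unsigned long *word = (unsigned long *)((unsigned long)&p->a &
							~(sizeof(unsigned long) - 1));
		unsigned long tmp;

		tmp = *word;		/* load: picks up p->b as well       */
		tmp &= ~0xffUL;		/* clear a's byte (offset assumed)   */
		tmp |= 1;		/* insert the new value of a         */
		*word = tmp;		/* store: writes back a stale p->b   */
	}

If another CPU updates p->b under lock2 between that load and store, its
update is silently lost.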
Therefore, this commit adjusts the definition of __native_word() to allow
these sizes to be used by smp_load_acquire() and smp_store_release().
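For reference, the widened check ends up looking roughly like the sketch
below (paraphrased from include/linux/compiler.h; exact macro layout and
assertion wording may differ):

	/* A type is "native" if the CPU can load/store it in one access. */
	#define __native_word(t) \
		(sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
		 sizeof(t) == sizeof(int)  || sizeof(t) == sizeof(long))

	/* smp_load_acquire()/smp_store_release() reject other sizes at build time. */
	#define compiletime_assert_atomic_type(t) \
		compiletime_assert(__native_word(t), \
			"Need native word sized stores/loads for atomicity.")

With this in place, smp_store_release(&foop->a, 1) and smp_load_acquire(&foop->b)
compile for the char fields above, while an odd-sized type still trips the
build-time assertion.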
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Diffstat (limited to 'kernel/test_kprobes.c')
0 files changed, 0 insertions, 0 deletions