author		Lai Jiangshan <laijs@cn.fujitsu.com>	2010-06-03 18:26:24 +0800
committer	Steven Rostedt <rostedt@goodmis.org>	2010-07-20 22:05:34 -0400
commit		bc289ae98b75d93228d24f521ef02a076e506e94 (patch)
tree		50d151d0fbde1b106932c0f80a2639839d261ca3 /kernel/trace
parent		985023dee6e212493831431ba2e3ce8918f001b2 (diff)
tracing: Reduce latency and remove percpu trace_seq
__print_flags() and __print_symbolic() use a percpu trace_seq:
1) Its memory is allocated at compile time, so it wastes memory even when
   tracing is not used.
2) It is percpu data, so it wastes even more memory on multi-CPU systems.
3) It disables preemption while it executes its core routine,
   "trace_seq_printf(s, "%s: ", #call);", which introduces latency.
So we move this trace_seq into struct trace_iterator (see the sketch below).
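
A simplified sketch of the difference in the output path, not the exact
kernel macro expansion: the per-cpu variable name matches the code removed
in the diff below, while the wrapper functions print_event_old()/print_event_new()
and the tmp_seq field name are assumptions for illustration only.

#include <linux/percpu.h>
#include <linux/trace_seq.h>
#include <linux/ftrace_event.h>

/* Before: a per-cpu scratch buffer, only safe to use with preemption off. */
DEFINE_PER_CPU(struct trace_seq, ftrace_event_seq);

static enum print_line_t print_event_old(struct trace_iterator *iter)
{
	struct trace_seq *p = &get_cpu_var(ftrace_event_seq);	/* disables preemption */

	trace_seq_init(p);
	/* ... __print_flags()/__print_symbolic() format into *p here ... */
	put_cpu_var(ftrace_event_seq);				/* re-enables preemption */
	return TRACE_TYPE_HANDLED;
}

/*
 * After: the scratch buffer lives in the iterator itself, so there is no
 * compile-time per-cpu allocation and no preemption-disabled section.
 */
static enum print_line_t print_event_new(struct trace_iterator *iter)
{
	struct trace_seq *p = &iter->tmp_seq;

	trace_seq_init(p);
	/* ... __print_flags()/__print_symbolic() format into *p here ... */
	return TRACE_TYPE_HANDLED;
}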
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
LKML-Reference: <4C078350.7090106@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Diffstat (limited to 'kernel/trace')
-rw-r--r--	kernel/trace/trace_output.c	3
1 file changed, 0 insertions, 3 deletions
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 57c1b4596470..1ba64d3cc567 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -16,9 +16,6 @@
 
 DECLARE_RWSEM(trace_event_mutex);
 
-DEFINE_PER_CPU(struct trace_seq, ftrace_event_seq);
-EXPORT_PER_CPU_SYMBOL(ftrace_event_seq);
-
 static struct hlist_head event_hash[EVENT_HASHSIZE] __read_mostly;
 
 static int next_event_type = __TRACE_LAST_TYPE + 1;
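
The diffstat above is limited to 'kernel/trace', so the counterpart of this
removal is not shown. Per the commit message, the scratch buffer moves into
struct trace_iterator, roughly as in this hedged sketch; the field name
tmp_seq and the header location are assumptions, not taken from the diff:

/* include/linux/ftrace_event.h (assumed location of the added field) */
struct trace_iterator {
	/* ... existing members ... */
	struct trace_seq	tmp_seq;	/* scratch seq replacing the percpu ftrace_event_seq */
	/* ... */
};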