author | Rafael J. Wysocki <rafael.j.wysocki@intel.com> | 2016-03-04 03:58:22 +0100
committer | Rafael J. Wysocki <rafael.j.wysocki@intel.com> | 2016-03-09 15:07:58 +0100
commit | 08f511fd41c3afe303eb9b41bff0570f7c1b6937 (patch)
tree | 719b4b878711f8c4312fccb9d479151a817fcbaf /kernel/events/callchain.c
parent | e6f036571e1f65021a442ec7aad087a6a239ecfb (diff)
cpufreq: Reduce cpufreq_update_util() overhead a bit
Use the observation that cpufreq_update_util() is only called
by the scheduler with rq->lock held, so the callers of
cpufreq_set_update_util_data() can use synchronize_sched()
instead of synchronize_rcu() to wait for cpufreq_update_util()
to complete. Moreover, if they are updated to do that,
rcu_read_(un)lock() calls in cpufreq_update_util() might be
replaced with rcu_read_(un)lock_sched(), respectively, but
those aren't really necessary, because the scheduler calls
that function from RCU-sched read-side critical sections
already.
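For illustration, the writer-side pattern this enables looks roughly like the sketch below. Only cpufreq_set_update_util_data(), synchronize_sched() and the rq->lock reasoning come from the changelog; the callback signature, the prototype details and the helper name example_governor_stop() are assumptions, not the literal patch.

```c
#include <linux/types.h>
#include <linux/rcupdate.h>

struct update_util_data {
	/* Assumed callback signature for this sketch. */
	void (*func)(struct update_util_data *data, u64 time,
		     unsigned long util, unsigned long max);
};

/* Assumed prototype of the setter named in the changelog. */
void cpufreq_set_update_util_data(int cpu, struct update_util_data *data);

/* Governor teardown: unhook the callback, then wait for in-flight callers. */
static void example_governor_stop(int cpu)
{
	cpufreq_set_update_util_data(cpu, NULL);
	/*
	 * cpufreq_update_util() runs only with rq->lock held, i.e. with
	 * preemption disabled, which is already an RCU-sched read-side
	 * critical section, so synchronize_sched() is enough to wait for
	 * all in-flight callbacks; synchronize_rcu() is not needed.
	 */
	synchronize_sched();
}
```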
In addition to that, if cpufreq_set_update_util_data() checks
the func field in the struct update_util_data before setting
the per-CPU pointer to it, the data->func check may be dropped
from cpufreq_update_util() as well.
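Concretely, moving the check into the setter lets the fast path call the hook unconditionally once the pointer test succeeds. A rough sketch under the same assumptions as above (the per-CPU variable name, the WARN_ON and the function bodies are illustrative, not the literal patch):

```c
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>

static DEFINE_PER_CPU(struct update_util_data *, cpufreq_update_util_data);

void cpufreq_set_update_util_data(int cpu, struct update_util_data *data)
{
	/* Reject a hook without a callback so the fast path need not test it. */
	if (WARN_ON(data && !data->func))
		return;

	rcu_assign_pointer(per_cpu(cpufreq_update_util_data, cpu), data);
}

/* Called by the scheduler with rq->lock held (RCU-sched read side). */
void cpufreq_update_util(u64 time, unsigned long util, unsigned long max)
{
	struct update_util_data *data;

	data = rcu_dereference_sched(*this_cpu_ptr(&cpufreq_update_util_data));
	if (data)
		/* data->func is known to be set; no extra check in the fast path. */
		data->func(data, time, util, max);
}
```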
Make the above changes to reduce the overhead from
cpufreq_update_util() in the scheduler paths invoking it
and to make the cleanup after removing its callbacks
somewhat less heavy-weight.
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>