author    | Dietmar Eggemann <dietmar.eggemann@arm.com> | 2020-05-20 15:42:40 +0200
committer | Peter Zijlstra <peterz@infradead.org>       | 2020-06-15 14:10:05 +0200
commit    | fc9dc698472aa460a8b3b036d9b1d0b751f12f58
tree      | 168ab81c67176899e5b7d8529ad9c1c7e7256009
parent    | c81b89329933c6c0be809d4c0d2cb57c49153ee3
sched/deadline: Add dl_bw_capacity()
Capacity-aware SCHED_DEADLINE Admission Control (AC) needs the root domain
(rd) CPU capacity sum.

Introduce dl_bw_capacity(), which for a symmetric rd whose CPUs all have a
capacity of SCHED_CAPACITY_SCALE simply relies on dl_bw_cpus() and returns
the number of CPUs multiplied by SCHED_CAPACITY_SCALE.

For an asymmetric rd, or one whose CPU capacity is below
SCHED_CAPACITY_SCALE, it computes the CPU capacity sum over the
intersection of the rd span and cpu_active_mask.
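[Illustration only, not part of the patch: a minimal user-space sketch of the
two paths described above. The arrays cpu_capacity[] and cpu_active[] and the
asym_cpucapacity flag are hypothetical stand-ins for capacity_orig_of(),
rd->span, cpu_active_mask and the sched_asym_cpucapacity static key.]

/*
 * User-space model of the dl_bw_capacity() logic: fast path for a
 * symmetric, full-capacity root domain, capacity sum otherwise.
 */
#include <stdbool.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)
#define NR_CPUS			4

/* Hypothetical asymmetric system: two big CPUs, two little CPUs. */
static unsigned long cpu_capacity[NR_CPUS] = { 1024, 1024, 446, 446 };
static bool cpu_active[NR_CPUS] = { true, true, true, false };
static bool asym_cpucapacity = true;

/* Slow path: sum the capacities of the active CPUs of the rd. */
static unsigned long __dl_bw_capacity(void)
{
	unsigned long cap = 0;
	int i;

	for (i = 0; i < NR_CPUS; i++)
		if (cpu_active[i])
			cap += cpu_capacity[i];

	return cap;
}

/* Number of active CPUs in the rd. */
static int dl_bw_cpus(void)
{
	int i, cpus = 0;

	for (i = 0; i < NR_CPUS; i++)
		cpus += cpu_active[i];

	return cpus;
}

/*
 * Fast path when all CPUs are symmetric at SCHED_CAPACITY_SCALE:
 * #CPUs << SCHED_CAPACITY_SHIFT. Otherwise fall back to the sum.
 */
static unsigned long dl_bw_capacity(int cpu)
{
	if (!asym_cpucapacity && cpu_capacity[cpu] == SCHED_CAPACITY_SCALE)
		return (unsigned long)dl_bw_cpus() << SCHED_CAPACITY_SHIFT;

	return __dl_bw_capacity();
}

int main(void)
{
	printf("rd capacity sum: %lu\n", dl_bw_capacity(0));
	return 0;
}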
An 'XXX Fix:' comment was added to highlight that if 'rq->rd ==
def_root_domain' AC should be performed against the capacity of the CPU
the task is running on rather than against the rd CPU capacity sum. This
issue already exists without capacity awareness.
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lkml.kernel.org/r/20200520134243.19352-3-dietmar.eggemann@arm.com
kernel/sched/deadline.c | 33
1 file changed, 33 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index ec90265e9d8e..01f474a5bd14 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -69,6 +69,34 @@ static inline int dl_bw_cpus(int i)
 
 	return cpus;
 }
+
+static inline unsigned long __dl_bw_capacity(int i)
+{
+	struct root_domain *rd = cpu_rq(i)->rd;
+	unsigned long cap = 0;
+
+	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
+			 "sched RCU must be held");
+
+	for_each_cpu_and(i, rd->span, cpu_active_mask)
+		cap += capacity_orig_of(i);
+
+	return cap;
+}
+
+/*
+ * XXX Fix: If 'rq->rd == def_root_domain' perform AC against capacity
+ * of the CPU the task is running on rather rd's \Sum CPU capacity.
+ */
+static inline unsigned long dl_bw_capacity(int i)
+{
+	if (!static_branch_unlikely(&sched_asym_cpucapacity) &&
+	    capacity_orig_of(i) == SCHED_CAPACITY_SCALE) {
+		return dl_bw_cpus(i) << SCHED_CAPACITY_SHIFT;
+	} else {
+		return __dl_bw_capacity(i);
+	}
+}
 #else
 static inline struct dl_bw *dl_bw_of(int i)
 {
@@ -79,6 +107,11 @@ static inline int dl_bw_cpus(int i)
 {
 	return 1;
 }
+
+static inline unsigned long dl_bw_capacity(int i)
+{
+	return SCHED_CAPACITY_SCALE;
+}
 #endif
 
 static inline
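[Usage context, illustration only: this commit only adds the helper; the
admission-control change that consumes it lands in a follow-up patch. The
sketch below models, in user space, how an AC test can scale the per-unit
bandwidth limit by the rd capacity sum instead of the plain CPU count.
cap_scale(), dl_task_fits() and all numbers are hypothetical, not kernel API.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT	10
#define BW_SHIFT		20	/* bandwidth fixed-point shift */

/* Scale a per-unit-capacity bandwidth limit by the rd capacity sum. */
static uint64_t cap_scale(uint64_t bw, unsigned long cap)
{
	return (bw * cap) >> SCHED_CAPACITY_SHIFT;
}

/*
 * Admit a new DL task iff the already admitted bandwidth plus the new
 * task's bandwidth still fits under the limit scaled by the capacity sum.
 */
static bool dl_task_fits(uint64_t limit_bw, uint64_t total_bw,
			 uint64_t new_bw, unsigned long cap)
{
	return cap_scale(limit_bw, cap) >= total_bw + new_bw;
}

int main(void)
{
	/* Hypothetical numbers: 95% limit per unit capacity, 10ms/100ms task. */
	uint64_t limit = (95ULL << BW_SHIFT) / 100;
	uint64_t new_bw = (10000000ULL << BW_SHIFT) / 100000000ULL;
	unsigned long cap = 2 * 1024 + 2 * 446;	/* 2 big + 2 little CPUs */

	printf("admit: %s\n", dl_task_fits(limit, 0, new_bw, cap) ? "yes" : "no");
	return 0;
}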