author      Kirill Tkhai <ktkhai@parallels.com>    2014-03-06 13:32:01 +0400
committer   Ingo Molnar <mingo@kernel.org>         2014-03-11 12:05:39 +0100
commit      4c6c4e38c4e9a454889298dcc498174968d14a09 (patch)
tree        ffb6a22c1f3a58667b3c745fe055b6dab3065443 /kernel/sched/rt.c
parent      e4aa358b6c23f98b2715594f6b1e9a4996a55f04 (diff)
sched/core: Fix endless loop in pick_next_task()
1) Single CPU machine case.

When the rq contains only RT tasks but none of them can be
picked because of throttling, we enter an endless loop:
pick_next_task_{dl,rt} return NULL, and pick_next_task_fair()
permanently hits the retry path in

	if (rq->nr_running != rq->cfs.h_nr_running)
		return RETRY_TASK;

(rq->nr_running is not decremented when an rt_rq becomes
throttled). There is no chance to unthrottle any rt_rq or to
wake a fair task here, because the rq lock is held the whole
time and interrupts are disabled.
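
For context, the loop being retried is the sched-class walk in
pick_next_task() in kernel/sched/core.c; a condensed sketch of the
v3.14-era code (not part of this patch) shows why a permanent
RETRY_TASK never terminates:

	again:
		for_each_class(class) {
			p = class->pick_next_task(rq, prev);
			if (p) {
				/*
				 * pick_next_task_fair() keeps returning
				 * RETRY_TASK, so the class walk restarts;
				 * with every RT task throttled, nothing
				 * can break the cycle while rq->lock is
				 * held and interrupts stay disabled.
				 */
				if (unlikely(p == RETRY_TASK))
					goto again;
				return p;
			}
		}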
2) In the SMP case this can cause a hang too. Although we drop
the rq lock in idle_balance(), interrupts remain disabled.

The solution is to check for available tasks in the DL and RT
classes instead of comparing the summed nr_running counters.
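
Concretely, the fixed check lives in idle_balance() in
kernel/sched/fair.c, outside the rt.c-limited diffstat below;
reproduced approximately rather than verbatim, it only signals a
retry (pulled_task = -1, which makes pick_next_task_fair() return
RETRY_TASK) when a higher class really has a runnable task:

	/*
	 * Is there a task of a high priority class? A throttled rt_rq
	 * no longer counts, so the endless RETRY_TASK cycle cannot
	 * start once every runnable RT task is throttled.
	 */
	if (this_rq->nr_running != this_rq->cfs.h_nr_running &&
	    (this_rq->dl.dl_nr_running ||
	     (this_rq->rt.rt_nr_running && !rt_rq_throttled(&this_rq->rt))))
		pulled_task = -1;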
Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1394098321.19290.11.camel@tkhai
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/rt.c')
-rw-r--r--    kernel/sched/rt.c    10
1 file changed, 0 insertions, 10 deletions
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index f3cee0a63b76..d8cdf1618551 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -470,11 +470,6 @@ static void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
 		dequeue_rt_entity(rt_se);
 }
 
-static inline int rt_rq_throttled(struct rt_rq *rt_rq)
-{
-	return rt_rq->rt_throttled && !rt_rq->rt_nr_boosted;
-}
-
 static int rt_se_boosted(struct sched_rt_entity *rt_se)
 {
 	struct rt_rq *rt_rq = group_rt_rq(rt_se);
@@ -545,11 +540,6 @@ static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
 {
 }
 
-static inline int rt_rq_throttled(struct rt_rq *rt_rq)
-{
-	return rt_rq->rt_throttled;
-}
-
 static inline const struct cpumask *sched_rt_period_mask(void)
 {
 	return cpu_online_mask;
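
Note that the diffstat above is limited to kernel/sched/rt.c, so only
deletions are visible here: the two rt_rq_throttled() variants are not
dropped but relocated so that fair.c can use them in the new check. A
sketch of the relocated helpers, assuming they land in
kernel/sched/sched.h with the same CONFIG_RT_GROUP_SCHED split as the
deleted rt.c copies:

	/*
	 * Sketch of the helpers as relocated by the full commit (the
	 * additions are outside this rt.c-only view). A boosted rt_rq
	 * (priority inheritance) is never treated as throttled.
	 */
	#ifdef CONFIG_RT_GROUP_SCHED
	static inline int rt_rq_throttled(struct rt_rq *rt_rq)
	{
		return rt_rq->rt_throttled && !rt_rq->rt_nr_boosted;
	}
	#else /* !CONFIG_RT_GROUP_SCHED */
	static inline int rt_rq_throttled(struct rt_rq *rt_rq)
	{
		return rt_rq->rt_throttled;
	}
	#endif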