author | Tejun Heo <tj@kernel.org> | 2019-09-04 12:45:53 -0700
committer | Jens Axboe <axboe@kernel.dk> | 2019-09-10 12:31:39 -0600
commit | e1518f63f246831af222758ead022cd40e79fab8 (patch)
tree | 3d7868912f57e6b10947dbf955e157ebbd3279b2 /block
parent | 36a524814ff3e5d5385f42d30152fe8c5e1fd2c1 (diff)
blk-iocost: Don't let merges push vtime into the future
Merges have the same problem that forced bios had, which was fixed by
the previous patch: the cost of a merge is calculated at the time of
issue and force-advances vtime into the future. Until the global vtime
catches up, changes to the cgroup's hweight in the meantime have no
effect, which often leads to situations where the cost is calculated
at one hweight and paid at a very different one. See the previous
patch for more details.
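The hweight scaling behind that mismatch can be sketched as follows. This is a hedged illustration, not the kernel's code: the function mirrors `abs_cost_to_cost()` in blk-iocost.c, and the `HWEIGHT_WHOLE` value of `1 << 16` is an assumption for demonstration purposes.

```c
#include <stdint.h>

/* Assumed scale factor for illustration; see HWEIGHT_WHOLE in blk-iocost.c. */
#define HWEIGHT_WHOLE (1u << 16)

/*
 * Convert an absolute (device-time) cost into a vtime cost for a cgroup
 * whose active hierarchical weight is hw_inuse.  A cgroup with a smaller
 * hweight pays a proportionally larger vtime cost, so the same abs_cost
 * translates to very different charges if hweight shifts between the
 * time the cost is calculated and the time it is paid.
 */
static uint64_t toy_abs_cost_to_cost(uint64_t abs_cost, uint32_t hw_inuse)
{
	return abs_cost * HWEIGHT_WHOLE / hw_inuse;
}
```

For example, a cost computed at full hweight doubles if it is effectively paid when the cgroup's hweight has dropped to half.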
Fix it by never advancing vtime into the future for merges. If budget
is available, vtime is advanced. Otherwise, the cost is charged as
debt.
This brings merge cost handling in line with issue cost handling in
ioc_rqos_throttle().
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block')
-rw-r--r-- | block/blk-iocost.c | 25
1 file changed, 18 insertions(+), 7 deletions(-)
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index cffed980dfac..e72e562d4aad 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -1784,28 +1784,39 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq,
 					struct bio *bio)
 {
 	struct ioc_gq *iocg = blkg_to_iocg(bio->bi_blkg);
+	struct ioc *ioc = iocg->ioc;
 	sector_t bio_end = bio_end_sector(bio);
+	struct ioc_now now;
 	u32 hw_inuse;
 	u64 abs_cost, cost;
 
-	/* add iff the existing request has cost assigned */
-	if (!rq->bio || !rq->bio->bi_iocost_cost)
+	/* bypass if disabled or for root cgroup */
+	if (!ioc->enabled || !iocg->level)
 		return;
 
 	abs_cost = calc_vtime_cost(bio, iocg, true);
 	if (!abs_cost)
 		return;
 
+	ioc_now(ioc, &now);
+	current_hweight(iocg, NULL, &hw_inuse);
+	cost = abs_cost_to_cost(abs_cost, hw_inuse);
+
 	/* update cursor if backmerging into the request at the cursor */
 	if (blk_rq_pos(rq) < bio_end &&
 	    blk_rq_pos(rq) + blk_rq_sectors(rq) == iocg->cursor)
 		iocg->cursor = bio_end;
 
-	current_hweight(iocg, NULL, &hw_inuse);
-	cost = div64_u64(abs_cost * HWEIGHT_WHOLE, hw_inuse);
-	bio->bi_iocost_cost = cost;
-
-	atomic64_add(cost, &iocg->vtime);
+	/*
+	 * Charge if there's enough vtime budget and the existing request
+	 * has cost assigned.  Otherwise, account it as debt.  See debt
+	 * handling in ioc_rqos_throttle() for details.
+	 */
+	if (rq->bio && rq->bio->bi_iocost_cost &&
+	    time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow))
+		iocg_commit_bio(iocg, bio, cost);
+	else
+		atomic64_add(abs_cost, &iocg->abs_vdebt);
 }
 
 static void ioc_rqos_done_bio(struct rq_qos *rqos, struct bio *bio)
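The charging decision the patch introduces can be sketched in userspace C. This is a hedged, simplified model: the names (`toy_iocg`, `merge_charge`) and the plain integer fields are illustrative stand-ins for the kernel's atomic64 fields and `time_before_eq64()` comparison, not the actual implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the fields ioc_rqos_merge() touches (illustrative only). */
struct toy_iocg {
	uint64_t vtime;     /* cgroup's local vtime */
	uint64_t abs_vdebt; /* absolute cost carried as debt */
};

/*
 * Charge a merge's cost.  "cost" is the vtime cost at the current
 * hweight; "abs_cost" is the absolute cost; "vnow" is the global vtime.
 * Returns true if vtime was advanced, false if the cost became debt.
 * The key property this patch establishes: vtime is never pushed past
 * vnow, i.e. never into the future.
 */
static bool merge_charge(struct toy_iocg *iocg, uint64_t abs_cost,
			 uint64_t cost, uint64_t vnow)
{
	if (iocg->vtime + cost <= vnow) {
		/* budget available: pay now, staying at or behind vnow */
		iocg->vtime += cost;
		return true;
	}
	/* over budget: record the absolute cost as debt instead */
	iocg->abs_vdebt += abs_cost;
	return false;
}
```

With budget to spare (`vtime + cost <= vnow`) the cost is committed immediately; otherwise the absolute cost is deferred as debt to be worked off later, matching the debt handling in `ioc_rqos_throttle()`.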