path: root/block/blk-core.c
2008-05-28  block: Move the second call to get_request to the end of the loop  (Zhang, Yanmin)
In get_request_wait(), the second call to get_request() can be moved to the end of the while loop: if the first call fails, the second call will fail as well without ever sleeping.
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
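
A minimal sketch of the loop shape after this change, reconstructed from the description rather than from the actual patch (locking, unplugging and io_context handling in the real get_request_wait() are omitted):

	/* Sketch only: simplified get_request_wait() loop, not the kernel source. */
	static struct request *get_request_wait(struct request_queue *q, int rw_flags,
						struct bio *bio)
	{
		const int rw = rw_flags & 0x01;
		struct request *rq;

		rq = get_request(q, rw_flags, bio, GFP_NOIO);
		while (!rq) {
			DEFINE_WAIT(wait);

			prepare_to_wait_exclusive(&q->rq.wait[rw], &wait,
						  TASK_UNINTERRUPTIBLE);
			/* sleep until a request is freed ... */
			io_schedule();
			finish_wait(&q->rq.wait[rw], &wait);

			/*
			 * ... and only then retry the allocation.  Previously the
			 * retry sat before io_schedule(), so a failed first attempt
			 * was followed by a second, almost certainly futile attempt
			 * without sleeping in between.
			 */
			rq = get_request(q, rw_flags, bio, GFP_NOIO);
		}

		return rq;
	}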

2008-05-14  Remove blkdev warning triggered by using md  (Neil Brown)
As setting and clearing queue flags now requires that we hold a spinlock on the queue, and as blk_queue_stack_limits is called without that lock, get the lock inside blk_queue_stack_limits. For blk_queue_stack_limits to be able to find the right lock, each md personality needs to set q->queue_lock to point to the appropriate lock. Those personalities which didn't previously use a spinlock now use q->__queue_lock, so that lock is always initialised when the queue is allocated. With this in place, setting/clearing the QUEUE_FLAG_PLUGGED bit will no longer cause warnings, as it will be clear that the proper lock is held. Thanks to Dan Williams for review and for fixing the silly bugs.
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Alistair John Strachan <alistair@devzero.co.uk>
Cc: Nick Piggin <npiggin@suse.de>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Jacek Luczak <difrost.kernel@gmail.com>
Cc: Prakash Punnoor <prakash@punnoor.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
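
A hedged sketch of the change described above (the limit-copying details are elided and the exact upstream diff may differ): blk_queue_stack_limits() takes the stacked queue's own lock around the flag update, which is why every md personality now has to make q->queue_lock point at something valid.

	/* Sketch: grab the queue lock inside blk_queue_stack_limits() itself. */
	void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
	{
		/* ... copy max_sectors, segment limits, etc. from b to t ... */

		if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags)) {
			unsigned long flags;

			/* t->queue_lock must have been set up by the stacking driver */
			spin_lock_irqsave(t->queue_lock, flags);
			queue_flag_clear(QUEUE_FLAG_CLUSTER, t);
			spin_unlock_irqrestore(t->queue_lock, flags);
		}
	}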

2008-05-07  block: avoid duplicate calls to get_part() in disk stat code  (Jens Axboe)
get_part() is fairly expensive, as it loops over the partitions in O(N) to find the right one. In lots of normal IO paths we end up looking up the partition twice, to make matters even worse. Change the stat add code to accept a passed-in partition instead.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
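
Roughly, the pattern changes from "each stat macro looks the partition up itself" to "the caller looks it up once and passes it down". The helper and stat macros below are illustrative stand-ins, not the real block-layer API:

	/* Illustrative only: look up the partition once, reuse it for every stat. */
	static void example_account_io(struct request *rq, int rw)
	{
		struct hd_struct *part = get_part(rq->rq_disk, rq->sector);

		/* example_stat_* are hypothetical names for the real stat macros */
		example_stat_inc(rq->rq_disk, part, ios[rw]);
		example_stat_add(rq->rq_disk, part, sectors[rw], rq->nr_sectors);
	}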

2008-05-07  block: optimize generic_unplug_device()  (Jens Axboe)
Original patch from Mikulas Patocka <mpatocka@redhat.com>: Mike Anderson was doing an OLTP benchmark on a computer with 48 physical disks mapped to one logical device via device mapper. He found that there was a slowdown on request_queue->lock in the function generic_unplug_device. The slowdown is caused by the fact that when some code calls unplug on the device mapper, device mapper calls unplug on all physical disks. These unplug calls take the lock, find that the queue is already unplugged, release the lock and exit. With the patch below, performance of the benchmark was increased by 18% (the whole OLTP application, not just block layer microbenchmarks), so I'm submitting this patch for upstream. I think the patch is correct, because when several threads call plug and unplug simultaneously, it is unspecified whether the queue ends up plugged or unplugged (so the patch can't make this worse), and the caller that plugged the queue should unplug it anyway (if it doesn't, there's a 3ms timeout).
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
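
The shape of the optimization, as described: peek at the plugged flag before bothering with the queue lock. This mirrors what the commit message says the patch does, though it is reproduced here from the description rather than the diff.

	/* Only take request_queue->lock when there is actually work to do. */
	void generic_unplug_device(struct request_queue *q)
	{
		if (blk_queue_plugged(q)) {
			spin_lock_irq(q->queue_lock);
			__generic_unplug_device(q);
			spin_unlock_irq(q->queue_lock);
		}
	}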

2008-05-01  block: remove remaining __FUNCTION__ occurrences  (Harvey Harrison)
__FUNCTION__ is gcc-specific; use __func__ instead.
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
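
The substitution itself is mechanical; the printk line below is a made-up example rather than a line quoted from blk-core.c:

	/* before: __FUNCTION__ is a gcc extension */
	printk(KERN_ERR "%s: request botched\n", __FUNCTION__);

	/* after: __func__ is standard C99 */
	printk(KERN_ERR "%s: request botched\n", __func__);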

2008-04-29  block: add large command support  (FUJITA Tomonori)
This patch changes rq->cmd from a static array to a pointer in order to support large commands. We rarely handle large commands, so as an optimization a struct request still has a static array for a command; rq_init() points rq->cmd at that static array.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
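
A sketch of the layout this describes (field layout illustrative, not a verbatim copy of struct request): a small inline array keeps the common case allocation-free, and the initializer simply points rq->cmd at it.

	#define BLK_MAX_CDB	16	/* as in the kernel headers of this era */

	struct request {
		/* ... */
		unsigned short	cmd_len;
		unsigned char	__cmd[BLK_MAX_CDB];	/* inline storage for typical commands */
		unsigned char	*cmd;			/* points at __cmd unless a larger
							 * buffer has been attached */
		/* ... */
	};

	/* blk_rq_init() then does, roughly:  rq->cmd = rq->__cmd;  */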

2008-04-29  block: replace sizeof(rq->cmd) with BLK_MAX_CDB  (FUJITA Tomonori)
This is a preparation for changing rq->cmd from the static array to a pointer.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Boaz Harrosh <bharrosh@panasas.com>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
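
In practice this means code stops deriving the command-buffer size from the array type; the lines below are illustrative, not quotes from the tree:

	/* before: ties the size to rq->cmd being an array */
	memset(rq->cmd, 0, sizeof(rq->cmd));

	/* after: works whether rq->cmd is an array or a pointer */
	memset(rq->cmd, 0, BLK_MAX_CDB);
	BUG_ON(rq->cmd_len > BLK_MAX_CDB);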

2008-04-29  block: rename and export rq_init()  (FUJITA Tomonori)
This renames rq_init() to blk_rq_init() and exports it. Any path that hands a request to the block layer needs to call it to initialize the request. This is a preparation for large command support, which needs to initialize the request in a proper way (that is, just doing a memset() will not work).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-04-29  block: make queue flags non-atomic  (Nick Piggin)
We can save some atomic ops in the IO path if we clearly define the rules for how to modify the queue flags.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
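
A hedged sketch of what "clearly defined rules" means here: flag updates happen under q->queue_lock, which allows the cheaper non-atomic bitops. The helper bodies below are reconstructed from the description, not copied from the patch.

	/* Sketch: queue-flag helpers that assume the queue lock is held. */
	static inline void queue_flag_set(unsigned int flag, struct request_queue *q)
	{
		/* caller must hold q->queue_lock */
		__set_bit(flag, &q->queue_flags);
	}

	static inline void queue_flag_clear(unsigned int flag, struct request_queue *q)
	{
		/* caller must hold q->queue_lock */
		__clear_bit(flag, &q->queue_flags);
	}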

2008-04-29  block: make rq_init() do a full memset()  (FUJITA Tomonori)
This requires moving rq_init() from get_request() to blk_alloc_request(). The upside is that we can now require an rq_init() from any path that wishes to hand the request to the block layer. rq_init() will be exported for the code that uses struct request without blk_get_request. This is a preparation for large command support, which needs to initialize struct request in a proper way (that is, just doing a memset() will not work).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
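
Roughly what the initializer looks like once it does a full memset (reconstructed from the description; the real field list is longer): zero everything, then set the handful of fields whose defaults are not zero.

	/* Sketch of rq_init() after this change. */
	void rq_init(struct request_queue *q, struct request *rq)
	{
		memset(rq, 0, sizeof(*rq));

		INIT_LIST_HEAD(&rq->queuelist);
		rq->q = q;
		rq->sector = (sector_t) -1;	/* "no sector yet" */
		rq->ref_count = 1;
		/* ... any other non-zero defaults ... */
	}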

2008-03-04  unexport blk_{get,put}_queue  (Adrian Bunk)
This patch removes the unused exports of blk_{get,put}_queue.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-03-04  block: restore the meaning of rq->data_len to the true data length  (FUJITA Tomonori)
The meaning of rq->data_len was changed to the length of the allocated buffer rather than the true data length, which breaks SG_IO and friends as well as bsg. This patch restores the meaning of rq->data_len to the true data length and adds rq->extra_len to store the extended length (due to the drain buffer and padding). This patch also removes the code that updates the bio in blk_rq_map_user, introduced by commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa. That commit adjusts the bio according to memory alignment (queue_dma_alignment); however, memory alignment is NOT padding alignment, and this adjustment also breaks SG_IO and friends as well as bsg. Padding alignment needs to be fixed in a proper way (by a separate patch).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>
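
The resulting split between the two length fields, as described above (layout illustrative, comments mine):

	struct request {
		/* ... */
		unsigned int	data_len;	/* true data length, what SG_IO/bsg report */
		unsigned int	extra_len;	/* bytes added on top for the drain buffer
						 * and padding; only the low-level driver cares */
		/* ... */
	};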

2008-03-04  block: fix kernel-docbook parameters and files  (Randy Dunlap)
kernel-doc for block/:
- add missing parameters
- fix one function's parameter list (remove blank line)
- add 2 source files to docbook for non-exported kernel-doc functions
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-02-19  block: add request->raw_data_len  (Tejun Heo)
With padding and draining moved into it, the block layer may now extend requests as directed by queue parameters, so a request has two sizes: the original request size, and the extended size that matches the size of the area pointed to by bios (and later by sgs). The latter size is what lower layers are primarily interested in when allocating, filling up DMA tables and setting up the controller. Both padding and draining extend the data area to accommodate controller characteristics. As any controller which speaks SCSI can handle underflows, feeding a larger data area is safe. So, this patch makes the primary data length field, request->data_len, indicate the size of the full data area, and adds a separate length field, request->raw_data_len, for the unmodified request size. The latter is used when reporting to higher layers (userland) and where the original request size should be fed to the controller or device.
Signed-off-by: Tejun Heo <htejun@gmail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
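
The field meanings introduced by this commit, shown schematically (note that the 2008-03-04 entry above later reworks this so that data_len is again the true length and the extension lives in extra_len):

	struct request {
		/* ... */
		unsigned int	raw_data_len;	/* original, unpadded request size
						 * (reported to userland) */
		unsigned int	data_len;	/* full data area including padding and
						 * drain buffer, as seen by the driver */
		/* ... */
	};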

2008-02-19  make blk-core.c:request_cachep static again  (Adrian Bunk)
request_cachep needlessly became global.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>

2008-02-08  Enhanced partition statistics: remove old partition statistics  (Jerome Marchand)
Removes the now unused old partition statistics code.
Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-02-08  Enhanced partition statistics: update partition statistics  (Jerome Marchand)
Updates the enhanced partition statistics in the generic block layer in addition to the disk statistics.
Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-02-08  block: fixup rq_init() a bit  (Jens Axboe)
Rearrange fields in cache order and initialize some fields that we didn't previously init. Remove the init of ->completion_data, since it's part of a union with ->hash. Luckily, clearing the rb node is the same as setting it to NULL!
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-02-01  block: make core bits checkpatch compliant  (Jens Axboe)
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-02-01  block: new end request handling interface should take unsigned byte counts  (Jens Axboe)
No point in passing signed integers as the byte count; they can never be negative.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
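
In terms of prototypes, the change is simply int to unsigned int for the byte-count parameters; one representative signature, written out from the description rather than quoted from the header:

	/* before: signed byte count
	 *   int blk_end_request(struct request *rq, int error, int nr_bytes);
	 * after: unsigned byte count
	 */
	int blk_end_request(struct request *rq, int error, unsigned int nr_bytes);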

2008-01-29  block: ll_rw_blk.c split, add blk-merge.c  (Jens Axboe)
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-01-29  block: remove dated (and wrong) comment in blk-core.c  (Jens Axboe)
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-01-29  block: get rid of unnecessary forward declarations in blk-core.c  (Jens Axboe)
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-01-29  block: continue ll_rw_blk.c splitup  (Jens Axboe)
Adds files for barrier handling, rq execution, io context handling, mapping data to requests, and queue settings.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-01-29  block: split tag and sysfs handling from blk-core.c  (Jens Axboe)
Separates the tag and sysfs handling from ll_rw_blk.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2008-01-29  block: first step of splitting ll_rw_blk, rename it  (Jens Axboe)
This way we retain the history in blk-core.c.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>