path: root/block/bfq-iosched.c
Age    Commit message    Author
2017-10-03  block, bfq: decrease burst size when queues in burst exit  (Paolo Valente)
If many queues belonging to the same group happen to be created shortly after each other, then the concurrent processes associated with these queues have typically a common goal, and they get it done as soon as possible if not hampered by device idling. Examples are processes spawned by git grep, or by systemd during boot. As for device idling, this mechanism is currently necessary for weight raising to succeed in its goal: privileging I/O. In view of these facts, BFQ does not provide the above queues with either weight raising or device idling. On the other hand, a burst of queue creations may be caused also by the start-up of a complex application. In this case, these queues need usually to be served one after the other, and as quickly as possible, to maximise responsiveness. Therefore, in this case the best strategy is to weight-raise all the queues created during the burst, i.e., the exact opposite of the strategy for the above case. To distinguish between the two cases, BFQ uses an empirical burst-size threshold, found through extensive tests and monitoring of daily usage. Only large bursts, i.e., bursts with a size above this threshold, are considered as generated by a high number of parallel processes. In this respect, upstart-based boot proved to be rather hard to detect as generating a large burst of queue creations, because with upstart most of the queues created in a burst exit *before* the next queues in the same burst are created. To address this issue, I changed the burst-detection mechanism so as to not decrease the size of the current burst even if one of the queues in the burst is eliminated. Unfortunately, this missing decrease causes false positives on very fast systems: on the start-up of a complex application, such as libreoffice writer, so many queues are created, served and exited shortly after each other, that a large burst of queue creations is wrongly detected as occurring. These false positives just disappear if the size of a burst is decreased when one of the queues in the burst exits. This commit restores the missing burst-size decrease, relying on the fact that upstart is apparently unlikely to be used on systems running this and future versions of the kernel. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it> Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com> Tested-by: Mirko Montanari <mirkomontanari91@gmail.com> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name> Tested-by: Lee Tibbert <lee.tibbert@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
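As an illustration of the bookkeeping described above, here is a minimal standalone C sketch; the threshold value and field names are illustrative stand-ins, not the actual bfq-iosched.c identifiers:

  #include <stdbool.h>

  #define LARGE_BURST_THRESHOLD 4   /* illustrative, not BFQ's tuned value */

  struct sched_data {
      int burst_size;     /* queues currently counted in the burst */
      bool large_burst;   /* set once the threshold is exceeded */
  };

  /* Called when a newly created queue is detected as part of the current burst. */
  static void burst_queue_created(struct sched_data *sd)
  {
      sd->burst_size++;
      if (sd->burst_size > LARGE_BURST_THRESHOLD)
          sd->large_burst = true;   /* burst deemed "many parallel processes" */
  }

  /*
   * Called when a queue belonging to the burst exits.  This is the decrease
   * restored by the commit: without it, short-lived queues created during an
   * application start-up keep inflating burst_size on fast systems and cause
   * false positives.
   */
  static void burst_queue_exited(struct sched_data *sd)
  {
      if (sd->burst_size > 0)
          sd->burst_size--;
  }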
2017-10-03  block, bfq: let early-merged queues be weight-raised on split too  (Paolo Valente)
A just-created bfq_queue, say Q, may happen to be merged with another bfq_queue on the very first invocation of the function __bfq_insert_request. In such a case, even if Q would clearly deserve interactive weight raising (as it has just been created), the function bfq_add_request does not get invoked for Q, and thus weight raising is not activated for Q. As a consequence, when the state of Q is saved for a possible future restore, after a split of Q from the other bfq_queue(s), such a state happens to be (unjustly) non-weight-raised. Then the bfq_queue will not enjoy any weight raising on the split, even if it should still be in an interactive weight-raising period when the split occurs. This commit solves this problem as follows, for a just-created bfq_queue that is being early-merged: it stores directly, in the saved state of the bfq_queue, the weight-raising state that would have been assigned to the bfq_queue if not early-merged. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Tested-by: Angelo Ruocco <angeloruocco90@gmail.com> Tested-by: Mirko Montanari <mirkomontanari91@gmail.com> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name> Tested-by: Lee Tibbert <lee.tibbert@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-10-03  block, bfq: check and switch back to interactive wr also on queue split  (Paolo Valente)
As already explained in the message of commit "block, bfq: fix wrong init of saved start time for weight raising", if a soft real-time weight-raising period happens to be nested in a larger interactive weight-raising period, then BFQ restores the interactive weight raising at the end of the soft real-time weight raising. In particular, BFQ checks whether the latter has ended only on request dispatches. Unfortunately, the above scheme fails to restore interactive weight raising in the following corner case: if a bfq_queue, say Q, 1) Is merged with another bfq_queue while it is in a nested soft real-time weight-raising period. The weight-raising state of Q is then saved, and not considered any longer until a split occurs. 2) Is split from the other bfq_queue(s) at a time instant when its soft real-time weight raising is already finished. On the split, while resuming the previous, soft real-time weight-raised state of the bfq_queue Q, BFQ checks whether the current soft real-time weight-raising period is actually over. If so, BFQ switches weight raising off for Q, *without* checking whether the soft real-time period was actually nested in a not-yet-finished interactive weight-raising period. This commit addresses this issue by adding the above missing check in bfq_queue splits, and restoring interactive weight raising if needed. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Tested-by: Angelo Ruocco <angeloruocco90@gmail.com> Tested-by: Mirko Montanari <mirkomontanari91@gmail.com> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name> Tested-by: Lee Tibbert <lee.tibbert@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-10-03  block, bfq: fix wrong init of saved start time for weight raising  (Paolo Valente)
This commit fixes a bug that causes bfq to fail to guarantee a high responsiveness on some drives, if there is heavy random read+write I/O in the background. More precisely, such a failure allowed this bug to be found [1], but the bug may well cause other yet unreported anomalies. BFQ raises the weight of the bfq_queues associated with soft real-time applications, to privilege the I/O, and thus reduce latency, for these applications. This mechanism is named soft-real-time weight raising in BFQ. A soft real-time period may happen to be nested into an interactive weight raising period, i.e., it may happen that, when a bfq_queue switches to a soft real-time weight-raised state, the bfq_queue is already being weight-raised because deemed interactive too. In this case, BFQ saves, in a special variable wr_start_at_switch_to_srt, the time instant when the interactive weight-raising period started for the bfq_queue, i.e., the time instant when BFQ started to deem the bfq_queue interactive. This value is then used to check whether the interactive weight-raising period would still be in progress when the soft real-time weight-raising period ends. If so, interactive weight raising is restored for the bfq_queue. This restore is useful, in particular, because it prevents bfq_queues from losing their interactive weight raising prematurely, as a consequence of spurious, short-lived soft real-time weight-raising periods caused by wrong detections as soft real-time. If, instead, a bfq_queue switches to soft-real-time weight raising while it *is not* already in an interactive weight-raising period, then the variable wr_start_at_switch_to_srt has no meaning during the following soft real-time weight-raising period. Unfortunately the handling of this case is wrong in BFQ: not only is the variable not flagged somehow as meaningless, but it is also set to the time when the switch to soft real-time weight-raising occurs. This may cause an interactive weight-raising period to be considered mistakenly as still in progress, and thus a spurious interactive weight-raising period to start for the bfq_queue, at the end of the soft-real-time weight-raising period. In particular the spurious interactive weight-raising period will be considered as still in progress, if the soft-real-time weight-raising period does not last very long. The bfq_queue will then be wrongly privileged and, if I/O bound, will unjustly steal bandwidth from truly interactive or soft real-time bfq_queues, harming responsiveness and low latency. This commit fixes this issue by just setting wr_start_at_switch_to_srt to minus infinity (farthest past time instant according to jiffies macros): when the soft-real-time weight-raising period ends, certainly no interactive weight-raising period will be considered as still in progress. [1] Background I/O Type: Random - Background I/O mix: Reads and writes - Application to start: LibreOffice Writer in http://www.phoronix.com/scan.php?page=news_item&px=Linux-4.13-IO-Laptop Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Angelo Ruocco <angeloruocco90@gmail.com> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name> Tested-by: Lee Tibbert <lee.tibbert@gmail.com> Tested-by: Mirko Montanari <mirkomontanari91@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
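The "minus infinity" idea can be pictured with the following hedged, userspace-style C sketch; jiffies handling is simplified and the names are stand-ins for the real BFQ fields and macros:

  #include <limits.h>
  #include <stdbool.h>

  /* Userspace stand-in for jiffies-style time keeping. */
  static unsigned long jiffies;   /* current time, in ticks */

  /* "Minus infinity": a time stamp so far in the past that no weight-raising
   * period started there can still be in progress. */
  #define SMALLEST_FROM_NOW() (jiffies - (ULONG_MAX / 4))

  struct queue_state {
      unsigned long wr_start_at_switch_to_srt;  /* saved start of interactive wr */
      unsigned long wr_cur_max_time;            /* duration of interactive wr */
  };

  /* Would the interactive weight-raising period still be running now? */
  static bool interactive_wr_still_in_progress(const struct queue_state *q)
  {
      unsigned long end = q->wr_start_at_switch_to_srt + q->wr_cur_max_time;

      return (long)(jiffies - end) < 0;   /* jiffies is still before 'end' */
  }

  /*
   * The fix: when switching to soft real-time weight raising while NOT already
   * interactively weight-raised, make the saved start time meaningless instead
   * of setting it to "now".
   */
  static void switch_to_srt_without_interactive_wr(struct queue_state *q)
  {
      q->wr_start_at_switch_to_srt = SMALLEST_FROM_NOW();
  }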
2017-09-09  Merge branch 'for-4.14/block-postmerge' of git://git.kernel.dk/linux-block  (Linus Torvalds)
Pull followup block layer updates from Jens Axboe: "I ended up splitting the main pull request for this series into two, mainly because of clashes between NVMe fixes that went into 4.13 after the for-4.14 branches were split off. This pull request is mostly NVMe, but not exclusively. In detail, it contains:

  - Two pull requests for NVMe changes from Christoph. Nothing new on the feature front, basically just fixes all over the map for the core bits, transport, rdma, etc.

  - Series from Bart, cleaning up various bits in the BFQ scheduler.

  - Series of bcache fixes, which has been lingering for a release or two. Coly sent this in, but patches from various people in this area.

  - Set of patches for BFQ from Paolo himself, updating both documentation and fixing some corner cases in performance.

  - Series from Omar, attempting to now get the 4k loop support correct. Our confidence level is higher this time.

  - Series from Shaohua for loop as well, improving O_DIRECT performance and fixing a use-after-free"

* 'for-4.14/block-postmerge' of git://git.kernel.dk/linux-block: (74 commits)
  bcache: initialize dirty stripes in flash_dev_run()
  loop: set physical block size to logical block size
  bcache: fix bch_hprint crash and improve output
  bcache: Update continue_at() documentation
  bcache: silence static checker warning
  bcache: fix for gc and write-back race
  bcache: increase the number of open buckets
  bcache: Correct return value for sysfs attach errors
  bcache: correct cache_dirty_target in __update_writeback_rate()
  bcache: gc does not work when triggering by manual command
  bcache: Don't reinvent the wheel but use existing llist API
  bcache: do not subtract sectors_to_gc for bypassed IO
  bcache: fix sequential large write IO bypass
  bcache: Fix leak of bdev reference
  block/loop: remove unused field
  block/loop: fix use after free
  bfq: Use icq_to_bic() consistently
  bfq: Suppress compiler warnings about comparisons
  bfq: Check kstrtoul() return value
  bfq: Declare local functions static
  ...
2017-09-01  bfq: Use icq_to_bic() consistently  (Bart Van Assche)
Some code uses icq_to_bic() to convert an io_cq pointer to a bfq_io_cq pointer while other code uses a direct cast. Convert the code that uses a direct cast such that it uses icq_to_bic(). Acked-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-09-01  bfq: Suppress compiler warnings about comparisons  (Bart Van Assche)
This patch avoids that the following warnings are reported when building with W=1:

block/bfq-iosched.c: In function 'bfq_back_seek_max_store':
block/bfq-iosched.c:4860:13: warning: comparison of unsigned expression < 0 is always false [-Wtype-limits]
  if (__data < (MIN)) \
             ^
block/bfq-iosched.c:4876:1: note: in expansion of macro 'STORE_FUNCTION'
 STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0);
 ^~~~~~~~~~~~~~
block/bfq-iosched.c: In function 'bfq_slice_idle_store':
block/bfq-iosched.c:4860:13: warning: comparison of unsigned expression < 0 is always false [-Wtype-limits]
  if (__data < (MIN)) \
             ^
block/bfq-iosched.c:4879:1: note: in expansion of macro 'STORE_FUNCTION'
 STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 2);
 ^~~~~~~~~~~~~~
block/bfq-iosched.c: In function 'bfq_slice_idle_us_store':
block/bfq-iosched.c:4892:13: warning: comparison of unsigned expression < 0 is always false [-Wtype-limits]
  if (__data < (MIN)) \
             ^
block/bfq-iosched.c:4899:1: note: in expansion of macro 'USEC_STORE_FUNCTION'
 USEC_STORE_FUNCTION(bfq_slice_idle_us_store, &bfqd->bfq_slice_idle, 0,
 ^~~~~~~~~~~~~~~~~~~

Acked-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
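One common way to keep such a store macro warning-free is to route the bounds through local variables, so the comparison is no longer against a literal 0. The sketch below is illustrative only (userspace strtoul, made-up tunable), not the exact bfq macro:

  #include <stdio.h>
  #include <stdlib.h>

  /*
   * Sysfs-style store macro.  Comparing an unsigned value directly against a
   * literal 0 triggers -Wtype-limits under W=1; clamping through local
   * variables keeps the behaviour identical but warning-free.
   */
  #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX)                        \
  static int __FUNC(const char *page)                                    \
  {                                                                      \
      unsigned long __data, __min = (MIN), __max = (MAX);                \
      char *end;                                                         \
      __data = strtoul(page, &end, 10);                                  \
      if (end == page)                                                   \
          return -1;                                                     \
      if (__data < __min)                                                \
          __data = __min;                                                \
      else if (__data > __max)                                           \
          __data = __max;                                                \
      *(__PTR) = __data;                                                 \
      return 0;                                                          \
  }

  static unsigned long back_seek_max;
  STORE_FUNCTION(back_seek_max_store, &back_seek_max, 0, 16384)

  int main(void)
  {
      back_seek_max_store("4096");
      printf("back_seek_max = %lu\n", back_seek_max);
      return 0;
  }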
2017-09-01  bfq: Check kstrtoul() return value  (Bart Van Assche)
Make sysfs writes fail for invalid numbers instead of storing uninitialized data copied from the stack. This patch removes all uninitialized_var() occurrences from the BFQ source code. Acked-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
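The shape of the fix, as a small standalone sketch in which strtoul stands in for kstrtoul and the tunable name is made up:

  #include <errno.h>
  #include <stdlib.h>

  /*
   * Before: the parsed value was used even when parsing failed, so an
   * uninitialized stack variable could end up in the tunable.
   * After: the return value is checked and the write fails instead.
   */
  static int store_tunable(const char *page, unsigned long *tunable)
  {
      char *end;
      unsigned long val;

      errno = 0;
      val = strtoul(page, &end, 10);
      if (errno || end == page)
          return -EINVAL;     /* reject invalid input, keep the old value */

      *tunable = val;
      return 0;
  }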
2017-09-01  bfq: Annotate fall-through in a switch statement  (Bart Van Assche)
This patch avoids that gcc 7 issues a warning about fall-through when building with W=1. Acked-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-08-31  block, bfq: make lookup_next_entity push up vtime on expirations  (Paolo Valente)
To provide a very smooth service, bfq starts to serve a bfq_queue only if the queue is 'eligible', i.e., if the same queue would have started to be served in the ideal, perfectly fair system that bfq simulates internally. This is obtained by associating each queue with a virtual start time, and by computing a special system virtual time quantity: a queue is eligible only if the system virtual time has reached the virtual start time of the queue. Finally, bfq guarantees that, when a new queue must be set in service, there is always at least one eligible entity for each active parent entity in the scheduler. To provide this guarantee, the function __bfq_lookup_next_entity pushes up, for each parent entity on which it is invoked, the system virtual time to the minimum among the virtual start times of the entities in the active tree for the parent entity (more precisely, the push up occurs if the system virtual time happens to be lower than all such virtual start times). There is however a circumstance in which __bfq_lookup_next_entity cannot push up the system virtual time for a parent entity, even if the system virtual time is lower than the virtual start times of all the child entities in the active tree. It happens if one of the child entities is in service. In fact, in such a case, there is already an eligible entity, the in-service one, even if it may not be present in the active tree (because in-service entities may be removed from the active tree). Unfortunately, in the last re-design of the hierarchical-scheduling engine, the reset of the pointer to the in-service entity for a given parent entity--reset to be done as a consequence of the expiration of the in-service entity--always happens after the function __bfq_lookup_next_entity has been invoked. This causes the function to think that there is still an entity in service for the parent entity, and then that the system virtual time cannot be pushed up, even if actually such a no-more-in-service entity has already been properly reinserted into the active tree (or in some other tree if no more active). Yet, the system virtual time *had* to be pushed up, to be ready to correctly choose the next queue to serve. Because of the lack of this push up, bfq may wrongly set in service a queue that had been speculatively pre-computed as the possible next-in-service queue, but that would no more be the one to serve after the expiration and the reinsertion into the active trees of the previously in-service entities. This commit addresses this issue by making __bfq_lookup_next_entity properly push up the system virtual time if an expiration is occurring. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Tested-by: Lee Tibbert <lee.tibbert@gmail.com> Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name> Signed-off-by: Jens Axboe <axboe@kernel.dk>
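The push-up itself can be sketched as follows; the array-based "active tree" and the names are simplifications of the real red-black-tree-based code:

  #include <stddef.h>

  struct entity {
      unsigned long long start;   /* virtual start time */
  };

  struct service_tree {
      struct entity *active[16];  /* stand-in for the active RB-tree */
      size_t nr_active;
      unsigned long long vtime;   /* system virtual time for this level */
  };

  /*
   * Push the system virtual time up to the minimum virtual start time of the
   * active entities, if it lags behind all of them, so that at least one
   * entity is eligible.  On expiration of the in-service entity this must be
   * allowed even though an in-service entity still appears to exist, which is
   * the point of the fix above.
   */
  static void update_vtime(struct service_tree *st)
  {
      unsigned long long min_start;
      size_t i;

      if (st->nr_active == 0)
          return;

      min_start = st->active[0]->start;
      for (i = 1; i < st->nr_active; i++)
          if (st->active[i]->start < min_start)
              min_start = st->active[i]->start;

      if (st->vtime < min_start)
          st->vtime = min_start;
  }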
2017-08-29  bfq: Re-enable auto-loading when built as a module  (Ben Hutchings)
The block core requests modules with the "-iosched" name suffix, but bfq no longer has that suffix. Add an alias. Fixes: ea25da48086d ("block, bfq: split bfq-iosched.c into multiple ...") Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-08-28  block, scheduler: convert xxx_var_store to void  (weiping zhang)
The last parameter "count" is never used in xxx_var_store, so convert these functions to void. Signed-off-by: weiping zhang <zhangweiping@didichuxing.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-08-23  block, bfq: fix error handle in bfq_init  (weiping zhang)
If elv_register() fails, bfq_pool should be freed. Signed-off-by: weiping zhang <zhangweiping@didichuxing.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
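A hedged sketch of the resulting error path, with stand-in helpers in place of elv_register() and the kmem_cache calls:

  #include <stdio.h>

  /* Stand-ins for the real initialization steps of bfq_init(). */
  static int create_pool(void)        { return 0; }   /* e.g. kmem_cache creation */
  static void destroy_pool(void)      { }
  static int register_elevator(void)  { return -1; }  /* pretend registration fails */

  static int scheduler_init(void)
  {
      int ret;

      ret = create_pool();
      if (ret)
          return ret;

      ret = register_elevator();
      if (ret)
          goto err_pool;   /* the previously missing step: free the pool on failure */

      return 0;

  err_pool:
      destroy_pool();
      return ret;
  }

  int main(void)
  {
      printf("init: %d\n", scheduler_init());
      return 0;
  }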
2017-08-11  block, bfq: boost throughput with flash-based non-queueing devices  (Paolo Valente)
When a queue associated with a process remains empty, there are cases where throughput gets boosted if the device is idled to await the arrival of a new I/O request for that queue. Currently, BFQ assumes that one of these cases is when the device has no internal queueing (regardless of the properties of the I/O being served). Unfortunately, this condition has proved to be too general. So, this commit refines it as "the device has no internal queueing and is rotational". This refinement provides a significant throughput boost with random I/O, on flash-based storage without internal queueing. For example, on a HiKey board, throughput increases by up to 125%, growing, e.g., from 6.9MB/s to 15.6MB/s with two or three random readers in parallel. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Luca Miccio <lucmiccio@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-08-11  block,bfq: refactor device-idling logic  (Paolo Valente)
The logic that decides whether to idle the device is scattered across three functions. Almost all of the logic is in the function bfq_bfqq_may_idle, but (1) part of the decision is made in bfq_update_idle_window, and (2) the function bfq_bfqq_must_idle may switch off idling regardless of the output of bfq_bfqq_may_idle. In addition, both bfq_update_idle_window and bfq_bfqq_must_idle make their decisions as a function of parameters that are used, for similar purposes, also in bfq_bfqq_may_idle. This commit addresses these issues by moving all the logic into bfq_bfqq_may_idle. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-07-12  bfq: dispatch request to prevent queue stalling after the request completion  (Hou Tao)
There are mq devices (e.g., virtio-blk, nbd and loopback) which don't invoke blk_mq_run_hw_queues() after the completion of a request. If bfq is enabled on these devices and the slice_idle attribute or strict_guarantees attribute is set to zero, it is possible that after a request completion the remaining requests of a busy bfq queue will be stalled in the bfq scheduler until a new request arrives. To fix the scheduler latency problem, we need to check whether or not all issued requests have completed and dispatch more requests to the driver if there is no request in the driver.

The problem can be reproduced by running the following script on a virtio-blk device with nr_hw_queues as 1:

#!/bin/sh
dev=vdb
# mount point for dev
mp=/tmp/mnt
cd $mp

job=strict.job
cat <<EOF > $job
[global]
direct=1
bs=4k
size=256M
rw=write
ioengine=libaio
iodepth=128
runtime=5
time_based

[1]
filename=1.data

[2]
new_group
filename=2.data
EOF

echo bfq > /sys/block/$dev/queue/scheduler
echo 1 > /sys/block/$dev/queue/iosched/strict_guarantees
fio $job

Signed-off-by: Hou Tao <houtao1@huawei.com> Reviewed-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
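Conceptually, the fix boils down to re-running dispatch from the completion path when the driver has gone idle while the scheduler still holds requests. A simplified sketch, with illustrative field names rather than the real bfq ones:

  #include <stdbool.h>

  struct sched_state {
      int rq_in_driver;        /* requests currently owned by the driver */
      int queued;              /* requests still queued in the scheduler */
      unsigned int slice_idle; /* 0 => no idling timer will poke us later */
      bool strict_guarantees;
  };

  /* Would the device sit idle even though we still have work to hand out? */
  static bool needs_kick_after_completion(const struct sched_state *s)
  {
      if (s->rq_in_driver > 0)
          return false;               /* driver still busy, it will come back */
      if (s->queued == 0)
          return false;               /* nothing left to dispatch */
      return s->slice_idle == 0 || s->strict_guarantees;
  }

  /* Completion hook: kick the hardware queues when the check above fires. */
  static void on_request_completed(struct sched_state *s, void (*run_hw_queues)(void))
  {
      s->rq_in_driver--;
      if (needs_kick_after_completion(s))
          run_hw_queues();            /* e.g. blk_mq_run_hw_queues() in the kernel */
  }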
2017-07-03  block, bfq: don't change ioprio class for a bfq_queue on a service tree  (Paolo Valente)
On each deactivation or re-scheduling (after being served) of a bfq_queue, BFQ invokes the function __bfq_entity_update_weight_prio(), to perform pending updates of ioprio, weight and ioprio class for the bfq_queue. BFQ also invokes this function on I/O-request dispatches, to raise or lower weights more quickly when needed, thereby improving latency. However, the entity representing the bfq_queue may be on the active (sub)tree of a service tree when this happens, and, although with a very low probability, the bfq_queue may happen to also have a pending change of its ioprio class. If both conditions hold when __bfq_entity_update_weight_prio() is invoked, then the entity moves to a sort of hybrid state: the new service tree for the entity, as returned by bfq_entity_service_tree(), differs from service tree on which the entity still is. The functions that handle activations and deactivations of entities do not cope with such a hybrid state (and would need to become more complex to cope). This commit addresses this issue by just making __bfq_entity_update_weight_prio() not perform also a possible pending change of ioprio class, when invoked on an I/O-request dispatch for a bfq_queue. Such a change is thus postponed to when __bfq_entity_update_weight_prio() is invoked on deactivation or re-scheduling of the bfq_queue. Reported-by: Marco Piazza <mpiazza@gmail.com> Reported-by: Laurentiu Nicola <lnicola@dend.ro> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Tested-by: Marco Piazza <mpiazza@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-06-27  block, bfq: update wr_busy_queues if needed on a queue split  (Paolo Valente)
This commit fixes a bug triggered by a non-trivial sequence of events. These events are briefly described in the next two paragraphs. The impatient, or those who are familiar with queue merging and splitting, can jump directly to the last paragraph. On each I/O-request arrival for a shared bfq_queue, i.e., for a bfq_queue that is the result of the merge of two or more bfq_queues, BFQ checks whether the shared bfq_queue has become seeky (i.e., if too many random I/O requests have arrived for the bfq_queue; if the device is non rotational, then random requests must be also small for the bfq_queue to be tagged as seeky). If the shared bfq_queue is actually detected as seeky, then a split occurs: the bfq I/O context of the process that has issued the request is redirected from the shared bfq_queue to a new non-shared bfq_queue. As a degenerate case, if the shared bfq_queue actually happens to be shared only by one process (because of previous splits), then no new bfq_queue is created: the state of the shared bfq_queue is just changed from shared to non shared. Regardless of whether a brand new non-shared bfq_queue is created, or the pre-existing shared bfq_queue is just turned into a non-shared bfq_queue, several parameters of the non-shared bfq_queue are set (restored) to the original values they had when the bfq_queue associated with the bfq I/O context of the process (that has just issued an I/O request) was merged with the shared bfq_queue. One of these parameters is the weight-raising state. If, on the split of a shared bfq_queue, 1) a pre-existing shared bfq_queue is turned into a non-shared bfq_queue; 2) the previously shared bfq_queue happens to be busy; 3) the weight-raising state of the previously shared bfq_queue happens to change; the number of weight-raised busy queues changes. The field wr_busy_queues must then be updated accordingly, but such an update was missing. This commit adds the missing update. Reported-by: Luca Miccio <lucmiccio@gmail.com> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
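A minimal sketch of the accounting added by this fix; the structures and the wr_coeff convention shown here are simplified stand-ins for the real BFQ ones:

  #include <stdbool.h>

  struct bfq_data_sketch {
      int wr_busy_queues;      /* number of busy, weight-raised queues */
  };

  struct bfq_queue_sketch {
      bool busy;               /* queue has pending requests */
      int wr_coeff;            /* > 1 means weight-raised (sketch convention) */
  };

  /*
   * On a queue split, the saved (pre-merge) weight-raising state is restored.
   * If the queue is busy and its weight-raised status changes in either
   * direction, the global counter must follow.
   */
  static void resume_wr_state(struct bfq_data_sketch *bfqd,
                              struct bfq_queue_sketch *bfqq,
                              int saved_wr_coeff)
  {
      bool was_wr = bfqq->wr_coeff > 1;
      bool now_wr = saved_wr_coeff > 1;

      bfqq->wr_coeff = saved_wr_coeff;

      if (!bfqq->busy || was_wr == now_wr)
          return;                      /* nothing to account for */

      if (now_wr)
          bfqd->wr_busy_queues++;
      else
          bfqd->wr_busy_queues--;
  }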
2017-06-18  blk-mq-sched: unify request prepare methods  (Christoph Hellwig)
This patch makes sure we always allocate requests in the core blk-mq code and use a common prepare_request method to initialize them for both mq I/O schedulers. For Kyber an additional limit_depth method is added that is called before allocating the request. Also, because none of the initializations can really fail, the new method does not return an error - instead the bfq finish method is hardened to deal with the no-IOC case. Last but not least this removes the abuse of RQF_QUEUED by the blk-mq scheduling code as RQF_ELVPRIV is all that is needed now. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-06-18  bfq-iosched: fix NULL ioc check in bfq_get_rq_private  (Christoph Hellwig)
icq_to_bic is a container_of operation, so we need to check for NULL before it. Also move the check outside the spinlock while we're at it. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-06-18  blk-mq-sched: unify request finished methods  (Christoph Hellwig)
No need to have two different callouts of bfq vs kyber. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-06-08  block, bfq: access and cache blkg data only when safe  (Paolo Valente)
In blk-cgroup, operations on blkg objects are protected with the request_queue lock. This is no more the lock that protects I/O-scheduler operations in blk-mq. In fact, the latter are now protected with a finer-grained per-scheduler-instance lock. As a consequence, although blkg lookups are also rcu-protected, blk-mq I/O schedulers may see inconsistent data when they access blkg and blkg-related objects. BFQ does access these objects, and does incur this problem, in the following case. The blkg_lookup performed in bfq_get_queue, being protected (only) through rcu, may happen to return the address of a copy of the original blkg. If this is the case, then the blkg_get performed in bfq_get_queue, to pin down the blkg, is useless: it does not prevent blk-cgroup code from destroying both the original blkg and all objects directly or indirectly referred by the copy of the blkg. BFQ accesses these objects, which typically causes a crash for NULL-pointer dereference or memory-protection violation. Some additional protection mechanism should be added to blk-cgroup to address this issue. In the meantime, this commit provides a quick temporary fix for BFQ: cache (when safe) blkg data that might disappear right after a blkg_lookup. In particular, this commit exploits the following facts to achieve its goal without introducing further locks. Destroy operations on a blkg invoke, as a first step, hooks of the scheduler associated with the blkg. And these hooks are executed with bfqd->lock held for BFQ. As a consequence, for any blkg associated with the request queue an instance of BFQ is attached to, we are guaranteed that such a blkg is not destroyed, and that all the pointers it contains are consistent, while that instance is holding its bfqd->lock. A blkg_lookup performed with bfqd->lock held then returns a fully consistent blkg, which remains consistent as long as this lock is held. In more detail, this holds even if the returned blkg is a copy of the original one. Finally, also the object describing a group inside BFQ needs to be protected from destruction on the blkg_free of the original blkg (which invokes bfq_pd_free). This commit adds private refcounting for this object, to let it disappear only after no bfq_queue refers to it any longer. This commit also removes or updates some stale comments on locking issues related to blk-cgroup operations. Reported-by: Tomas Konir <tomas.konir@gmail.com> Reported-by: Lee Tibbert <lee.tibbert@gmail.com> Reported-by: Marco Piazza <mpiazza@gmail.com> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Tested-by: Tomas Konir <tomas.konir@gmail.com> Tested-by: Lee Tibbert <lee.tibbert@gmail.com> Tested-by: Marco Piazza <mpiazza@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-05-10  block, bfq: stress that low_latency must be off to get max throughput  (Paolo Valente)
The introduction of the BFQ and Kyber I/O schedulers has triggered a new wave of I/O benchmarks. Unfortunately, comments and discussions on these benchmarks confirm that there is still little awareness that it is very hard to achieve, at the same time, a low latency and a high throughput. In particular, virtually all benchmarks measure throughput, or throughput-related figures of merit, but, for BFQ, they use the scheduler in its default configuration. This configuration is geared, instead, toward a low latency. This is evidently a sign that BFQ documentation is still too unclear on this important aspect. This commit addresses this issue by stressing how BFQ configuration must be (easily) changed if the only goal is maximum throughput. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20  block, bfq: don't dereference bic before null checking it  (Colin Ian King)
The call to bfq_check_ioprio_change will dereference bic; however, the null check for bic is after this call. Move the null check on bic to before the call to avoid any potential null pointer dereference issues. Detected by CoverityScan, CID#1430138 ("Dereference before null check") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Jens Axboe <axboe@fb.com>
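The general pattern of the fix, as a tiny standalone sketch with hypothetical names (not the actual bfq functions):

  #include <stdio.h>

  struct io_context_sketch { int ioprio; };

  static void check_ioprio_change(struct io_context_sketch *bic)
  {
      bic->ioprio = 0;              /* dereferences bic */
  }

  /* The NULL check must come before any call that dereferences bic. */
  static int handle_request(struct io_context_sketch *bic)
  {
      if (!bic)
          return -1;                /* bail out first, then it is safe to use bic */

      check_ioprio_change(bic);
      return 0;
  }

  int main(void)
  {
      struct io_context_sketch bic = { .ioprio = 3 };

      printf("%d %d\n", handle_request(NULL), handle_request(&bic));
      return 0;
  }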
2017-04-19  block, bfq: split bfq-iosched.c into multiple source files  (Paolo Valente)
The BFQ I/O scheduler features an optimal fair-queuing (proportional-share) scheduling algorithm, enriched with several mechanisms to boost throughput and reduce latency for interactive and real-time applications. This makes BFQ a large and complex piece of code. This commit addresses this issue by splitting BFQ into three main, independent components, and by moving each component into a separate source file: 1. Main algorithm: handles the interaction with the kernel, and decides which requests to dispatch; it uses the following two further components to achieve its goals. 2. Scheduling engine (Hierarchical B-WF2Q+ scheduling algorithm): computes the schedule, using weights and budgets provided by the above component. 3. cgroups support: handles group operations (creation, destruction, move, ...). Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: remove all get and put of I/O contexts  (Paolo Valente)
When a bfq queue is set in service and when it is merged, a reference to the I/O context associated with the queue is taken. This reference is then released when the queue is deselected from service or split. More precisely, the release of the reference is postponed to when the scheduler lock is released, to avoid nesting between the scheduler and the I/O-context lock. In fact, such nesting would lead to deadlocks, because of other code paths that take the same locks in the opposite order. This postponing of I/O-context releases does complicate code. This commit addresses these issues by modifying involved operations in such a way to not need to get the above I/O-context references any more. Then it also removes any get and release of these references. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: handle bursts of queue activations  (Arianna Avanzini)
Many popular I/O-intensive services or applications spawn or reactivate many parallel threads/processes during short time intervals. Examples are systemd during boot or git grep. These services or applications benefit mostly from a high throughput: the quicker the I/O generated by their processes is cumulatively served, the sooner the target job of these services or applications gets completed. As a consequence, it is almost always counterproductive to weight-raise any of the queues associated to the processes of these services or applications: in most cases it would just lower the throughput, mainly because weight-raising also implies device idling. To address this issue, an I/O scheduler needs, first, to detect which queues are associated with these services or applications. In this respect, we have that, from the I/O-scheduler standpoint, these services or applications cause bursts of activations, i.e., activations of different queues occurring shortly after each other. However, a shorter burst of activations may be caused also by the start of an application that does not consist in a lot of parallel I/O-bound threads (see the comments on the function bfq_handle_burst for details). In view of these facts, this commit introduces: 1) an heuristic to detect (only) bursts of queue activations caused by services or applications consisting in many parallel I/O-bound threads; 2) the prevention of device idling and weight-raising for the queues belonging to these bursts. Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: boost the throughput with random I/O on NCQ-capable HDDs  (Paolo Valente)
This patch is basically the counterpart, for NCQ-capable rotational devices, of the previous patch. Exactly as the previous patch does on flash-based devices and for any workload, this patch disables device idling on rotational devices, but only for random I/O. In fact, only with these queues disabling idling boosts the throughput on NCQ-capable rotational devices. To not break service guarantees, idling is disabled for NCQ-enabled rotational devices only when the same symmetry conditions considered in the previous patches hold. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: boost the throughput on NCQ-capable flash-based devices  (Paolo Valente)
This patch boosts the throughput on NCQ-capable flash-based devices, while still preserving latency guarantees for interactive and soft real-time applications. The throughput is boosted by just not idling the device when the in-service queue remains empty, even if the queue is sync and has a non-null idle window. This helps to keep the drive's internal queue full, which is necessary to achieve maximum performance. This solution to boost the throughput is a port of commits a68bbdd and f7d7b7a for CFQ. As already highlighted in a previous patch, allowing the device to prefetch and internally reorder requests trivially causes loss of control on the request service order, and hence on service guarantees. Fortunately, as discussed in detail in the comments on the function bfq_bfqq_may_idle(), if every process has to receive the same fraction of the throughput, then the service order enforced by the internal scheduler of a flash-based device is relatively close to that enforced by BFQ. In particular, it is close enough to let service guarantees be substantially preserved. Things change in an asymmetric scenario, i.e., if not every process has to receive the same fraction of the throughput. In this case, to guarantee the desired throughput distribution, the device must be prevented from prefetching requests. This is exactly what this patch does in asymmetric scenarios. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: reduce idling only in symmetric scenarios  (Arianna Avanzini)
A seeky queue (i.e., a queue containing random requests) is assigned a very small device-idling slice, for throughput issues. Unfortunately, given the process associated with a seeky queue, this behavior causes the following problem: if the process, say P, performs sync I/O and has a higher weight than some other processes doing I/O and associated with non-seeky queues, then BFQ may fail to guarantee to P its reserved share of the throughput. The reason is that idling is key for providing service guarantees to processes doing sync I/O [1]. This commit addresses this issue by allowing the device-idling slice to be reduced for a seeky queue only if the scenario happens to be symmetric, i.e., if all the queues are to receive the same share of the throughput. [1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O Scheduler", Proceedings of the First Workshop on Mobile System Technologies (MST-2015), May 2015. http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Riccardo Pizzetti <riccardo.pizzetti@gmail.com> Signed-off-by: Samuele Zecchini <samuele.zecchini92@gmail.com> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
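A hedged sketch of the gating logic; the symmetry bookkeeping in BFQ is richer (it also has to track groups and in-flight state), so the check below is only an illustration:

  #include <stdbool.h>

  struct sched_snapshot {
      unsigned int nr_distinct_weights;   /* distinct weights among active queues */
  };

  /*
   * The scenario is symmetric when all active queues are to receive the same
   * share of the throughput, i.e., they all have the same weight.  Only then
   * may the idling slice of a seeky queue be shrunk without breaking the
   * guarantees of higher-weight sync processes.
   */
  static bool scenario_is_symmetric(const struct sched_snapshot *s)
  {
      return s->nr_distinct_weights <= 1;
  }

  static unsigned int idle_slice_ms(const struct sched_snapshot *s, bool seeky_queue)
  {
      const unsigned int full_slice = 8, tiny_slice = 2;   /* illustrative values */

      if (seeky_queue && scenario_is_symmetric(s))
          return tiny_slice;    /* throughput-friendly: barely idle */
      return full_slice;        /* keep idling to protect service guarantees */
  }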
2017-04-19  block, bfq: add Early Queue Merge (EQM)  (Arianna Avanzini)
A set of processes may happen to perform interleaved reads, i.e., read requests whose union would give rise to a sequential read pattern. There are two typical cases: first, processes reading fixed-size chunks of data at a fixed distance from each other; second, processes reading variable-size chunks at variable distances. The latter case occurs for example with QEMU, which splits the I/O generated by a guest into multiple chunks, and lets these chunks be served by a pool of I/O threads, iteratively assigning the next chunk of I/O to the first available thread. CFQ denotes as 'cooperating' a set of processes that are doing interleaved I/O, and when it detects cooperating processes, it merges their queues to obtain a sequential I/O pattern from the union of their I/O requests, and hence boost the throughput. Unfortunately, in the following frequent case, the mechanism implemented in CFQ for detecting cooperating processes and merging their queues is not responsive enough to handle also the fluctuating I/O pattern of the second type of processes. Suppose that one process of the second type issues a request close to the next request to serve of another process of the same type. At that time the two processes would be considered as cooperating. But, if the request issued by the first process is to be merged with some other already-queued request, then, from the moment at which this request arrives, to the moment when CFQ controls whether the two processes are cooperating, the two processes are likely to be already doing I/O in distant zones of the disk surface or device memory. CFQ uses however preemption to get a sequential read pattern out of the read requests performed by the second type of processes too. As a consequence, CFQ uses two different mechanisms to achieve the same goal: boosting the throughput with interleaved I/O. This patch introduces Early Queue Merge (EQM), a unified mechanism to get a sequential read pattern with both types of processes. The main idea is to immediately check whether a newly-arrived request lets some pair of processes become cooperating, both in the case of actual request insertion and, to be responsive with the second type of processes, in the case of request merge. Both types of processes are then handled by just merging their queues. Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: reduce latency during request-pool saturation  (Paolo Valente)
This patch introduces a heuristic that reduces latency when the I/O-request pool is saturated. This goal is achieved by disabling device idling, for non-weight-raised queues, when there are weight-raised queues with pending or in-flight requests. In fact, as explained in more detail in the comment on the function bfq_bfqq_may_idle(), this reduces the rate at which processes associated with non-weight-raised queues grab requests from the pool, thereby increasing the probability that processes associated with weight-raised queues get a request immediately (or at least soon) when they need one. Along the same line, if there are weight-raised queues, then this patch halves the service rate of async (write) requests for non-weight-raised queues. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: preserve a low latency also with NCQ-capable drives  (Paolo Valente)
I/O schedulers typically allow NCQ-capable drives to prefetch I/O requests, as NCQ boosts the throughput exactly by prefetching and internally reordering requests. Unfortunately, as discussed in detail and shown experimentally in [1], this may cause fairness and latency guarantees to be violated. The main problem is that the internal scheduler of an NCQ-capable drive may postpone the service of some unlucky (prefetched) requests as long as it deems serving other requests more appropriate to boost the throughput. This patch addresses this issue by not disabling device idling for weight-raised queues, even if the device supports NCQ. This allows BFQ to start serving a new queue, and therefore allows the drive to prefetch new requests, only after the idling timeout expires. At that time, all the outstanding requests of the expired queue have been most certainly served. [1] P. Valente and M. Andreolini, "Improving Application Responsiveness with the BFQ Disk I/O Scheduler", Proceedings of the 5th Annual International Systems and Storage Conference (SYSTOR '12), June 2012. Slightly extended version: http://algogroup.unimore.it/people/paolo/disk_sched/bfq-v1-suite-results.pdf Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: reduce I/O latency for soft real-time applications  (Paolo Valente)
To guarantee a low latency also to the I/O requests issued by soft real-time applications, this patch introduces a further heuristic, which weight-raises (in the sense explained in the previous patch) also the queues associated to applications deemed as soft real-time. To be deemed as soft real-time, an application must meet two requirements. First, the application must not require an average bandwidth higher than the approximate bandwidth required to playback or record a compressed high-definition video. Second, the request pattern of the application must be isochronous, i.e., after issuing a request or a batch of requests, the application must stop issuing new requests until all its pending requests have been completed. After that, the application may issue a new batch, and so on. As for the second requirement, it is critical to require also that, after all the pending requests of the application have been completed, an adequate minimum amount of time elapses before the application starts issuing new requests. This prevents also greedy (i.e., I/O-bound) applications from being incorrectly deemed, occasionally, as soft real-time. In fact, if *any amount of time* is fine, then even a greedy application may, paradoxically, meet both the above requirements, if: (1) the application performs random I/O and/or the device is slow, and (2) the CPU load is high. The reason is the following. First, if condition (1) is true, then, during the service of the application, the throughput may be low enough to let the application meet the bandwidth requirement. Second, if condition (2) is true as well, then the application may occasionally behave in an apparently isochronous way, because it may simply stop issuing requests while the CPUs are busy serving other processes. To address this issue, the heuristic leverages the simple fact that greedy applications issue *all* their requests as quickly as they can, whereas soft real-time applications spend some time processing data after each batch of requests is completed. In particular, the heuristic works as follows. First, according to the above isochrony requirement, the heuristic checks whether an application may be soft real-time, thereby giving to the application the opportunity to be deemed as such, only when both the following two conditions happen to hold: 1) the queue associated with the application has expired and is empty, 2) there is no outstanding request of the application. Suppose that both conditions hold at time, say, t_c and that the application issues its next request at time, say, t_i. At time t_c the heuristic computes the next time instant, called soft_rt_next_start in the code, such that, only if t_i >= soft_rt_next_start, then both the next conditions will hold when the application issues its next request: 1) the application will meet the above bandwidth requirement, 2) a given minimum time interval, say Delta, will have elapsed from time t_c (so as to filter out greedy application). The current value of Delta is a little bit higher than the value that we have found, experimentally, to be adequate on a real, general-purpose machine. In particular we had to increase Delta to make the filter quite precise also in slower, embedded systems, and in KVM/QEMU virtual machines (details in the comments on the code). If the application actually issues its next request after time soft_rt_next_start, then its associated queue will be weight-raised for a relatively short time interval. 
If, during this time interval, the application proves again to meet the bandwidth and isochrony requirements, then the end of the weight-raising period for the queue is moved forward, and so on. Note that an application whose associated queue never happens to be empty when it expires will never have the opportunity to be deemed as soft real-time. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
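A simplified sketch of how a soft_rt_next_start value could be derived from the two requirements above; the constants, units and field names are illustrative, not BFQ's actual tuning:

  /*
   * Bandwidth requirement: if the queue received `service_received` sectors
   * while backlogged, it may become idle-and-empty again no earlier than the
   * time at which that amount of service matches the soft real-time rate.
   * Isochrony requirement: in any case, at least `min_idle_delta` must pass.
   */
  struct softrt_params {
      unsigned long max_softrt_rate;   /* sectors per tick deemed "soft rt"; assumed non-zero */
      unsigned long min_idle_delta;    /* minimum think time, in ticks */
  };

  static unsigned long compute_soft_rt_next_start(unsigned long now,
                                                  unsigned long last_backlogged,
                                                  unsigned long service_received,
                                                  const struct softrt_params *p)
  {
      unsigned long bw_limited = last_backlogged +
                                 service_received / p->max_softrt_rate;
      unsigned long isochrony_limited = now + p->min_idle_delta;

      /* The queue qualifies only if its next request arrives after both limits. */
      return bw_limited > isochrony_limited ? bw_limited : isochrony_limited;
  }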
2017-04-19  block, bfq: improve responsiveness  (Paolo Valente)
This patch introduces a simple heuristic to load applications quickly, and to perform the I/O requested by interactive applications just as quickly. To this purpose, both a newly-created queue and a queue associated with an interactive application (we explain in a moment how BFQ decides whether the associated application is interactive), receive the following two special treatments: 1) The weight of the queue is raised. 2) The queue unconditionally enjoys device idling when it empties; in fact, if the requests of a queue are sync, then performing device idling for the queue is a necessary condition to guarantee that the queue receives a fraction of the throughput proportional to its weight (see [1] for details). For brevity, we call just weight-raising the combination of these two preferential treatments. For a newly-created queue, weight-raising starts immediately and lasts for a time interval that: 1) depends on the device speed and type (rotational or non-rotational), and 2) is equal to the time needed to load (start up) a large-size application on that device, with cold caches and with no additional workload. Finally, as for guaranteeing a fast execution to interactive, I/O-related tasks (such as opening a file), consider that any interactive application blocks and waits for user input both after starting up and after executing some task. After a while, the user may trigger new operations, after which the application stops again, and so on. Accordingly, the low-latency heuristic weight-raises again a queue in case it becomes backlogged after being idle for a sufficiently long (configurable) time. The weight-raising then lasts for the same time as for a just-created queue. According to our experiments, the combination of this low-latency heuristic and of the improvements described in the previous patch allows BFQ to guarantee a high application responsiveness. [1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O Scheduler", Proceedings of the First Workshop on Mobile System Technologies (MST-2015), May 2015. http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: add more fairness with writes and slow processes  (Paolo Valente)
This patch deals with two sources of unfairness, which can also cause high latencies and throughput loss. The first source is related to write requests. Write requests tend to starve read requests, basically because, on one side, writes are slower than reads, whereas, on the other side, storage devices confuse schedulers by deceptively signaling the completion of write requests immediately after receiving them. This patch addresses this issue by just throttling writes. In particular, after a write request is dispatched for a queue, the budget of the queue is decremented by the number of sectors to write, multiplied by an (over)charge coefficient. The value of the coefficient is the result of our tuning with different devices. The second source of unfairness has to do with slowness detection: when the in-service queue is expired, BFQ also controls whether the queue has been "too slow", i.e., has consumed its last-assigned budget at such a low rate that it would have been impossible to consume all of this budget within the maximum time slice T_max (Subsec. 3.5 in [1]). In this case, the queue is always (over)charged the whole budget, to reduce its utilization of the device. Both this overcharge and the slowness-detection criterion may cause unfairness. First, always charging a full budget to a slow queue is too coarse. It is much more accurate, and this patch lets BFQ do so, to charge an amount of service 'equivalent' to the amount of time during which the queue has been in service. As explained in more detail in the comments on the code, this enables BFQ to provide time f