commit    898bd37a92063e46bc8d7b870781cecd66234f92
author    Mauro Carvalho Chehab <mchehab+samsung@kernel.org>  2019-04-18 19:45:00 -0300
committer Mauro Carvalho Chehab <mchehab+samsung@kernel.org>  2019-07-15 09:20:27 -0300
tree      1eac9c597d45080cc2ff366f6e882a87fcea2d2b
parent    53b9537509654a6267c3f56b4d2e7409b9089686
docs: block: convert to ReST
Rename the block documentation files to ReST, add an index for
them, and adjust them so that the Sphinx build system produces
nice HTML output.

In the new index.rst, add an :orphan: marker while the file is not
yet linked from the main index.rst, in order to avoid build warnings.
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
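For reference, the ``:orphan:`` marker mentioned above is a Sphinx file-wide metadata field placed at the very top of a document; it suppresses the "document isn't included in any toctree" warning for files not yet reachable from the main index. A minimal sketch of how the new Documentation/block/index.rst could begin — the toctree entries shown are illustrative, not copied from the patch:

```rst
:orphan:

=====
Block
=====

.. toctree::
   :maxdepth: 1

   bfq-iosched
   biodoc
```

Once the file is wired into the top-level index.rst, the ``:orphan:`` line can be dropped again.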
-rw-r--r-- | Documentation/admin-guide/kernel-parameters.txt | 8
-rw-r--r-- | Documentation/block/bfq-iosched.rst (renamed from Documentation/block/bfq-iosched.txt) | 66
-rw-r--r-- | Documentation/block/biodoc.rst (renamed from Documentation/block/biodoc.txt) | 330
-rw-r--r-- | Documentation/block/biovecs.rst (renamed from Documentation/block/biovecs.txt) | 20
-rw-r--r-- | Documentation/block/capability.rst | 18
-rw-r--r-- | Documentation/block/capability.txt | 15
-rw-r--r-- | Documentation/block/cmdline-partition.rst (renamed from Documentation/block/cmdline-partition.txt) | 13
-rw-r--r-- | Documentation/block/data-integrity.rst (renamed from Documentation/block/data-integrity.txt) | 60
-rw-r--r-- | Documentation/block/deadline-iosched.rst (renamed from Documentation/block/deadline-iosched.txt) | 21
-rw-r--r-- | Documentation/block/index.rst | 25
-rw-r--r-- | Documentation/block/ioprio.rst (renamed from Documentation/block/ioprio.txt) | 103
-rw-r--r-- | Documentation/block/kyber-iosched.rst (renamed from Documentation/block/kyber-iosched.txt) | 3
-rw-r--r-- | Documentation/block/null_blk.rst (renamed from Documentation/block/null_blk.txt) | 65
-rw-r--r-- | Documentation/block/pr.rst (renamed from Documentation/block/pr.txt) | 18
-rw-r--r-- | Documentation/block/queue-sysfs.rst (renamed from Documentation/block/queue-sysfs.txt) | 7
-rw-r--r-- | Documentation/block/request.rst (renamed from Documentation/block/request.txt) | 47
-rw-r--r-- | Documentation/block/stat.rst (renamed from Documentation/block/stat.txt) | 13
-rw-r--r-- | Documentation/block/switching-sched.rst (renamed from Documentation/block/switching-sched.txt) | 28
-rw-r--r-- | Documentation/block/writeback_cache_control.rst (renamed from Documentation/block/writeback_cache_control.txt) | 12
-rw-r--r-- | Documentation/blockdev/zram.rst | 2
-rw-r--r-- | MAINTAINERS | 2
-rw-r--r-- | block/Kconfig | 2
-rw-r--r-- | block/Kconfig.iosched | 2
-rw-r--r-- | block/bfq-iosched.c | 2
-rw-r--r-- | block/blk-integrity.c | 2
-rw-r--r-- | block/ioprio.c | 2
-rw-r--r-- | block/mq-deadline.c | 2
-rw-r--r-- | block/partitions/cmdline.c | 2
28 files changed, 545 insertions(+), 345 deletions(-)
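The diffstat above is mostly renames from ``.txt`` to ``.rst`` plus cross-reference fixups in other files. The mechanical part of such a conversion can be sketched in shell — the file name and contents below are made-up stand-ins to illustrate the pattern, not the actual commands or documents from this patch:

```shell
# Illustrative sketch of the rename + cross-reference update this patch performs.
# "example.txt" and its contents are hypothetical stand-ins.
mkdir -p Documentation/block
printf 'See Documentation/block/pr.txt for details.\n' > Documentation/block/example.txt

# Rename the document to .rst ...
mv Documentation/block/example.txt Documentation/block/example.rst

# ... and rewrite any Documentation/block/*.txt references to point at .rst.
sed -i 's|\(Documentation/block/[A-Za-z0-9_-]*\)\.txt|\1.rst|g' \
    Documentation/block/example.rst

cat Documentation/block/example.rst
# -> See Documentation/block/pr.rst for details.
```

The real conversion also adds ReST headings, literal blocks, and list markup by hand, as the hunks below show; only the rename and path-reference rewrite are this mechanical.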
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 01123f1de354..e8e28cac32a3 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -430,7 +430,7 @@ blkdevparts= Manual partition parsing of block device(s) for embedded devices based on command line input. - See Documentation/block/cmdline-partition.txt + See Documentation/block/cmdline-partition.rst boot_delay= Milliseconds to delay each printk during boot. Values larger than 10 seconds (10000) are changed to @@ -1199,9 +1199,9 @@ elevator= [IOSCHED] Format: { "mq-deadline" | "kyber" | "bfq" } - See Documentation/block/deadline-iosched.txt, - Documentation/block/kyber-iosched.txt and - Documentation/block/bfq-iosched.txt for details. + See Documentation/block/deadline-iosched.rst, + Documentation/block/kyber-iosched.rst and + Documentation/block/bfq-iosched.rst for details. elfcorehdr=[size[KMG]@]offset[KMG] [IA64,PPC,SH,X86,S390] Specifies physical address of start of kernel core diff --git a/Documentation/block/bfq-iosched.txt b/Documentation/block/bfq-iosched.rst index bbd6eb5bbb07..2c13b2fc1888 100644 --- a/Documentation/block/bfq-iosched.txt +++ b/Documentation/block/bfq-iosched.rst @@ -1,9 +1,11 @@ +========================== BFQ (Budget Fair Queueing) ========================== BFQ is a proportional-share I/O scheduler, with some extra low-latency capabilities. In addition to cgroups support (blkio or io controllers), BFQ's main features are: + - BFQ guarantees a high system and application responsiveness, and a low latency for time-sensitive applications, such as audio or video players; @@ -55,18 +57,18 @@ sustainable throughputs, on the same systems as above: BFQ works for multi-queue devices too. -The table of contents follow. Impatients can just jump to Section 3. +.. The table of contents follow. Impatients can just jump to Section 3. -CONTENTS +.. CONTENTS -1. 
When may BFQ be useful? - 1-1 Personal systems - 1-2 Server systems -2. How does BFQ work? -3. What are BFQ's tunables and how to properly configure BFQ? -4. BFQ group scheduling - 4-1 Service guarantees provided - 4-2 Interface + 1. When may BFQ be useful? + 1-1 Personal systems + 1-2 Server systems + 2. How does BFQ work? + 3. What are BFQ's tunables and how to properly configure BFQ? + 4. BFQ group scheduling + 4-1 Service guarantees provided + 4-2 Interface 1. When may BFQ be useful? ========================== @@ -77,17 +79,20 @@ BFQ provides the following benefits on personal and server systems. -------------------- Low latency for interactive applications +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Regardless of the actual background workload, BFQ guarantees that, for interactive tasks, the storage device is virtually as responsive as if it was idle. For example, even if one or more of the following background workloads are being executed: + - one or more large files are being read, written or copied, - a tree of source files is being compiled, - one or more virtual machines are performing I/O, - a software update is in progress, - indexing daemons are scanning filesystems and updating their databases, + starting an application or loading a file from within an application takes about the same time as if the storage device was idle. As a comparison, with CFQ, NOOP or DEADLINE, and in the same conditions, @@ -95,13 +100,14 @@ applications experience high latencies, or even become unresponsive until the background workload terminates (also on SSDs). Low latency for soft real-time applications - +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Also soft real-time applications, such as audio and video players/streamers, enjoy a low latency and a low drop rate, regardless of the background I/O workload. As a consequence, these applications do not suffer from almost any glitch due to the background workload. 
Higher speed for code-development tasks +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If some additional workload happens to be executed in parallel, then BFQ executes the I/O-related components of typical code-development @@ -109,6 +115,7 @@ tasks (compilation, checkout, merge, ...) much more quickly than CFQ, NOOP or DEADLINE. High throughput +^^^^^^^^^^^^^^^ On hard disks, BFQ achieves up to 30% higher throughput than CFQ, and up to 150% higher throughput than DEADLINE and NOOP, with all the @@ -117,6 +124,7 @@ and with all the workloads on flash-based devices, BFQ achieves, instead, about the same throughput as the other schedulers. Strong fairness, bandwidth and delay guarantees +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ BFQ distributes the device throughput, and not just the device time, among I/O-bound applications in proportion their weights, with any @@ -133,15 +141,15 @@ Most benefits for server systems follow from the same service properties as above. In particular, regardless of whether additional, possibly heavy workloads are being served, BFQ guarantees: -. audio and video-streaming with zero or very low jitter and drop +* audio and video-streaming with zero or very low jitter and drop rate; -. fast retrieval of WEB pages and embedded objects; +* fast retrieval of WEB pages and embedded objects; -. real-time recording of data in live-dumping applications (e.g., +* real-time recording of data in live-dumping applications (e.g., packet logging); -. responsiveness in local and remote access to a server. +* responsiveness in local and remote access to a server. 2. How does BFQ work? @@ -151,7 +159,7 @@ BFQ is a proportional-share I/O scheduler, whose general structure, plus a lot of code, are borrowed from CFQ. - Each process doing I/O on a device is associated with a weight and a - (bfq_)queue. + `(bfq_)queue`. 
- BFQ grants exclusive access to the device, for a while, to one queue (process) at a time, and implements this service model by @@ -540,11 +548,12 @@ created, and kept up-to-date by bfq, depends on whether CONFIG_BFQ_CGROUP_DEBUG is set. If it is set, then bfq creates all the stat files documented in Documentation/cgroup-v1/blkio-controller.rst. If, instead, -CONFIG_BFQ_CGROUP_DEBUG is not set, then bfq creates only the files -blkio.bfq.io_service_bytes -blkio.bfq.io_service_bytes_recursive -blkio.bfq.io_serviced -blkio.bfq.io_serviced_recursive +CONFIG_BFQ_CGROUP_DEBUG is not set, then bfq creates only the files:: + + blkio.bfq.io_service_bytes + blkio.bfq.io_service_bytes_recursive + blkio.bfq.io_serviced + blkio.bfq.io_serviced_recursive The value of CONFIG_BFQ_CGROUP_DEBUG greatly influences the maximum throughput sustainable with bfq, because updating the blkio.bfq.* @@ -567,17 +576,22 @@ weight of the queues associated with interactive and soft real-time applications. Unset this tunable if you need/want to control weights. -[1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O +[1] + P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O Scheduler", Proceedings of the First Workshop on Mobile System Technologies (MST-2015), May 2015. + http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf -[2] P. Valente and M. Andreolini, "Improving Application +[2] + P. Valente and M. Andreolini, "Improving Application Responsiveness with the BFQ Disk I/O Scheduler", Proceedings of the 5th Annual International Systems and Storage Conference (SYSTOR '12), June 2012. 
+ Slightly extended version: - http://algogroup.unimore.it/people/paolo/disk_sched/bfq-v1-suite- - results.pdf -[3] https://github.com/Algodev-github/S + http://algogroup.unimore.it/people/paolo/disk_sched/bfq-v1-suite-results.pdf + +[3] + https://github.com/Algodev-github/S diff --git a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.rst index 31c177663ed5..d6e30b680405 100644 --- a/Documentation/block/biodoc.txt +++ b/Documentation/block/biodoc.rst @@ -1,15 +1,24 @@ - Notes on the Generic Block Layer Rewrite in Linux 2.5 - ===================================================== +===================================================== +Notes on the Generic Block Layer Rewrite in Linux 2.5 +===================================================== + +.. note:: + + It seems that there are lot of outdated stuff here. This seems + to be written somewhat as a task list. Yet, eventually, something + here might still be useful. Notes Written on Jan 15, 2002: - Jens Axboe <jens.axboe@oracle.com> - Suparna Bhattacharya <suparna@in.ibm.com> + - Jens Axboe <jens.axboe@oracle.com> + - Suparna Bhattacharya <suparna@in.ibm.com> Last Updated May 2, 2002 + September 2003: Updated I/O Scheduler portions - Nick Piggin <npiggin@kernel.dk> + - Nick Piggin <npiggin@kernel.dk> -Introduction: +Introduction +============ These are some notes describing some aspects of the 2.5 block layer in the context of the bio rewrite. The idea is to bring out some of the key @@ -17,11 +26,11 @@ changes and a glimpse of the rationale behind those changes. Please mail corrections & suggestions to suparna@in.ibm.com. -Credits: ---------- +Credits +======= 2.5 bio rewrite: - Jens Axboe <jens.axboe@oracle.com> + - Jens Axboe <jens.axboe@oracle.com> Many aspects of the generic block layer redesign were driven by and evolved over discussions, prior patches and the collective experience of several @@ -29,62 +38,63 @@ people. See sections 8 and 9 for a list of some related references. 
The following people helped with review comments and inputs for this document: - Christoph Hellwig <hch@infradead.org> - Arjan van de Ven <arjanv@redhat.com> - Randy Dunlap <rdunlap@xenotime.net> - Andre Hedrick <andre@linux-ide.org> + + - Christoph Hellwig <hch@infradead.org> + - Arjan van de Ven <arjanv@redhat.com> + - Randy Dunlap <rdunlap@xenotime.net> + - Andre Hedrick <andre@linux-ide.org> The following people helped with fixes/contributions to the bio patches while it was still work-in-progress: - David S. Miller <davem@redhat.com> + - David S. Miller <davem@redhat.com> -Description of Contents: ------------------------- -1. Scope for tuning of logic to various needs - 1.1 Tuning based on device or low level driver capabilities +.. Description of Contents: + + 1. Scope for tuning of logic to various needs + 1.1 Tuning based on device or low level driver capabilities - Per-queue parameters - Highmem I/O support - I/O scheduler modularization - 1.2 Tuning based on high level requirements/capabilities + 1.2 Tuning based on high level requirements/capabilities 1.2.1 Request Priority/Latency - 1.3 Direct access/bypass to lower layers for diagnostics and special - device operations + 1.3 Direct access/bypass to lower layers for diagnostics and special + device operations 1.3.1 Pre-built commands -2. New flexible and generic but minimalist i/o structure or descriptor - (instead of using buffer heads at the i/o layer) - 2.1 Requirements/Goals addressed - 2.2 The bio struct in detail (multi-page io unit) - 2.3 Changes in the request structure -3. Using bios - 3.1 Setup/teardown (allocation, splitting) - 3.2 Generic bio helper routines - 3.2.1 Traversing segments and completion units in a request - 3.2.2 Setting up DMA scatterlists - 3.2.3 I/O completion - 3.2.4 Implications for drivers that do not interpret bios (don't handle - multiple segments) - 3.3 I/O submission -4. The I/O scheduler -5. 
Scalability related changes - 5.1 Granular locking: Removal of io_request_lock - 5.2 Prepare for transition to 64 bit sector_t -6. Other Changes/Implications - 6.1 Partition re-mapping handled by the generic block layer -7. A few tips on migration of older drivers -8. A list of prior/related/impacted patches/ideas -9. Other References/Discussion Threads + 2. New flexible and generic but minimalist i/o structure or descriptor + (instead of using buffer heads at the i/o layer) + 2.1 Requirements/Goals addressed + 2.2 The bio struct in detail (multi-page io unit) + 2.3 Changes in the request structure + 3. Using bios + 3.1 Setup/teardown (allocation, splitting) + 3.2 Generic bio helper routines + 3.2.1 Traversing segments and completion units in a request + 3.2.2 Setting up DMA scatterlists + 3.2.3 I/O completion + 3.2.4 Implications for drivers that do not interpret bios (don't handle + multiple segments) + 3.3 I/O submission + 4. The I/O scheduler + 5. Scalability related changes + 5.1 Granular locking: Removal of io_request_lock + 5.2 Prepare for transition to 64 bit sector_t + 6. Other Changes/Implications + 6.1 Partition re-mapping handled by the generic block layer + 7. A few tips on migration of older drivers + 8. A list of prior/related/impacted patches/ideas + 9. Other References/Discussion Threads ---------------------------------------------------------------------------- Bio Notes --------- +========= Let us discuss the changes in the context of how some overall goals for the block layer are addressed. 1. Scope for tuning the generic logic to satisfy various requirements +===================================================================== The block layer design supports adaptable abstractions to handle common processing with the ability to tune the logic to an appropriate extent @@ -97,6 +107,7 @@ and application/middleware software designed to take advantage of these capabilities. 
1.1 Tuning based on low level device / driver capabilities +---------------------------------------------------------- Sophisticated devices with large built-in caches, intelligent i/o scheduling optimizations, high memory DMA support, etc may find some of the @@ -133,12 +144,12 @@ Some new queue property settings: Sets two variables that limit the size of the request. - The request queue's max_sectors, which is a soft size in - units of 512 byte sectors, and could be dynamically varied - by the core kernel. + units of 512 byte sectors, and could be dynamically varied + by the core kernel. - The request queue's max_hw_sectors, which is a hard limit - and reflects the maximum size request a driver can handle - in units of 512 byte sectors. + and reflects the maximum size request a driver can handle + in units of 512 byte sectors. The default for both max_sectors and max_hw_sectors is 255. The upper limit of max_sectors is 1024. @@ -234,6 +245,7 @@ I/O scheduler wrappers are to be used instead of accessing the queue directly. See section 4. The I/O scheduler for details. 1.2 Tuning Based on High level code capabilities +------------------------------------------------ i. Application capabilities for raw i/o @@ -258,9 +270,11 @@ would need an additional mechanism either via open flags or ioctls, or some other upper level mechanism to communicate such settings to block. 1.2.1 Request Priority/Latency +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Todo/Under discussion: -Arjan's proposed request priority scheme allows higher levels some broad +Todo/Under discussion:: + + Arjan's proposed request priority scheme allows higher levels some broad control (high/med/low) over the priority of an i/o request vs other pending requests in the queue. 
For example it allows reads for bringing in an executable page on demand to be given a higher priority over pending write @@ -272,7 +286,9 @@ Arjan's proposed request priority scheme allows higher levels some broad 1.3 Direct Access to Low level Device/Driver Capabilities (Bypass mode) - (e.g Diagnostics, Systems Management) +----------------------------------------------------------------------- + +(e.g Diagnostics, Systems Management) There are situations where high-level code needs to have direct access to the low level device capabilities or requires the ability to issue commands @@ -308,28 +324,32 @@ involved. In the latter case, the driver would modify and manage the request->buffer, request->sector and request->nr_sectors or request->current_nr_sectors fields itself rather than using the block layer end_request or end_that_request_first completion interfaces. -(See 2.3 or Documentation/block/request.txt for a brief explanation of +(See 2.3 or Documentation/block/request.rst for a brief explanation of the request structure fields) -[TBD: end_that_request_last should be usable even in this case; -Perhaps an end_that_direct_request_first routine could be implemented to make -handling direct requests easier for such drivers; Also for drivers that -expect bios, a helper function could be provided for setting up a bio -corresponding to a data buffer] - -<JENS: I dont understand the above, why is end_that_request_first() not -usable? Or _last for that matter. I must be missing something> -<SUP: What I meant here was that if the request doesn't have a bio, then - end_that_request_first doesn't modify nr_sectors or current_nr_sectors, - and hence can't be used for advancing request state settings on the - completion of partial transfers. The driver has to modify these fields - directly by hand. - This is because end_that_request_first only iterates over the bio list, - and always returns 0 if there are none associated with the request. 
- _last works OK in this case, and is not a problem, as I mentioned earlier -> +:: + + [TBD: end_that_request_last should be usable even in this case; + Perhaps an end_that_direct_request_first routine could be implemented to make + handling direct requests easier for such drivers; Also for drivers that + expect bios, a helper function could be provided for setting up a bio + corresponding to a data buffer] + + <JENS: I dont understand the above, why is end_that_request_first() not + usable? Or _last for that matter. I must be missing something> + + <SUP: What I meant here was that if the request doesn't have a bio, then + end_that_request_first doesn't modify nr_sectors or current_nr_sectors, + and hence can't be used for advancing request state settings on the + completion of partial transfers. The driver has to modify these fields + directly by hand. + This is because end_that_request_first only iterates over the bio list, + and always returns 0 if there are none associated with the request. + _last works OK in this case, and is not a problem, as I mentioned earlier + > 1.3.1 Pre-built Commands +^^^^^^^^^^^^^^^^^^^^^^^^ A request can be created with a pre-built custom command to be sent directly to the device. The cmd block in the request structure has room for filling @@ -360,9 +380,11 @@ Aside: the pre-builder hook can be invoked there. -2. Flexible and generic but minimalist i/o structure/descriptor. +2. Flexible and generic but minimalist i/o structure/descriptor +=============================================================== 2.1 Reason for a new structure and requirements addressed +--------------------------------------------------------- Prior to 2.5, buffer heads were used as the unit of i/o at the generic block layer, and the low level request structure was associated with a chain of @@ -378,26 +400,26 @@ which were generated for each such chunk. 
The following were some of the goals and expectations considered in the redesign of the block i/o data structure in 2.5. -i. Should be appropriate as a descriptor for both raw and buffered i/o - +1. Should be appropriate as a descriptor for both raw and buffered i/o - avoid cache related fields which are irrelevant in the direct/page i/o path, or filesystem block size alignment restrictions which may not be relevant for raw i/o. -ii. Ability to represent high-memory buffers (which do not have a virtual +2. Ability to represent high-memory buffers (which do not have a virtual address mapping in kernel address space). -iii.Ability to represent large i/os w/o unnecessarily breaking them up (i.e +3. Ability to represent large i/os w/o unnecessarily breaking them up (i.e greater than PAGE_SIZE chunks in one shot) -iv. At the same time, ability to retain independent identity of i/os from +4. At the same time, ability to retain independent identity of i/os from different sources or i/o units requiring individual completion (e.g. for latency reasons) -v. Ability to represent an i/o involving multiple physical memory segments +5. Ability to represent an i/o involving multiple physical memory segments (including non-page aligned page fragments, as specified via readv/writev) without unnecessarily breaking it up, if the underlying device is capable of handling it. -vi. Preferably should be based on a memory descriptor structure that can be +6. Preferably should be based on a memory descriptor structure that can be passed around different types of subsystems or layers, maybe even networking, without duplication or extra copies of data/descriptor fields themselves in the process -vii.Ability to handle the possibility of splits/merges as the structure passes +7. Ability to handle the possibility of splits/merges as the structure passes through layered drivers (lvm, md, evms), with minimal overhead. 
The solution was to define a new structure (bio) for the block layer, @@ -408,6 +430,7 @@ bh structure for buffered i/o, and in the case of raw/direct i/o kiobufs are mapped to bio structures. 2.2 The bio struct +------------------ The bio structure uses a vector representation pointing to an array of tuples of <page, offset, len> to describe the i/o buffer, and has various other @@ -417,16 +440,18 @@ performing the i/o. Notice that this representation means that a bio has no virtual address mapping at all (unlike buffer heads). -struct bio_vec { +:: + + struct bio_vec { struct page *bv_page; unsigned short bv_len; unsigned short bv_offset; -}; + }; -/* - * main unit of I/O for the block layer and lower layers (ie drivers) - */ -struct bio { + /* + * main unit of I/O for the block layer and lower layers (ie drivers) + */ + struct bio { struct bio *bi_next; /* request queue link */ struct block_device *bi_bdev; /* target device */ unsigned long bi_flags; /* status, command, etc */ @@ -443,7 +468,7 @@ struct bio { bio_end_io_t *bi_end_io; /* bi_end_io (bio) */ atomic_t bi_cnt; /* pin count: free when it hits zero */ void *bi_private; -}; + }; With this multipage bio design: @@ -453,7 +478,7 @@ With this multipage bio design: - Splitting of an i/o request across multiple devices (as in the case of lvm or raid) is achieved by cloning the bio (where the clone points to the same bi_io_vec array, but with the index and size accordingly modified) -- A linked list of bios is used as before for unrelated merges (*) - this +- A linked list of bios is used as before for unrelated merges [*]_ - this avoids reallocs and makes independent completions easier to handle. - Code that traverses the req list can find all the segments of a bio by using rq_for_each_segment. This handles the fact that a request @@ -462,10 +487,12 @@ With this multipage bio design: field to keep track of the next bio_vec entry to process. 
(e.g a 1MB bio_vec needs to be handled in max 128kB chunks for IDE) [TBD: Should preferably also have a bi_voffset and bi_vlen to avoid modifying - bi_offset an len fields] + bi_offset an len fields] -(*) unrelated merges -- a request ends up containing two or more bios that - didn't originate from the same place. +.. [*] + + unrelated merges -- a request ends up containing two or more bios that + didn't originate from the same place. bi_end_io() i/o callback gets called on i/o completion of the entire bio. @@ -483,10 +510,11 @@ which in turn means that only raw I/O uses it (direct i/o may not work right now). The intent however is to enable clustering of pages etc to become possible. The pagebuf abstraction layer from SGI also uses multi-page bios, but that is currently not included in the stock development kernels. -The same is true of Andrew Morton's work-in-progress multipage bio writeout +The same is true of Andrew Morton's work-in-progress multipage bio writeout and readahead patches. 2.3 Changes in the Request Structure +------------------------------------ The request structure is the structure that gets passed down to low level drivers. The block layer make_request function builds up a request structure, @@ -499,11 +527,11 @@ request structure. Only some relevant fields (mainly those which changed or may be referred to in some of the discussion here) are listed below, not necessarily in the order in which they occur in the structure (see include/linux/blkdev.h) -Refer to Documentation/block/request.txt for details about all the request +Refer to Documentation/block/request.rst for details about all the request structure fields and a quick reference about the layers which are -supposed to use or modify those fields. +supposed to use or modify those fields:: -struct request { + struct request { struct list_head queuelist; /* Not meant to be directly accessed by the driver. Used by q->elv_next_request_fn @@ -548,11 +576,11 @@ struct request { . 
struct bio *bio, *biotail; /* bio list instead of bh */ struct request_list *rl; -} - + } + See the req_ops and req_flag_bits definitions for an explanation of the various flags available. Some bits are used by the block layer or i/o scheduler. - + The behaviour of the various sector counts are almost the same as before, except that since we have multi-segment bios, current_nr_sectors refers to the numbers of sectors in the current segment being processed which could @@ -578,8 +606,10 @@ a driver needs to be careful about interoperation with the block layer helper functions which the driver uses. (Section 1.3) 3. Using bios +============= 3.1 Setup/Teardown +------------------ There are routines for managing the allocation, and reference counting, and freeing of bios (bio_alloc, bio_get, bio_put). @@ -606,10 +636,13 @@ case of bio, these routines make use of the standard slab allocator. The caller of bio_alloc is expected to taken certain steps to avoid deadlocks, e.g. avoid trying to allocate more memory from the pool while already holding memory obtained from the pool. -[TBD: This is a potential issue, though a rare possibility - in the bounce bio allocation that happens in the current code, since - it ends up allocating a second bio from the same pool while - holding the original bio ] + +:: + + [TBD: This is a potential issue, though a rare possibility + in the bounce bio allocation that happens in the current code, since + it ends up allocating a second bio from the same pool while + holding the original bio ] Memory allocated from the pool should be released back within a limited amount of time (in the case of bio, that would be after the i/o is completed). @@ -635,14 +668,18 @@ same bio_vec_list). This would typically be used for splitting i/o requests in lvm or md. 
3.2 Generic bio helper Routines +------------------------------- 3.2.1 Traversing segments and completion units in a request +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The macro rq_for_each_segment() should be used for traversing the bios in the request list (drivers should avoid directly trying to do it themselves). Using these helpers should also make it easier to cope with block changes in the future. +:: + struct req_iterator iter; rq_for_each_segment(bio_vec, rq, iter) /* bio_vec is now current segment */ @@ -653,6 +690,7 @@ which don't make a distinction between segments and completion units would need to be reorganized to support multi-segment bios. 3.2.2 Setting up DMA scatterlists +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The blk_rq_map_sg() helper routine would be used for setting up scatter gather lists from a request, so a driver need not do it on its own. @@ -683,6 +721,7 @@ of physical data segments in a request (i.e. the largest sized scatter list a driver could handle) 3.2.3 I/O completion +^^^^^^^^^^^^^^^^^^^^ The existing generic block layer helper routines end_request, end_that_request_first and end_that_request_last can be used for i/o @@ -691,8 +730,10 @@ request can be kicked of) as before. With the introduction of multi-page bio support, end_that_request_first requires an additional argument indicating the number of sectors completed. 
-3.2.4 Implications for drivers that do not interpret bios (don't handle - multiple segments) +3.2.4 Implications for drivers that do not interpret bios +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +(don't handle multiple segments) Drivers that do not interpret bios e.g those which do not handle multiple segments and do not support i/o into high memory addresses (require bounce @@ -707,15 +748,18 @@ be used if only if the request has come down from block/bio path, not for direct access requests which only specify rq->buffer without a valid rq->bio) 3.3 I/O Submission +------------------ The routine submit_bio() is used to submit a single io. Higher level i/o routines make use of this: (a) Buffered i/o: + The routine submit_bh() invokes submit_bio() on a bio corresponding to the bh, allocating the bio if required. ll_rw_block() uses submit_bh() as before. (b) Kiobuf i/o (for raw/direct i/o): + The ll_rw_kio() routine breaks up the kiobuf into page sized chunks and maps the array to one or more multi-page bios, issuing submit_bio() to perform the i/o on each of these. @@ -738,6 +782,7 @@ Todo/Observation: (c) Page i/o: + Todo/Under discussion: Andrew Morton's multi-page bio patches attempt to issue multi-page @@ -753,6 +798,7 @@ Todo/Under discussion: abstraction, but intended to be as lightweight as possible). (d) Direct access i/o: + Direct access requests that do not contain bios would be submitted differently as discussed earlier in section 1.3. @@ -780,11 +826,13 @@ Aside: 4. The I/O scheduler +==================== + I/O scheduler, a.k.a. elevator, is implemented in two lay |