Notes on the Generic Block Layer Rewrite in Linux 2.5
=====================================================

Notes Written on Jan 15, 2002:
	Jens Axboe <jens.axboe@oracle.com>
	Suparna Bhattacharya <suparna@in.ibm.com>

Last Updated May 2, 2002
September 2003: Updated I/O Scheduler portions
	Nick Piggin <npiggin@kernel.dk>

Introduction:

These are some notes describing some aspects of the 2.5 block layer in the
context of the bio rewrite. The idea is to bring out some of the key
changes and a glimpse of the rationale behind those changes.

Please mail corrections & suggestions to suparna@in.ibm.com.

Credits:
--------

2.5 bio rewrite:
	Jens Axboe <jens.axboe@oracle.com>

Many aspects of the generic block layer redesign were driven by and evolved
over discussions, prior patches and the collective experience of several
people. See sections 8 and 9 for a list of some related references.

The following people helped with review comments and inputs for this
document:
	Christoph Hellwig <hch@infradead.org>
	Arjan van de Ven <arjanv@redhat.com>
	Randy Dunlap <rdunlap@xenotime.net>
	Andre Hedrick <andre@linux-ide.org>

The following people helped with fixes/contributions to the bio patches
while it was still work-in-progress:
	David S. Miller <davem@redhat.com>


Description of Contents:
------------------------

1. Scope for tuning of logic to various needs
  1.1 Tuning based on device or low level driver capabilities
	- Per-queue parameters
	- Highmem I/O support
	- I/O scheduler modularization
  1.2 Tuning based on high level requirements/capabilities
	1.2.1 Request Priority/Latency
  1.3 Direct access/bypass to lower layers for diagnostics and special
      device operations
	1.3.1 Pre-built commands
2. New flexible and generic but minimalist i/o structure or descriptor
   (instead of using buffer heads at the i/o layer)
  2.1 Requirements/Goals addressed
  2.2 The bio struct in detail (multi-page io unit)
  2.3 Changes in the request structure
3. Using bios
  3.1 Setup/teardown (allocation, splitting)
  3.2 Generic bio helper routines
    3.2.1 Traversing segments and completion units in a request
    3.2.2 Setting up DMA scatterlists
    3.2.3 I/O completion
    3.2.4 Implications for drivers that do not interpret bios (don't handle
	  multiple segments)
    3.2.5 Request command tagging
  3.3 I/O submission
4. The I/O scheduler
5. Scalability related changes
  5.1 Granular locking: Removal of io_request_lock
  5.2 Prepare for transition to 64 bit sector_t
6. Other Changes/Implications
  6.1 Partition re-mapping handled by the generic block layer
7. A few tips on migration of older drivers
8. A list of prior/related/impacted patches/ideas
9. Other References/Discussion Threads

---------------------------------------------------------------------------

Bio Notes
---------

Let us discuss the changes in the context of how some overall goals for the
block layer are addressed.

1. Scope for tuning the generic logic to satisfy various requirements

The block layer design supports adaptable abstractions to handle common
processing with the ability to tune the logic to an appropriate extent
depending on the nature of the device and the requirements of the caller.
One of the objectives of the rewrite was to increase the degree of tunability
and to enable higher level code to utilize underlying device/driver
capabilities to the maximum extent for better i/o performance. This is
especially important in the light of ever improving hardware capabilities
and application/middleware software designed to take advantage of these
capabilities.

1.1 Tuning based on low level device / driver capabilities

Sophisticated devices with large built-in caches, intelligent i/o scheduling
optimizations, high memory DMA support, etc may find some of the
generic processing an overhead, while for less capable devices the
generic functionality is essential for performance or correctness reasons.
Knowledge of some of the capabilities or parameters of the device should be
used at the generic block layer to take the right decisions on
behalf of the driver.

How is this achieved?

Tuning at a per-queue level:

i. Per-queue limits/values exported to the generic layer by the driver

Various parameters that the generic i/o scheduler logic uses are set at
a per-queue level (e.g maximum request size, maximum number of segments in
a scatter-gather list, logical block size).

Some parameters that were earlier available as global arrays indexed by
major/minor are now directly associated with the queue. Some of these may
move into the block device structure in the future. Some characteristics
have been incorporated into a queue flags field rather than separate fields
in themselves. There are blk_queue_xxx functions to set the parameters,
rather than update the fields directly.

Some new queue property settings:

	blk_queue_bounce_limit(q, u64 dma_address)
		Enable I/O to highmem pages, dma_address being the
		limit. No highmem default.

	blk_queue_max_sectors(q, max_sectors)
		Sets two variables that limit the size of the request.

		- The request queue's max_sectors, which is a soft size in
		units of 512 byte sectors, and could be dynamically varied
		by the core kernel.

		- The request queue's max_hw_sectors, which is a hard limit
		and reflects the maximum size request a driver can handle
		in units of 512 byte sectors.

		The default for both max_sectors and max_hw_sectors is
		255. The upper limit of max_sectors is 1024.

	blk_queue_max_phys_segments(q, max_segments)
		Maximum physical segments you can handle in a request. 128
		default (driver limit). (See 3.2.2)

	blk_queue_max_hw_segments(q, max_segments)
		Maximum dma segments the hardware can handle in a request. 128
		default (host adapter limit, after dma remapping).
		(See 3.2.2)

	blk_queue_max_segment_size(q, max_seg_size)
		Maximum size of a clustered segment, 64kB default.

	blk_queue_logical_block_size(q, logical_block_size)
		Lowest possible sector size that the hardware can operate
		on, 512 bytes default.

New queue flags:

	QUEUE_FLAG_CLUSTER (see 3.2.2)
	QUEUE_FLAG_QUEUED (see 3.2.4)
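
To illustrate, a driver would typically apply these settings when it
initializes its queue. The sketch below is hypothetical (the function name
and the limit values are made up for illustration) and simply strings
together the helpers listed above:

	static void my_driver_setup_queue(struct request_queue *q)
	{
		/* device can only DMA to the low 4GB; bounce anything above */
		blk_queue_bounce_limit(q, 0xffffffffULL);

		/* cap requests at 128 sectors (64kB) */
		blk_queue_max_sectors(q, 128);

		/* scatter-gather limits, before and after DMA remapping */
		blk_queue_max_phys_segments(q, 64);
		blk_queue_max_hw_segments(q, 64);
		blk_queue_max_segment_size(q, 65536);

		/* device operates on 512 byte logical blocks */
		blk_queue_logical_block_size(q, 512);
	}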

ii. High-mem i/o capabilities are now considered the default

The generic bounce buffer logic, present in 2.4, where the block layer would
by default copyin/out i/o requests on high-memory buffers to low-memory buffers
assuming that the driver wouldn't be able to handle it directly, has been
changed in 2.5. The bounce logic is now applied only for memory ranges
for which the device cannot handle i/o. A driver can specify this by
setting the queue bounce limit for the request queue for the device
(blk_queue_bounce_limit()). This avoids the inefficiencies of the copyin/out
where a device is capable of handling high memory i/o.

In order to enable high-memory i/o where the device is capable of supporting
it, the pci dma mapping routines and associated data structures have now been
modified to accomplish a direct page -> bus translation, without requiring
a virtual address mapping (unlike the earlier scheme of virtual address
-> bus translation). So this works uniformly for high-memory pages (which
do not have a corresponding kernel virtual address space mapping) and
low-memory pages.

Note: Please refer to Documentation/DMA-API-HOWTO.txt for a discussion
on PCI high mem DMA aspects and mapping of scatter gather lists, and support
for 64 bit PCI.

Special handling is required only for cases where i/o needs to happen on
pages at physical memory addresses beyond what the device can support. In these
cases, a bounce bio representing a buffer from the supported memory range
is used for performing the i/o with copyin/copyout as needed depending on
the type of the operation. For example, in case of a read operation, the
data read has to be copied to the original buffer on i/o completion, so a
callback routine is set up to do this, while for write, the data is copied
from the original buffer to the bounce buffer prior to issuing the
operation. Since an original buffer may be in a high memory area that's not
mapped in kernel virtual addr, a kmap operation may be required for
performing the copy, and special care may be needed in the completion path
as it may not be in irq context. Special care is also required (by way of
GFP flags) when allocating bounce buffers, to avoid certain highmem
deadlock possibilities.

It is also possible that a bounce buffer may be allocated from a high-memory
area that's not mapped in kernel virtual addr, but within the range that the
device can use directly; so the bounce page may need to be kmapped during
copy operations. [Note: This does not hold in the current implementation,
though]

There are some situations when pages from high memory may need to
be kmapped, even if bounce buffers are not necessary. For example a device
may need to abort DMA operations and revert to PIO for the transfer, in
which case a virtual mapping of the page is required. For SCSI it is also
done in some scenarios where the low level driver cannot be trusted to
handle a single sg entry correctly. The driver is expected to perform the
kmaps as needed on such occasions as appropriate. A driver could also use
the blk_queue_bounce() routine on its own to bounce highmem i/o to low
memory for specific requests if so desired.
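
For instance, a PIO-style fallback copying one (possibly highmem) segment for
a write might look like the following hedged sketch; bvec and data_buf are
assumed to be set up by the driver, and the single-argument form of
kmap_atomic() from later kernels is used:

	/* temporarily map the page; works for both high and low memory */
	void *vaddr = kmap_atomic(bvec->bv_page);

	/* CPU copy of one segment; no bus mapping involved */
	memcpy(data_buf, vaddr + bvec->bv_offset, bvec->bv_len);
	kunmap_atomic(vaddr);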

iii. The i/o scheduler algorithm itself can be replaced/set as appropriate

As in 2.4, it is possible to plug in a brand new i/o scheduler for a particular
queue or pick from (copy) existing generic schedulers and replace/override
certain portions of it. The 2.5 rewrite provides improved modularization
of the i/o scheduler. There are more pluggable callbacks, e.g for init,
add request, extract request, which makes it possible to abstract specific
i/o scheduling algorithm aspects and details outside of the generic loop.
It also makes it possible to completely hide the implementation details of
the i/o scheduler from block drivers.

I/O scheduler wrappers are to be used instead of accessing the queue directly.
See section 4, The I/O scheduler, for details.

1.2 Tuning Based on High level code capabilities

i. Application capabilities for raw i/o

This comes from some of the high-performance database/middleware
requirements where an application prefers to make its own i/o scheduling
decisions based on an understanding of the access patterns and i/o
characteristics.

ii. High performance filesystems or other higher level kernel code's
capabilities

Kernel components like filesystems could also take their own i/o scheduling
decisions for optimizing performance. Journalling filesystems may need
some control over i/o ordering.

What kind of support exists at the generic block layer for this?

The flags and rw fields in the bio structure can be used for some tuning
from above e.g indicating that an i/o is just a readahead request, or priority
settings (currently unused). As far as user applications are concerned they
would need an additional mechanism either via open flags or ioctls, or some
other upper level mechanism to communicate such settings to block.

1.2.1 Request Priority/Latency

Todo/Under discussion:
Arjan's proposed request priority scheme allows higher levels some broad
control (high/med/low) over the priority of an i/o request vs other pending
requests in the queue. For example it allows reads for bringing in an
executable page on demand to be given a higher priority over pending write
requests which haven't aged too much on the queue. Potentially this priority
could even be exposed to applications in some manner, providing higher level
tunability. Time based aging avoids starvation of lower priority
requests. Some bits in the bi_opf flags field in the bio structure are
intended to be used for this priority information.


1.3 Direct Access to Low level Device/Driver Capabilities (Bypass mode)
    (e.g Diagnostics, Systems Management)

There are situations where high-level code needs to have direct access to
the low level device capabilities or requires the ability to issue commands
to the device bypassing some of the intermediate i/o layers.
These could, for example, be special control commands issued through ioctl
interfaces, or could be raw read/write commands that stress the drive's
capabilities for certain kinds of fitness tests. Having direct interfaces at
multiple levels without having to pass through upper layers makes
it possible to perform bottom up validation of the i/o path, layer by
layer, starting from the media.

The normal i/o submission interfaces, e.g submit_bio, could be bypassed
for specially crafted requests which such ioctl or diagnostics
interfaces would typically use, and the elevator add_request routine
can instead be used to directly insert such requests in the queue or preferably
the blk_do_rq routine can be used to place the request on the queue and
wait for completion.
Alternatively, sometimes the caller might just
invoke a lower level driver specific interface with the request as a
parameter.

If the request is a means for passing on special information associated with
the command, then such information is associated with the request->special
field (rather than misuse the request->buffer field which is meant for the
request data buffer's virtual mapping).

For passing request data, the caller must build up a bio descriptor
representing the concerned memory buffer if the underlying driver interprets
bio segments or uses the block layer end*request* functions for i/o
completion. Alternatively one could directly use the request->buffer field to
specify the virtual address of the buffer, if the driver expects buffer
addresses passed in this way and ignores bio entries for the request type
involved. In the latter case, the driver would modify and manage the
request->buffer, request->sector and request->nr_sectors or
request->current_nr_sectors fields itself rather than using the block layer
end_request or end_that_request_first completion interfaces.
(See 2.3 or Documentation/block/request.txt for a brief explanation of
the request structure fields)

[TBD: end_that_request_last should be usable even in this case;
Perhaps an end_that_direct_request_first routine could be implemented to make
handling direct requests easier for such drivers; Also for drivers that
expect bios, a helper function could be provided for setting up a bio
corresponding to a data buffer]

<JENS: I dont understand the above, why is end_that_request_first() not
usable? Or _last for that matter. I must be missing something>
<SUP: What I meant here was that if the request doesn't have a bio, then
end_that_request_first doesn't modify nr_sectors or current_nr_sectors,
and hence can't be used for advancing request state settings on the
completion of partial transfers. The driver has to modify these fields
directly by hand.
This is because end_that_request_first only iterates over the bio list,
and always returns 0 if there are none associated with the request.
_last works OK in this case, and is not a problem, as I mentioned earlier
>

1.3.1 Pre-built Commands

A request can be created with a pre-built custom command to be sent directly
to the device. The cmd block in the request structure has room for filling
in the command bytes. (i.e rq->cmd is now 16 bytes in size, and meant for
command pre-building, and the type of the request is now indicated
through rq->flags instead of via rq->cmd)

The request structure flags can be set up to indicate the type of request
in such cases (REQ_PC: direct packet command passed to driver, REQ_BLOCK_PC:
packet command issued via blk_do_rq, REQ_SPECIAL: special request).

It can be useful to pre-build device commands for requests in advance.
Drivers can now specify a request prepare function (q->prep_rq_fn) that the
block layer would invoke to pre-build device commands for a given request,
or perform other preparatory processing for the request. This routine is
called by elv_next_request(), i.e. typically just before servicing a request.
(The prepare function would not be called for requests that have RQF_DONTPREP
enabled.)
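
A hedged sketch of such a prepare function follows; my_build_cmd() is a
made-up helper, the blk_rq_pos()/blk_rq_sectors() accessors and the rq_flags
field are borrowed from later kernels, and the exact fields vary across
versions:

	static int my_prep_rq_fn(struct request_queue *q, struct request *rq)
	{
		/* pre-build a 16-byte device command into rq->cmd */
		my_build_cmd(rq->cmd, blk_rq_pos(rq), blk_rq_sectors(rq));

		/* don't prepare again if the request is requeued */
		rq->rq_flags |= RQF_DONTPREP;
		return BLKPREP_OK;
	}

	/* during queue setup: */
	blk_queue_prep_rq(q, my_prep_rq_fn);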

Aside:
  Pre-building could possibly even be done early, i.e before placing the
  request on the queue, rather than constructing the command on the fly in the
  driver while servicing the request queue when it may affect latencies in
  interrupt context or responsiveness in general. One way to add early
  pre-building would be to do it whenever we fail to merge on a request.
  Now REQ_NOMERGE is set in the request flags to skip this one in the future,
  which means that it will not change before we feed it to the device. So
  the pre-builder hook can be invoked there.


2. Flexible and generic but minimalist i/o structure/descriptor.

2.1 Reason for a new structure and requirements addressed

Prior to 2.5, buffer heads were used as the unit of i/o at the generic block
layer, and the low level request structure was associated with a chain of
buffer heads for a contiguous i/o request. This led to certain inefficiencies
when it came to large i/o requests and readv/writev style operations, as it
forced such requests to be broken up into small chunks before being passed
on to the generic block layer, only to be merged by the i/o scheduler
when the underlying device was capable of handling the i/o in one shot.
Also, using the buffer head as an i/o structure for i/os that didn't originate
from the buffer cache unnecessarily added to the weight of the descriptors
which were generated for each such chunk.

The following were some of the goals and expectations considered in the
redesign of the block i/o data structure in 2.5.

i.   Should be appropriate as a descriptor for both raw and buffered i/o -
     avoid cache related fields which are irrelevant in the direct/page i/o
     path, or filesystem block size alignment restrictions which may not be
     relevant for raw i/o.
ii.  Ability to represent high-memory buffers (which do not have a virtual
     address mapping in kernel address space).
iii. Ability to represent large i/os w/o unnecessarily breaking them up (i.e
     greater than PAGE_SIZE chunks in one shot)
iv.  At the same time, ability to retain independent identity of i/os from
     different sources or i/o units requiring individual completion (e.g. for
     latency reasons)
v.   Ability to represent an i/o involving multiple physical memory segments
     (including non-page aligned page fragments, as specified via readv/writev)
     without unnecessarily breaking it up, if the underlying device is capable
     of handling it.
vi.  Preferably should be based on a memory descriptor structure that can be
     passed around different types of subsystems or layers, maybe even
     networking, without duplication or extra copies of data/descriptor fields
     themselves in the process
vii. Ability to handle the possibility of splits/merges as the structure passes
     through layered drivers (lvm, md, evms), with minimal overhead.

The solution was to define a new structure (bio) for the block layer,
instead of using the buffer head structure (bh) directly, the idea being
avoidance of some associated baggage and limitations. The bio structure
is uniformly used for all i/o at the block layer; it forms a part of the
bh structure for buffered i/o, and in the case of raw/direct i/o kiobufs are
mapped to bio structures.

2.2 The bio struct

The bio structure uses a vector representation pointing to an array of tuples
of <page, offset, len> to describe the i/o buffer, and has various other
fields describing i/o parameters and state that needs to be maintained for
performing the i/o.

Notice that this representation means that a bio has no virtual address
mapping at all (unlike buffer heads).

struct bio_vec {
	struct page     *bv_page;
	unsigned short  bv_len;
	unsigned short  bv_offset;
};

/*
 * main unit of I/O for the block layer and lower layers (ie drivers)
 */
struct bio {
	struct bio          *bi_next;    /* request queue link */
	struct block_device *bi_bdev;    /* target device */
	unsigned long       bi_flags;    /* status, command, etc */
	unsigned long       bi_opf;      /* low bits: r/w, high: priority */

	unsigned int        bi_vcnt;     /* how many bio_vec's */
	struct bvec_iter    bi_iter;     /* current index into bio_vec array */

	unsigned int        bi_size;     /* total size in bytes */
	unsigned short      bi_phys_segments; /* segments after physaddr coalesce*/
	unsigned short      bi_hw_segments;   /* segments after DMA remapping */
	unsigned int        bi_max;      /* max bio_vecs we can hold
	                                    used as index into pool */
	struct bio_vec      *bi_io_vec;  /* the actual vec list */
	bio_end_io_t        *bi_end_io;  /* bi_end_io (bio) */
	atomic_t            bi_cnt;      /* pin count: free when it hits zero */
	void                *bi_private;
};

With this multipage bio design:

- Large i/os can be sent down in one go using a bio_vec list consisting
  of an array of <page, offset, len> fragments (similar to the way fragments
  are represented in the zero-copy network code)
- Splitting of an i/o request across multiple devices (as in the case of
  lvm or raid) is achieved by cloning the bio (where the clone points to
  the same bi_io_vec array, but with the index and size accordingly modified)
- A linked list of bios is used as before for unrelated merges (*) - this
  avoids reallocs and makes independent completions easier to handle.
- Code that traverses the req list can find all the segments of a bio
  by using rq_for_each_segment. This handles the fact that a request
  has multiple bios, each of which can have multiple segments.
- Drivers which can't process a large bio in one shot can use the bi_iter
  field to keep track of the next bio_vec entry to process.
  (e.g a 1MB bio_vec needs to be handled in max 128kB chunks for IDE)
  [TBD: Should preferably also have a bi_voffset and bi_vlen to avoid modifying
  bi_offset and len fields]

(*) unrelated merges -- a request ends up containing two or more bios that
didn't originate from the same place.

bi_end_io() i/o callback gets called on i/o completion of the entire bio.

At a lower level, drivers build a scatter gather list from the merged bios.
The scatter gather list is in the form of an array of <page, offset, len>
entries with their corresponding dma address mappings filled in at the
appropriate time. As an optimization, contiguous physical pages can be
covered by a single entry where <page> refers to the first page and <len>
covers the range of pages (up to 16 contiguous pages could be covered this
way). There is a helper routine (blk_rq_map_sg) which drivers can use to build
the sg list.
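
As an illustration of the iterator-based traversal that the bi_iter field
enables, later kernels provide a bio_for_each_segment() helper. A hedged
sketch of walking one bio's segments (the actual transfer step is left as a
comment):

	struct bio_vec bvec;
	struct bvec_iter iter;

	/* visit each <page, offset, len> fragment of the bio in turn */
	bio_for_each_segment(bvec, bio, iter) {
		void *vaddr = kmap_atomic(bvec.bv_page);
		/* ... transfer bvec.bv_len bytes at vaddr + bvec.bv_offset ... */
		kunmap_atomic(vaddr);
	}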

Note: Right now the only user of bios with more than one page is ll_rw_kio,
which in turn means that only raw I/O uses it (direct i/o may not work
right now). The intent however is to enable clustering of pages etc to
become possible. The pagebuf abstraction layer from SGI also uses multi-page
bios, but that is currently not included in the stock development kernels.
The same is true of Andrew Morton's work-in-progress multipage bio writeout
and readahead patches.

2.3 Changes in the Request Structure

The request structure is the structure that gets passed down to low level
drivers. The block layer make_request function builds up a request structure,
places it on the queue and invokes the driver's request_fn. The driver makes
use of the block layer helper routine elv_next_request to pull the next
request off the queue. Control or diagnostic functions might bypass block and
directly invoke underlying driver entry points passing in a specially
constructed request structure.

Only some relevant fields (mainly those which changed or may be referred
to in some of the discussion here) are listed below, not necessarily in
the order in which they occur in the structure (see include/linux/blkdev.h).
Refer to Documentation/block/request.txt for details about all the request
structure fields and a quick reference about the layers which are
supposed to use or modify those fields.

struct request {
	struct list_head queuelist;  /* Not meant to be directly accessed by
	                                the driver.
	                                Used by q->elv_next_request_fn
	                                rq->queue is gone
	                             */
	.
	.
	unsigned char cmd[16]; /* prebuilt command data block */
	unsigned long flags;   /* also includes earlier rq->cmd settings */
	.
	.
	sector_t sector; /* this field is now of type sector_t instead of int
	                    preparation for 64 bit sectors */
	.
	.

	/* Number of scatter-gather DMA addr+len pairs after
	 * physical address coalescing is performed.
	 */
	unsigned short nr_phys_segments;

	/* Number of scatter-gather addr+len pairs after
	 * physical and DMA remapping hardware coalescing is performed.
	 * This is the number of scatter-gather entries the driver
	 * will actually have to deal with after DMA mapping is done.
	 */
	unsigned short nr_hw_segments;

	/* Various sector counts */
	unsigned long nr_sectors;        /* no. of sectors left: driver modifiable */
	unsigned long hard_nr_sectors;   /* block internal copy of above */
	unsigned int current_nr_sectors; /* no. of sectors left in the
	                                    current segment:driver modifiable */
	unsigned long hard_cur_sectors;  /* block internal copy of the above */
	.
	.
	int tag;        /* command tag associated with request */
	void *special;  /* same as before */
	char *buffer;   /* valid only for low memory buffers up to
	                   current_nr_sectors */
	.
	.
	struct bio *bio, *biotail;  /* bio list instead of bh */
	struct request_list *rl;
};

See the req_ops and req_flag_bits definitions for an explanation of the
various flags available. Some bits are used by the block layer or i/o
scheduler.

The behaviour of the various sector counts is almost the same as before,
except that since we have multi-segment bios, current_nr_sectors refers
to the number of sectors in the current segment being processed, which could
be one of the many segments in the current bio (i.e i/o completion unit).
The nr_sectors value refers to the total number of sectors in the whole
request that remain to be transferred (no change). The purpose of the
hard_xxx values is for block to remember these counts every time it hands
over the request to the driver. These values are updated by block on
end_that_request_first, i.e. every time the driver completes a part of the
transfer and invokes the block end*request helpers to mark this. The
driver should not modify these values. The block layer sets up the
nr_sectors and current_nr_sectors fields (based on the corresponding
hard_xxx values and the number of bytes transferred) and updates them on
every transfer that invokes end_that_request_first. It does the same for the
buffer, bio, bio->bi_iter fields too.

The buffer field is just a virtual address mapping of the current segment
of the i/o buffer in cases where the buffer resides in low-memory. For high
memory i/o, this field is not valid and must not be used by drivers.

Code that sets up its own request structures and passes them down to
a driver needs to be careful about interoperation with the block layer helper
functions which the driver uses. (Section 1.3)

3. Using bios

3.1 Setup/Teardown

There are routines for managing the allocation, reference counting, and
freeing of bios (bio_alloc, bio_get, bio_put).

This makes use of Ingo Molnar's mempool implementation, which enables
subsystems like bio to maintain their own reserve memory pools for guaranteed
deadlock-free allocations during extreme VM load. For example, the VM
subsystem makes use of the block layer to writeout dirty pages in order to be
able to free up memory space, a case which needs careful handling. The
allocation logic draws from the preallocated emergency reserve in situations
where it cannot allocate through normal means. If the pool is empty and it
can wait, then it would trigger action that would help free up memory or
replenish the pool (without deadlocking) and wait for availability in the
pool. If it is in IRQ context, and hence not in a position to do this,
allocation could fail if the pool is empty. In general mempool always first
tries to perform allocation without having to wait, even if it means digging
into the pool as long as it is not less than 50% full.

On a free, memory is released to the pool or directly freed depending on
the current availability in the pool. The mempool interface lets the
subsystem specify the routines to be used for normal alloc and free. In the
case of bio, these routines make use of the standard slab allocator.

The caller of bio_alloc is expected to take certain steps to avoid
deadlocks, e.g. avoid trying to allocate more memory from the pool while
already holding memory obtained from the pool.
[TBD: This is a potential issue, though a rare possibility
in the bounce bio allocation that happens in the current code, since
it ends up allocating a second bio from the same pool while
holding the original bio ]

Memory allocated from the pool should be released back within a limited
amount of time (in the case of bio, that would be after the i/o is completed).
This ensures that if part of the pool has been used up, some work (in this
case i/o) must already be in progress and memory would be available when it
is over. If allocating from multiple pools in the same code path, the order
or hierarchy of allocation needs to be consistent, just the way one deals
with multiple locks.
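
Putting the allocation and reference counting interfaces together, a minimal
hypothetical usage might look like the sketch below. Field names follow the
struct shown in 2.2; bio_add_page() and the single-argument form of
submit_bio() (see 3.3) are assumptions borrowed from later kernels, and
my_end_io/my_cookie are made up:

	struct bio *bio = bio_alloc(GFP_NOIO, 1);  /* room for 1 bio_vec */

	bio->bi_bdev = bdev;                   /* target device */
	bio->bi_iter.bi_sector = sector;       /* start sector on the device */
	bio->bi_opf = REQ_OP_READ;             /* direction/operation */
	bio->bi_end_io = my_end_io;            /* completion callback */
	bio->bi_private = my_cookie;           /* handed back to my_end_io */
	bio_add_page(bio, page, PAGE_SIZE, 0); /* one <page, offset, len> entry */

	bio_get(bio);       /* extra ref: we look at the bio after submission */
	submit_bio(bio);
	/* ... bio fields remain valid here because of the extra ref ... */
	bio_put(bio);       /* drop our reference */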

The bio_alloc routine also needs to allocate the bio_vec_list (bvec_alloc())
for a non-clone bio. There are 6 pools set up for different size biovecs,
so bio_alloc(gfp_mask, nr_iovecs) will allocate a vec_list of the
given size from these slabs.

The bio_get() routine may be used to hold an extra reference on a bio prior
to i/o submission, if the bio fields are likely to be accessed after the
i/o is issued (since the bio may otherwise get freed in case i/o completion
happens in the meantime).

The bio_clone_fast() routine may be used to duplicate a bio, where the clone
shares the bio_vec_list with the original bio (i.e. both point to the
same bio_vec_list). This would typically be used for splitting i/o requests
in lvm or md.

3.2 Generic bio helper Routines

3.2.1 Traversing segments and completion units in a request

The macro rq_for_each_segment() should be used for traversing the bios
in the request list (drivers should avoid directly trying to do it
themselves). Using these helpers should also make it easier to cope
with block changes in the future.

	struct req_iterator iter;
	struct bio_vec bvec;

	rq_for_each_segment(bvec, rq, iter)
		/* bvec is now the current segment */

I/O completion callbacks are per-bio rather than per-segment, so drivers
that traverse bio chains on completion need to keep that in mind. Drivers
which don't make a distinction between segments and completion units would
need to be reorganized to support multi-segment bios.

3.2.2 Setting up DMA scatterlists

The blk_rq_map_sg() helper routine would be used for setting up scatter
gather lists from a request, so a driver need not do it on its own.

	nr_segments = blk_rq_map_sg(q, rq, scatterlist);

The helper routine provides a level of abstraction which makes it easier
to modify the internals of request to scatterlist conversion down the line
without breaking drivers. The blk_rq_map_sg routine takes care of several
things like collapsing physically contiguous segments (if QUEUE_FLAG_CLUSTER
is set) and correct segment accounting to avoid exceeding the limits which
the i/o hardware can handle, based on various queue properties.

- Prevents a clustered segment from crossing a 4GB mem boundary
- Avoids building segments that would exceed the number of physical
  memory segments that the driver can handle (phys_segments) and the
  number that the underlying hardware can handle at once, accounting for
  DMA remapping (hw_segments) (i.e. IOMMU aware limits).

Routines which the low level driver can use to set up the segment limits:

blk_queue_max_hw_segments(): Sets an upper limit of the maximum number of
hw data segments in a request (i.e. the maximum number of address/length
pairs the host adapter can actually hand to the device at once)

blk_queue_max_phys_segments(): Sets an upper limit on the maximum number
of physical data segments in a request (i.e. the largest sized scatter list
a driver could handle)
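
For instance, a driver's i/o preparation path might combine blk_rq_map_sg()
with the DMA mapping interfaces mentioned in section 1.1. This is a hedged
sketch: dev, dma_dir and MY_MAX_SEGMENTS are assumed driver-specific values,
and the sg_init_table() call is needed with the chained scatterlists of later
kernels:

	struct scatterlist sg[MY_MAX_SEGMENTS];
	int nseg, mapped;

	sg_init_table(sg, MY_MAX_SEGMENTS);

	/* collapse the request's bio segments into a scatterlist,
	 * honouring the queue's segment limits and clustering rules */
	nseg = blk_rq_map_sg(q, rq, sg);

	/* translate the page/offset/len entries into bus addresses */
	mapped = dma_map_sg(dev, sg, nseg, dma_dir);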

3.2.3 I/O completion

The existing generic block layer helper routines end_request,
end_that_request_first and end_that_request_last can be used for i/o
completion (and setting things up so the rest of the i/o or the next
request can be kicked off) as before. With the introduction of multi-page
bio support, end_that_request_first requires an additional argument indicating
the number of sectors completed.

3.2.4 Implications for drivers that do not interpret bios (don't handle
multiple segments)

Drivers that do not interpret bios, e.g those which do not handle multiple
segments and do not support i/o into high memory addresses (require bounce
buffers) and expect only virtually mapped buffers, can access the rq->buffer
field. As before the driver should use current_nr_sectors to determine the
size of remaining data in the current segment (that is the maximum it can
transfer in one go unless it interprets segments), and rely on the block layer
end_request, or end_that_request_first/last to take care of all accounting
and transparent mapping of the next bio segment when a segment boundary
is crossed on completion of a transfer. (The end*request* functions should
be used only if the request has come down from the block/bio path, not for
direct access requests which only specify rq->buffer without a valid rq->bio)

3.2.5 Generic request command tagging

3.2.5.1 Tag helpers

Block now offers some simple generic functionality to help support command
queueing (typically known as tagged command queueing), i.e. manage more than
one outstanding command on a queue at any given time.

	blk_queue_init_tags(struct request_queue *q, int depth)

	Initialize internal command tagging structures for a maximum
	depth of 'depth'.

	blk_queue_free_tags(struct request_queue *q)

	Teardown tag info associated with the queue. This will be done
	automatically by block if blk_queue_cleanup() is called on a queue
	that is using tagging.

The above are initialization and exit management, the main helpers during
normal operations are:

	blk_queue_start_tag(struct request_queue *q, struct request *rq)

	Start tagged operation for this request. A free tag number between
	0 and 'depth' is assigned to the request (rq->tag holds this number),
	and 'rq' is added to the internal tag management. If the maximum depth
	for this queue is already achieved (or if the tag wasn't started for
	some other reason), 1 is returned. Otherwise 0 is returned.

	blk_queue_end_tag(struct request_queue *q, struct request *rq)

	End tagged operation on this request. 'rq' is removed from the
	internal book keeping structures.

To minimize struct request and queue overhead, the tag helpers utilize some
of the same request members that are used for normal request queue management.
This means that a request cannot both be an active tag and be on the queue
list at the same time. blk_queue_start_tag() will remove the request, but
the driver must remember to call blk_queue_end_tag() before signalling
completion of the request to the block layer. This means ending tag
operations before calling end_that_request_last()! For an example of a user
of these helpers, see the IDE tagged command queueing support.
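
A hedged sketch of how a driver might use these helpers in its request_fn and
completion path (my_issue_to_hw() is made up, and error handling is elided):

	/* in the request_fn, with the queue lock held */
	while ((rq = elv_next_request(q)) != NULL) {
		if (blk_queue_start_tag(q, rq))
			break;		/* tag depth reached, retry later */
		/* rq->tag is now valid and rq has been removed from the
		 * queue list, so hand it to the hardware */
		my_issue_to_hw(rq);
	}

	/* in the completion path, before completing the request */
	blk_queue_end_tag(q, rq);
	end_that_request_last(rq);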

Certain hardware conditions may dictate a need to invalidate the block tag
queue. For instance, on IDE any tagged request error needs to clear both
the hardware and software block queue and enable the driver to sanely restart
all the outstanding requests. There's a third helper to do that:

	blk_queue_invalidate_tags(struct request_queue *q)

	Clear the internal block tag queue and re-add all the pending requests
	to the request queue. The driver will receive them again on the
	next request_fn run, just like it did the first time it encountered
	them.

3.2.5.2 Tag info

Some block functions exist to query current tag status or to go from a
tag number to the associated request. These are, in no particular order:

	blk_queue_tagged(q)

	Returns 1 if the queue 'q' is using tagging, 0 if not.

	blk_queue_tag_request(q, tag)

	Returns a pointer to the request associated with tag 'tag'.

	blk_queue_tag_depth(q)

	Return current queue depth.

	blk_queue_tag_queue(q)

	Returns 1 if the queue can accept a new queued command, 0 if we are
	at the maximum depth already.

	blk_queue_rq_tagged(rq)

	Returns 1 if the request 'rq' is tagged.

3.2.5.3 Internal structure

Internally, block manages tags in the blk_queue_tag structure:

	struct blk_queue_tag {
		struct request **tag_index;	/* array of pointers to rq */
		unsigned long *tag_map;		/* bitmap of free tags */
		struct list_head busy_list;	/* fifo list of busy tags */
		int busy;			/* queue depth */
		int max_depth;			/* max queue depth */
	};

Most of the above is simple and straightforward, however busy_list may need
a bit of explaining. Normally we don't care too much about request ordering,
but in the event of any barrier requests in the tag queue we need to ensure
that requests are restarted in the order they were queued. This may happen
if the driver needs to use blk_queue_invalidate_tags().

3.3 I/O Submission

The routine submit_bio() is used to submit a single io. Higher level i/o
routines make use of this:

(a) Buffered i/o:
The routine submit_bh() invokes submit_bio() on a bio corresponding to the
bh, allocating the bio if required. ll_rw_block() uses submit_bh() as before.

(b) Kiobuf i/o (for raw/direct i/o):
The ll_rw_kio() routine breaks up the kiobuf into page sized chunks and
maps the array to one or more multi-page bios, issuing submit_bio() to
perform the i/o on each of these.

The embedded bh array in the kiobuf structure has been removed and no
preallocation of bios is done for kiobufs. [The intent is to remove the
blocks array as well, but it's currently in there to kludge around direct
i/o.] Thus kiobuf allocation has switched back to using kmalloc rather than
vmalloc.

Todo/Observation:

 A single kiobuf structure is assumed to correspond to a contiguous range
 of data, so brw_kiovec() invokes ll_rw_kio for each kiobuf in a kiovec.
 So right now it wouldn't work for direct i/o on non-contiguous blocks.
 This is to be resolved. The eventual direction is to replace kiobuf
 by kvec's.

 Badari Pulavarty has a patch to implement direct i/o correctly using
 bio and kvec.

(c) Page i/o:
Todo/Under discussion:

 Andrew Morton's multi-page bio patches attempt to issue multi-page
 writeouts (and reads) from the page cache, by directly building up
 large bios for submission completely bypassing the usage of buffer
 heads. This work is still in progress.

 Christoph Hellwig had some code that uses bios for page-io (rather than
 bh). This isn't included in bio as yet. Christoph was also working on a
 design for representing virtual/real extents as an entity and modifying
 some of the address space ops interfaces to utilize this abstraction rather
 than buffer_heads. (This is somewhat along the lines of the SGI XFS pagebuf
 abstraction, but intended to be as lightweight as possible).

(d) Direct access i/o:
Direct access requests that do not contain bios would be submitted differently
as discussed earlier in section 1.3.

Aside:

  Kvec i/o:

  Ben LaHaise's aio code uses a slightly different structure instead
  of kiobufs, called a kvec_cb. This contains an array of <page, offset, len>
  tuples (very much like the networking code), together with a callback
  function and data pointer. This is embedded into a brw_cb structure when
  passed to brw_kvec_async().

  Now it should be possible to directly map these kvecs to a bio. Just as
  with cloning, rather than using pre-built bio_vecs we would set the
  bi_io_vec array pointer to point to the veclet array in the kvec.

  TBD: In order for this to work, some changes are needed in the way
  multi-page bios are handled today. The values of the tuples in such a
  vector passed in from higher level code should not be modified by the block
  layer in the course of its request processing, since that would make it
  hard for the higher layer to continue to use the vector descriptor (kvec)
  after i/o completes. Instead, all such transient state should be maintained
  in the request structure and passed on in some way to the endio completion
  routine.


4. The I/O scheduler

I/O scheduler, a.k.a. elevator, is implemented in two layers. Generic dispatch
queue and specific I/O schedulers. Unless stated otherwise, elevator is used
to refer to both parts and I/O scheduler to specific I/O schedulers.

The block layer implements the generic dispatch queue in block/*.c.
The generic dispatch queue is responsible for requeueing, handling non-fs
requests and all other subtleties.

Specific I/O schedulers are responsible for ordering normal filesystem
requests. They can also choose to delay certain requests to improve
throughput, or for other purposes. As the plural form indicates, there are
multiple I/O schedulers. They can be built as modules but at least one should
be built inside the kernel. Each queue can choose a different one and can also
change to another one dynamically.

A block layer call to the i/o scheduler follows the convention elv_xxx(). This
calls elevator_xxx_fn in the elevator switch (block/elevator.c). Oh, xxx
and xxx might not match exactly, but use your imagination. If an elevator
doesn't implement a function, the switch does nothing or some minimal house
keeping work.

4.1. I/O scheduler API

The functions an elevator may implement are: (* are mandatory)

elevator_merge_fn		called to query requests for merge with a bio

elevator_merge_req_fn		called when two requests get merged. The one
				which gets merged into the other one will
				never be seen by the I/O scheduler again.
				IOW, after being merged, the request is gone.

elevator_merged_fn		called when a request in the scheduler has been
				involved in a merge. It is used in the deadline
				scheduler for example, to reposition the
				request if its sorting order has changed.

elevator_allow_merge_fn		called whenever the block layer determines
				that a bio can be merged into an existing
				request safely. The io scheduler may still
				want to stop a merge at this point if it
				results in some sort of conflict internally;
				this hook allows it to do that. Note however
				that two *requests* can still be merged at a
				later time. Currently the io scheduler has no
				way to prevent that. It can only learn about
				the fact from the elevator_merge_req_fn
				callback.

elevator_dispatch_fn*		fills the dispatch queue with ready requests.
				I/O schedulers are free to postpone requests by
				not filling the dispatch queue unless @force
				is non-zero. Once dispatched, I/O schedulers
				are not allowed to manipulate the requests -
				they belong to the generic dispatch queue.

elevator_add_req_fn*		called to add a new request into the scheduler

elevator_former_req_fn
elevator_latter_req_fn		These return the request before or after the
				one specified in disk sort order. Used by the
				block layer to find merge possibilities.

elevator_completed_req_fn	called when a request is completed.

elevator_may_queue_fn		returns true if the scheduler wants to allow
				the current context to queue a new request
				even if it is over the queue limit. This must
				be used very carefully!!

elevator_set_req_fn
elevator_put_req_fn		Must be used to allocate and free any elevator
				specific storage for a request.

elevator_activate_req_fn	Called when device driver first sees a request.
				I/O schedulers can use this callback to
				determine when actual execution of a request
				starts.

elevator_deactivate_req_fn	Called when device driver decides to delay
				a request by requeueing it.

elevator_init_fn*
elevator_exit_fn		Allocate and free any elevator specific storage
				for a queue.
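
To make the shape of this interface concrete, here is a hedged sketch of
registering a trivial scheduler, modeled loosely on the no-op scheduler. It
assumes 2.6-era names (struct elevator_type, elv_register()); the mysched_*
functions are made up, and the exact ops layout varies across kernel
versions:

	static struct elevator_type elevator_mysched = {
		.ops = {
			.elevator_dispatch_fn	= mysched_dispatch,	/* mandatory */
			.elevator_add_req_fn	= mysched_add_request,	/* mandatory */
			.elevator_init_fn	= mysched_init_queue,	/* mandatory */
			.elevator_exit_fn	= mysched_exit_queue,
		},
		.elevator_name = "mysched",
		.elevator_owner = THIS_MODULE,
	};

	static int __init mysched_init(void)
	{
		return elv_register(&elevator_mysched);
	}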

4.2 Request flows seen by I/O schedulers

All requests seen by I/O schedulers strictly follow one of the following three
flows.

 set_req_fn ->

 i. add_req_fn -> (merged_fn ->)* -> dispatch_fn -> activate_req_fn ->
    (deactivate_req_fn -> activate_req_fn ->)* -> completed_req_fn
 ii. add_req_fn -> (merged_fn ->)* -> merge_req_fn
 iii. [none]

 -> put_req_fn

4.3 I/O scheduler implementation

The generic i/o scheduler algorithm attempts to sort/merge/batch requests for
optimal disk scan and request servicing performance (based on generic
principles and device capabilities), optimized for:
i.   improved throughput
ii.  improved latency
iii. better utilization of h/w & CPU time

Characteristics:

i. Binary tree
AS and deadline i/o schedulers use red black binary trees for disk position
sorting and searching, and a fifo linked list for time-based searching. This
gives good scalability and good availability of information. Requests are
almost always dispatched in disk sort order, so a cache is kept of the next
request in sort order to prevent binary tree lookups.

This arrangement is not a generic block layer characteristic however, so
elevators may implement queues as they please.

ii. Merge hash
AS and deadline use a hash table indexed by the last sector of a request. This
enables merging code to quickly look up "back merge" candidates, even when
multiple I/O streams are being performed at once on one disk.

"Front merges", a new request being merged at the front of an existing
request, are far less common than "back merges" due to the nature of most I/O
patterns. Front merges are handled by the binary trees in AS and deadline
schedulers.

iii. Plugging the queue to batch requests in anticipation of opportunities for
     merge/sort optimizations

Plugging is an approach that the current i/o scheduling algorithm resorts to
so that it collects up enough requests in the queue to be able to take
advantage of the sorting/merging logic in the elevator. If the
queue is empty when a request comes in, then it plugs the request queue
(sort of like plugging the bath tub of a vessel to get fluid to build up)
till it fills up with a few more requests, before starting to service
the requests. This provides an opportunity to merge/sort the requests before
passing them down to the device. There are various conditions when the queue
is unplugged (to open up the flow again), either through a scheduled task or
on demand. For example wait_on_buffer sets the unplugging going
through sync_buffer() running blk_run_address_space(mapping). Or the caller
can do it explicitly through blk_unplug(bdev). So in the read case,
the queue gets explicitly unplugged as part of waiting for completion on that
buffer.

Aside:
  This is kind of controversial territory, as it's not clear if plugging is
  always the right thing to do. Devices typically have their own queues,
  and allowing a big queue to build up in software, while letting the device
  be idle for a while may not always make sense. The trick is to handle the
  fine balance between when to plug and when to open up. Also now that we
  have multi-page bios being queued in one shot, we may not need to wait to
  merge a big request from the broken up pieces coming by.

4.4 I/O contexts

I/O contexts provide a dynamically allocated per process data area. They may
be used in I/O schedulers, and in the block layer (could be used for IO
statistics or priorities, for example). See *io_context in block/ll_rw_blk.c,
and as-iosched.c for an example of usage in an i/o scheduler.


5. Scalability related changes

5.1 Granular Locking: io_request_lock replaced by a per-queue lock

The global io_request_lock has been removed as of 2.5, to avoid
the scalability bottleneck it was causing, and has been replaced by more
granular locking. The request queue structure has a pointer to the
lock to be used for that queue. As a result, locking can now be
per-queue, with a provision for sharing a lock across queues if
necessary (e.g the scsi layer sets the queue lock pointers to the
corresponding adapter lock, which results in a per host locking
granularity). The locking semantics are the same, i.e. locking is
still imposed by the block layer, grabbing the lock before
request_fn execution, which means that lots of older drivers
should still be SMP safe. Drivers are free to drop the queue
lock themselves, if required. Drivers that explicitly used the
io_request_lock for serialization need to be modified accordingly.
Usually it's as easy as adding a global lock:

	static DEFINE_SPINLOCK(my_driver_lock);

and passing the address to that lock to blk_init_queue().
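
For example, a minimal sketch of how this fits together (my_request_fn
stands in for the driver's request handling function; blk_init_queue() is the
legacy initialization interface referred to above):

	static int __init my_driver_init(void)
	{
		struct request_queue *q;

		/* block grabs my_driver_lock before invoking my_request_fn */
		q = blk_init_queue(my_request_fn, &my_driver_lock);
		if (!q)
			return -ENOMEM;
		/* ... register the disk, etc ... */
		return 0;
	}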

5.2 64 bit sector numbers (sector_t prepares for 64 bit support)

The sector number used in the bio structure has been changed to sector_t,
which could be defined as 64 bit in preparation for 64 bit sector support.

6. Other Changes/Implications

6.1 Partition re-mapping handled by the generic block layer

In 2.5 some of the gendisk/partition related code has been reorganized.
Now the generic block layer performs partition-remapping early and thus
provides drivers with a sector number relative to the whole device, rather
than having to take partition number into account in order to arrive at the
true sector number. The routine blk_partition_remap() is invoked by
generic_make_request even before invoking the queue specific make_request_fn,
so the i/o scheduler also gets to operate on whole disk sector numbers. This
should typically not require changes to block drivers; the driver just never
gets to invoke its own partition sector offset calculations since all bios
sent down are offset from the beginning of the device.


7. A Few Tips on Migration of older drivers

Old-style drivers that just use CURRENT and ignore clustered requests
may not need much change. The generic layer will automatically handle
clustered requests, multi-page bios, etc for the driver.

For a low performance driver or hardware that is PIO driven or just doesn't
support scatter-gather, changes should be minimal too.

The following are some points to keep in mind when converting old drivers
to bio.

Drivers should use elv_next_request to pick up requests and are no longer
supposed to handle looping directly over the request list.
(struct request->queue has been removed)

Now end_that_request_first takes an additional number_of_sectors argument.
It used to always handle just the first buffer_head in a request; now
it will loop and handle as many sectors (on a bio-segment granularity)
as specified.

Now bh->b_end_io is replaced by bio->bi_end_io, but most of the time the
right thing to use is bio_endio(bio) instead.

If the driver is dropping the io_request_lock from its request_fn strategy,
then it just needs to replace that with q->queue_lock instead.

As described in Sec 1.1, drivers can set max sector size, max segment size
etc per queue now. Drivers that used to define their own merge functions
to handle things like this can now just use the blk_queue_* functions at
blk_init_queue time.

Drivers no longer have to map a {partition, sector offset} into the
correct absolute location; this is done by the block layer, so
where a driver received a request like this before:

	rq->rq_dev = mk_kdev(3, 5);	/* /dev/hda5 */
	rq->sector = 0;			/* first sector on hda5 */

it will now see

	rq->rq_dev = mk_kdev(3, 0);	/* /dev/hda */
	rq->sector = 123128;		/* offset from start of disk */

As mentioned, there is no virtual mapping of a bio.
For DMA, this is
not a problem as the driver probably never will need a virtual mapping.
Instead it needs a bus mapping (dma_map_page for a single segment or
dma_map_sg for scatter gather) to be able to ship it to the driver. For
PIO drivers (or drivers that need to revert to PIO transfer once in a
while (IDE for example)), where the CPU is doing the actual data
transfer, a virtual mapping is needed. If the driver supports highmem I/O
(Sec 1.1, (ii)) it needs to use kmap_atomic or similar to temporarily map
a bio into the virtual address space.


8. Prior/Related/Impacted patches

8.1. Earlier kiobuf patches (sct/axboe/chait/hch/mkp)
 - orig kiobuf & raw i/o patches (now in 2.4 tree)
 - direct kiobuf based i/o to devices (no intermediate bh's)
 - page i/o using kiobuf
 - kiobuf splitting for lvm (mkp)
 - elevator support for kiobuf request merging (axboe)
8.2. Zero-copy networking (Dave Miller)
8.3. SGI XFS - pagebuf patches - use of kiobufs
8.4. Multi-page pioent patch for bio (Christoph Hellwig)
8.5. Direct i/o implementation (Andrea Arcangeli) since 2.4.10-pre11
8.6. Async i/o implementation patch (Ben LaHaise)
8.7. EVMS layering design (IBM EVMS team)
8.8. Larger page cache size patch (Ben LaHaise) and
     Large page size (Daniel Phillips)
     => larger contiguous physical memory buffers
8.9. VM reservations patch (Ben LaHaise)
8.10. Write clustering patches ? (Marcelo/Quintela/Riel ?)
8.11. Block device in page cache patch (Andrea Arcangeli) - now in 2.4.10+
8.12. Multiple block-size transfers for faster raw i/o (Shailabh Nagar,
      Badari)
8.13. Priority based i/o scheduler - prepatches (Arjan van de Ven)
8.14. IDE Taskfile i/o patch (Andre Hedrick)
8.15. Multi-page writeout and readahead patches (Andrew Morton)
8.16. Direct i/o patches for 2.5 using kvec and bio (Badari Pulavarty)

9. Other References:

9.1 The Splice I/O Model - Larry McVoy (and subsequent discussions on lkml,
    and Linus' comments - Jan 2001)
9.2 Discussions about kiobuf and bh design on lkml between sct, linus, alan
    et al - Feb-March 2001 (many of the initial thoughts that led to bio were
    brought up in this discussion thread)
9.3 Discussions on mempool on lkml - Dec 2001.