.. SPDX-License-Identifier: GPL-2.0

..
        Dumb style notes to maintain the author's sanity:
        Please try to start sentences on separate lines so that
        sentence changes don't bleed colors in diff.
        Heading decorations are documented in sphinx.rst.

=========================
Supported File Operations
=========================

.. contents:: Table of Contents
Below is a discussion of the high level file operations that iomap
implements.

Buffered I/O
============
Buffered I/O is the default file I/O path in Linux.
File contents are cached in memory ("pagecache") to satisfy reads and
writes.
Dirty cache will be written back to disk at some point that can be
forced via ``fsync`` and variants.

iomap implements nearly all the folio and pagecache management that
filesystems have to implement themselves under the legacy I/O model.
This means that the filesystem need not know the details of allocating,
mapping, managing uptodate and dirty state, or writeback of pagecache
folios.
Under the legacy I/O model, this was managed very inefficiently with
linked lists of buffer heads instead of the per-folio bitmaps that iomap
uses.
Unless the filesystem explicitly opts in to buffer heads, they will not
be used, which makes buffered I/O much more efficient, and the pagecache
maintainer much happier.
``struct address_space_operations``
-----------------------------------

The following iomap functions can be referenced directly from the
address space operations structure:

* ``iomap_dirty_folio``
* ``iomap_release_folio``
* ``iomap_invalidate_folio``
* ``iomap_is_partially_uptodate``

The following address space operations can be wrapped easily:

* ``read_folio``
* ``readahead``
* ``writepages``
* ``bmap``
* ``swap_activate``
``struct iomap_folio_ops``
--------------------------

The ``->iomap_begin`` function for pagecache operations may set the
``struct iomap::folio_ops`` field to an ops structure to override
default behaviors of iomap:

.. code-block:: c

 struct iomap_folio_ops {
     struct folio *(*get_folio)(struct iomap_iter *iter, loff_t pos,
                                unsigned len);
     void (*put_folio)(struct inode *inode, loff_t pos, unsigned copied,
                       struct folio *folio);
     bool (*iomap_valid)(struct inode *inode, const struct iomap *iomap);
 };
iomap calls these functions:

- ``get_folio``: Called to allocate and return an active reference to
  a locked folio prior to starting a write.
  If this function is not provided, iomap will call
  ``iomap_get_folio``.
  This could be used to `set up per-folio filesystem state
  <https://lore.kernel.org/all/20190429220934.10415-5-agruenba@redhat.com/>`_
  for the write.

- ``put_folio``: Called to unlock and put a folio after a pagecache
  operation completes.
  If this function is not provided, iomap will ``folio_unlock`` and
  ``folio_put`` on its own.
  This could be used to `commit per-folio filesystem state
  <https://lore.kernel.org/all/20180619164137.13720-6-hch@lst.de/>`_
  that was set up by ``->get_folio``.
- ``iomap_valid``: The filesystem may not hold locks between
  ``->iomap_begin`` and ``->iomap_end`` because pagecache operations
  can take folio locks, fault on userspace pages, initiate writeback
  for memory reclamation, or engage in other time-consuming actions.
  If a file's space mapping data are mutable, it is possible that the
  mapping for a particular pagecache folio can `change in the time it
  takes
  <https://lore.kernel.org/all/20221123055812.747923-8-david@fromorbit.com/>`_
  to allocate, install, and lock that folio.

  For the pagecache, races can happen if writeback doesn't take
  ``i_rwsem`` or ``invalidate_lock`` and updates mapping information.
  Races can also happen if the filesystem allows concurrent writes.
  For such files, the mapping *must* be revalidated after the folio
  lock has been taken so that iomap can manage the folio correctly.

  fsdax does not need this revalidation because there's no writeback
  and no support for unwritten extents.

  Filesystems subject to this kind of race must provide a
  ``->iomap_valid`` function to decide if the mapping is still valid.
  If the mapping is not valid, the mapping will be sampled again.

  To support making the validity decision, the filesystem's
  ``->iomap_begin`` function may set ``struct iomap::validity_cookie``
  at the same time that it populates the other iomap fields.
  A simple validation cookie implementation is a sequence counter.
  If the filesystem bumps the sequence counter every time it modifies
  the inode's extent map, it can be placed in the
  ``struct iomap::validity_cookie`` during ``->iomap_begin``.
  If the value in the cookie is found to be different from the value
  the filesystem holds when the mapping is passed back to
  ``->iomap_valid``, then the iomap should be considered stale and the
  validation failed.
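As an illustration, the sequence-counter cookie scheme described above
can be modeled in plain C.
This is a hypothetical userspace sketch, not kernel code; all the
``demo_*`` names are invented for this example.

.. code-block:: c

 #include <stdint.h>
 #include <stdbool.h>

 /* Hypothetical model of an inode's extent map sequence counter. */
 struct demo_inode {
     uint64_t map_seq;           /* bumped on every extent map change */
 };

 struct demo_iomap {
     uint64_t validity_cookie;   /* sampled during ->iomap_begin */
 };

 /* ->iomap_begin: sample the counter alongside the mapping. */
 static void demo_iomap_begin(struct demo_inode *ip, struct demo_iomap *iomap)
 {
     iomap->validity_cookie = ip->map_seq;
 }

 /* Any extent map modification bumps the counter. */
 static void demo_modify_extent_map(struct demo_inode *ip)
 {
     ip->map_seq++;
 }

 /* ->iomap_valid: the mapping is stale if the counter has moved. */
 static bool demo_iomap_valid(struct demo_inode *ip,
                              const struct demo_iomap *iomap)
 {
     return iomap->validity_cookie == ip->map_seq;
 }

A mapping sampled before a concurrent extent map change thus fails
revalidation after the folio lock is finally taken.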
These ``struct kiocb`` flags are significant for buffered I/O with iomap:

* ``IOCB_NOWAIT``: Turns on ``IOMAP_NOWAIT``.
Internal per-Folio State
------------------------

If the fsblock size matches the size of a pagecache folio, it is assumed
that all disk I/O operations will operate on the entire folio.
The uptodate (memory contents are at least as new as what's on disk) and
dirty (memory contents are newer than what's on disk) status of the
folio are all that's needed for this case.

If the fsblock size is less than the size of a pagecache folio, iomap
tracks the per-fsblock uptodate and dirty state itself.
This enables iomap to handle both "bs < ps" `filesystems
<https://lore.kernel.org/all/20230725122932.144426-1-ritesh.list@gmail.com/>`_
and large folios in the pagecache.

iomap internally tracks two state bits per fsblock:

* ``uptodate``: iomap will try to keep folios fully up to date.
  If there are read(ahead) errors, those fsblocks will not be marked
  uptodate.
  The folio itself will be marked uptodate when all fsblocks within the
  folio are uptodate.

* ``dirty``: iomap will set the per-block dirty state when programs
  write to the file.
  The folio itself will be marked dirty when any fsblock within the
  folio is dirty.

iomap also tracks the number of read and write disk I/Os that are in
flight.
This structure is much lighter weight than ``struct buffer_head``
because there is only one per folio, and the per-fsblock overhead is two
bits.
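The two-bits-per-fsblock bookkeeping can be sketched with a pair of
bitmaps.
This is an illustrative userspace model under invented ``demo_*``
names; the folio size and block count are assumptions for the example.

.. code-block:: c

 #include <stdint.h>
 #include <stdbool.h>

 /* Illustrative model: a folio backed by 16 fsblocks. */
 #define DEMO_BLOCKS_PER_FOLIO 16

 struct demo_folio_state {
     uint32_t uptodate;  /* bit N set: fsblock N is at least as new as disk */
     uint32_t dirty;     /* bit N set: fsblock N is newer than disk */
 };

 static void demo_mark_uptodate(struct demo_folio_state *s, unsigned block)
 {
     s->uptodate |= 1u << block;
 }

 static void demo_mark_dirty(struct demo_folio_state *s, unsigned block)
 {
     s->dirty |= 1u << block;
 }

 /* The folio is uptodate only when every fsblock within it is uptodate. */
 static bool demo_folio_uptodate(const struct demo_folio_state *s)
 {
     return s->uptodate == (1u << DEMO_BLOCKS_PER_FOLIO) - 1;
 }

 /* The folio is dirty when any fsblock within it is dirty. */
 static bool demo_folio_dirty(const struct demo_folio_state *s)
 {
     return s->dirty != 0;
 }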
Filesystems wishing to turn on large folios in the pagecache should call
``mapping_set_large_folios`` when initializing the incore inode.
Buffered Readahead and Reads
----------------------------

The ``iomap_readahead`` function initiates readahead to the pagecache.
The ``iomap_read_folio`` function reads one folio's worth of data into
the pagecache.
The ``flags`` argument to ``->iomap_begin`` will be set to zero.
The pagecache takes whatever locks it needs before calling the
filesystem.
Buffered Writes
---------------

The ``iomap_file_buffered_write`` function writes an ``iocb`` to the
pagecache.
``IOMAP_WRITE`` or ``IOMAP_WRITE`` | ``IOMAP_NOWAIT`` will be passed as
the ``flags`` argument to ``->iomap_begin``.
Callers commonly take ``i_rwsem`` in either shared or exclusive mode
before calling this function.

The ``iomap_page_mkwrite`` function handles a write fault to a folio in
the pagecache.
``IOMAP_WRITE | IOMAP_FAULT`` will be passed as the ``flags`` argument
to ``->iomap_begin``.
Callers commonly take the mmap ``invalidate_lock`` in shared or
exclusive mode before calling this function.
Buffered Write Failures
~~~~~~~~~~~~~~~~~~~~~~~

After a short write to the pagecache, the areas not written will not
become marked dirty.
The filesystem must arrange to `cancel
<https://lore.kernel.org/all/20221123055812.747923-6-david@fromorbit.com/>`_
such `reservations
<https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/>`_
because writeback will not consume the reservation.
The ``iomap_write_delalloc_release`` function can be called from a
``->iomap_end`` function to find all the clean areas of the folios
caching a fresh (``IOMAP_F_NEW``) delalloc mapping.
It takes the ``invalidate_lock``.

The filesystem must supply a function ``punch`` to be called for
each file range in this state.
This function must *only* remove delayed allocation reservations, in
case another thread racing with the current thread writes successfully
to the same region and triggers writeback to flush the dirty data out to
disk.
Zeroing for File Operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Filesystems can call ``iomap_zero_range`` to perform zeroing of the
pagecache for non-truncation file operations that are not aligned to
the fsblock size.
``IOMAP_ZERO`` will be passed as the ``flags`` argument to
``->iomap_begin``.
Callers typically hold ``i_rwsem`` and ``invalidate_lock`` in exclusive
mode before calling this function.
Unsharing Reflinked File Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Filesystems can call ``iomap_file_unshare`` to force a file sharing
storage with another file to preemptively copy the shared data to newly
allocated storage.
``IOMAP_WRITE | IOMAP_UNSHARE`` will be passed as the ``flags`` argument
to ``->iomap_begin``.
Callers typically hold ``i_rwsem`` and ``invalidate_lock`` in exclusive
mode before calling this function.
Truncation
~~~~~~~~~~

Filesystems can call ``iomap_truncate_page`` to zero the bytes in the
pagecache from EOF to the end of the fsblock during a file truncation
operation.
``truncate_setsize`` or ``truncate_pagecache`` will take care of
everything after the EOF block.
``IOMAP_ZERO`` will be passed as the ``flags`` argument to
``->iomap_begin``.
Callers typically hold ``i_rwsem`` and ``invalidate_lock`` in exclusive
mode before calling this function.
Pagecache Writeback
-------------------

Filesystems can call ``iomap_writepages`` to respond to a request to
write dirty pagecache folios to disk.
The ``mapping`` and ``wbc`` parameters should be passed unchanged.
The ``wpc`` pointer should be allocated by the filesystem and must
be initialized to zero.

The pagecache will lock each folio before trying to schedule it for
writeback.
It does not lock ``i_rwsem`` or ``invalidate_lock``.

The dirty bit will be cleared for all folios run through the
``->map_blocks`` machinery described below even if the writeback fails.
This is to prevent dirty folio clots when storage devices fail; an
``-EIO`` is recorded for userspace to collect via ``fsync``.

The ``ops`` structure must be specified and is as follows:
``struct iomap_writeback_ops``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: c

 struct iomap_writeback_ops {
     int (*map_blocks)(struct iomap_writepage_ctx *wpc, struct inode *inode,
                       loff_t offset, unsigned len);
     int (*prepare_ioend)(struct iomap_ioend *ioend, int status);
     void (*discard_folio)(struct folio *folio, loff_t pos);
 };
The fields are as follows:

- ``map_blocks``: Sets ``wpc->iomap`` to the space mapping of the file
  range (in bytes) given by ``offset`` and ``len``.
  iomap calls this function for each dirty fs block in each dirty folio,
  though it will `reuse mappings
  <https://lore.kernel.org/all/20231207072710.176093-15-hch@lst.de/>`_
  for runs of contiguous dirty fsblocks within a folio.
  Do not return ``IOMAP_INLINE`` mappings here; the ``->iomap_end``
  function must deal with persisting written data.
  Do not return ``IOMAP_DELALLOC`` mappings here; iomap currently
  requires mapping to allocated space.
  Filesystems can skip a potentially expensive mapping lookup if the
  mappings have not changed.
  This revalidation must be open-coded by the filesystem; it is
  unclear if ``iomap::validity_cookie`` can be reused for this
  purpose.
  This function must be supplied by the filesystem.
- ``prepare_ioend``: Enables filesystems to transform the writeback
  ioend or perform any other preparatory work before the writeback I/O
  is submitted.
  This might include pre-write space accounting updates, or installing
  a custom ``->bi_end_io`` function for internal purposes, such as
  deferring the ioend completion to a workqueue to run metadata update
  transactions from process context.
  This function is optional.

- ``discard_folio``: iomap calls this function after ``->map_blocks``
  fails to schedule I/O for any part of a dirty folio.
  The function should throw away any reservations that may have been
  made for the write.
  The folio will be marked clean and an ``-EIO`` recorded in the
  pagecache.
  Filesystems can use this callback to `remove
  <https://lore.kernel.org/all/20201029163313.1766967-1-bfoster@redhat.com/>`_
  delalloc reservations to avoid having delalloc reservations for
  clean pagecache.
  This function is optional.
Pagecache Writeback Completion
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To handle the bookkeeping that must happen after disk I/O for writeback
completes, iomap creates chains of ``struct iomap_ioend`` objects that
wrap the ``bio`` that is used to write pagecache data to disk.
By default, iomap finishes writeback ioends by clearing the writeback
bit on the folios attached to the ``ioend``.
If the write failed, it will also set the error bits on the folios and
the address space.
This can happen in interrupt or process context, depending on the
storage device.

Filesystems that need to update internal bookkeeping (e.g. unwritten
extent conversions) should provide a ``->prepare_ioend`` function to
set ``struct iomap_ioend::bio::bi_end_io`` to its own function.
This function should call ``iomap_finish_ioends`` after finishing its
own work (e.g. unwritten extent conversion).
Some filesystems may wish to `amortize the cost of running metadata
transactions
<https://lore.kernel.org/all/20220120034733.221737-1-david@fromorbit.com/>`_
for post-writeback updates by batching them.
They may also require transactions to run from process context, which
implies punting batches to a workqueue.
iomap ioends contain a ``list_head`` to enable batching.

Given a batch of ioends, iomap has a few helpers to assist with
finishing them:

* ``iomap_sort_ioends``: Sort all the ioends in the list by file
  offset.

* ``iomap_ioend_try_merge``: Given an ioend that is not in any list and
  a separate list of sorted ioends, merge as many of the ioends from
  the head of the list into the given ioend.
  ioends can only be merged if the file range and storage addresses are
  contiguous; the unwritten and shared status are the same; and the
  write I/O outcome is the same.
  The merged ioends become their own list.

* ``iomap_finish_ioends``: Finish an ioend that possibly has other
  ioends linked to it.
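The merge criteria above can be sketched as a predicate in plain C.
This is an illustrative userspace model; the ``demo_*`` structure and
field names are invented, and a 512-byte sector size is assumed.

.. code-block:: c

 #include <stdint.h>
 #include <stdbool.h>

 /* Illustrative ioend model; the field names are hypothetical. */
 struct demo_ioend {
     uint64_t offset;    /* file offset, in bytes */
     uint64_t size;      /* length, in bytes */
     uint64_t sector;    /* storage address, in 512-byte sectors */
     bool unwritten;     /* needs unwritten extent conversion */
     bool shared;        /* needs COW remapping */
     int error;          /* write I/O outcome */
 };

 /*
  * Two ioends can merge only if the file ranges and storage addresses
  * are contiguous, the unwritten and shared status are the same, and
  * the write I/O outcome is the same.
  */
 static bool demo_ioend_can_merge(const struct demo_ioend *a,
                                  const struct demo_ioend *b)
 {
     if (a->offset + a->size != b->offset)
         return false;
     if (a->sector + (a->size >> 9) != b->sector)
         return false;
     if (a->unwritten != b->unwritten || a->shared != b->shared)
         return false;
     return a->error == b->error;
 }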
Direct I/O
==========

In Linux, direct I/O is defined as file I/O that is issued directly to
storage, bypassing the pagecache.
The ``iomap_dio_rw`` function implements O_DIRECT (direct I/O) reads and
writes for files.

.. code-block:: c

 ssize_t iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
                      const struct iomap_ops *ops,
                      const struct iomap_dio_ops *dops,
                      unsigned int dio_flags, void *private,
                      size_t done_before);

The filesystem can provide the ``dops`` parameter if it needs to perform
extra work before or after the I/O is issued to storage.
The ``done_before`` parameter tells iomap how much of the request has
already been transferred.
It is used to continue a request asynchronously when `part of the
request
<https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c03098d4b9ad76bca2966a8769dcfe59f7f85103>`_
has already been completed synchronously.

The ``done_before`` parameter should be set if writes for the ``iocb``
have been initiated prior to the call.
The direction of the I/O is determined from the ``iocb`` passed in.
The ``dio_flags`` argument can be set to any combination of the
following values:

* ``IOMAP_DIO_FORCE_WAIT``: Wait for the I/O to complete even if the
  kiocb is not synchronous.

* ``IOMAP_DIO_OVERWRITE_ONLY``: Perform a pure overwrite for this range
  or fail with ``-EAGAIN``.
  This can be used by filesystems with complex unaligned I/O
  write paths to provide an optimised fast path for unaligned writes.
  If a pure overwrite can be performed, then serialisation against
  other I/Os to the same filesystem block(s) is unnecessary as there is
  no risk of stale data exposure or data loss.
  If a pure overwrite cannot be performed, then the filesystem can
  perform the serialisation steps needed to provide exclusive access
  to the unaligned I/O range so that it can perform allocation and
  sub-block zeroing safely.
  Filesystems can use this flag to try to reduce locking contention,
  but a lot of `detailed checking
  <https://lore.kernel.org/linux-ext4/20230314130759.642710-1-bfoster@redhat.com/>`_
  is required to do it `correctly
  <https://lore.kernel.org/linux-ext4/20230810165559.946222-1-bfoster@redhat.com/>`_.
* ``IOMAP_DIO_PARTIAL``: If a page fault occurs, return whatever
  progress has already been made.
  The caller may deal with the page fault and retry the operation.
  If the caller decides to retry the operation, it should pass the
  accumulated return values of all previous calls as the
  ``done_before`` parameter to the next call.
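The ``IOMAP_DIO_PARTIAL`` retry loop can be sketched in plain C.
This is a hypothetical userspace model; ``fake_dio_rw`` is an invented
stand-in that transfers at most ``chunk`` bytes per call, mimicking a
request that repeatedly stops at page faults.

.. code-block:: c

 #include <stddef.h>

 /* Stand-in for iomap_dio_rw: transfer at most `chunk` bytes per call. */
 static long fake_dio_rw(size_t total, size_t done_before, size_t chunk)
 {
     size_t remaining = total - done_before;

     return (long)(remaining < chunk ? remaining : chunk);
 }

 /*
  * The caller retries after each fault, feeding the accumulated
  * progress back in as the done_before parameter of the next call.
  */
 static size_t demo_partial_io(size_t total, size_t chunk)
 {
     size_t done_before = 0;

     while (done_before < total) {
         long ret = fake_dio_rw(total, done_before, chunk);

         if (ret <= 0)
             break;
         /* ...fault in the next part of the user buffer here... */
         done_before += (size_t)ret;
     }
     return done_before;
 }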
These ``struct kiocb`` flags are significant for direct I/O with iomap:

* ``IOCB_NOWAIT``: Turns on ``IOMAP_NOWAIT``.

* ``IOCB_SYNC``: Ensure that the device has persisted data to disk
  before completing the call.
  In the case of pure overwrites, the I/O may be issued with FUA
  enabled.

* ``IOCB_HIPRI``: Poll for I/O completion instead of waiting for an
  interrupt.
  Only meaningful for asynchronous I/O, and only if the entire I/O can
  be issued as a single ``struct bio``.

* ``IOCB_DIO_CALLER_COMP``: Try to run I/O completion from the caller's
  process context.
  See ``linux/fs.h`` for more details.
Filesystems should call ``iomap_dio_rw`` from ``->read_iter`` and
``->write_iter``, and set ``FMODE_CAN_ODIRECT`` in the ``->open``
function for the file.
They should not set ``->direct_IO``, which is deprecated.

If a filesystem wishes to perform its own work before direct I/O
completion, it should call ``__iomap_dio_rw``.
If its return value is not an error pointer or a NULL pointer, the
filesystem should pass the return value to ``iomap_dio_complete`` after
finishing its internal work.
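The "submit, do private work, then complete" pattern can be sketched in
plain C.
This is an illustrative userspace model: ``demo_is_err`` mimics the
kernel's error-pointer convention, and the ``fake_*`` functions are
invented stand-ins for ``__iomap_dio_rw`` and ``iomap_dio_complete``.

.. code-block:: c

 #include <stdint.h>

 /* Hypothetical model of the kernel's error-pointer convention. */
 #define DEMO_MAX_ERRNO 4095UL

 static int demo_is_err(const void *ptr)
 {
     return (uintptr_t)ptr >= (uintptr_t)-DEMO_MAX_ERRNO;
 }

 struct demo_dio {
     long ret;                        /* bytes transferred */
 };

 /* Stand-in for __iomap_dio_rw: may return a dio, NULL, or an error ptr. */
 static struct demo_dio *fake_iomap_dio_rw(struct demo_dio *dio)
 {
     return dio;
 }

 /* Stand-in for iomap_dio_complete. */
 static long fake_iomap_dio_complete(struct demo_dio *dio)
 {
     return dio->ret;
 }

 static long demo_write_path(struct demo_dio *dio)
 {
     struct demo_dio *ret = fake_iomap_dio_rw(dio);

     if (!ret)
         return 0;
     if (demo_is_err(ret))
         return (long)(intptr_t)ret;  /* PTR_ERR-style decode */

     /* ...filesystem-private completion work would go here... */
     return fake_iomap_dio_complete(ret);
 }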
Return Values
-------------

``iomap_dio_rw`` can return one of the following:

* A non-negative number of bytes transferred.

* ``-ENOTBLK``: Fall back to buffered I/O.
  iomap itself will return this value if it cannot invalidate the page
  cache before issuing the I/O to storage.
  The ``->iomap_begin`` or ``->iomap_end`` functions may also return
  this value.

* ``-EIOCBQUEUED``: The asynchronous direct I/O request has been
  queued and will be completed separately.

* Any of the other negative error codes.
A direct I/O read initiates a read I/O from the storage device to the
caller's buffer.
Dirty parts of the pagecache are flushed to storage before initiating
the read I/O.
The ``flags`` value for ``->iomap_begin`` will be ``IOMAP_DIRECT`` with
any combination of the following enhancements:

* ``IOMAP_NOWAIT``, as defined previously.

Callers commonly hold ``i_rwsem`` in shared mode before calling this
function.
A direct I/O write initiates a write I/O to the storage device from the
caller's buffer.
Dirty parts of the pagecache are flushed to storage before initiating
the write I/O.
The pagecache is invalidated both before and after the write I/O.
The ``flags`` value for ``->iomap_begin`` will be ``IOMAP_DIRECT |
IOMAP_WRITE`` with any combination of the following enhancements:

* ``IOMAP_NOWAIT``, as defined previously.

* ``IOMAP_OVERWRITE_ONLY``: Allocating blocks and zeroing partial
  blocks is not allowed.
  The entire file range must map to a single written or unwritten
  extent.
  The file I/O range must be aligned to the filesystem block size
  if the mapping is unwritten and the filesystem cannot handle zeroing
  the unaligned regions without exposing stale contents.
* ``IOMAP_ATOMIC``: This write is being issued with torn-write
  protection.
  Only a single bio can be created for the write, and the write must
  not be split into multiple I/O requests, i.e. flag REQ_ATOMIC must be
  set.
  The file range to write must be aligned to satisfy the requirements
  of both the filesystem and the underlying block device's atomic
  commit capabilities.
  If filesystem metadata updates are required (e.g. unwritten extent
  conversion or copy on write), all updates for the entire file range
  must be committed atomically as well.
  Only one space mapping is allowed per untorn write.
  Untorn writes must be aligned to, and must not be longer than, a
  single filesystem block.

Callers commonly hold ``i_rwsem`` in shared or exclusive mode before
calling this function.
``struct iomap_dio_ops:``
-------------------------

.. code-block:: c

 struct iomap_dio_ops {
     void (*submit_io)(const struct iomap_iter *iter, struct bio *bio,
                       loff_t file_offset);
     int (*end_io)(struct kiocb *iocb, ssize_t size, int error,
                   unsigned flags);
     struct bio_set *bio_set;
 };
The fields of this structure are as follows:

- ``submit_io``: iomap calls this function when it has constructed a
  ``struct bio`` object for the I/O requested, and wishes to submit it
  to storage.
  If no function is provided, ``submit_bio`` will be called directly.
  Filesystems that would like to perform additional work before
  submission (e.g. data replication for btrfs) should implement this
  function.

- ``end_io``: This is called after the ``struct bio`` completes.
  This function should perform post-write conversions of unwritten
  extent mappings, handle write failures, etc.
  The ``flags`` argument may be set to a combination of the following:
  * ``IOMAP_DIO_UNWRITTEN``: The mapping was unwritten, so the ioend
    should mark the extent as written.

  * ``IOMAP_DIO_COW``: Writing to the space in the mapping required a
    copy on write operation, so the ioend should switch mappings.

- ``bio_set``: This allows the filesystem to provide a custom bio_set
  for allocating direct I/O bios.
  This enables filesystems to `stash additional per-bio information
  <https://lore.kernel.org/all/20220505201115.937837-3-hch@lst.de/>`_
  for private use.
  If this field is NULL, generic ``struct bio`` objects will be used.
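The ``->end_io`` flag handling described above can be sketched as a
small dispatcher in plain C.
This is an illustrative userspace model; the ``DEMO_*`` flag values and
action names are invented, not the values from ``linux/iomap.h``.

.. code-block:: c

 /* Illustrative flag values; the real ones live in linux/iomap.h. */
 #define DEMO_DIO_UNWRITTEN (1 << 0)
 #define DEMO_DIO_COW       (1 << 1)

 enum demo_action { DEMO_NONE, DEMO_CONVERT, DEMO_REMAP };

 /*
  * Sketch of an ->end_io dispatcher: switch mappings after a copy on
  * write, or mark the extent written after an unwritten-extent write.
  */
 static enum demo_action demo_end_io(int error, unsigned flags)
 {
     if (error)
         return DEMO_NONE;        /* nothing to convert on failure */
     if (flags & DEMO_DIO_COW)
         return DEMO_REMAP;
     if (flags & DEMO_DIO_UNWRITTEN)
         return DEMO_CONVERT;
     return DEMO_NONE;
 }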
Filesystems that want to perform extra work after an I/O completion
should set a custom ``->bi_end_io`` function via ``->submit_io``.
Afterwards, the custom endio function must call
``iomap_dio_bio_end_io`` to finish the direct I/O.
DAX I/O
=======

Some storage devices can be directly mapped as memory.
These devices support a new access mode known as "fsdax" that allows
loads and stores through the CPU and memory controller.

A fsdax read performs a memcpy from storage device to the caller's
buffer.
The ``flags`` value for ``->iomap_begin`` will be ``IOMAP_DAX`` with any
combination of the following enhancements:

* ``IOMAP_NOWAIT``, as defined previously.

Callers commonly hold ``i_rwsem`` in shared mode before calling this
function.
A fsdax write initiates a memcpy to the storage device from the caller's
buffer.
The ``flags`` value for ``->iomap_begin`` will be ``IOMAP_DAX |
IOMAP_WRITE`` with any combination of the following enhancements:

* ``IOMAP_NOWAIT``, as defined previously.

* ``IOMAP_OVERWRITE_ONLY``: The caller requires a pure overwrite to be
  performed from this mapping.
  This requires the filesystem extent mapping to already exist as an
  ``IOMAP_MAPPED`` type and span the entire range of the write I/O
  request.
  If the filesystem cannot map this request in a way that allows the
  iomap infrastructure to perform a pure overwrite, it must fail the
  mapping operation with ``-EAGAIN``.

Callers commonly hold ``i_rwsem`` in exclusive mode before calling this
function.
The ``dax_iomap_fault`` function handles read and write faults to fsdax
storage.
For a read fault, ``IOMAP_DAX | IOMAP_FAULT`` will be passed as the
``flags`` argument to ``->iomap_begin``.
For a write fault, ``IOMAP_DAX | IOMAP_FAULT | IOMAP_WRITE`` will be
passed as the ``flags`` argument to ``->iomap_begin``.

Callers commonly hold the same locks as they do to call their iomap
pagecache counterparts.
fsdax Truncation, fallocate, and Unsharing
------------------------------------------

For fsdax files, the following functions are provided to replace their
iomap pagecache I/O counterparts.
The ``flags`` argument to ``->iomap_begin`` is the same as for the
pagecache counterparts, with ``IOMAP_DAX`` added.

* ``dax_file_unshare``
* ``dax_zero_range``
* ``dax_truncate_page``

Callers commonly hold the same locks as they do to call their iomap
pagecache counterparts.

Filesystems implementing the ``FIDEDUPERANGE`` ioctl must call the
``dax_remap_file_range_prep`` function with their own iomap read ops.
Seeking
=======

iomap implements the two iterating whence modes of the ``llseek`` system
call.

The ``iomap_seek_data`` function implements the SEEK_DATA "whence" value
for ``llseek``.
``IOMAP_REPORT`` will be passed as the ``flags`` argument to
``->iomap_begin``.

For unwritten mappings, the pagecache will be searched.
Regions of the pagecache with a folio mapped and uptodate fsblocks
within those folios will be reported as data areas.

Callers commonly hold ``i_rwsem`` in shared mode before calling this
function.
The ``iomap_seek_hole`` function implements the SEEK_HOLE "whence" value
for ``llseek``.
``IOMAP_REPORT`` will be passed as the ``flags`` argument to
``->iomap_begin``.

For unwritten mappings, the pagecache will be searched.
Regions of the pagecache with no folio mapped, or a !uptodate fsblock
within a folio, will be reported as sparse hole areas.

Callers commonly hold ``i_rwsem`` in shared mode before calling this
function.
Swap File Activation
====================

The ``iomap_swapfile_activate`` function finds all the base-page aligned
regions in a file and sets them up as swap space.
The file will be ``fsync()``'d before activation.
``IOMAP_REPORT`` will be passed as the ``flags`` argument to
``->iomap_begin``.
All mappings must be mapped or unwritten; they cannot be dirty or
shared, and cannot span multiple block devices.
Callers must hold ``i_rwsem`` in exclusive mode; this is already
provided by ``swapon``.
File Space Mapping Reporting
============================

iomap implements two of the file space mapping system calls.

The ``iomap_fiemap`` function exports file extent mappings to userspace
in the format specified by the ``FS_IOC_FIEMAP`` ioctl.
``IOMAP_REPORT`` will be passed as the ``flags`` argument to
``->iomap_begin``.
Callers commonly hold ``i_rwsem`` in shared mode before calling this
function.
``iomap_bmap`` implements FIBMAP.
The calling conventions are the same as for FIEMAP.
This function is only provided to maintain compatibility for filesystems
that implemented FIBMAP prior to conversion.
This ioctl is deprecated; do **not** add a FIBMAP implementation to
filesystems that do not have it.
Callers should probably hold ``i_rwsem`` in shared mode before calling
this function, but this is unclear.