Introduction
============

This document describes a collection of device-mapper targets that
between them implement thin-provisioning and snapshots.

The main highlight of this implementation, compared to the previous
implementation of snapshots, is that it allows many virtual devices to
be stored on the same data volume. This simplifies administration and
allows the sharing of data between volumes, thus reducing disk usage.

Another significant feature is support for an arbitrary depth of
recursive snapshots (snapshots of snapshots of snapshots ...). The
previous implementation of snapshots did this by chaining together
lookup tables, and so performance was O(depth). This new
implementation uses a single data structure to avoid this degradation
with depth. Fragmentation may still be an issue, however, in some
scenarios.

Metadata is stored on a separate device from data, giving the
administrator some freedom, for example to:

- Improve metadata resilience by storing metadata on a mirrored volume
  but data on a non-mirrored one.

- Improve performance by storing the metadata on SSD.

Status
======

These targets are very much still in the EXPERIMENTAL state. Please
do not yet rely on them in production. But do experiment and offer us
feedback. Different use cases will have different performance
characteristics, for example due to fragmentation of the data volume.

If you find this software is not performing as expected please mail
dm-devel@redhat.com with details and we'll try our best to improve
things for you.

Userspace tools for checking and repairing the metadata are under
development.

Cookbook
========

This section describes some quick recipes for using thin provisioning.
They use the dmsetup program to control the device-mapper driver
directly. End users will be advised to use a higher-level volume
manager such as LVM2 once support has been added.

Pool device
-----------

The pool device ties together the metadata volume and the data volume.
It maps I/O linearly to the data volume and updates the metadata via
two mechanisms:

- Function calls from the thin targets

- Device-mapper 'messages' from userspace which control the creation of new
  virtual devices amongst other things.

Setting up a fresh pool device
------------------------------

Setting up a pool device requires a valid metadata device and a
data device. If you do not have an existing metadata device you can
make one by zeroing the first 4k to indicate empty metadata.

    dd if=/dev/zero of=$metadata_dev bs=4096 count=1

The amount of metadata you need will vary according to how many blocks
are shared between thin devices (i.e. through snapshots). If you have
less sharing than average you'll need a larger-than-average metadata device.

As a guide, we suggest you calculate the number of bytes to use in the
metadata device as 48 * $data_dev_size / $data_block_size but round it up
to 2MB if the answer is smaller. If you're creating large numbers of
snapshots which are recording large amounts of change, you may find you
need to increase this.

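As a minimal sketch of this calculation in shell (assuming $data_dev
is the data device and that both sizes are measured in 512-byte
sectors, as in the pool table below):

    data_dev_size=$(blockdev --getsz $data_dev)   # data device size in 512-byte sectors
    data_block_size=1024                          # chosen block size in sectors (512KB)
    meta_bytes=$(( 48 * data_dev_size / data_block_size ))
    min_bytes=$(( 2 * 1024 * 1024 ))              # round up to 2MB if smaller
    [ $meta_bytes -lt $min_bytes ] && meta_bytes=$min_bytes
    echo $meta_bytes
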
The largest size supported is 16GB: if the device is larger,
a warning will be issued and the excess space will not be used.

Reloading a pool table
----------------------

You may reload a pool's table; indeed, this is how the pool is resized
if it runs out of space. (N.B. While specifying a different metadata
device when reloading is not forbidden at the moment, things will go
wrong if it does not route I/O to exactly the same on-disk location as
previously.)

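As a sketch, growing the pool's data device might look like this
(assuming the pool defined in the next section, with $data_dev already
extended to 40971520 sectors):

    dmsetup suspend pool
    dmsetup reload pool --table "0 40971520 thin-pool $metadata_dev $data_dev \
                                 $data_block_size $low_water_mark"
    dmsetup resume pool
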
Using an existing pool device
-----------------------------

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark"

$data_block_size gives the smallest unit of disk space that can be
allocated at a time, expressed in units of 512-byte sectors.
$data_block_size must be between 128 (64KB) and 2097152 (1GB) and a
multiple of 128 (64KB). $data_block_size cannot be changed after the
thin-pool is created. People primarily interested in thin provisioning
may want to use a value such as 1024 (512KB). People doing lots of
snapshotting may want a smaller value such as 128 (64KB). If you are
not zeroing newly-allocated data, a larger $data_block_size in the
region of 256000 (128MB) is suggested.

$low_water_mark is expressed in blocks of size $data_block_size. If
free space on the data device drops below this level then a dm event
will be triggered, which a userspace daemon should catch, allowing it to
extend the pool device. Only one such event will be sent.

No special event is triggered if a just-resumed device's free space is below
the low water mark. However, resuming a device always triggers an
event; a userspace daemon should verify that free space exceeds the low
water mark when handling this event.

A low water mark for the metadata device is maintained in the kernel and
will trigger a dm event if free space on the metadata device drops below
it.

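As a sketch of how a userspace daemon might react to such an event,
the free-block counts can be read back from the pool's status line
(the field layout is documented in the Reference section below;
'pool' is the cookbook's device name):

    dmsetup status pool | awk -v low=$low_water_mark '{
        split($6, d, "/")    # <used data blocks>/<total data blocks>
        if (d[2] - d[1] < low)
            print "data space below low water mark: extend the pool"
    }'
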
Updating on-disk metadata
-------------------------

On-disk metadata is committed every time a FLUSH or FUA bio is written.
If no such requests are made then commits will occur every second. This
means the thin-provisioning target behaves like a physical disk that has
a volatile write cache. If power is lost you may lose some recent
writes. The metadata should always be consistent in spite of any crash.

If data space is exhausted the pool will either error or queue IO
according to the configuration (see: error_if_no_space). If metadata
space is exhausted or a metadata operation fails, the pool will error IO
until the pool is taken offline and repair is performed to 1) fix any
potential inconsistencies and 2) clear the flag that imposes repair.
Once the pool's metadata device is repaired it may be resized, which
will allow the pool to return to normal operation. Note that if a pool
is flagged as needing repair, the pool's data and metadata devices
cannot be resized until repair is performed. It should also be noted
that when the pool's metadata space is exhausted the current metadata
transaction is aborted. Given that the pool will cache IO whose
completion may have already been acknowledged to upper IO layers
(e.g. filesystem) it is strongly suggested that consistency checks
(e.g. fsck) be performed on those layers when repair of the pool is
required.

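The userspace checking/repair tools were noted above as being under
development; assuming the thin_check and thin_repair tools from the
thin-provisioning-tools package, an offline repair pass might look like
this sketch ($spare_dev is a hypothetical placeholder for a scratch
metadata device):

    dmsetup remove pool                          # take the pool offline
    thin_check $metadata_dev                     # report any inconsistencies
    thin_repair -i $metadata_dev -o $spare_dev   # write repaired metadata to the spare

The repaired device can then be used as the metadata device when the
pool table is reloaded.
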
Thin provisioning
-----------------

i) Creating a new thinly-provisioned volume.

  To create a new thinly-provisioned volume you must send a message to an
  active pool device, /dev/mapper/pool in this example.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

  Here '0' is an identifier for the volume, a 24-bit number. It's up
  to the caller to allocate and manage these identifiers. If the
  identifier is already in use, the message will fail with -EEXIST.

ii) Using a thinly-provisioned volume.

  Thinly-provisioned volumes are activated using the 'thin' target:

    dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"

  The last parameter is the identifier for the thinp device.

Internal snapshots
------------------

i) Creating an internal snapshot.

  Snapshots are created with another message to the pool.

  N.B. If the origin device that you wish to snapshot is active, you
  must suspend it before creating the snapshot to avoid corruption.
  This is NOT enforced at the moment, so please be careful!

    dmsetup suspend /dev/mapper/thin
    dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
    dmsetup resume /dev/mapper/thin

  Here '1' is the identifier for the volume, a 24-bit number. '0' is the
  identifier for the origin device.

ii) Using an internal snapshot.

  Once created, the user doesn't have to worry about any connection
  between the origin and the snapshot. Indeed the snapshot is no
  different from any other thinly-provisioned device and can be
  snapshotted itself via the same method. It's perfectly legal to
  have only one of them active, and there's no ordering requirement on
  activating or removing them both. (This differs from conventional
  device-mapper snapshots.)

  Activate it exactly the same way as any other thinly-provisioned volume:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"

External snapshots
------------------

You can use an external _read only_ device as an origin for a
thinly-provisioned volume. Any read to an unprovisioned area of the
thin device will be passed through to the origin. Writes trigger
the allocation of new blocks as usual.

One use case for this is VM hosts that want to run guests on
thinly-provisioned volumes but have the base image on another device
(possibly shared between many VMs).

You must not write to the origin device if you use this technique!
Of course, you may write to the thin device and take internal snapshots
of the thin volume.

i) Creating a snapshot of an external device

  This is the same as creating a thin device.
  You don't mention the origin at this stage.

    dmsetup message /dev/mapper/pool 0 "create_thin 0"

ii) Using a snapshot of an external device.

  Append an extra parameter to the thin target specifying the origin:

    dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 0 /dev/image"

  N.B. All descendants (internal snapshots) of this snapshot require the
  same extra origin parameter.

Deactivation
------------

All devices using a pool must be deactivated before the pool itself
can be.

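For example, with the devices created earlier in this cookbook:

    dmsetup remove thin
    dmsetup remove snap
    dmsetup remove pool
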
Reference
=========

'thin-pool' target
------------------

i) Constructor

    thin-pool <metadata dev> <data dev> <data block size (sectors)> \
              <low water mark (blocks)> [<number of feature args> [<arg>]*]

    Optional feature arguments:

      skip_block_zeroing: Skip the zeroing of newly-provisioned blocks.

      ignore_discard: Disable discard support.

      no_discard_passdown: Don't pass discards down to the underlying
                           data device, but just remove the mapping.

      read_only: Don't allow any changes to be made to the pool
                 metadata.

      error_if_no_space: Error IOs, instead of queueing, if no space.

    Data block size must be between 64KB (128 sectors) and 1GB
    (2097152 sectors) inclusive.

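For instance, a sketch of a pool table that uses the skip_block_zeroing
feature argument (device names and sizes are illustrative placeholders):

    dmsetup create pool \
        --table "0 20971520 thin-pool /dev/mapper/meta /dev/mapper/data \
                 128 32768 1 skip_block_zeroing"
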
ii) Status

    <transaction id> <used metadata blocks>/<total metadata blocks>
    <used data blocks>/<total data blocks> <held metadata root>
    ro|rw|out_of_data_space [no_]discard_passdown
    [error|queue]_if_no_space needs_check|-

    transaction id:
        A 64-bit number used by userspace to help synchronise with metadata
        from volume managers.

    used data blocks / total data blocks:
        If the number of free blocks drops below the pool's low water mark a
        dm event will be sent to userspace. This event is edge-triggered and
        it will occur only once after each resume so volume manager writers
        should register for the event and then check the target's status.

    held metadata root:
        The location, in blocks, of the metadata root that has been
        'held' for userspace read access. '-' indicates there is no
        held root.

    discard_passdown|no_discard_passdown:
        Whether or not discards are actually being passed down to the
        underlying device. Even if discard passdown is enabled when the
        table is loaded, it can get disabled if the underlying device
        doesn't support it.

    ro|rw|out_of_data_space:
        If the pool encounters certain types of device failures it will
        drop into a read-only metadata mode in which no changes to
        the pool metadata (like allocating new blocks) are permitted.

        In serious cases where even a read-only mode is deemed unsafe
        no further I/O will be permitted and the status will just
        contain the string 'Fail'. The userspace recovery tools
        should then be used.

    error_if_no_space|queue_if_no_space:
        If the pool runs out of data or metadata space, the pool will
        either queue or error the IO destined to the data device. The
        default is to queue the IO until more space is added or the
        'no_space_timeout' expires. The 'no_space_timeout' dm-thin-pool
        module parameter can be used to change this timeout -- it
        defaults to 60 seconds but may be disabled using a value of 0
        (see the example after this list).

    needs_check:
        A metadata operation has failed, resulting in the needs_check
        flag being set in the metadata's superblock. The metadata
        device must be deactivated and checked/repaired before the
        thin-pool can be made fully operational again. '-' indicates
        needs_check is not set.

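As referenced in the error_if_no_space|queue_if_no_space entry above,
the timeout can be adjusted at runtime. A minimal sketch, assuming the
dm-thin-pool module is loaded and exposes the parameter via sysfs:

    # Queue IO for up to 120 seconds before erroring:
    echo 120 > /sys/module/dm_thin_pool/parameters/no_space_timeout

    # A value of 0 disables the timeout, queueing indefinitely:
    echo 0 > /sys/module/dm_thin_pool/parameters/no_space_timeout
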
iii) Messages

    create_thin <dev id>
        Create a new thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.

    create_snap <dev id> <origin id>
        Create a new snapshot of another thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.
        <origin id> is the identifier of the thinly-provisioned device
        of which the new device will be a snapshot.

    delete <dev id>
        Deletes a thin device. Irreversible. (Examples of this and the
        following messages appear after this list.)

    set_transaction_id <current id> <new id>
        Userland volume managers, such as LVM, need a way to
        synchronise their external metadata with the internal metadata of the
        pool target. The thin-pool target offers to store an
        arbitrary 64-bit transaction id and return it on the target's
        status line. To avoid races you must provide what you think
        the current transaction id is when you change it with this
        compare-and-swap message.

    reserve_metadata_snap
        Reserve a copy of the data mapping btree for use by userland.
        This allows userland to inspect the mappings as they were when
        this message was executed. Use the pool's status command to
        get the root block associated with the metadata snapshot.

    release_metadata_snap
        Release a previously reserved copy of the data mapping btree.

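For illustration, here are some of these messages sent to the
cookbook's pool device (the device ids are arbitrary):

    # Delete the thin device with identifier 1:
    dmsetup message /dev/mapper/pool 0 "delete 1"

    # Compare-and-swap the transaction id from 0 to 1:
    dmsetup message /dev/mapper/pool 0 "set_transaction_id 0 1"

    # Hold a metadata snapshot, read its root from the status line, release it:
    dmsetup message /dev/mapper/pool 0 "reserve_metadata_snap"
    dmsetup status pool
    dmsetup message /dev/mapper/pool 0 "release_metadata_snap"
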
'thin' target
-------------

i) Constructor

    thin <pool dev> <dev id> [<external origin dev>]

    pool dev:
        the thin-pool device, e.g. /dev/mapper/my_pool or 253:0

    dev id:
        the internal device identifier of the device to be
        activated.

    external origin dev:
        an optional block device outside the pool to be treated as a
        read-only snapshot origin: reads to unprovisioned areas of the
        thin target will be mapped to this device.

The pool doesn't store any size against the thin devices. If you
load a thin target that is smaller than you've been using previously,
then you'll have no access to blocks mapped beyond the end. If you
load a target that is bigger than before, then extra blocks will be
provisioned as and when needed.

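For example, a sketch of growing the cookbook's thin volume to
4194304 sectors (2GB); the extra blocks are provisioned on demand as
they are written:

    dmsetup suspend thin
    dmsetup reload thin --table "0 4194304 thin /dev/mapper/pool 0"
    dmsetup resume thin
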
ii) Status

    <nr mapped sectors> <highest mapped sector>

    If the pool has encountered device errors and failed, the status
    will just contain the string 'Fail'. The userspace recovery
    tools should then be used.

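For example (the output line here is illustrative only, not captured
from a real device):

    dmsetup status thin
    # 0 2097152 thin 2048 2047
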