.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.Dt ZPOOLPROPS 8
.Os
.Sh NAME
.Nm zpoolprops
.Nd available properties for ZFS storage pools
.Sh DESCRIPTION
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy allocated
Amount of storage used within the pool.
See
.Sy fragmentation
and
.Sy free
for more information.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
On whole-disk vdevs, this is the space beyond the end of the GPT \(en
typically occurring when a LUN is dynamically expanded
or a disk replaced with a larger one.
On partition vdevs, this is the space appended to the partition after it was
added to the pool \(en most likely by resizing it in-place.
The space can be claimed for the pool by bringing it online with
.Sy autoexpand=on
or using
.Nm zpool Cm online Fl e .
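.Pp
For example, assuming a hypothetical pool
.Em tank
whose disk
.Pa sda
was replaced with a larger one, the new space might be claimed with:
.Bd -literal
# zpool online -e tank sda
.Ed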
.It Sy fragmentation
The amount of fragmentation in the pool.
As the amount of space
.Sy allocated
increases, it becomes more difficult to locate
.Sy free
space.
This may result in lower write performance compared to pools with more
unfragmented free space.
.It Sy free
The amount of free space available in the pool.
By contrast, the
.Xr zfs 8
.Sy available
property describes how much new data can be written to ZFS filesystems/volumes.
The zpool
.Sy free
property is not generally useful for this purpose, and can be substantially
more than the zfs
.Sy available
space.
This discrepancy is due to several factors, including raidz parity; zfs
reservation, quota, refreservation, and refquota properties; and space set
aside by
.Sy spa_slop_shift
(see
.Xr zfs-module-parameters 5
for more information).
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time we load the pool (i.e. it
does not persist across imports/exports) and never changes while the pool is
loaded (even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
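.Pp
For example, the read-only space properties of a hypothetical pool
.Em tank
could be inspected with:
.Bd -literal
# zpool get allocated,capacity,free,freeing,fragmentation tank
.Ed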
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
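For example, a hypothetical pool
.Em tank
could be imported read-only with:
.Bd -literal
# zpool import -o readonly=on tank
.Ed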
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk) write size will be set to the specified size,
so this represents a space vs. performance trade-off.
For optimal performance, the pool sector size should be greater than or equal
to the sector size of the underlying disks.
The typical case for setting this property is when performance is important
and the underlying disks use 4KiB sectors but report 512B sectors to the OS
(for compatibility reasons); in that case, set
.Sy ashift Ns = Ns Sy 12
(which is 1<<12 = 4096).
When set, this property is used as the default hint value in subsequent vdev
operations (add, attach and replace).
Changing this value will not modify any existing vdev, not even on disk
replacement; however, it can be used, for instance, to replace a dying
512B-sector disk with a newer 4KiB-sector device: this will probably result
in bad performance but at the same time could prevent loss of data.
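For example, a pool backed by disks that use 4KiB sectors but report 512B
sectors could be created with (pool and device names are illustrative):
.Bd -literal
# zpool create -o ashift=12 tank mirror sda sdb
.Ed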
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
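For example, on a hypothetical pool
.Em tank :
.Bd -literal
# zpool set autoexpand=on tank
.Ed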
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device mapper) provided
that you use the /dev/disk/by-vdev paths set up by vdev_id.conf.
See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running.
See the
.Xr zed 8
man page for more details.
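For example, on a hypothetical pool
.Em tank :
.Bd -literal
# zpool set autoreplace=on tank
.Ed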
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support BLKDISCARD, such as SSDs, or
file vdevs on which the underlying file system supports hole-punching, to
reclaim unused blocks.
The default setting for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay, allowing smaller ranges to be
aggregated into a few larger ones.
These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
.Sy l2arc_trim_ahead > 0 .
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these
commands.
For lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
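For example, on a hypothetical pool
.Em tank :
.Bd -literal
# zpool set autotrim=on tank    # continuous, automatic TRIM
# zpool trim tank               # one-shot, on-demand TRIM
.Ed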
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location
that can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the value
.Sy none
creates a temporary pool that is never cached, and the
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
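.Pp
For example, a pool could be created with an alternate cache file and, after
being exported, imported again from it (the path and names are illustrative):
.Bd -literal
# zpool create -o cachefile=/tmp/mypool.cache tank sda
# zpool export tank
# zpool import -c /tmp/mypool.cache tank
.Ed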
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
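For example (the pool name and comment text are illustrative):
.Bd -literal
# zpool set comment="Backup pool, rack 12" tank
# zpool get comment tank
.Ed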
.It Sy dedupditto Ns = Ns Ar number
This property is deprecated and no longer has any effect.
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
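For example, on a hypothetical pool
.Em tank :
.Bd -literal
# zpool set failmode=continue tank
.Ed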
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page.
In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1
.Xr zgenhostid 8
.Xr spl-module-parameters 5
for additional details.
The default value is
.Sy off .
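For example, assuming a unique hostid has already been set on each host, on a
hypothetical pool
.Em tank :
.Bd -literal
# zpool set multihost=on tank
.Ed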
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
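For example, a pool intended for use by systems running older software could
be created at a legacy on-disk version (the version number, pool name, and
device are illustrative):
.Bd -literal
# zpool create -o version=28 tank sda
.Ed
.El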