.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
.\" Copyright (c) 2023, Klara Inc.
.Sh NAME
.Nm zpoolprops
.Nd properties of ZFS storage pools
.Sh DESCRIPTION
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
User properties have no effect on ZFS behavior.
Use them to annotate pools in a way that is meaningful in your environment.
For more information about user properties, see the
.Sx User Properties
section.
.Pp
The following are read-only properties:
.Bl -tag -width "unsupported@guid"
.It Sy allocated
Amount of storage used within the pool.
.It Sy bcloneratio
The ratio of the total amount of storage that would be required to store all
the cloned blocks without cloning to the actual storage used.
The
.Sy bcloneratio
property is calculated as:
.Pp
.Sy ( ( bclonesaved + bcloneused ) * 100 ) / bcloneused
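As a quick check of the formula, with hypothetical byte counts for
bclonesaved and bcloneused, shell arithmetic reproduces the value (a raw
result of 400 corresponds to a 4.00 cloning ratio):

```shell
# Hypothetical example values (bytes); real values come from `zpool get`.
bclonesaved=300
bcloneused=100

# bcloneratio as defined above: ((bclonesaved + bcloneused) * 100) / bcloneused
ratio=$(( (bclonesaved + bcloneused) * 100 / bcloneused ))
echo "$ratio"   # prints 400, i.e. a 4.00x cloning ratio
```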
.It Sy bclonesaved
The amount of additional storage that would be required if block cloning
was not used.
.It Sy bcloneused
The amount of storage used by cloned blocks.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
On whole-disk vdevs, this is the space beyond the end of the GPT –
typically occurring when a LUN is dynamically expanded
or a disk replaced with a larger one.
On partition vdevs, this is the space appended to the partition after it was
added to the pool – most likely by resizing it in-place.
The space can be claimed for the pool by bringing it online with
.Sy autoexpand Ns = Ns Sy on
or using
.Nm zpool Cm online Fl e .
.It Sy fragmentation
The amount of fragmentation in the pool.
As the amount of space
.Sy allocated
increases, it becomes more difficult to locate
.Sy free
space.
This may result in lower write performance compared to pools with more
unfragmented free space.
.It Sy free
The amount of free space available in the pool.
By contrast, the
.Xr zfs 8
.Sy available
property describes how much new data can be written to ZFS filesystems/volumes.
The zpool
.Sy free
property is not generally useful for this purpose, and can be substantially more
than the zfs
.Sy available
space.
This discrepancy is due to several factors, including raidz parity;
zfs reservation, quota, refreservation, and refquota properties; and space set
aside by
.Sy spa_slop_shift
(see
.Xr zfs 4
for more information).
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
.It Sy guid
A unique identifier for the pool.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy leaked
Space not released while
.Sy freeing
due to corruption, now permanently leaked into the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time the pool is loaded (i.e. it
does not persist across imports/exports) and never changes while the pool is
loaded (even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 7
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm zpool
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
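The raidz dependency mentioned above can be roughly illustrated with shell
arithmetic; the figures below are a simplified sketch that assumes full-width
stripes and ignores padding and allocation granularity:

```shell
# Rough sketch: in a raidz1 group of N disks writing full-width stripes,
# about 1/N of the raw space goes to parity (integer percentage shown).
# Real overhead varies with record size, ashift, and padding.
for n in 3 5 9; do
    echo "raidz1 with $n disks: ~$(( 100 / n ))% of raw space used for parity"
done
```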
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Ar ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift ) .
Values from 9 to 16, inclusive, are valid; also, the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space/performance trade-off.
For optimal performance, the pool sector size should be greater than
or equal to the sector size of the underlying disks.
The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift Ns = Ns Sy 12
(which is
.Sy 1<<12 No = Sy 4096 ) .
When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace).
Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in poor performance but at the
same time could prevent loss of data.
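Since the property is an exponent, the resulting sector size is two to the
power of ashift; the mapping for a few common values can be verified with
shell arithmetic:

```shell
# The ashift value is a power-of-two exponent: sector size = 2^ashift.
# Valid values are 9 (512 B) through 16 (64 KiB); 0 means auto-detect.
for ashift in 9 12 13 16; do
    echo "ashift=$ashift -> $(( 1 << ashift )) byte sectors"
done
```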
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device found in the same physical location as a device that previously
belonged to the pool is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf.
See the
.Xr vdev_id 8
manual page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running.
See the
.Xr zed 8
manual page for more details.
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on ,
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support
BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
supports hole-punching, to reclaim unused blocks.
The default value for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay allowing smaller ranges to be aggregated
into a few larger ones.
These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
.Sy l2arc_trim_ahead > 0 .
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns Op / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the value
.Sy none
creates a temporary pool that is never cached, and the
.Qq
(empty string)
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies that the pool maintain compatibility with specific feature sets.
When set to
.Sy off
(or unset) compatibility is disabled (all features may be enabled); when set to
.Sy legacy
no features may be enabled.
When set to a comma-separated list of filenames
(each filename may either be an absolute path, or relative to
.Pa /etc/zfs/compatibility.d
or
.Pa /usr/share/zfs/compatibility.d )
the lists of requested features are read from those files, separated by
whitespace and/or commas.
Only features present in all files may be enabled.
.Pp
See
.Xr zpool-features 7 ,
.Xr zpool-create 8 ,
and
.Xr zpool-upgrade 8
for more information on the operation of compatibility feature sets.
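The "only features present in all files" rule behaves like a set
intersection, which can be sketched with standard tools; the file paths and
feature names below are hypothetical examples, not shipped compatibility
sets:

```shell
# Sketch: intersect two hypothetical feature-list files the way the
# "only features present in all files" rule describes.
printf 'async_destroy\nlz4_compress\nhole_birth\n' > /tmp/setA
printf 'lz4_compress\nasync_destroy\n'            > /tmp/setB

# Normalize to one sorted feature per line, then keep only common lines.
sort /tmp/setA > /tmp/setA.sorted
sort /tmp/setB > /tmp/setB.sorted
comm -12 /tmp/setA.sorted /tmp/setB.sorted
# prints: async_destroy and lz4_compress (the features in both files)
```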
.It Sy dedupditto Ns = Ns Ar number
This property is deprecated and no longer has any effect.
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared with
.Nm zpool Cm clear .
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 7
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
flag.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs 4
manual page.
In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl 4
for additional details.
The default value is
.Sy off .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss User Properties
In addition to the standard native properties, ZFS supports arbitrary user
properties.
User properties have no effect on ZFS behavior, but applications or
administrators can use them to annotate pools.
.Pp
User property names must contain a colon
.Pq Qq Sy \&:
character to distinguish them from native properties.
They may contain lowercase letters, numbers, and the following punctuation
characters: colon
.Pq Qq Sy \&: ,
dash
.Pq Qq Sy - ,
period
.Pq Qq Sy \&. ,
and underscore
.Pq Qq Sy _ .
The expected convention is that the property name is divided into two portions
such as
.Ar module : Ns Ar property ,
but this namespace is not enforced by ZFS.
User property names can be at most 256 characters, and cannot begin with a dash
.Pq Qq Sy - .
.Pp
When making programmatic use of user properties, it is strongly suggested to use
a reversed DNS domain name for the
.Ar module
component of property names to reduce the chance that two
independently-developed packages use the same property name for different
purposes.
.Pp
The values of user properties are arbitrary strings and
are never validated.
All of the commands that operate on properties
.Po Nm zpool Cm list ,
.Nm zpool Cm get ,
.Nm zpool Cm set ,
and so forth
.Pc
can be used to manipulate both native properties and user properties.
Use
.Nm zpool Cm set Ar name Ns =
to clear a user property.
Property values are limited to 8192 bytes.
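The naming rules above (must contain a colon, no leading dash, at most 256
characters, only lowercase letters, digits, and colon/dash/period/underscore)
can be condensed into a small validation helper; this is an illustrative
sketch, and the function name and example property names are hypothetical,
not part of ZFS:

```shell
# Sketch of the user-property naming rules described above.
valid_prop_name() {
    name=$1
    # At most 256 characters.
    [ "${#name}" -le 256 ] || return 1
    # Must contain a colon to distinguish it from native properties.
    printf '%s\n' "$name" | grep -q ':' || return 1
    # Only [a-z0-9:._-] allowed, and the first character may not be a dash.
    printf '%s\n' "$name" | grep -Eq '^[a-z0-9:._][a-z0-9:._-]*$'
}

valid_prop_name "com.example:backup-policy" && echo "valid"
valid_prop_name "nocolonhere" || echo "rejected: no colon"
valid_prop_name "-bad:name"   || echo "rejected: leading dash"
```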