4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2013 by Delphix. All rights reserved.
24 .\" Copyright 2016 Nexenta Systems, Inc.
.Nd configure ZFS storage pools
.Ar pool device new_device
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Ar pool Ns | Ns Ar id
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Ar pool Ar device Ns ...
.Ar pool Ar device Ns ...
.Ar pool Ar device Ns ...
.Ar pool Ar device Op Ar new_device
.Ar property Ns = Ns Ar value
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Fl a Ns | Ns Ar pool Ns ...
command configures ZFS storage pools. A storage pool is a collection of devices
that provides physical storage and data replication for ZFS datasets. All
datasets within a storage pool share the same space. See
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics. The
following virtual devices are supported:
A block device, typically located under
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks. A disk can be specified by a full path, or it
can be a shorthand name
.Po the relative portion of the path under
A whole disk can be specified by omitting the slice or partition designation.
.Pa /dev/dsk/c0t0d0s2 .
When given a whole disk, ZFS automatically labels the disk, if necessary.
A regular file. The use of files as a backing store is strongly discouraged. It
is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part. A file must be
specified by a full path.
A mirror of two or more devices. Data is replicated in an identical fashion
across all components of a mirror. A mirror with N disks of size X can hold X
bytes and can withstand (N-1) devices failing before data integrity is
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
vdev type specifies a single-parity raidz group; the
vdev type specifies a double-parity raidz group; and the
vdev type specifies a triple-parity raidz group. The
vdev type is an alias for
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised. The minimum number of devices in a raidz group is one more than
the number of parity disks. The recommended number is between 3 and 9 to help
increase performance.
A special pseudo-vdev which keeps track of available hot spares for a pool. For
more information, see the
A separate intent log device. If more than one log device is specified, then
writes are load-balanced between devices. Log devices can be mirrored. However,
raidz vdev types are not supported for the intent log. For more information,
A device used to cache storage pool data. A cache device cannot be configured
as a mirror or raidz group. For more information, see the
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks. Mirrors of mirrors
.Pq or other combinations
A pool can have any number of virtual devices at the top of the configuration
Data is dynamically distributed across all top-level devices to balance data
among devices. As new virtual devices are added, ZFS automatically places data
on the newly available devices.
Virtual devices are specified one at a time on the command line, separated by
whitespace. The keywords
are used to distinguish where a group ends and another begins. For example,
the following creates two root vdevs, each a mirror of two disks:
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption. All metadata and data is checksummed, and ZFS automatically repairs
bad data from a good copy when corruption is detected.
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups. While ZFS supports
running in a non-redundant configuration, where each root vdev is simply a disk
or file, this is strongly discouraged. A single case of bit corruption can
render some or all of your data unavailable.
A pool's health status is described by one of three states: online, degraded,
or faulted. An online pool has all devices operating normally. A degraded pool
is one in which one or more devices have failed, but the data is still
available due to a redundant configuration. A faulted pool has corrupted
metadata, or one or more faulted devices, and insufficient replicas to continue
The health of a top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices. A top-level vdev or component device is in one of the following
.Bl -tag -width "DEGRADED"
One or more top-level vdevs are in the degraded state because one or more
component devices are offline. Sufficient replicas exist to continue
One or more component devices are in the degraded or faulted state, but
sufficient replicas exist to continue functioning. The underlying conditions
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong. ZFS continues to use the
The number of I/O errors exceeds acceptable levels. The device could not be
marked as faulted because there are insufficient replicas to continue
One or more top-level vdevs are in the faulted state because one or more
component devices are offline. Insufficient replicas exist to continue
One or more component devices are in the faulted state, and insufficient
replicas exist to continue functioning. The underlying conditions are as
The device could be opened, but the contents did not match expected values.
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
The device was explicitly taken offline by the
The device is online and functioning.
The device was physically removed while the system was running. Device removal
detection is hardware-dependent and may not be supported on all platforms.
The device could not be opened. If a pool is imported when a device was
unavailable, then the device will be identified by a unique identifier instead
of its path since the path was never correct in the first place.
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically. Device attach detection is
hardware-dependent and might not be supported on all platforms.
ZFS allows devices to be associated with pools as
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare. To create a pool with hot
vdev with any number of devices. For example,
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
Spares can be shared across multiple pools, and can be added with the
command and removed with the
command. Once a spare replacement is initiated, a new
vdev is created within the configuration that will remain there until the
original device is replaced. At this point, the hot spare becomes available
again if another device fails.
If a pool has a shared spare that is currently being used, the pool cannot be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
Spares cannot replace log devices.
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions. For instance, databases often require their transactions to be on
stable storage devices when returning from a system call. NFS and other
applications can also use
to ensure data stability. By default, the intent log is allocated from blocks
within the main pool. However, it might be possible to get better performance
using separate intent log devices such as NVRAM or a dedicated disk. For
# zpool create pool c0d0 c1d0 log c2d0
Multiple log devices can also be specified, and they can be mirrored. See the
section for an example of mirroring multiple log devices.
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool. Mirrored log devices can be removed by
specifying the top-level mirror for the log.
Devices can be added to a storage pool as
These devices provide an additional layer of caching between main memory and
disk. For read-heavy workloads, where the working set size is much larger than
what can be cached in main memory, using cache devices allows much more of this
working set to be served from low-latency media. Using cache devices provides
the greatest performance improvement for random read workloads of mostly static
To create a pool with cache devices, specify a
vdev with any number of devices. For example:
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
Cache devices cannot be mirrored or part of a raidz configuration. If a read
error is encountered on a cache device, that read I/O is reissued to the
original storage pool device, which might be part of a mirrored or raidz
The content of the cache devices is considered volatile, as is the case with
Each pool has several properties associated with it. Some properties are
read-only statistics while others are configurable and change the behavior of
The following are read-only properties:
Amount of storage available within the pool. This property can also be referred
to by its shortened column name,
The size of the system boot partition. This property can only be set at pool
creation time and is read-only once the pool is created. Setting this property
Percentage of pool space used. This property can also be referred to by its
shortened column name,
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool. Uninitialized space consists of
any space on an EFI labeled vdev which has not been brought online
.Nm zpool Cm online Fl e
This space occurs when a LUN is dynamically expanded.
The amount of fragmentation in the pool.
The amount of free space available in the pool.
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
is the amount of space remaining to be reclaimed. Over time
The current health of the pool. Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
A unique identifier for the pool.
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool. See
Amount of storage space used within the pool.
The space usage properties report actual physical space available to the
storage pool. The physical space can be different from the total amount of
space that any contained datasets can actually use. The amount of space used in
a raidz configuration depends on the characteristics of the data being
written. In addition, ZFS reserves some space for internal accounting
command takes into account, but the
command does not. For non-full pools of a reasonable size, these effects should
be invisible. For small pools, or pools that are close to being completely
full, these discrepancies may become more noticeable.
The following property can be set at creation time and import time:
Alternate root directory. If set, this directory is prepended to any mount
points within the pool. This can be used when examining an unknown pool where
the mount points cannot be trusted, or in an alternate boot environment, where
the typical paths are not valid.
is not a persistent property. It is valid only while the system is up. Setting
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
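.Pp
For example, an unknown pool might be examined under an alternate root so that
its mount points cannot overwrite system paths; the pool name and path here are
illustrative:
.Bd -literal
# zpool import -R /a tank
.Ed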
The following property can be set only at import time:
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
the pool will be imported in read-only mode. This property can also be referred
to by its shortened column name,
The following properties can be set at creation time and import time, and later
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown. If set to
the pool will be resized according to the size of the expanded device. If the
device is part of a mirror or raidz, then all devices within that mirror/raidz
group must be expanded before the new space is made available to the pool. The
This property can also be referred to by its shortened column name,
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement. If set to
device replacement must be initiated by the administrator by using the
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced. The default
This property can also be referred to by its shortened column name,
.It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
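.Pp
For example, the default boot dataset might be set as follows; the pool and
dataset names are illustrative:
.Bd -literal
# zpool set bootfs=rpool/ROOT/be1 rpool
.Ed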
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls where the pool configuration is cached. Discovering
all pools on system startup requires a cached copy of the configuration data
that is stored on the root file system. All pools in this cache are
automatically imported when the system boots. Some environments, such as
install and clustering, need to cache this information in a different location
so that pools are not automatically imported. Setting this property caches the
pool configuration in a different location that can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
creates a temporary pool that is never cached, and the special value
uses the default location.
Multiple pools can share the same cache file. Because the kernel destroys and
recreates this file when pools are added and removed, care should be taken when
attempting to access this file. When the last pool using a
is exported or destroyed, the file is removed.
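.Pp
For example, a clustering environment might keep a pool out of the default
cache and later import it from the alternate cache file; the pool name and
path are illustrative:
.Bd -literal
# zpool set cachefile=/etc/zfs/cluster.cache tank
# zpool import -c /etc/zfs/cluster.cache tank
.Ed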
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted. An administrator
can provide additional information about a pool using this property.
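.Pp
For example, a descriptive comment might be attached to a hypothetical pool:
.Bd -literal
# zpool set comment="rack 12, shelf 3" tank
.Ed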
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies. If the reference count for a
deduplicated block increases above this number, a new ditto copy of this block
is automatically stored. The default setting is
which causes no ditto copies to be created for deduplicated blocks. The minimum
legal nonzero setting is
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset. See
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure. This
condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool. The behavior of
such an event is determined as follows:
.Bl -tag -width "continue"
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared. This is the default behavior.
to any new write I/O requests but allows reads to any of the remaining healthy
devices. Any write requests that have yet to be committed to disk would be
Prints out a message to the console and generates a system crash dump.
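.Pp
For example, a hypothetical pool might be configured to keep servicing reads
after a catastrophic failure rather than blocking all I/O:
.Bd -literal
# zpool set failmode=continue tank
.Ed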
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
The only valid value when setting this property is
to the enabled state. See
for details on feature states.
.It Sy listsnaps Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
option. The default value is
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool. This can be increased, but never
decreased. The preferred method of updating pools is with the
command, though this property can be used when a specific version is needed for
backwards compatibility. Once feature flags are enabled on a pool, this
property will no longer have a value.
All subcommands that modify state are logged persistently to the pool in their
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools. The
following subcommands are supported:
Displays a help message.
Adds the specified virtual devices to the given pool. The
specification is described in the
section. The behavior of the
option, and the device checks performed are described in the
even if they appear in use or specify a conflicting replication level. Not all
devices can be overridden in this manner.
Displays the configuration that would be used without actually adding the
The actual addition can still fail due to insufficient privileges or
.Ar pool device new_device
The existing device cannot be part of a raidz configuration. If
is not currently part of a mirrored configuration,
automatically transforms into a two-way mirror of
is part of a two-way mirror, attaching
creates a three-way mirror, and so on. In either case,
begins to resilver immediately.
even if it appears to be in use. Not all devices can be overridden in this
Clears device errors in a pool. If no arguments are specified, all device
errors within the pool are cleared. If one or more devices are specified, only
those errors associated with the specified device or devices are cleared.
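.Pp
For example, errors recorded against a single device might be cleared as
follows; the pool and device names are illustrative:
.Bd -literal
# zpool clear tank c0t0d0
.Ed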
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
Creates a new storage pool containing the virtual devices specified on the
command line. The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
are reserved, as are names beginning with the pattern
specification is described in the
The command verifies that each device specified is accessible and not currently
in use by another subsystem. There are some uses, such as being currently
mounted, or specified as the dedicated dump device, that prevent a device from
ever being used by ZFS. Other uses, such as having a preexisting UFS file
system, can be overridden with the
The command also checks that the replication strategy for the pool is
consistent. An attempt to combine redundant and non-redundant storage in a
single pool, or to mix disks and files, results in an error unless
is specified. The use of differently sized devices within a single raidz or
mirror group is also flagged as an error unless
option is specified, the default mount point is
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted. This can be overridden with the
By default, all supported features are enabled on the new pool unless the
Creates a whole-disk pool with an EFI System partition to support booting the
system with UEFI firmware. The default size is 256 MB. To create a boot
partition with
Do not enable any features on the new pool. Individual features can be enabled
by setting their corresponding properties to
for details about feature properties.
even if they appear in use or specify a conflicting replication level. Not all
devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset. The default mount point is
is specified. The mount point must be an absolute path,
For more information on dataset mount points, see
Displays the configuration that would be used without actually creating the
pool. The actual pool creation can still fail due to insufficient privileges or
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
section for a list of valid properties that can be set.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool. See
for a list of valid properties that can be set.
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
Destroys the given pool, freeing up any devices for other use. This command
tries to unmount any active datasets before destroying the pool.
Forces any active datasets contained within the pool to be unmounted.
from a mirror. The operation is refused if there are no other valid replicas of
Exports the given pools from the system. All devices are marked as exported,
but are still considered in use by other subsystems. The devices can be moved
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
Before exporting the pool, all datasets within the pool are unmounted. A pool
cannot be exported if it has a shared spare that is currently being used.
For pools to be portable, you must give the
command whole disks, not just slices, so that ZFS can label the disks with
portable EFI labels. Otherwise, disk drivers on platforms of different
endianness will not recognize the disks.
Forcefully unmount all datasets, using the
This command will forcefully export the pool even if it has a shared spare that
is currently being used. This may lead to potential data corruption.
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
Retrieves the given list of properties
for the specified storage pool(s). These properties are displayed with
the following fields:
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
section for more information on the available pool properties.
Scripted mode. Do not display headers, and separate fields by a single tab
instead of arbitrary space.
A comma-separated list of columns to display.
.Sy name Ns , Ns Sy property Ns , Ns Sy value Ns , Ns Sy source
is the default value.
Display numbers in parsable (exact) values.
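.Pp
For example, selected properties of a hypothetical pool might be retrieved in
parsable form:
.Bd -literal
# zpool get -p size,capacity,health tank
.Ed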
.Oo Ar pool Oc Ns ...
Displays the command history of the specified pool(s) or all pools if no pool is
Displays internally logged ZFS events in addition to user-initiated events.
Displays log records in long format, which, in addition to the standard
format, includes the user name, the hostname, and the zone in which the
operation was
Lists pools available to import. If the
option is not specified, this command searches for devices in
option can be specified multiple times, and all directories are searched. If the
device appears to be part of an exported pool, this command displays a summary
of the pool with the name of the pool, a numeric identifier, as well as the vdev
layout and current health of the device for each device or file. Destroyed
pools, pools that were previously destroyed with the
command, are not listed unless the
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.It Fl c Ar cachefile
Reads configuration from the given
that was created with the
is used instead of searching for devices.
Searches for devices or files in
option can be specified multiple times.
Lists destroyed pools only.
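.Pp
For example, exported pools whose devices live in a non-default directory
might be listed as follows; the directory is illustrative:
.Bd -literal
# zpool import -d /mydevices
.Ed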
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
Imports all pools found in the search directories. Identical to the previous
command, except that all pools with a sufficient number of devices available are
imported. Destroyed pools, pools that were previously destroyed with the
command, will not be imported unless the
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
that was created with the
is used instead of searching for devices.
Searches for devices or files in
option can be specified multiple times. This option is incompatible with the
Imports destroyed pools only. The
option is also required.
Forces import, even if the pool appears to be potentially active.
Recovery mode for a non-importable pool. Attempt to return the pool to an
importable state by discarding the last few transactions. Not all damaged pools
can be recovered by using this option. If successful, the data from the
discarded transactions is irretrievably lost. This option is ignored if the pool
is importable or already imported.
Allows a pool to import when there is a missing log device. Recent transactions
can be lost because the log device will be discarded.
recovery option. Determines whether a non-importable pool can be made importable
again, but does not actually perform the pool recovery. For more details about
pool recovery mode, see the
Imports the pool without mounting any file systems.
Comma-separated list of mount options to use when mounting datasets within the
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool. See the
section for more information on the available pool properties.
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Ar pool Ns | Ns Ar id
Imports a specific pool. A pool can be identified by its name or the numeric
is specified, the pool is imported using the name
Otherwise, it is imported with the same name as its exported name.
If a device is removed from a system without running
first, the device appears as potentially active. It cannot be determined if
this was a failed export, or whether the device is really in use from another
host. To import a pool in this state, the
.It Fl c Ar cachefile
Reads configuration from the given
that was created with the
is used instead of searching for devices.
Searches for devices or files in
option can be specified multiple times. This option is incompatible with the
Imports a destroyed pool. The
option is also required.
Forces import, even if the pool appears to be potentially active.
Recovery mode for a non-importable pool. Attempt to return the pool to an
importable state by discarding the last few transactions. Not all damaged pools
can be recovered by using this option. If successful, the data from the
discarded transactions is irretrievably lost. This option is ignored if the pool
is importable or already imported.
Allows a pool to import when there is a missing log device. Recent transactions
can be lost because the log device will be discarded.
recovery option. Determines whether a non-importable pool can be made importable
again, but does not actually perform the pool recovery. For more details about
pool recovery mode, see the
Comma-separated list of mount options to use when mounting datasets within the
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool. See the
section for more information on the available pool properties.
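.Pp
For example, a pool that no longer imports cleanly might first be tested for
recoverability and then rewound; the pool name is illustrative, and any
discarded transactions are irretrievably lost:
.Bd -literal
# zpool import -F -n tank
# zpool import -F tank
.Ed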
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
Displays I/O statistics for the given pools. When given an
the statistics are printed every
seconds until ^C is pressed. If no
are specified, statistics for every pool in the system are shown. If
is specified, the command exits after
reports are printed.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp. Specify
for a printed representation of the internal representation of time. See
for standard date format. See
Verbose statistics. Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
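.Pp
For example, per-vdev statistics for a hypothetical pool might be printed
every five seconds, three times:
.Bd -literal
# zpool iostat -v tank 5 3
.Ed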
Removes ZFS label information from the specified
must not be part of an active pool configuration.
Treat exported or foreign devices as inactive.
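.Pp
For example, stale label information might be removed from a device that is no
longer part of any pool; the device path is illustrative:
.Bd -literal
# zpool labelclear -f /dev/dsk/c1t3d0
.Ed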
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
Lists the given pools along with a health status and space usage. If no
are specified, all pools in the system are listed. When given an
the information is printed every
seconds until ^C is pressed. If
is specified, the command exits after
reports are printed.
Scripted mode. Do not display headers, and separate fields by a single tab
instead of arbitrary space.
.It Fl o Ar property
Comma-separated list of properties to display. See the
section for a list of valid properties. The default list is
.Sy name , size , used , available , fragmentation , expandsize , capacity ,
.Sy dedupratio , health , altroot .
Display numbers in parsable
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp. Specify
for a printed representation of the internal representation of time. See
for standard date format. See
Verbose statistics. Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
.Ar pool Ar device Ns ...
Takes the specified physical device offline. While the
is offline, no attempt is made to read or write to the device. This command is
not applicable to spares.
Temporary. Upon reboot, the specified physical device reverts to its previous
.Ar pool Ar device Ns ...
Brings the specified physical device online. This command is not applicable to
Expand the device to use all available space. If the device is part of a mirror
or raidz, then all devices must be expanded before the new space becomes
available to the pool.
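.Pp
For example, a device might be taken offline temporarily and later brought
back online with expansion; the pool and device names are illustrative:
.Bd -literal
# zpool offline -t tank c0t0d0
# zpool online -e tank c0t0d0
.Ed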
Generates a new unique identifier for the pool. You must ensure that all devices
in this pool are online and healthy before performing this action.
Reopens all the vdevs associated with the pool.
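.Pp
For example, assuming a healthy hypothetical pool:
.Bd -literal
# zpool reguid tank
# zpool reopen tank
.Ed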
.Ar pool Ar device Ns ...
Removes the specified device from the pool. This command currently only supports
removing hot spares, cache, and log devices. A mirrored log device can be
removed by specifying the top-level mirror for the log. Non-log devices that are
part of a mirrored configuration can be removed using the
command. Non-redundant and raidz devices cannot be removed from a pool.
.Ar pool Ar device Op Ar new_device
This is equivalent to attaching
waiting for it to resilver, and then detaching
must be greater than or equal to the minimum size of all the devices in a mirror
or raidz configuration.
is required if the pool is not redundant. If
is not specified, it defaults to
This form of replacement is useful after an existing disk has failed and has
been physically replaced. In this case, the new disk may have the same
path as the old device, even though it is actually a different disk. ZFS
even if it appears to be in use. Not all devices can be overridden in this
Begins a scrub. The scrub examines all data in the specified pools to verify
that it checksums correctly. For replicated
devices, ZFS automatically repairs any damage discovered during the scrub. The
command reports the progress of the scrub and summarizes the results of the
scrub upon completion.
Scrubbing and resilvering are very similar operations. The difference is that
resilvering only examines data that ZFS knows to be out of date
for example, when attaching a new device to a mirror or replacing an existing
whereas scrubbing examines all data to discover silent errors due to hardware
faults or disk failure.
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
one at a time. If a scrub is already in progress, the
command terminates it and starts a new scrub. If a resilver is in progress, ZFS
does not allow a scrub to be started until the resilver completes.
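.Pp
For example, a scrub of a hypothetical pool might be started and its progress
checked:
.Bd -literal
# zpool scrub tank
# zpool status -v tank
.Ed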
.Ar property Ns = Ns Ar value
Sets the given property on the specified pool. See the
section for more information on what properties can be set and acceptable
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
must be mirrors. At the time of the split,
will be a replica of
Do a dry run; do not actually perform the split. Print out the expected
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property for
section for more information on the available pool properties.
and automatically import it.
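.Pp
For example, a pool of two-way mirrors might be split into a new pool; the
pool names are illustrative:
.Bd -literal
# zpool split tank tank2
.Ed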
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
Displays the detailed health status for the given pools. If no
is specified, then the status of each pool in the system is displayed. For more
information on pool and device health, see the
.Sx Device Failure and Recovery
If a scrub or resilver is in progress, this command reports the percentage done
and the estimated time to completion. Both of these are only approximate,
because the amount of data in the pool and the other workloads on the system can
Display a histogram of deduplication statistics, showing the allocated
.Pq physically present on disk
.Pq logically referenced in the pool
block counts and sizes by reference count.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp. Specify
for a printed representation of the internal representation of time. See
for standard date format. See
Displays verbose data error information, printing out a complete list of all
data errors since the last complete pool scrub.
Only display status for pools that are exhibiting errors or are otherwise
unavailable. Warnings about pools not using the latest on-disk format will not
Displays pools which do not have all supported features enabled and pools
formatted using a legacy ZFS version number. These pools can continue to be
used, but some features may not be available. Use
.Nm zpool Cm upgrade Fl a
to enable all features on all pools.
Displays legacy ZFS versions supported by the current software. See
.Xr zpool-features 5
for a description of the feature flags supported by the current software.
.Fl a Ns | Ns Ar pool Ns ...
Enables all supported features on the given pool. Once this is done, the pool
will no longer be accessible on systems that do not support feature flags. See
.Xr zpool-features 5
for details on compatibility with systems that support feature flags, but do not
support all features enabled on the pool.
Enables all supported features on all pools.
Upgrades to the specified legacy version. If the
flag is specified, no features will be enabled on the pool. This option can only
be used to increase the version number up to the last supported legacy version
The following exit values are returned:
Successful completion.
Invalid command line options were specified.
.It Sy Example 1 No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
.It Sy Example 2 No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
.It Sy Example 3 No Creating a ZFS Storage Pool by Using Slices
The following command creates an unmirrored pool using two disk slices.
# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files. While not
recommended, a pool based on files can be useful for experimental purposes.
# zpool create tank /path/to/file/a /path/to/file/b
.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
assuming the pool is already made up of two-way mirrors. The additional space
is immediately available to any datasets within the pool.
# zpool add tank mirror c1t0d0 c1t1d0
.It Sy Example 6 No Listing Available ZFS Storage Pools
The following command lists all available pools on the system. In this case,
is faulted due to a missing device. The results from this command are similar
NAME    SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G   33%         -    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G   48%         -    32%  1.00x  ONLINE  -
zion       -      -      -     -         -      -      -  FAULTED  -
.It Sy Example 7 No Destroying a ZFS Storage Pool
The following command destroys the pool
and any datasets contained within.
# zpool destroy -f tank
.It Sy Example 8 No Exporting a ZFS Storage Pool
The following command exports the devices in pool
so that they can be relocated or later imported.
.It Sy Example 9 No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
for use on the system. The results from this command are similar to the
    id: 15451357997522795478
action: The pool can be imported using its name or numeric identifier.
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
This system is currently running ZFS version 2.
.It Sy Example 11 No Managing Hot Spares
The following command creates a new pool with an available hot spare:
# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
If one of the disks were to fail, the pool would be reduced to the degraded
state. The failed device can be replaced using the following command:
# zpool replace tank c0t0d0 c0t3d0
Once the data has been resilvered, the spare is automatically removed and is
made available should another device fail. The hot spare can be permanently
removed from the pool using the following command:
# zpool remove tank c0t2d0
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
# zpool add pool cache c2d0 c3d0
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill. Capacity and reads can be monitored using the
# zpool iostat -v pool 5
.It Sy Example 14 No Removing a Mirrored Log Device
The following command removes the mirrored log device
Given this configuration:
 scrub: none requested
        NAME        STATE     READ WRITE CKSUM
        mirror-0    ONLINE       0     0     0
        mirror-1    ONLINE       0     0     0
        mirror-2    ONLINE       0     0     0
The command to remove the mirrored log
# zpool remove tank mirror-2
.It Sy Example 15 No Displaying expanded space on a device
The following command displays the detailed information for the pool
This pool consists of a single raidz vdev where one of its devices
increased its capacity by 10GB. In this example, the pool will not be able to
utilize this extra capacity until all the devices under the raidz vdev have
# zpool list -v data
NAME       SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
data      23.9G  14.6G  9.30G   48%         -    61%  1.00x  ONLINE  -
  raidz1  23.9G  14.6G  9.30G   48%         -
.Sh INTERFACE STABILITY
.Xr zpool-features 5