4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or https://opensource.org/licenses/CDDL-1.0.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
21 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
22 .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
23 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
24 .\" Copyright (c) 2017 Datto Inc.
25 .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
26 .\" Copyright 2017 Nexenta Systems, Inc.
27 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
35 .Nd configure ZFS storage pools
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Pp
For an overview of creating and managing ZFS storage pools, see the
.Xr zpoolconcepts 7
manual page.
.Sh SUBCOMMANDS
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
Displays a help message.
Displays the software version of the
.Nm
userland utility and the ZFS kernel module.
Use the
.Fl j
option to output in JSON format.
.It Xr zpool-create 8
Creates a new storage pool containing the virtual devices specified on the
command line.
.It Xr zpool-initialize 8
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
.It Xr zpool-destroy 8
Destroys the given pool, freeing up any devices for other use.
.It Xr zpool-labelclear 8
Removes ZFS label information from the specified
.Ar device .
.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8
Converts a non-redundant disk into a mirror, or increases
the redundancy level of an existing mirror
.Cm ( attach Ns ), or performs the inverse operation (
.Cm detach Ns ).
.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8
Adds the specified virtual devices to the given pool,
or removes the specified device from the pool.
.It Xr zpool-replace 8
Replaces an existing device (which may be faulted) with a new one.
.It Xr zpool-split 8
Creates a new pool by splitting all mirrors in an existing pool (which decreases
its redundancy).
Available pool properties are listed in the
.Xr zpoolprops 7
manual page.
.It Xr zpool-list 8
Lists the given pools along with a health status and space usage.
.Xr zpool-get 8 Ns / Ns Xr zpool-set 8
Retrieves the given list of properties
for the specified storage pool(s).
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
Displays logical I/O statistics for the given pools/vdevs.
Physical I/O operations may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
daemon and used to automate administrative tasks such as replacing a failed
device with a hot spare.
That manual page also describes the subclasses and event payloads
that can be generated.
.It Xr zpool-history 8
Displays the command history of the specified pool(s) or all pools if no pool
is specified.
.It Xr zpool-prefetch 8
Prefetches specific types of pool data.
.It Xr zpool-scrub 8
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of the pool,
which can later be restored by
.Nm zpool Cm import Fl -rewind-to-checkpoint .
.It Xr zpool-trim 8
Initiates an immediate on-demand TRIM operation for all of the free space in a
pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
.It Xr zpool-sync 8
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xr zpool-upgrade 8
Manages the on-disk format version of storage pools.
.It Xr zpool-wait 8
Waits until all background activity of the given types has ceased in the given
pool.
.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
Starts a resilver of the specified pools.
If an existing resilver is already running, it will be restarted from the
beginning.
.It Xr zpool-reopen 8
Reopens all the vdevs associated with the pool.
.It Xr zpool-clear 8
Clears device errors in a pool.
.It Xr zpool-import 8
Makes disks containing ZFS storage pools available for use on the system.
.It Xr zpool-export 8
Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
The following exit values are returned:
.Bl -tag -compact -offset 4n -width "a"
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
244 .\" Examples 1, 2, 3, 4, 12, 13 are shared with zpool-create.8.
245 .\" Examples 6, 14 are shared with zpool-add.8.
246 .\" Examples 7, 16 are shared with zpool-list.8.
247 .\" Examples 8 are shared with zpool-destroy.8.
248 .\" Examples 9 are shared with zpool-export.8.
249 .\" Examples 10 are shared with zpool-import.8.
250 .\" Examples 11 are shared with zpool-upgrade.8.
251 .\" Examples 15 are shared with zpool-remove.8.
252 .\" Examples 17 are shared with zpool-status.8.
253 .\" Examples 14, 17 are also shared with zpool-iostat.8.
254 .\" Make sure to update them omnidirectionally
.Ss Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
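.Pp
A similar pool with double parity could be created by using the
.Sy raidz2
vdev type instead; the disk names here are illustrative:
.Dl # Nm zpool Cm create Ar tank Sy raidz2 Pa sda sdb sdc sdd sde sdf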
.Ss Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
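.Pp
Listing three disks after each
.Sy mirror
keyword would instead create three-way mirrors, for example
.Pq disk names illustrative :
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb sdc Sy mirror Pa sdd sde sdf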
.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates a non-redundant pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates a non-redundant pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
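.Pp
The backing files must already exist; they could be pre-allocated with, for
example
.Pq size and paths illustrative :
.Dl # Nm truncate Fl s Ar 1G Pa /path/to/file/a /path/to/file/b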
.Ss Example 5 : No Making a non-mirrored ZFS Storage Pool mirrored
The following command converts an existing single device
.Pa sda
into a mirror by attaching a second device to it,
.Pa sdb :
.Dl # Nm zpool Cm attach Ar tank Pa sda sdb
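.Pp
The operation can later be reversed by detaching either half of the mirror,
for example:
.Dl # Nm zpool Cm detach Ar tank Pa sdb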
.Ss Example 6 : No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Ar tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
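.Pp
The resulting layout can be previewed without modifying the pool by passing
the
.Fl n
.Pq dry-run
option:
.Dl # Nm zpool Cm add Fl n Ar tank Sy mirror Pa sda sdb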
.Ss Example 7 : No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Ar zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH   ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE   -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE   -
zion       -      -      -         -      -      -      -  FAULTED  -
.Ed
.Ss Example 8 : No Destroying a ZFS Storage Pool
The following command destroys the pool
.Ar tank
and any datasets contained within:
.Dl # Nm zpool Cm destroy Fl f Ar tank
.Ss Example 9 : No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Ar tank
so that they can be relocated or later imported:
.Dl # Nm zpool Cm export Ar tank
.Ss Example 10 : No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Ar tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm import
    id: 15451357997522795478
action: The pool can be imported using its name or numeric identifier.
.No # Nm zpool Cm import Ar tank
.Ed
.Ss Example 11 : No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm upgrade Fl a
This system is currently running ZFS version 2.
.Ed
.Ss Example 12 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Dl # Nm zpool Cm replace Ar tank Pa sda sdd
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
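.Pp
The progress of the resilver can be observed with:
.Dl # Nm zpool Cm status Ar tank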
The hot spare can be permanently removed from the pool using the following
command:
.Dl # Nm zpool Cm remove Ar tank Pa sdc
.Ss Example 13 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
.Ss Example 14 : No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Dl # Nm zpool Cm iostat Fl v Ar pool 5
.Ss Example 15 : No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
Given this configuration:
.Bd -literal -compact -offset Ds
 scrub: none requested
NAME          STATE     READ WRITE CKSUM
  mirror-0    ONLINE       0     0     0
  mirror-1    ONLINE       0     0     0
  mirror-2    ONLINE       0     0     0
.Ed
The command to remove the mirrored log
.Sy mirror-2
is:
.Dl # Nm zpool Cm remove Ar tank mirror-2
The command to remove the mirrored data
.Sy mirror-1
is:
.Dl # Nm zpool Cm remove Ar tank mirror-1
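.Pp
Device removal proceeds in the background; to block until the
.Sy remove
activity has finished, one might run:
.Dl # Nm zpool Cm wait Fl t Cm remove Ar tank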
.Ss Example 16 : No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Ar data .
This pool is composed of a single raidz vdev where one of its devices
increased its capacity by 10 GiB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list Fl v Ar data
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data      23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1  23.9G  14.6G  9.30G         -    48%
.Ed
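.Pp
Once every device in the raidz vdev has been expanded, the additional space
could be claimed, for example by onlining a device with the expand flag
.Pq device name illustrative :
.Dl # Nm zpool Cm online Fl e Ar data Pa sda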
.Ss Example 17 : No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
NAME        STATE   READ WRITE CKSUM  vendor   model         size
mirror-0    ONLINE     0     0     0
  U1        ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
  U10       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
  U11       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
  U12       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
  U13       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
  U14       ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
.No # Nm zpool Cm iostat Fl vc Pa size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Ed
.Sh ENVIRONMENT VARIABLES
.Bl -tag -compact -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
.It Sy ZFS_ABORT
Cause
.Nm zpool
to dump core on exit for the purposes of running
.Sy ::findleaks .
.It Sy ZPOOL_AUTO_POWER_ON_SLOT
Automatically attempt to turn on a drive's enclosure slot power when
running the
.Nm zpool Cm online
or
.Nm zpool Cm clear
commands.
This has the same effect as passing the
.Fl -power
option to those commands.
.It Sy ZPOOL_POWER_ON_SLOT_TIMEOUT_MS
The maximum time in milliseconds to wait for a slot power sysfs value
to return the correct value after writing it.
For example, after writing "on" to the sysfs enclosure slot power_control file,
it can take some time for the enclosure to power up the slot and return
"on" when you read back the power_control value.
Defaults to 30 seconds (30000 ms) if not set.
.It Sy ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm
looks for device nodes and files.
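For example, to restrict the import scan to stable by-id device paths
.Pq path illustrative :
.Dl # ZPOOL_IMPORT_PATH=/dev/disk/by-id zpool import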
.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool Cm import
will wait for an expected device to be available.
.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress the warning about non-native vdev ashift in
.Nm zpool Cm status .
The value is not used; only the presence or absence of the variable matters.
.It Sy ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.It Sy ZPOOL_VDEV_NAME_PATH
Cause
.Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.It Sy ZFS_VDEV_DEVID_OPT_OUT
Older OpenZFS implementations had issues when attempting to display pool
config vdev names if a
.Sy devid
NVP value is present in the pool's config.
For example, a pool that originated on the illumos platform would have a
.Sy devid
value in the config and
.Nm zpool Cm status
would fail when listing the config.
This would also be true for future Linux-based pools.
A pool can be stripped of any
.Sy devid
values on import or prevented from adding
them on
.Nm zpool Cm upgrade
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.It Sy ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
Normally, only unprivileged users are allowed to run
.Fl c .
.It Sy ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
This is a colon-separated list of directories and overrides the default
search path.
.It Sy ZPOOL_SCRIPTS_ENABLED
Allow the user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
564 .\" Shared with zfs.8
565 .It Sy ZFS_MODULE_TIMEOUT
566 Time, in seconds, to wait for
572 .Sy 600 Pq 10 minutes .
.Sh INTERFACE STABILITY
.Sy Evolving
.Sh SEE ALSO
.Xr zpool-features 7 ,
.Xr zpoolconcepts 7 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-ddtprune 8 ,
.Xr zpool-destroy 8 ,
.Xr zpool-history 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-prefetch 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-upgrade 8 ,
.Xr zpool-wait 8