# ZFS Test Suite README
### 1) Building and installing the ZFS Test Suite
The ZFS Test Suite runs under the test-runner framework. This framework
is built alongside the standard ZFS utilities and is included as part of
the zfs-test package. The zfs-test package can be built from source as
outlined below.
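A typical invocation from the top of a configured source tree might look
like this; treat it as a sketch, since the exact package targets
(pkg-utils, pkg-kmod) can vary by platform and ZFS version:

    $ ./configure
    $ make pkg-utils pkg-kmod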
The resulting packages can be installed using the rpm or dpkg command as
appropriate for your distribution. Alternatively, if you have installed
ZFS from a distribution repository (not from source), the zfs-test package
may already be provided for your distribution.
- Installed from source

      $ rpm -ivh ./zfs-test*.rpm, or
      $ dpkg -i ./zfs-test*.deb,

- Installed from package repository

      $ yum install zfs-test
      $ apt-get install zfs-test
### 2) Running the ZFS Test Suite
The prerequisites for running the ZFS Test Suite are:
* Specify the disks you wish to use in the $DISKS variable, as a
  space delimited list like this: DISKS='vdb vdc vdd'. By default
  the zfs-tests.sh script will construct three loopback devices to
  be used for testing: DISKS='loop0 loop1 loop2'.
* A non-root user with a full set of basic privileges and the ability
  to sudo(8) to root without a password to run the tests.
* Specify any pools you wish to preserve as a space delimited list in
  the $KEEP variable. All pools detected at the start of testing are
  added to the $KEEP variable (see the example after this list).
* The ZFS Test Suite will add users and groups to the test machine to
  verify functionality. Therefore it is strongly advised that a
  dedicated test machine, which can be a VM, be used for testing.
* On FreeBSD, mountd(8) must use `/etc/zfs/exports`
  as one of its export files; by default this can be done by setting
  `zfs_enable=yes` in `/etc/rc.conf`.
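For example, a minimal sketch of preparing the environment before a run;
the disk and pool names are placeholders for whatever exists on your test
machine:

    $ export DISKS='vdb vdc vdd'   # disks that may be destroyed by the tests
    $ export KEEP='rpool'          # pools that must be preserved
    $ /usr/share/zfs/zfs-tests.sh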
Once the prerequisites are satisfied, simply run the zfs-tests.sh script:

    $ /usr/share/zfs/zfs-tests.sh
Alternatively, the zfs-tests.sh script can be run from the source tree to allow
developers to rapidly validate their work. In this mode the ZFS utilities and
modules from the source tree will be used (rather than those installed on the
system). In order to avoid certain types of failures you will need to ensure
the ZFS udev rules are installed. This can be done manually or by ensuring
some version of ZFS is installed on the system.
    $ ./scripts/zfs-tests.sh
The following zfs-tests.sh options are supported:
  -v          Verbose zfs-tests.sh output. When specified, additional
              information describing the test environment will be logged
              prior to invoking test-runner. This includes the runfile
              being used, the DISKS targeted, pools to keep, etc.

  -q          Quiet test-runner output. When specified it is passed to
              test-runner(1), which causes output to be written to the
              console only for tests that do not pass and the results
              summary.
  -x          Remove all testpools, dm, lo, and files (unsafe). When
              specified the script will attempt to remove any leftover
              configuration from a previous test run. This includes
              destroying any pools named testpool, unused DM devices,
              and loopback devices backed by file-vdevs. This operation
              can be DANGEROUS because it is possible that the script
              will mistakenly remove a resource not related to the testing.

  -k          Disable cleanup after test failure. When specified the
              zfs-tests.sh script will not perform any additional cleanup
              when test-runner exits. This is useful when the results of
              a specific test need to be preserved for further analysis.
  -f          Use sparse files directly instead of loopback devices for
              the testing. When running in this mode certain tests which
              depend on real block devices will be skipped.

  -c          Only create and populate the constrained path

  -I NUM      Number of iterations

  -d DIR      Create sparse files for vdevs in the DIR directory. By
              default these files are created under /var/tmp/.
              This directory must be world-writable.

  -s SIZE     Use vdevs of SIZE (default: 4G)

  -r RUNFILES Run tests in RUNFILES (default: common.run,linux.run)

  -t PATH     Run a single test at PATH relative to the test suite

  -T TAGS     Comma separated list of tags (default: 'functional')

  -u USER     Run a single test as USER (default: root)
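These options can be combined. For example, a sketch of a verbose run that
uses 2G sparse file vdevs in a scratch directory (the directory name here is
a placeholder) and repeats the suite twice:

    $ /usr/share/zfs/zfs-tests.sh -v -s 2G -d /mnt/zfs-test-scratch -I 2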
The ZFS Test Suite allows the user to specify a subset of the tests via a
runfile or a list of tags.
The format of the runfile is explained in test-runner(1), and
the files that zfs-tests.sh uses are available for reference under
/usr/share/zfs/runfiles. To specify a custom runfile, use the -r option:

    $ /usr/share/zfs/zfs-tests.sh -r my_tests.run
Otherwise, the user can set the needed tags to run only specific tests (see
the example below).
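For example, a sketch assuming the tag names defined by the shipped runfiles
(each test group defines its own tags in addition to 'functional'):

    $ /usr/share/zfs/zfs-tests.sh -T zpool_create,zpool_import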
### 3) Test results

While the ZFS Test Suite is running, one informational line is printed at the
end of each test, and a results summary is printed at the end of the run. The
results summary includes the location of the complete logs, which is logged in
the form `/var/tmp/test_results/[ISO 8601 date]`. A normal test run launched
with the `zfs-tests.sh` wrapper script will look something like this:
    $ /usr/share/zfs/zfs-tests.sh -v -d /tmp/test

    --- Configuration ---
    Runfile:    /usr/share/zfs/runfiles/linux.run
    STF_TOOLS:  /usr/share/zfs/test-runner
    STF_SUITE:  /usr/share/zfs/zfs-tests
    STF_PATH:   /var/tmp/constrained_path.G0Sf
    FILES:      /tmp/test/file-vdev0 /tmp/test/file-vdev1 /tmp/test/file-vdev2
    LOOPBACKS:  /dev/loop0 /dev/loop1 /dev/loop2
    DISKS:      loop0 loop1 loop2

    /usr/share/zfs/test-runner/bin/test-runner.py -c /usr/share/zfs/runfiles/linux.run \
        -T functional -i /usr/share/zfs/zfs-tests -I 1
    Test: /usr/share/zfs/zfs-tests/tests/functional/arc/setup (run as root) [00:00] [PASS]
    ...more than 1100 additional tests...
    Test: /usr/share/zfs/zfs-tests/tests/functional/zvol/zvol_swap/cleanup (run as root) [00:00] [PASS]

    Running Time:    02:35:33
    Percent passed:  95.6%
    Log directory:   /var/tmp/test_results/20180515T054509
### 4) Example of adding and running a test case (zpool_example)
This broadly boils down to 5 steps:
1. Create/set password-less sudo for the user running the test case.
2. Edit configure.ac and Makefile.am appropriately.
3. Create/modify the .run files.
4. Create the actual test scripts.
5. Run the test case.

We will look at each of these steps in detail below.
* Set up password-less sudo for the 'Test' user, since the test scripts cannot
  be run as root.
* Edit the file **configure.ac** and include the following line under the
  AC_CONFIG_FILES section:

      tests/zfs-tests/tests/functional/cli_root/zpool_example/Makefile
* Edit the file **tests/runfiles/Makefile.am** and add the line
  *zpool_example.run*:

      pkgdatadir = $(datadir)/@PACKAGE@/runfiles
      dist_pkgdata_DATA = \
          perf-regression.run \
          zpool_example.run
* Create the file **tests/runfiles/zpool_example.run**. This defines the most
  common properties used when running with test-runner.py or zfs-tests.sh.

      outputdir = /var/tmp/test_results
      tags = ['functional']

      tests = ['zpool_example_001_pos']
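The fragment above omits the runfile's section headers. A complete minimal
runfile might look like the sketch below; the [DEFAULT] keys are assumptions
modeled on the runfiles shipped under tests/runfiles/ (see test-runner(1) for
the full list), and the pre/post hooks are left out because this example has
no setup/cleanup scripts:

      [DEFAULT]
      quiet = False
      user = root
      timeout = 600
      outputdir = /var/tmp/test_results
      tags = ['functional']

      [tests/functional/cli_root/zpool_example]
      tests = ['zpool_example_001_pos']
      tags = ['zpool_example']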
If adding a test case to an already existing suite, the runfile will already
be present and only needs to be updated. For example, to add
**zpool_example_002_pos** to the above runfile, only update the **"tests ="**
section of the runfile as shown below:

      outputdir = /var/tmp/test_results
      tags = ['functional']

      tests = ['zpool_example_001_pos', 'zpool_example_002_pos']
* Edit **tests/zfs-tests/tests/functional/cli_root/Makefile.am** and add the
  line below. Make sure to escape the end of the line, as other folder names
  follow it (see the sketch after this entry).

      zpool_example \
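  In context, the edit is one new entry in the directory list; a sketch,
  where the neighboring entries are illustrative and the real Makefile.am
  lists every cli_root test directory:

      SUBDIRS = \
          ...
          zpool_events \
          zpool_example \
          zpool_expand \
          ...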
* Create a new file
  **tests/zfs-tests/tests/functional/cli_root/zpool_example/Makefile.am**;
  its contents could be as below. What it says is that we now have a test
  case *zpool_example_001_pos.ksh*.

      pkgdatadir = $(datadir)/@PACKAGE@/zfs-tests/tests/functional/cli_root/zpool_example
      dist_pkgdata_SCRIPTS = \
          zpool_example_001_pos.ksh
* We can now create our test case zpool_example_001_pos.ksh under
  **tests/zfs-tests/tests/functional/cli_root/zpool_example/**:
      #!/bin/ksh -p

      . $STF_SUITE/include/libtest.shlib

      # 1. Demo a very basic test case

      DISKS_DEV1="/dev/loop0"
      DISKS_DEV2="/dev/loop1"
      TESTPOOL=EXAMPLE_POOL

      function cleanup
      {
          destroy_pool $TESTPOOL
          log_must rm -f $DISKS_DEV1
          log_must rm -f $DISKS_DEV2
      }

      log_assert "zpool_example"
      # Run function "cleanup" on exit
      log_onexit cleanup

      # Prep backend devices
      log_must dd if=/dev/zero of=$DISKS_DEV1 bs=512 count=140000
      log_must dd if=/dev/zero of=$DISKS_DEV2 bs=512 count=140000

      # Create the pool
      log_must zpool create $TESTPOOL $DISKS_DEV1 $DISKS_DEV2

      log_pass "zpool_example"
* Run the test case. This can be done in two ways, both described in detail
  in section 2 above:
  * test-runner.py (this takes a runfile as input; see *zpool_example.run*)
  * zfs-tests.sh, which can execute the runfile or individual tests
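  As a sketch, assuming the new runfile and test have been installed
  alongside the existing ones under /usr/share/zfs, either invocation below
  will run the example test:

      # Drive test-runner.py directly with the new runfile
      $ /usr/share/zfs/test-runner/bin/test-runner.py \
          -c /usr/share/zfs/runfiles/zpool_example.run -i /usr/share/zfs/zfs-tests

      # Or let the zfs-tests.sh wrapper resolve the runfile by name
      $ /usr/share/zfs/zfs-tests.sh -r zpool_example.run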