The extended testsuite only works with UID=0. It consists of the subdirectories
named "test/TEST-??-*", each of which contains a description of an OS image and
a test which consists of systemd units and scripts to execute in this image.
The same image is used for execution under `systemd-nspawn` and `qemu`.

To run the extended testsuite do the following:

$ ninja -C build  # Avoid building anything as root later
$ sudo test/run-integration-tests.sh
ninja: Entering directory `/home/zbyszek/src/systemd/build'
--x-- Running TEST-01-BASIC --x--
+ make -C TEST-01-BASIC clean setup run
make: Entering directory '/home/zbyszek/src/systemd/test/TEST-01-BASIC'
TEST-01-BASIC CLEANUP: Basic systemd setup
TEST-01-BASIC SETUP: Basic systemd setup
TEST-01-BASIC RUN: Basic systemd setup [OK]
make: Leaving directory '/home/zbyszek/src/systemd/test/TEST-01-BASIC'
--x-- Result of TEST-01-BASIC: 0 --x--
--x-- Running TEST-02-CRYPTSETUP --x--
+ make -C TEST-02-CRYPTSETUP clean setup run

If one of the tests fails, then $subdir/test.log contains the log file of
the test run.

To run just one of the cases:

$ sudo make -C test/TEST-01-BASIC clean setup run

Specifying the build directory
==============================

If the build directory is not detected automatically, it can be specified
with $BUILD_DIR=:

$ sudo BUILD_DIR=some-other-build/ test/run-integration-tests

or

$ sudo make -C test/TEST-01-BASIC BUILD_DIR=../../some-other-build/ ...

Note that in the second case, the path is relative to the test case directory.
An absolute path may also be used in both cases.

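An illustrative invocation with an absolute path (the path below is made up):

$ sudo make -C test/TEST-01-BASIC BUILD_DIR=/home/user/systemd/build run
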
Testing installed binaries instead of built
===========================================

To run the extended testsuite using the systemd installed on the system instead
of the systemd from a build, set NO_BUILD=1:

$ sudo NO_BUILD=1 test/run-integration-tests

Configuration variables
=======================

TEST_NO_QEMU=1
Don't run tests under qemu

TEST_QEMU_ONLY=1
Run only tests that require qemu

TEST_NO_NSPAWN=1
Don't run tests under systemd-nspawn

TEST_PREFER_NSPAWN=1
Run all tests that do not require qemu under systemd-nspawn

TEST_NO_KVM=1
Disable qemu KVM auto-detection (may be necessary when you're trying to run the
*vanilla* qemu and have both qemu and qemu-kvm installed)

TEST_NESTED_KVM=1
Allow tests to run with nested KVM. By default, the testsuite disables
nested KVM if the host machine already runs under KVM. Setting this
variable disables such checks.

QEMU_MEM=512M
Configure the amount of memory for qemu VMs (defaults to 512M)

QEMU_SMP=1
Configure the number of CPUs for qemu VMs (defaults to 1)

KERNEL_APPEND='...'
Append additional parameters to the kernel command line

NSPAWN_ARGUMENTS='...'
Specify additional arguments for systemd-nspawn

QEMU_TIMEOUT=infinity
Set a timeout for tests under qemu (defaults to 1800 sec)

NSPAWN_TIMEOUT=infinity
Set a timeout for tests under systemd-nspawn (defaults to 1800 sec)

INTERACTIVE_DEBUG=1
Configure the machine to be more *user-friendly* for interactive debugging
(e.g. by setting a usable default terminal, suppressing the shutdown after
the test, etc.)

TEST_MATCH_SUBTEST=subtest
If the test makes use of `run_subtests`, use this variable to provide
a POSIX extended regex to run only subtests matching the expression

TEST_MATCH_TESTCASE=testcase
Same as $TEST_MATCH_SUBTEST but for tests that make use of `run_testcases`

The kernel and initrd can be specified with $KERNEL_BIN and $INITRD. (Fedora's
or Debian's default kernel path and initrd are used by default.)

A script will try to find your qemu binary. If you want to specify a different
one, use $QEMU_BIN.

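Several of these variables can be combined in a single invocation; for example
(an illustrative combination, not a recommendation):

$ sudo TEST_NO_NSPAWN=1 QEMU_MEM=1024M QEMU_SMP=2 QEMU_TIMEOUT=600 make -C test/TEST-01-BASIC clean setup run
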
Debugging the qemu image
========================

If you want to log into the testsuite virtual machine, use INTERACTIVE_DEBUG=1:

$ sudo make -C test/TEST-01-BASIC INTERACTIVE_DEBUG=1 run

The root password is empty.

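When you are done debugging, powering the machine off from inside the VM is
usually the easiest way to end the interactive session (run inside the test
machine, not on the host):

# systemctl poweroff
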
Ubuntu CI
=========

New PRs submitted to the project are run through regression tests, and one set
of those is the 'autopkgtest' runs for several different architectures, called
'Ubuntu CI'. Part of that testing is to run all these tests. Sometimes these
tests are temporarily deny-listed from running in the 'autopkgtest' runs while
debugging a flaky test; that is done by creating a file in the test directory
named 'deny-list-ubuntu-ci'. For example, to prevent the TEST-01-BASIC test from
running in the 'autopkgtest' runs, create the file
'TEST-01-BASIC/deny-list-ubuntu-ci'.

The tests may also be disabled only for specific architectures, by creating a
deny-list file with the arch name at the end, e.g.
'TEST-01-BASIC/deny-list-ubuntu-ci-arm64' to disable the TEST-01-BASIC test
only on test runs for the 'arm64' architecture.

Note that the arch naming is not the one from 'uname -m'; Debian arch names are
used instead:
https://wiki.debian.org/ArchitectureSpecificsMemo

For PRs that fix a currently deny-listed test, the PR should include removal
of the deny-list file.

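As an illustration, using TEST-01-BASIC as the example test:

$ touch test/TEST-01-BASIC/deny-list-ubuntu-ci

and, once the test is fixed, the same PR that fixes it would drop the file again:

$ git rm test/TEST-01-BASIC/deny-list-ubuntu-ci
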
In case a test fails, the full set of artifacts, including the journal of the
failed run, can be downloaded from the artifacts.tar.gz archive which will be
reachable in the same URL parent directory as the logs.gz that gets linked on
the Github CI status.

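Once downloaded and unpacked, the journal from the failed run can be inspected
locally with journalctl; the file name below is illustrative, the actual layout
inside artifacts.tar.gz may differ:

$ tar xzf artifacts.tar.gz
$ journalctl --file=system.journal
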
To add new dependencies or new binaries to the packages used during the tests,
a merge request can be sent to: https://salsa.debian.org/systemd-team/systemd
targeting the 'upstream-ci' branch.

The cloud-side infrastructure that is hooked into the Github interface is based
on autopkgtest-cloud:
https://git.launchpad.net/autopkgtest-cloud/

In case of infrastructure issues with this CI, things might go wrong in several
places:

- starting a job: this is done via a Github webhook, so check whether the HTTP
  POST requests are failing on https://github.com/systemd/systemd/settings/hooks
- running a job: all currently running jobs are listed at
  https://autopkgtest.ubuntu.com/running#pkg-systemd-upstream in case the PR
  does not show the status for some reason
- reporting the job result: this is done on Canonical's cloud infrastructure;
  if jobs are started and running but no status is visible on the PR, then it is
  likely that reporting back is not working

For infrastructure help, reaching out to Canonical via the #ubuntu-devel channel
on libera.chat is an effective way to receive support in general.

Manually running a part of the Ubuntu CI test suite
===================================================

In some situations one may want/need to run one of the tests run by Ubuntu CI
locally for debugging purposes. For this, you need a machine (or a VM) with
the same Ubuntu release as is used by Ubuntu CI (Jammy ATTOW).

First of all, clone the Debian systemd repository and sync it with the code of
the PR (set by the $UPSTREAM_PULL_REQUEST env variable) you'd like to debug:

# git clone https://salsa.debian.org/systemd-team/systemd.git
# cd systemd
# git checkout upstream-ci
# TEST_UPSTREAM=1 UPSTREAM_PULL_REQUEST=12345 ./debian/extra/checkout-upstream

Now install the necessary build & test dependencies:

## PPA with some newer Ubuntu packages required by upstream systemd
# add-apt-repository -y --enable-source ppa:upstream-systemd-ci/systemd-ci
# apt build-dep -y systemd
# apt install -y autopkgtest debhelper genisoimage git qemu-system-x86 \
                 libcurl4-openssl-dev libfdisk-dev libtss2-dev libfido2-dev \
                 libssl-dev python3-pefile

Build systemd deb packages with debug info:

# TEST_UPSTREAM=1 DEB_BUILD_OPTIONS="nocheck nostrip noopt" dpkg-buildpackage -us -uc

Prepare a testbed image for autopkgtest (tweak the release as necessary):

# autopkgtest-buildvm-ubuntu-cloud --ram-size 1024 -v -a amd64 -r jammy

And finally run the autopkgtest itself:

# autopkgtest -o logs *.deb systemd/ \
              --env=TEST_UPSTREAM=1 \
              --test-name=boot-and-services \
              --shell-fail \
              -- autopkgtest-virt-qemu --cpus 4 --ram-size 2048 autopkgtest-jammy-amd64.img

where --test-name= is the name of the test you want to run/debug. The
--shell-fail option will pause the execution in case the test fails and show
you information on how to connect to the testbed for further debugging.

Manually running CodeQL analysis
================================

This is mostly useful for debugging various CodeQL quirks.

Download the CodeQL Bundle from https://github.com/github/codeql-action/releases
and unpack it somewhere. From now on, this 'tutorial' assumes you have the
`codeql` binary from the unpacked archive in $PATH for brevity.

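For example, if the bundle was unpacked into ~/codeql (an arbitrary location
chosen here for illustration):

$ export PATH="$HOME/codeql:$PATH"
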
Switch to the systemd repository if not already:

$ cd /path/to/systemd

Create an initial CodeQL database:

$ CCACHE_DISABLE=1 codeql database create codeqldb --language=cpp -vvv

Disabling ccache is important, otherwise you might see CodeQL complaining:

No source code was seen and extracted to /home/mrc0mmand/repos/@ci-incubator/systemd/codeqldb.
This can occur if the specified build commands failed to compile or process any code.
- Confirm that there is some source code for the specified language in the project.
- For codebases written in Go, JavaScript, TypeScript, and Python, do not specify
  an explicit --command.
- For other languages, the --command must specify a "clean" build which compiles
  all the source code files without reusing existing build artefacts.

If you want to run all queries systemd uses in CodeQL, run:

$ codeql database analyze codeqldb/ --format csv --output results.csv .github/codeql-custom.qls .github/codeql-queries/*.ql -vvv

Note: this will take a while.

If you're interested in a specific check, the easiest way (without hunting down
the specific CodeQL query file) is to create a custom query suite. For example:

$ cat >test.qls <<EOF
- queries: .
  from: codeql/cpp-queries
- include:
    id:
      - cpp/missing-return
EOF

And then execute it in the same way as above:

$ codeql database analyze codeqldb/ --format csv --output results.csv test.qls -vvv

More about query suites here: https://codeql.github.com/docs/codeql-cli/creating-codeql-query-suites/

The results are then located in the `results.csv` file as a comma-separated
values list (obviously), which is the most human-friendly output format the
CodeQL utility provides (so far).

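For a quick look in the terminal, something like the following works (just a
convenience, not part of the CodeQL tooling):

$ column -t -s, results.csv | less -S
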
Code coverage
=============

We have a daily cron job in CentOS CI which runs all unit and integration tests,
collects coverage using gcov/lcov, and uploads the report to Coveralls[0]. In
order to collect the most accurate coverage information, some measures have
to be taken regarding sandboxing, namely:

- ProtectSystem= and ProtectHome= need to be turned off
- the $BUILD_DIR with the necessary .gcno files needs to be present in the image
  and needs to be writable by all processes

The first point is relatively easy to handle and is handled automagically by
our test "framework" by creating the necessary dropins.

Making the $BUILD_DIR accessible to _everything_ is slightly more complicated.
First and foremost, the $BUILD_DIR has a POSIX ACL that makes it writable
to everyone. However, this is not enough in some cases, like for services
that use DynamicUser=yes, since that implies ProtectSystem=strict which can't
be turned off. A solution to this is to use ReadWritePaths=$BUILD_DIR, which
works for the majority of cases, but can't be turned on globally, since
ReadWritePaths= creates its own mount namespace which might break some
services. Hence, ReadWritePaths=$BUILD_DIR is enabled for all services
with the `test-` prefix (i.e. test-foo.service or test-foo-bar.service), both
in the system and the user managers.

So, if you're considering writing an integration test that makes use
of DynamicUser=yes, or other sandboxing stuff that implies it, please prefix
the test unit (be it a static one or a transient one created via systemd-run)
with `test-`, unless the test unit needs to be able to install mount points
in the main mount namespace - in that case use IGNORE_MISSING_COVERAGE=yes
in the test definition (i.e. TEST-*-NAME/test.sh), which will skip the post-test
check for missing coverage for the respective test.

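A minimal sketch of such a transient unit started via systemd-run (the unit
name and command are made up for illustration); thanks to the `test-` prefix,
the coverage drop-in with ReadWritePaths=$BUILD_DIR applies to it:

$ sudo systemd-run --wait --unit=test-dynamicuser-example -p DynamicUser=yes /bin/true
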
[0] https://coveralls.io/github/systemd/systemd