<!-- doc/src/sgml/regress.sgml -->

<title>Regression Tests</title>

<indexterm zone="regress">
 <primary>regression tests</primary>
</indexterm>

<indexterm zone="regress">
 <primary>test</primary>
</indexterm>
 The regression tests are a comprehensive set of tests for the SQL
 implementation in <productname>PostgreSQL</productname>.  They test
 standard SQL operations as well as the extended capabilities of
 <productname>PostgreSQL</productname>.
<sect1 id="regress-run">
 <title>Running the Tests</title>

 The regression tests can be run against an already installed and
 running server, or using a temporary installation within the build
 tree.  Furthermore, there is a <quote>parallel</quote> and a
 <quote>sequential</quote> mode for running the tests.  The
 sequential method runs each test script alone, while the
 parallel method starts up multiple server processes to run groups
 of tests in parallel.  Parallel testing adds confidence that
 interprocess communication and locking are working correctly.
 Some tests may run sequentially even in <quote>parallel</quote>
 mode in case this is required by the test.
<sect2 id="regress-run-temp-inst">
 <title>Running the Tests Against a Temporary Installation</title>

 To run the parallel regression tests after building but before
 installation, type:
<screen>
make check
</screen>
 in the top-level directory.  (Or you can change to
 <filename>src/test/regress</filename> and run the command there.)
 Tests which are run in parallel are prefixed with <quote>+</quote>, and
 tests which run sequentially are prefixed with <quote>-</quote>.
 At the end you should see something like:
<screen>
# All 213 tests passed.
</screen>
 or otherwise a note about which tests failed.  See
 <xref linkend="regress-evaluation"/> below before assuming that a
 <quote>failure</quote> represents a serious problem.
 Because this test method runs a temporary server, it will not work
 if you did the build as the root user, since the server will not start
 as root.  The recommended procedure is not to do the build as root, or
 else to perform testing after completing the installation.
 If you have configured <productname>PostgreSQL</productname> to install
 into a location where an older <productname>PostgreSQL</productname>
 installation already exists, and you perform <literal>make check</literal>
 before installing the new version, you might find that the tests fail
 because the new programs try to use the already-installed shared
 libraries.  (Typical symptoms are complaints about undefined symbols.)
 If you wish to run the tests before overwriting the old installation,
 you'll need to build with <literal>configure --disable-rpath</literal>.
 It is not recommended that you use this option for the final
 installation, however.
 The parallel regression test starts quite a few processes under your
 user ID.  Presently, the maximum concurrency is twenty parallel test
 scripts, which means forty processes: there's a server process and a
 <application>psql</application> process for each test script.
 So if your system enforces a per-user limit on the number of processes,
 make sure this limit is at least fifty or so, else you might get
 random-seeming failures in the parallel test.  If you are not in
 a position to raise the limit, you can cut down the degree of parallelism
 by setting the <literal>MAX_CONNECTIONS</literal> parameter.  For example:
<screen>
make MAX_CONNECTIONS=10 check
</screen>
 runs no more than ten tests concurrently.
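
 As a rough illustration (the <command>ulimit</command> syntax shown assumes
 a Bourne-style shell, and the limit value is arbitrary), you can inspect
 and, if your system permits, raise the per-user process limit before
 running the tests:
<screen>
ulimit -u          # show the current per-user process limit
ulimit -u 4096     # raise the limit for this shell session, if allowed
make check
</screen>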
<sect2 id="regress-run-existing-inst">
 <title>Running the Tests Against an Existing Installation</title>

 To run the tests after installation (see <xref linkend="installation"/>),
 initialize a data directory and start the
 server as explained in <xref linkend="runtime"/>, then type:
<screen>
make installcheck
</screen>
 or for a parallel test:
<screen>
make installcheck-parallel
</screen>
 The tests will expect to contact the server at the local host and the
 default port number, unless directed otherwise by the
 <envar>PGHOST</envar> and <envar>PGPORT</envar> environment variables.
 The tests will be run in a database named <literal>regression</literal>;
 any existing database by this name is dropped.
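
 For instance, to aim the tests at a server listening on a non-default
 port (the port number here is only an example), the variable can be set
 in the environment of the <command>make</command> invocation:
<screen>
PGPORT=5433 make installcheck
</screen>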
 The tests will also transiently create some cluster-wide objects, such as
 roles, tablespaces, and subscriptions.  These objects will have names
 beginning with <literal>regress_</literal>.  Beware of
 using <literal>installcheck</literal> mode with an installation that has
 any actual global objects named that way.
<sect2 id="regress-additional">
 <title>Additional Test Suites</title>

 The <literal>make check</literal> and <literal>make installcheck</literal>
 commands run only the <quote>core</quote> regression tests, which test
 built-in functionality of the <productname>PostgreSQL</productname> server.
 The source distribution contains many additional test suites, most of them
 having to do with add-on functionality such as optional procedural
 languages.
 To run all test suites applicable to the modules that have been selected
 to be built, including the core tests, type one of these commands at the
 top of the build tree:
<screen>
make check-world
make installcheck-world
</screen>
 These commands run the tests using temporary servers or an
 already-installed server, respectively, just as previously explained
 for <literal>make check</literal> and <literal>make installcheck</literal>.
 Other considerations are the same as previously explained for each method.
 Note that <literal>make check-world</literal> builds a separate instance
 (temporary data directory) for each tested module, so it requires more
 time and disk space than <literal>make installcheck-world</literal>.
 On a modern machine with multiple CPU cores and no tight operating-system
 limits, you can make things go substantially faster with parallelism.
 The recipe that most PostgreSQL developers actually use for running all
 tests is something like
<screen>
make check-world -j8 >/dev/null
</screen>
 with a <option>-j</option> limit near to or a bit more than the number
 of available cores.  Discarding <systemitem>stdout</systemitem>
 eliminates chatter that's not interesting when you just want to verify
 success.  (In case of failure, the <systemitem>stderr</systemitem>
 messages are usually enough to determine where to look closer.)
 Alternatively, you can run individual test suites by typing
 <literal>make check</literal> or <literal>make installcheck</literal> in
 the appropriate subdirectory of the build tree.  Keep in mind that
 <literal>make installcheck</literal> assumes you've installed the relevant
 module(s), not only the core server.
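
 For example, to run just the isolation test suite from the top of the
 build tree (any other test subdirectory works the same way),
 <command>make</command>'s <option>-C</option> option can be used:
<screen>
make -C src/test/isolation check
</screen>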
 The additional tests that can be invoked this way include:

 <itemizedlist>
  <listitem>
   <para>
    Regression tests for optional procedural languages.
    These are located under <filename>src/pl</filename>.
   </para>
  </listitem>

  <listitem>
   <para>
    Regression tests for <filename>contrib</filename> modules,
    located under <filename>contrib</filename>.
    Not all <filename>contrib</filename> modules have tests.
   </para>
  </listitem>

  <listitem>
   <para>
    Regression tests for the interface libraries,
    located in <filename>src/interfaces/libpq/test</filename> and
    <filename>src/interfaces/ecpg/test</filename>.
   </para>
  </listitem>

  <listitem>
   <para>
    Tests for core-supported authentication methods,
    located in <filename>src/test/authentication</filename>.
    (See below for additional authentication-related tests.)
   </para>
  </listitem>

  <listitem>
   <para>
    Tests stressing behavior of concurrent sessions,
    located in <filename>src/test/isolation</filename>.
   </para>
  </listitem>

  <listitem>
   <para>
    Tests for crash recovery and physical replication,
    located in <filename>src/test/recovery</filename>.
   </para>
  </listitem>

  <listitem>
   <para>
    Tests for logical replication,
    located in <filename>src/test/subscription</filename>.
   </para>
  </listitem>

  <listitem>
   <para>
    Tests of client programs, located under <filename>src/bin</filename>.
   </para>
  </listitem>
 </itemizedlist>
 When using <literal>installcheck</literal> mode, these tests will create
 and destroy test databases whose names include
 <literal>regression</literal>, for example
 <literal>pl_regression</literal> or <literal>contrib_regression</literal>.
 Beware of using <literal>installcheck</literal> mode with an installation
 that has any non-test databases named that way.
 Some of these auxiliary test suites use the TAP infrastructure explained
 in <xref linkend="regress-tap"/>.
 The TAP-based tests are run only when PostgreSQL was configured with the
 option <option>--enable-tap-tests</option>.  This is recommended for
 development, but can be omitted if there is no suitable Perl installation.
 Some test suites are not run by default, either because they are not secure
 to run on a multiuser system, because they require special software, or
 because they are resource intensive.  You can decide which test suites to
 run additionally by setting the <command>make</command> or environment
 variable <varname>PG_TEST_EXTRA</varname> to a whitespace-separated list,
 for example:
<screen>
make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
</screen>
 The following values are currently supported:

 <variablelist>
  <varlistentry>
   <term><literal>kerberos</literal></term>
   <listitem>
    <para>
     Runs the test suite under <filename>src/test/kerberos</filename>.  This
     requires an MIT Kerberos installation and opens TCP/IP listen sockets.
    </para>
   </listitem>
  </varlistentry>

  <varlistentry>
   <term><literal>ldap</literal></term>
   <listitem>
    <para>
     Runs the test suite under <filename>src/test/ldap</filename>.  This
     requires an <productname>OpenLDAP</productname> installation and opens
     TCP/IP listen sockets.
    </para>
   </listitem>
  </varlistentry>

  <varlistentry>
   <term><literal>ssl</literal></term>
   <listitem>
    <para>
     Runs the test suite under <filename>src/test/ssl</filename>.  This
     opens TCP/IP listen sockets.
    </para>
   </listitem>
  </varlistentry>

  <varlistentry>
   <term><literal>load_balance</literal></term>
   <listitem>
    <para>
     Runs the test
     <filename>src/interfaces/libpq/t/004_load_balance_dns.pl</filename>.
     This requires editing the system <filename>hosts</filename> file and
     opens TCP/IP listen sockets.
    </para>
   </listitem>
  </varlistentry>

  <varlistentry>
   <term><literal>libpq_encryption</literal></term>
   <listitem>
    <para>
     Runs the test
     <filename>src/interfaces/libpq/t/005_negotiate_encryption.pl</filename>.
     This opens TCP/IP listen sockets.  If <varname>PG_TEST_EXTRA</varname>
     also includes <literal>kerberos</literal>, additional tests that require
     an MIT Kerberos installation are enabled.
    </para>
   </listitem>
  </varlistentry>

  <varlistentry>
   <term><literal>wal_consistency_checking</literal></term>
   <listitem>
    <para>
     Uses <literal>wal_consistency_checking=all</literal> while running
     certain tests under <filename>src/test/recovery</filename>.  Not
     enabled by default because it is resource intensive.
    </para>
   </listitem>
  </varlistentry>

  <varlistentry>
   <term><literal>xid_wraparound</literal></term>
   <listitem>
    <para>
     Runs the test suite under
     <filename>src/test/modules/xid_wraparound</filename>.
     Not enabled by default because it is resource intensive.
    </para>
   </listitem>
  </varlistentry>
 </variablelist>

 Tests for features that are not supported by the current build
 configuration are not run even if they are mentioned in
 <varname>PG_TEST_EXTRA</varname>.
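
 Since <varname>PG_TEST_EXTRA</varname> is also honored as an environment
 variable, it can be exported once and combined with a per-directory run;
 this sketch assumes a Bourne-style shell and uses the
 <literal>ssl</literal> suite purely as an illustration:
<screen>
export PG_TEST_EXTRA='ssl'
make -C src/test/ssl check
</screen>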
 In addition, there are tests in <filename>src/test/modules</filename>
 which will be run by <literal>make check-world</literal> but not
 by <literal>make installcheck-world</literal>.  This is because they
 install non-production extensions or have other side-effects that are
 considered undesirable for a production installation.  You can
 use <literal>make install</literal> and <literal>make installcheck</literal>
 in one of those subdirectories if you wish,
 but it's not recommended to do so with a non-test server.
<sect2 id="regress-run-locale">
 <title>Locale and Encoding</title>

 By default, tests using a temporary installation use the
 locale defined in the current environment and the corresponding
 database encoding as determined by <command>initdb</command>.  It
 can be useful to test different locales by setting the appropriate
 environment variables, for example:
<screen>
make check LC_COLLATE=en_US.utf8 LC_CTYPE=fr_CA.utf8
</screen>
 For implementation reasons, setting <envar>LC_ALL</envar> does not
 work for this purpose; all the other locale-related environment
 variables should work.
 When testing against an existing installation, the locale is
 determined by the existing database cluster and cannot be set
 separately for the test run.
 You can also choose the database encoding explicitly by setting
 the variable <envar>ENCODING</envar>, for example:
<screen>
make check LANG=C ENCODING=EUC_JP
</screen>
 Setting the database encoding this way typically only makes sense
 if the locale is C; otherwise the encoding is chosen automatically
 from the locale, and specifying an encoding that does not match
 the locale will result in an error.

 The database encoding can be set for tests against either a temporary or
 an existing installation, though in the latter case it must be
 compatible with the installation's locale.
<sect2 id="regress-run-custom-settings">
 <title>Custom Server Settings</title>

 There are several ways to use custom server settings when running a test
 suite.  This can be useful to enable additional logging, adjust resource
 limits, or enable extra run-time checks such as
 <xref linkend="guc-debug-discard-caches"/>.  But note that not all tests
 can be expected to pass cleanly with arbitrary settings.
 Extra options can be passed to the various <command>initdb</command>
 commands that are run internally during test setup using the environment
 variable <envar>PG_TEST_INITDB_EXTRA_OPTS</envar>.  For example, to run a
 test with checksums enabled and a custom WAL segment size and
 <varname>work_mem</varname> setting, use:
<screen>
make check PG_TEST_INITDB_EXTRA_OPTS='-k --wal-segsize=4 -c work_mem=50MB'
</screen>
 For the core regression test suite and other tests driven by
 <command>pg_regress</command>, custom run-time server settings can also be
 set in the <varname>PGOPTIONS</varname> environment variable (for settings
 that allow this), for example:
<screen>
make check PGOPTIONS="-c debug_parallel_query=regress -c work_mem=50MB"
</screen>
 (This makes use of functionality provided by libpq; see
 <xref linkend="libpq-connect-options"/> for details.)
 When running against a temporary installation, custom settings can also be
 set by supplying a pre-written <filename>postgresql.conf</filename>:
<screen>
echo 'log_checkpoints = on' > test_postgresql.conf
echo 'work_mem = 50MB' >> test_postgresql.conf
make check EXTRA_REGRESS_OPTS="--temp-config=test_postgresql.conf"
</screen>
<sect2 id="regress-run-extra-tests">
 <title>Extra Tests</title>

 The core regression test suite contains a few test files that are not
 run by default, because they might be platform-dependent or take a
 very long time to run.  You can run these or other extra test
 files by setting the variable <envar>EXTRA_TESTS</envar>.  For
 example, to run the <literal>numeric_big</literal> test:
<screen>
make check EXTRA_TESTS=numeric_big
</screen>
<sect1 id="regress-evaluation">
 <title>Test Evaluation</title>

 Some properly installed and fully functional
 <productname>PostgreSQL</productname> installations can
 <quote>fail</quote> some of these regression tests due to
 platform-specific artifacts such as varying floating-point representation
 and message wording.  The tests are currently evaluated using a simple
 <command>diff</command> comparison against the outputs
 generated on a reference system, so the results are sensitive to
 small system differences.  When a test is reported as
 <quote>failed</quote>, always examine the differences between
 expected and actual results; you might find that the
 differences are not significant.  Nonetheless, we still strive to
 maintain accurate reference files across all supported platforms,
 so all tests can normally be expected to pass.
 The actual outputs of the regression tests are in files in the
 <filename>src/test/regress/results</filename> directory.  The test
 script uses <command>diff</command> to compare each output
 file against the reference outputs stored in the
 <filename>src/test/regress/expected</filename> directory.  Any
 differences are saved for your inspection in
 <filename>src/test/regress/regression.diffs</filename>.
 (When running a test suite other than the core tests, these files
 of course appear in the relevant subdirectory,
 not <filename>src/test/regress</filename>.)
 If you don't like the <command>diff</command> options that are used by
 default, set the environment variable
 <envar>PG_REGRESS_DIFF_OPTS</envar>, for
 instance <literal>PG_REGRESS_DIFF_OPTS='-c'</literal>.  (Or you
 can run <command>diff</command> yourself, if you prefer.)
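
 For instance, to get unified-format diffs for a whole run (the
 <literal>-u</literal> option is just one possibility), the variable can be
 set in the environment of the <command>make</command> invocation:
<screen>
PG_REGRESS_DIFF_OPTS='-u' make check
</screen>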
 If for some reason a particular platform generates a <quote>failure</quote>
 for a given test, but inspection of the output convinces you that
 the result is valid, you can add a new comparison file to silence
 the failure report in future test runs.  See
 <xref linkend="regress-variant"/> for details.
<sect2 id="regress-evaluation-message-differences">
 <title>Error Message Differences</title>

 Some of the regression tests involve intentional invalid input
 values.  Error messages can come from either the
 <productname>PostgreSQL</productname> code or from the host
 platform system routines.  In the latter case, the messages can
 vary between platforms, but should reflect similar
 information.  These differences in messages will result in a
 <quote>failed</quote> regression test that can be validated by
 inspection.
<sect2 id="regress-evaluation-locale-differences">
 <title>Locale Differences</title>

 If you run the tests against a server that was
 initialized with a collation-order locale other than C, then
 there might be differences due to sort order and subsequent
 failures.  The regression test suite is set up to handle this
 problem by providing alternate result files that together are
 known to handle a large number of locales.
 To run the tests in a different locale when using the
 temporary-installation method, pass the appropriate
 locale-related environment variables on
 the <command>make</command> command line, for example:
<screen>
make check LANG=de_DE.utf8
</screen>
 (The regression test driver unsets <envar>LC_ALL</envar>, so it
 does not work to choose the locale using that variable.)  To use
 no locale, either unset all locale-related environment variables
 (or set them to <literal>C</literal>) or use the following
 special invocation:
<screen>
make check NO_LOCALE=1
</screen>
 When running the tests against an existing installation, the
 locale setup is determined by the existing installation.  To
 change it, initialize the database cluster with a different
 locale by passing the appropriate options
 to <command>initdb</command>.
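
 As a sketch (the locale name and data directory path are placeholders
 only), such a cluster could be set up and tested like this:
<screen>
initdb --locale=sv_SE.UTF-8 -D /path/to/test_data
pg_ctl -D /path/to/test_data -l logfile start
make installcheck
</screen>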
 In general, it is advisable to try to run the
 regression tests in the locale setup that is wanted for
 production use, as this will exercise the locale- and
 encoding-related code portions that will actually be used in
 production.  Depending on the operating system environment, you
 might get failures, but then you will at least know what
 locale-specific behaviors to expect when running real
 applications.
<sect2 id="regress-evaluation-date-time-differences">
 <title>Date and Time Differences</title>

 Most of the date and time results are dependent on the time zone
 environment.  The reference files are generated for time zone
 <literal>PST8PDT</literal> (Berkeley, California), and there will be
 apparent failures if the tests are not run with that time zone setting.
 The regression test driver sets environment variable
 <envar>PGTZ</envar> to <literal>PST8PDT</literal>, which normally
 ensures proper results.
<sect2 id="regress-evaluation-float-differences">
 <title>Floating-Point Differences</title>

 Some of the tests involve computing 64-bit floating-point numbers
 (<type>double precision</type>) from table columns.  Differences in
 results involving mathematical functions of <type>double
 precision</type> columns have been observed.  The <literal>float8</literal>
 and <literal>geometry</literal> tests are particularly prone to small
 differences across platforms, or even with different compiler optimization
 settings.  Human eyeball comparison is needed to determine the real
 significance of these differences, which are usually 10 places to
 the right of the decimal point.
 Some systems display minus zero as <literal>-0</literal>, while others
 just show <literal>0</literal>.

 Some systems signal errors from <function>pow()</function> and
 <function>exp()</function> differently from the mechanism
 expected by the current <productname>PostgreSQL</productname> code.
<sect2 id="regress-evaluation-ordering-differences">
 <title>Row Ordering Differences</title>

 You might see differences in which the same rows are output in a
 different order than what appears in the expected file.  In most cases
 this is not, strictly speaking, a bug.  Most of the regression test
 scripts are not so pedantic as to use an <literal>ORDER BY</literal> for
 every single <literal>SELECT</literal>, and so their result row orderings
 are not well-defined according to the SQL specification.  In practice,
 since we are looking at the same queries being executed on the same data
 by the same software, we usually get the same result ordering on all
 platforms, so the lack of <literal>ORDER BY</literal> is not a problem.
 Some queries do exhibit cross-platform ordering differences, however.
 When testing against an already-installed server, ordering differences
 can also be caused by non-C locale settings or non-default parameter
 settings, such as custom values of <varname>work_mem</varname> or the
 planner cost parameters.
 Therefore, if you see an ordering difference, it's not something to
 worry about, unless the query does have an <literal>ORDER BY</literal>
 that your result is violating.  However, please report it anyway, so that
 we can add an <literal>ORDER BY</literal> to that particular query to
 eliminate the bogus <quote>failure</quote> in future releases.
 You might wonder why we don't order all the regression test queries
 explicitly to get rid of this issue once and for all.  The reason is that
 doing so would make the regression tests less useful, not more, since
 they'd tend to exercise query plan types that produce ordered results to
 the exclusion of those that don't.
<sect2 id="regress-evaluation-stack-depth">
 <title>Insufficient Stack Depth</title>

 If the <literal>errors</literal> test results in a server crash
 at the <literal>select infinite_recurse()</literal> command, it means that
 the platform's limit on process stack size is smaller than the
 <xref linkend="guc-max-stack-depth"/> parameter indicates.  This
 can be fixed by running the server under a higher stack
 size limit (4MB is recommended with the default value of
 <varname>max_stack_depth</varname>).  If you are unable to do that, an
 alternative is to reduce the value of <varname>max_stack_depth</varname>.
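
 On most Unix-like systems the stack size limit can be checked and raised
 with the shell's <command>ulimit</command> built-in before starting the
 server; the value shown is only illustrative:
<screen>
ulimit -s          # show the current stack size limit (kilobytes on most systems)
ulimit -s 8192     # raise it to 8MB for processes started from this shell
</screen>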
 On platforms supporting <function>getrlimit()</function>, the server should
 automatically choose a safe value of <varname>max_stack_depth</varname>;
 so unless you've manually overridden this setting, a failure of this
 kind is a reportable bug.
<sect2 id="regress-evaluation-random-test">
 <title>The <quote>random</quote> Test</title>

 The <literal>random</literal> test script is intended to produce
 random results.  In very rare cases, this causes that regression
 test to fail.  Typing:
<screen>
diff results/random.out expected/random.out
</screen>
 should produce only one or a few lines of differences.  You need
 not worry unless the random test fails repeatedly.
<sect2 id="regress-evaluation-config-params">
 <title>Configuration Parameters</title>

 When running the tests against an existing installation, some non-default
 parameter settings could cause the tests to fail.  For example, changing
 parameters such as <varname>enable_seqscan</varname> or
 <varname>enable_indexscan</varname> could cause plan changes that would
 affect the results of tests that use <command>EXPLAIN</command>.
<!-- We might want to move the following section into the developer's guide. -->
<sect1 id="regress-variant">
 <title>Variant Comparison Files</title>

 Since some of the tests inherently produce environment-dependent
 results, we have provided ways to specify alternate <quote>expected</quote>
 result files.  Each regression test can have several comparison files
 showing possible results on different platforms.  There are two
 independent mechanisms for determining which comparison file is used
 for each test.
 The first mechanism allows comparison files to be selected for
 specific platforms.  There is a mapping file,
 <filename>src/test/regress/resultmap</filename>, that defines
 which comparison file to use for each platform.
 To eliminate bogus test <quote>failures</quote> for a particular platform,
 you first choose or make a variant result file, and then add a line to the
 <filename>resultmap</filename> file.
 Each line in the mapping file is of the form
<synopsis>
testname:output:platformpattern=comparisonfilename
</synopsis>
 The test name is just the name of the particular regression test
 module.  The output value indicates which output file to check.  For the
 standard regression tests, this is always <literal>out</literal>.  The
 value corresponds to the file extension of the output file.
 The platform pattern is a pattern in the style of the Unix
 tool <command>expr</command> (that is, a regular expression with an
 implicit <literal>^</literal> anchor at the start).  It is matched against
 the platform name as printed by <command>config.guess</command>.
 The comparison file name is the base name of the substitute result file.
 For example: some systems lack a working <literal>strtof</literal> function,
 for which our workaround causes rounding errors in the
 <filename>float4</filename> regression test.
 Therefore, we provide a variant comparison file,
 <filename>float4-misrounded-input.out</filename>, which includes
 the results to be expected on these systems.  To silence the bogus
 <quote>failure</quote> message on <systemitem>Cygwin</systemitem>
 platforms, <filename>resultmap</filename> includes:
<programlisting>
float4:out:.*-.*-cygwin.*=float4-misrounded-input.out
</programlisting>
 which will trigger on any machine where the output of
 <command>config.guess</command> matches <literal>.*-.*-cygwin.*</literal>.
 Other lines in <filename>resultmap</filename> select the variant comparison
 file for other platforms where it's appropriate.
 The second selection mechanism for variant comparison files is
 much more automatic: it simply uses the <quote>best match</quote> among
 several supplied comparison files.  The regression test driver
 script considers both the standard comparison file for a test,
 <literal><replaceable>testname</replaceable>.out</literal>, and variant
 files named
 <literal><replaceable>testname</replaceable>_<replaceable>digit</replaceable>.out</literal>
 (where the <replaceable>digit</replaceable> is any single digit
 <literal>0</literal>-<literal>9</literal>).  If any such file is an exact
 match, the test is considered to pass; otherwise, the one that generates
 the shortest diff is used to create the failure report.  (If
 <filename>resultmap</filename> includes an entry for the particular
 test, then the base <replaceable>testname</replaceable> is the substitute
 name given in <filename>resultmap</filename>.)
 For example, for the <literal>char</literal> test, the comparison file
 <filename>char.out</filename> contains results that are expected
 in the <literal>C</literal> and <literal>POSIX</literal> locales, while
 the file <filename>char_1.out</filename> contains results sorted as
 they appear in many other locales.
 The best-match mechanism was devised to cope with locale-dependent
 results, but it can be used in any situation where the test results
 cannot be predicted easily from the platform name alone.  A limitation of
 this mechanism is that the test driver cannot tell which variant is
 actually <quote>correct</quote> for the current environment; it will just
 pick the variant that seems to work best.  Therefore it is safest to use
 this mechanism only for variant results that you are willing to consider
 equally valid in all contexts.
<sect1 id="regress-tap">
 <title>TAP Tests</title>

 Various tests, particularly the client program tests
 under <filename>src/bin</filename>, use the Perl TAP tools and are run
 using the Perl testing program <command>prove</command>.  You can pass
 command-line options to <command>prove</command> by setting
 the <command>make</command> variable <varname>PROVE_FLAGS</varname>, for
 example:
<screen>
make -C src/bin check PROVE_FLAGS='--timer'
</screen>
 See the manual page of <command>prove</command> for more information.
 The <command>make</command> variable <varname>PROVE_TESTS</varname>
 can be used to define a whitespace-separated list of paths relative
 to the <filename>Makefile</filename> invoking <command>prove</command>
 to run the specified subset of tests instead of the default
 <filename>t/*.pl</filename>.  For example:
<screen>
make check PROVE_TESTS='t/001_test1.pl t/003_test3.pl'
</screen>
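
 <varname>PROVE_TESTS</varname> and <varname>PROVE_FLAGS</varname> can be
 combined.  As an illustrative sketch (the test file name is only an
 example and varies by suite), this runs a single
 <application>pg_dump</application> TAP test with per-test timing:
<screen>
make -C src/bin/pg_dump check PROVE_TESTS='t/001_basic.pl' PROVE_FLAGS='--timer'
</screen>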
 The TAP tests require the Perl module <literal>IPC::Run</literal>.
 This module is available from
 <ulink url="https://metacpan.org/dist/IPC-Run">CPAN</ulink>
 or an operating system package.
 They also require <productname>PostgreSQL</productname> to be
 configured with the option <option>--enable-tap-tests</option>.
 Generically speaking, the TAP tests will test the executables in a
 previously-installed installation tree if you say <literal>make
 installcheck</literal>, or will build a new local installation tree from
 current sources if you say <literal>make check</literal>.  In either
 case they will initialize a local instance (data directory) and
 transiently run a server in it.  Some of these tests run more than one
 server.  Thus, these tests can be fairly resource-intensive.
 It's important to realize that the TAP tests will start test server(s)
 even when you say <literal>make installcheck</literal>; this is unlike
 the traditional non-TAP testing infrastructure, which expects to use an
 already-running test server in that case.  Some PostgreSQL
 subdirectories contain both traditional-style and TAP-style tests,
 meaning that <literal>make installcheck</literal> will produce a mix of
 results from temporary servers and the already-running test server.
<sect2 id="regress-tap-vars">
 <title>Environment Variables</title>

 Data directories are named according to the test filename, and will be
 retained if a test fails.  If the environment variable
 <varname>PG_TEST_NOCLEAN</varname> is set, data directories will be
 retained regardless of test status.  For example, retaining the data
 directory regardless of test results when running the
 <application>pg_dump</application> tests:
<screen>
PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check
</screen>
 This environment variable also prevents the test's temporary directories
 from being removed.
 Many operations in the test suites use a 180-second timeout, which on slow
 hosts may lead to load-induced timeouts.  Setting the environment variable
 <varname>PG_TEST_TIMEOUT_DEFAULT</varname> to a higher number will change
 the default to avoid this.
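
 For example, to double the default timeout when running the recovery suite
 on a slow or heavily loaded machine (the value and the suite chosen here
 are merely illustrative):
<screen>
PG_TEST_TIMEOUT_DEFAULT=360 make -C src/test/recovery check
</screen>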
<sect1 id="regress-coverage">
 <title>Test Coverage Examination</title>

 The PostgreSQL source code can be compiled with coverage testing
 instrumentation, so that it becomes possible to examine which
 parts of the code are covered by the regression tests or any other
 test suite that is run with the code.  This is currently supported
 when compiling with GCC, and it requires the <literal>gcov</literal>
 and <literal>lcov</literal> packages.

<sect2 id="regress-coverage-configure">
 <title>Coverage with Autoconf and Make</title>

 A typical workflow looks like this:
<screen>
./configure --enable-coverage ... OTHER OPTIONS ...
make
make check # or other test suite
make coverage-html
</screen>
 Then point your HTML browser
 to <filename>coverage/index.html</filename>.
 If you don't have <command>lcov</command> or prefer text output over an
 HTML report, you can run
<screen>
make coverage
</screen>
 instead of <literal>make coverage-html</literal>, which will
 produce <filename>.gcov</filename> output files for each source file
 relevant to the test.  (<literal>make coverage</literal> and <literal>make
 coverage-html</literal> will overwrite each other's files, so mixing them
 isn't recommended.)
 You can run several different tests before making the coverage report;
 the execution counts will accumulate.  If you want
 to reset the execution counts between test runs, run:
<screen>
make coverage-clean
</screen>
 You can run the <literal>make coverage-html</literal> or <literal>make
 coverage</literal> command in a subdirectory if you want a coverage
 report for only a portion of the code tree.

 Use <literal>make distclean</literal> to clean up when done.
<sect2 id="regress-coverage-meson">
 <title>Coverage with Meson</title>

 A typical workflow looks like this:
<screen>
meson setup -Db_coverage=true ... OTHER OPTIONS ... builddir/
meson compile -C builddir/
meson test -C builddir/
cd builddir/
meson compile coverage-html
</screen>
 Then point your HTML browser
 to <filename>./meson-logs/coveragereport/index.html</filename>.

 You can run several different tests before making the coverage report;
 the execution counts will accumulate.