<!-- doc/src/sgml/regress.sgml -->

<chapter id="regress">
 <title>Regression Tests</title>

 <indexterm zone="regress">
  <primary>regression tests</primary>
 </indexterm>

 <indexterm zone="regress">
  <primary>test</primary>
 </indexterm>

 <para>
  The regression tests are a comprehensive set of tests for the SQL
  implementation in <productname>PostgreSQL</productname>.  They test
  standard SQL operations as well as the extended capabilities of
  <productname>PostgreSQL</productname>.
 </para>
 <sect1 id="regress-run">
  <title>Running the Tests</title>

  <para>
   The regression tests can be run against an already installed and
   running server, or using a temporary installation within the build
   tree.  Furthermore, there is a <quote>parallel</quote> and a
   <quote>sequential</quote> mode for running the tests.  The
   sequential method runs each test script alone, while the
   parallel method starts up multiple server processes to run groups
   of tests in parallel.  Parallel testing adds confidence that
   interprocess communication and locking are working correctly.
   Some tests may run sequentially even in <quote>parallel</quote>
   mode if the test requires it.
  </para>
  <sect2 id="regress-run-temp-inst">
   <title>Running the Tests Against a Temporary Installation</title>

   <para>
    To run the parallel regression tests after building but before installation,
    type:
<screen>
make check
</screen>
    in the top-level directory.  (Or you can change to
    <filename>src/test/regress</filename> and run the command there.)
    Tests which are run in parallel are prefixed with <quote>+</quote>, and
    tests which run sequentially are prefixed with <quote>-</quote>.
    At the end you should see something like:
<screen>
# All 213 tests passed.
</screen>
    or otherwise a note about which tests failed.  See
    <xref linkend="regress-evaluation"/> below before assuming that a
    <quote>failure</quote> represents a serious problem.
   </para>
   <para>
    Because this test method runs a temporary server, it will not work
    if you did the build as the root user, since the server will not start as
    root.  The recommended procedure is not to do the build as root, or else to
    perform testing after completing the installation.
   </para>
   <para>
    If you have configured <productname>PostgreSQL</productname> to install
    into a location where an older <productname>PostgreSQL</productname>
    installation already exists, and you perform <literal>make check</literal>
    before installing the new version, you might find that the tests fail
    because the new programs try to use the already-installed shared
    libraries.  (Typical symptoms are complaints about undefined symbols.)
    If you wish to run the tests before overwriting the old installation,
    you'll need to build with <literal>configure --disable-rpath</literal>.
    It is not recommended that you use this option for the final installation,
    however.
   </para>
   <para>
    The parallel regression test starts quite a few processes under your
    user ID.  Presently, the maximum concurrency is twenty parallel test
    scripts, which means forty processes: there's a server process and a
    <application>psql</application> process for each test script.
    So if your system enforces a per-user limit on the number of processes,
    make sure this limit is at least fifty or so, else you might get
    random-seeming failures in the parallel test.  If you are not in
    a position to raise the limit, you can cut down the degree of parallelism
    by setting the <literal>MAX_CONNECTIONS</literal> parameter.  For example:
<screen>
make MAX_CONNECTIONS=10 check
</screen>
    runs no more than ten tests concurrently.
   </para>
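   <para>
    For example, on platforms whose shell provides the
    <command>ulimit</command> builtin, you can check the current per-user
    process limit, and raise it for the current shell session if the hard
    limit allows (the value shown is only an illustration):
<screen>
ulimit -u        # show the current limit on user processes
ulimit -u 100    # set a new soft limit for this shell session
</screen>
   </para>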
  </sect2>

  <sect2 id="regress-run-existing-inst">
   <title>Running the Tests Against an Existing Installation</title>

   <para>
    To run the tests after installation (see <xref linkend="installation"/>),
    initialize a data directory and start the
    server as explained in <xref linkend="runtime"/>, then type:
<screen>
make installcheck
</screen>
    or for a parallel test:
<screen>
make installcheck-parallel
</screen>
    The tests will expect to contact the server at the local host and the
    default port number, unless directed otherwise by <envar>PGHOST</envar> and
    <envar>PGPORT</envar> environment variables.  The tests will be run in a
    database named <literal>regression</literal>; any existing database by this
    name will be dropped.
   </para>
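   <para>
    For example, to point the tests at a server listening on another host
    and port, the variables can be supplied in the environment of the
    <command>make</command> invocation (the host name and port here are only
    placeholders):
<screen>
PGHOST=pg-test.example.com PGPORT=5433 make installcheck
</screen>
   </para>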
   <para>
    The tests will also transiently create some cluster-wide objects, such as
    roles, tablespaces, and subscriptions.  These objects will have names
    beginning with <literal>regress_</literal>.  Beware of
    using <literal>installcheck</literal> mode with an installation that has
    any actual global objects named that way.
   </para>
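   <para>
    One way to check for clashing objects beforehand is to query the system
    catalogs of the target installation; for example, for roles (tablespaces
    and subscriptions can be checked similarly):
<screen>
psql -XAtc "SELECT rolname FROM pg_roles WHERE rolname LIKE 'regress\_%'"
</screen>
   </para>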
  </sect2>

  <sect2 id="regress-additional">
   <title>Additional Test Suites</title>

   <para>
    The <literal>make check</literal> and <literal>make installcheck</literal> commands
    run only the <quote>core</quote> regression tests, which test built-in
    functionality of the <productname>PostgreSQL</productname> server.  The source
    distribution contains many additional test suites, most of them having
    to do with add-on functionality such as optional procedural languages.
   </para>

   <para>
    To run all test suites applicable to the modules that have been selected
    to be built, including the core tests, type one of these commands at the
    top of the build tree:
<screen>
make check-world
make installcheck-world
</screen>
    These commands run the tests using temporary servers or an
    already-installed server, respectively, just as previously explained
    for <literal>make check</literal> and <literal>make installcheck</literal>.  Other
    considerations are the same as previously explained for each method.
    Note that <literal>make check-world</literal> builds a separate instance
    (temporary data directory) for each tested module, so it requires more
    time and disk space than <literal>make installcheck-world</literal>.
   </para>
   <para>
    On a modern machine with multiple CPU cores and no tight operating-system
    limits, you can make things go substantially faster with parallelism.
    The recipe that most PostgreSQL developers actually use for running all
    tests is something like
<screen>
make check-world -j8 >/dev/null
</screen>
    with a <option>-j</option> limit near to or a bit more than the number
    of available cores.  Discarding <systemitem>stdout</systemitem>
    eliminates chatter that's not interesting when you just want to verify
    success.  (In case of failure, the <systemitem>stderr</systemitem>
    messages are usually enough to determine where to look closer.)
   </para>
   <para>
    Alternatively, you can run individual test suites by typing
    <literal>make check</literal> or <literal>make installcheck</literal> in the appropriate
    subdirectory of the build tree.  Keep in mind that <literal>make
    installcheck</literal> assumes you've installed the relevant module(s), not
    only the core server.
   </para>
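   <para>
    For example, to run just the isolation test suite from the top of the
    build tree, an invocation along these lines can be used:
<screen>
make -C src/test/isolation check
</screen>
   </para>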
   <para>
    The additional tests that can be invoked this way include:
   </para>

   <itemizedlist>
    <listitem>
     <para>
      Regression tests for optional procedural languages.
      These are located under <filename>src/pl</filename>.
     </para>
    </listitem>

    <listitem>
     <para>
      Regression tests for <filename>contrib</filename> modules,
      located under <filename>contrib</filename>.
      Not all <filename>contrib</filename> modules have tests.
     </para>
    </listitem>

    <listitem>
     <para>
      Regression tests for the interface libraries,
      located in <filename>src/interfaces/libpq/test</filename> and
      <filename>src/interfaces/ecpg/test</filename>.
     </para>
    </listitem>

    <listitem>
     <para>
      Tests for core-supported authentication methods,
      located in <filename>src/test/authentication</filename>.
      (See below for additional authentication-related tests.)
     </para>
    </listitem>

    <listitem>
     <para>
      Tests stressing behavior of concurrent sessions,
      located in <filename>src/test/isolation</filename>.
     </para>
    </listitem>

    <listitem>
     <para>
      Tests for crash recovery and physical replication,
      located in <filename>src/test/recovery</filename>.
     </para>
    </listitem>

    <listitem>
     <para>
      Tests for logical replication,
      located in <filename>src/test/subscription</filename>.
     </para>
    </listitem>

    <listitem>
     <para>
      Tests of client programs, located under <filename>src/bin</filename>.
     </para>
    </listitem>
   </itemizedlist>
   <para>
    When using <literal>installcheck</literal> mode, these tests will create
    and destroy test databases whose names
    include <literal>regression</literal>, for
    example <literal>pl_regression</literal>
    or <literal>contrib_regression</literal>.  Beware of
    using <literal>installcheck</literal> mode with an installation that has
    any non-test databases named that way.
   </para>
   <para>
    Some of these auxiliary test suites use the TAP infrastructure explained
    in <xref linkend="regress-tap"/>.
    The TAP-based tests are run only when PostgreSQL was configured with the
    option <option>--enable-tap-tests</option>.  This is recommended for
    development, but can be omitted if there is no suitable Perl installation.
   </para>
   <para>
    Some test suites are not run by default, because they are not secure
    to run on a multiuser system, because they require special software, or
    because they are resource intensive.  You can decide which test suites to
    run additionally by setting the <command>make</command> or environment
    variable <varname>PG_TEST_EXTRA</varname> to a whitespace-separated list,
    for example:
<screen>
make check-world PG_TEST_EXTRA='kerberos ldap ssl load_balance libpq_encryption'
</screen>
    The following values are currently supported:
   </para>
   <variablelist>
    <varlistentry>
     <term><literal>kerberos</literal></term>
     <listitem>
      <para>
       Runs the test suite under <filename>src/test/kerberos</filename>.  This
       requires an MIT Kerberos installation and opens TCP/IP listen sockets.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term><literal>ldap</literal></term>
     <listitem>
      <para>
       Runs the test suite under <filename>src/test/ldap</filename>.  This
       requires an <productname>OpenLDAP</productname> installation and opens
       TCP/IP listen sockets.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term><literal>sepgsql</literal></term>
     <listitem>
      <para>
       Runs the test suite under <filename>contrib/sepgsql</filename>.  This
       requires an SELinux environment that is set up in a specific way; see
       <xref linkend="sepgsql-regression"/>.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term><literal>ssl</literal></term>
     <listitem>
      <para>
       Runs the test suite under <filename>src/test/ssl</filename>.  This
       opens TCP/IP listen sockets.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term><literal>load_balance</literal></term>
     <listitem>
      <para>
       Runs the test <filename>src/interfaces/libpq/t/004_load_balance_dns.pl</filename>.
       This requires editing the system <filename>hosts</filename> file and
       opens TCP/IP listen sockets.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term><literal>libpq_encryption</literal></term>
     <listitem>
      <para>
       Runs the test <filename>src/interfaces/libpq/t/005_negotiate_encryption.pl</filename>.
       This opens TCP/IP listen sockets.  If <varname>PG_TEST_EXTRA</varname>
       also includes <literal>kerberos</literal>, additional tests that require
       an MIT Kerberos installation are enabled.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term><literal>wal_consistency_checking</literal></term>
     <listitem>
      <para>
       Uses <literal>wal_consistency_checking=all</literal> while running
       certain tests under <filename>src/test/recovery</filename>.  Not
       enabled by default because it is resource intensive.
      </para>
     </listitem>
    </varlistentry>

    <varlistentry>
     <term><literal>xid_wraparound</literal></term>
     <listitem>
      <para>
       Runs the test suite under <filename>src/test/modules/xid_wraparound</filename>.
       Not enabled by default because it is resource intensive.
      </para>
     </listitem>
    </varlistentry>
   </variablelist>
   <para>
    Tests for features that are not supported by the current build
    configuration are not run even if they are mentioned in
    <varname>PG_TEST_EXTRA</varname>.
   </para>
   <para>
    In addition, there are tests in <filename>src/test/modules</filename>
    which will be run by <literal>make check-world</literal> but not
    by <literal>make installcheck-world</literal>.  This is because they
    install non-production extensions or have other side-effects that are
    considered undesirable for a production installation.  You can
    use <literal>make install</literal> and <literal>make
    installcheck</literal> in one of those subdirectories if you wish,
    but it's not recommended to do so with a non-test server.
   </para>
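   <para>
    For example, assuming you have picked one module under that directory
    and have a disposable test server running, something like:
<screen>
make -C src/test/modules/<replaceable>module_name</replaceable> install
make -C src/test/modules/<replaceable>module_name</replaceable> installcheck
</screen>
   </para>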
  </sect2>

  <sect2 id="regress-run-locale">
   <title>Locale and Encoding</title>

   <para>
    By default, tests using a temporary installation use the
    locale defined in the current environment and the corresponding
    database encoding as determined by <command>initdb</command>.  It
    can be useful to test different locales by setting the appropriate
    environment variables, for example:
<screen>
make check LC_COLLATE=en_US.utf8 LC_CTYPE=fr_CA.utf8
</screen>
    For implementation reasons, setting <envar>LC_ALL</envar> does not
    work for this purpose; all the other locale-related environment
    variables do work.
   </para>

   <para>
    When testing against an existing installation, the locale is
    determined by the existing database cluster and cannot be set
    separately for the test run.
   </para>
   <para>
    You can also choose the database encoding explicitly by setting
    the variable <envar>ENCODING</envar>, for example:
<screen>
make check LANG=C ENCODING=EUC_JP
</screen>
    Setting the database encoding this way typically only makes sense
    if the locale is C; otherwise the encoding is chosen automatically
    from the locale, and specifying an encoding that does not match
    the locale will result in an error.
   </para>

   <para>
    The database encoding can be set for tests against either a temporary or
    an existing installation, though in the latter case it must be
    compatible with the installation's locale.
   </para>
  </sect2>

  <sect2 id="regress-run-custom-settings">
   <title>Custom Server Settings</title>

   <para>
    There are several ways to use custom server settings when running a test
    suite.  This can be useful to enable additional logging, adjust resource
    limits, or enable extra run-time checks such as <xref
    linkend="guc-debug-discard-caches"/>.  But note that not all tests can be
    expected to pass cleanly with arbitrary settings.
   </para>
   <para>
    Extra options can be passed to the various <command>initdb</command>
    commands that are run internally during test setup using the environment
    variable <envar>PG_TEST_INITDB_EXTRA_OPTS</envar>.  For example, to run a
    test with checksums enabled and a custom WAL segment size and
    <varname>work_mem</varname> setting, use:
<screen>
make check PG_TEST_INITDB_EXTRA_OPTS='-k --wal-segsize=4 -c work_mem=50MB'
</screen>
   </para>
   <para>
    For the core regression test suite and other tests driven by
    <command>pg_regress</command>, custom run-time server settings can also be
    set in the <varname>PGOPTIONS</varname> environment variable (for settings
    that allow this), for example:
<screen>
make check PGOPTIONS="-c debug_parallel_query=regress -c work_mem=50MB"
</screen>
    (This makes use of functionality provided by libpq; see <xref
    linkend="libpq-connect-options"/> for details.)
   </para>
   <para>
    When running against a temporary installation, custom settings can also be
    set by supplying a pre-written <filename>postgresql.conf</filename>:
<screen>
echo 'log_checkpoints = on' > test_postgresql.conf
echo 'work_mem = 50MB' >> test_postgresql.conf
make check EXTRA_REGRESS_OPTS="--temp-config=test_postgresql.conf"
</screen>
   </para>
  </sect2>

  <sect2 id="regress-run-extra-tests">
   <title>Extra Tests</title>

   <para>
    The core regression test suite contains a few test files that are not
    run by default, because they might be platform-dependent or take a
    very long time to run.  You can run these or other extra test
    files by setting the variable <envar>EXTRA_TESTS</envar>.  For
    example, to run the <literal>numeric_big</literal> test:
<screen>
make check EXTRA_TESTS=numeric_big
</screen>
   </para>
  </sect2>
 </sect1>

 <sect1 id="regress-evaluation">
  <title>Test Evaluation</title>

  <para>
   Some properly installed and fully functional
   <productname>PostgreSQL</productname> installations can
   <quote>fail</quote> some of these regression tests due to
   platform-specific artifacts such as varying floating-point representation
   and message wording.  The tests are currently evaluated using a simple
   <command>diff</command> comparison against the outputs
   generated on a reference system, so the results are sensitive to
   small system differences.  When a test is reported as
   <quote>failed</quote>, always examine the differences between
   expected and actual results; you might find that the
   differences are not significant.  Nonetheless, we still strive to
   maintain accurate reference files across all supported platforms,
   so it can be expected that all tests pass.
  </para>
  <para>
   The actual outputs of the regression tests are in files in the
   <filename>src/test/regress/results</filename> directory.  The test
   script uses <command>diff</command> to compare each output
   file against the reference outputs stored in the
   <filename>src/test/regress/expected</filename> directory.  Any
   differences are saved for your inspection in
   <filename>src/test/regress/regression.diffs</filename>.
   (When running a test suite other than the core tests, these files
   of course appear in the relevant subdirectory,
   not <filename>src/test/regress</filename>.)
  </para>
  <para>
   If you don't like the <command>diff</command> options that are used by default, set the
   environment variable <envar>PG_REGRESS_DIFF_OPTS</envar>, for
   instance <literal>PG_REGRESS_DIFF_OPTS='-c'</literal>.  (Or you
   can run <command>diff</command> yourself, if you prefer.)
  </para>
  <para>
   If for some reason a particular platform generates a <quote>failure</quote>
   for a given test, but inspection of the output convinces you that
   the result is valid, you can add a new comparison file to silence
   the failure report in future test runs.  See
   <xref linkend="regress-variant"/> for details.
  </para>
  <sect2 id="regress-evaluation-message-differences">
   <title>Error Message Differences</title>

   <para>
    Some of the regression tests involve intentional invalid input
    values.  Error messages can come from either the
    <productname>PostgreSQL</productname> code or from the host
    platform system routines.  In the latter case, the messages can
    vary between platforms, but should reflect similar
    information.  These differences in messages will result in a
    <quote>failed</quote> regression test that can be validated by
    inspection.
   </para>
  </sect2>

  <sect2 id="regress-evaluation-locale-differences">
   <title>Locale Differences</title>

   <para>
    If you run the tests against a server that was
    initialized with a collation-order locale other than C, then
    there might be differences due to sort order and subsequent
    failures.  The regression test suite is set up to handle this
    problem by providing alternate result files that together are
    known to handle a large number of locales.
   </para>
   <para>
    To run the tests in a different locale when using the
    temporary-installation method, pass the appropriate
    locale-related environment variables on
    the <command>make</command> command line, for example:
<screen>
make check LANG=de_DE.utf8
</screen>
    (The regression test driver unsets <envar>LC_ALL</envar>, so it
    does not work to choose the locale using that variable.)  To use
    no locale, either unset all locale-related environment variables
    (or set them to <literal>C</literal>) or use the following
    special invocation:
<screen>
make check NO_LOCALE=1
</screen>
   </para>
   <para>
    When running the tests against an existing installation, the
    locale setup is determined by the existing installation.  To
    change it, initialize the database cluster with a different
    locale by passing the appropriate options
    to <command>initdb</command>.
   </para>
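   <para>
    For example, to create a test cluster with a German UTF-8 locale
    (the data directory is a placeholder):
<screen>
initdb --locale=de_DE.utf8 -D <replaceable>datadir</replaceable>
</screen>
   </para>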
   <para>
    In general, it is advisable to try to run the
    regression tests in the locale setup that is wanted for
    production use, as this will exercise the locale- and
    encoding-related code portions that will actually be used in
    production.  Depending on the operating system environment, you
    might get failures, but then you will at least know what
    locale-specific behaviors to expect when running real
    applications.
   </para>
  </sect2>

  <sect2 id="regress-evaluation-date-time-differences">
   <title>Date and Time Differences</title>

   <para>
    Most of the date and time results are dependent on the time zone
    environment.  The reference files are generated for time zone
    <literal>America/Los_Angeles</literal>, and there will be
    apparent failures if the tests are not run with that time zone setting.
    The regression test driver sets environment variable
    <envar>PGTZ</envar> to <literal>America/Los_Angeles</literal>,
    which normally ensures proper results.
   </para>
  </sect2>

  <sect2 id="regress-evaluation-float-differences">
   <title>Floating-Point Differences</title>

   <para>
    Some of the tests involve computing 64-bit floating-point numbers
    (<type>double precision</type>) from table columns.  Differences in
    results involving mathematical functions of <type>double
    precision</type> columns have been observed.  The <literal>float8</literal> and
    <literal>geometry</literal> tests are particularly prone to small differences
    across platforms, or even with different compiler optimization settings.
    Human eyeball comparison is needed to determine the real
    significance of these differences, which are usually 10 places to
    the right of the decimal point.
   </para>

   <para>
    Some systems display minus zero as <literal>-0</literal>, while others
    just show <literal>0</literal>.
   </para>

   <para>
    Some systems signal errors from <function>pow()</function> and
    <function>exp()</function> differently from the mechanism
    expected by the current <productname>PostgreSQL</productname>
    code.
   </para>
  </sect2>

  <sect2 id="regress-evaluation-ordering-differences">
   <title>Row Ordering Differences</title>

   <para>
    You might see differences in which the same rows are output in a
    different order than what appears in the expected file.  In most cases
    this is not, strictly speaking, a bug.  Most of the regression test
    scripts are not so pedantic as to use an <literal>ORDER BY</literal> for every single
    <literal>SELECT</literal>, and so their result row orderings are not well-defined
    according to the SQL specification.  In practice, since we are
    looking at the same queries being executed on the same data by the same
    software, we usually get the same result ordering on all platforms,
    so the lack of <literal>ORDER BY</literal> is not a problem.  Some queries do exhibit
    cross-platform ordering differences, however.  When testing against an
    already-installed server, ordering differences can also be caused by
    non-C locale settings or non-default parameter settings, such as custom values
    of <varname>work_mem</varname> or the planner cost parameters.
   </para>
   <para>
    Therefore, if you see an ordering difference, it's not something to
    worry about, unless the query does have an <literal>ORDER BY</literal> that your
    result is violating.  However, please report it anyway, so that we can add an
    <literal>ORDER BY</literal> to that particular query to eliminate the bogus
    <quote>failure</quote> in future releases.
   </para>

   <para>
    You might wonder why we don't order all the regression test queries explicitly
    to get rid of this issue once and for all.  The reason is that doing so would
    make the regression tests less useful, not more, since they'd tend
    to exercise query plan types that produce ordered results to the
    exclusion of those that don't.
   </para>
  </sect2>

  <sect2 id="regress-evaluation-stack-depth">
   <title>Insufficient Stack Depth</title>

   <para>
    If the <literal>errors</literal> test results in a server crash
    at the <literal>select infinite_recurse()</literal> command, it means that
    the platform's limit on process stack size is smaller than the
    <xref linkend="guc-max-stack-depth"/> parameter indicates.  This
    can be fixed by running the server under a higher stack
    size limit (4MB is recommended with the default value of
    <varname>max_stack_depth</varname>).  If you are unable to do that, an
    alternative is to reduce the value of <varname>max_stack_depth</varname>.
   </para>
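   <para>
    On most Unix-like platforms, the stack limit of the shell that starts
    the server can be raised with <command>ulimit</command> before starting
    the server; for example, to allow a 4MB stack (the value is in kilobytes
    for this shell builtin):
<screen>
ulimit -s 4096
</screen>
   </para>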
   <para>
    On platforms supporting <function>getrlimit()</function>, the server should
    automatically choose a safe value of <varname>max_stack_depth</varname>;
    so unless you've manually overridden this setting, a failure of this
    kind is a reportable bug.
   </para>
  </sect2>

  <sect2 id="regress-evaluation-random-test">
   <title>The <quote>random</quote> Test</title>

   <para>
    The <literal>random</literal> test script is intended to produce
    random results.  In very rare cases, this causes that regression
    test to fail.  Typing:
<screen>
diff results/random.out expected/random.out
</screen>
    should produce only one or a few lines of differences.  You need
    not worry unless the random test fails repeatedly.
   </para>
  </sect2>

  <sect2 id="regress-evaluation-config-params">
   <title>Configuration Parameters</title>

   <para>
    When running the tests against an existing installation, some non-default
    parameter settings could cause the tests to fail.  For example, changing
    parameters such as <varname>enable_seqscan</varname> or
    <varname>enable_indexscan</varname> could cause plan changes that would
    affect the results of tests that use <command>EXPLAIN</command>.
   </para>
  </sect2>
 </sect1>

 <!-- We might want to move the following section into the developer's guide. -->
 <sect1 id="regress-variant">
  <title>Variant Comparison Files</title>

  <para>
   Since some of the tests inherently produce environment-dependent
   results, we have provided ways to specify alternate <quote>expected</quote>
   result files.  Each regression test can have several comparison files
   showing possible results on different platforms.  There are two
   independent mechanisms for determining which comparison file is used.
  </para>

  <para>
   The first mechanism allows comparison files to be selected for
   specific platforms.  There is a mapping file,
   <filename>src/test/regress/resultmap</filename>, that defines
   which comparison file to use for each platform.
   To eliminate bogus test <quote>failures</quote> for a particular platform,
   you first choose or make a variant result file, and then add a line to the
   <filename>resultmap</filename> file.
  </para>
  <para>
   Each line in the mapping file is of the form
<synopsis>
testname:output:platformpattern=comparisonfilename
</synopsis>
   The test name is just the name of the particular regression test
   module.  The output value indicates which output file to check.  For the
   standard regression tests, this is always <literal>out</literal>.  The
   value corresponds to the file extension of the output file.
   The platform pattern is a pattern in the style of the Unix
   tool <command>expr</command> (that is, a regular expression with an implicit
   <literal>^</literal> anchor at the start).  It is matched against the
   platform name as printed by <command>config.guess</command>.
   The comparison file name is the base name of the substitute result
   comparison file.
  </para>
  <para>
   For example: some systems lack a working <literal>strtof</literal> function,
   for which our workaround causes rounding errors in the
   <filename>float4</filename> regression test.
   Therefore, we provide a variant comparison file,
   <filename>float4-misrounded-input.out</filename>, which includes
   the results to be expected on these systems.  To silence the bogus
   <quote>failure</quote> message on <systemitem>Cygwin</systemitem>
   platforms, <filename>resultmap</filename> includes:
<programlisting>
float4:out:.*-.*-cygwin.*=float4-misrounded-input.out
</programlisting>
   which will trigger on any machine where the output of
   <command>config.guess</command> matches <literal>.*-.*-cygwin.*</literal>.
   Other lines in <filename>resultmap</filename> select the variant comparison
   file for other platforms where it's appropriate.
  </para>
  <para>
   The second selection mechanism for variant comparison files is
   much more automatic: it simply uses the <quote>best match</quote> among
   several supplied comparison files.  The regression test driver
   script considers both the standard comparison file for a test,
   <literal><replaceable>testname</replaceable>.out</literal>, and variant files named
   <literal><replaceable>testname</replaceable>_<replaceable>digit</replaceable>.out</literal>
   (where the <replaceable>digit</replaceable> is any single digit
   <literal>0</literal>-<literal>9</literal>).  If any such file is an exact match,
   the test is considered to pass; otherwise, the one that generates
   the shortest diff is used to create the failure report.  (If
   <filename>resultmap</filename> includes an entry for the particular
   test, then the base <replaceable>testname</replaceable> is the substitute
   name given in <filename>resultmap</filename>.)
  </para>
  <para>
   For example, for the <literal>char</literal> test, the comparison file
   <filename>char.out</filename> contains results that are expected
   in the <literal>C</literal> and <literal>POSIX</literal> locales, while
   the file <filename>char_1.out</filename> contains results sorted as
   they appear in many other locales.
  </para>
  <para>
   The best-match mechanism was devised to cope with locale-dependent
   results, but it can be used in any situation where the test results
   cannot be predicted easily from the platform name alone.  A limitation of
   this mechanism is that the test driver cannot tell which variant is
   actually <quote>correct</quote> for the current environment; it will just pick
   the variant that seems to work best.  Therefore it is safest to use this
   mechanism only for variant results that you are willing to consider
   equally valid in all contexts.
  </para>
 </sect1>

 <sect1 id="regress-tap">
  <title>TAP Tests</title>

  <para>
   Various tests, particularly the client program tests
   under <filename>src/bin</filename>, use the Perl TAP tools and are run
   using the Perl testing program <command>prove</command>.  You can pass
   command-line options to <command>prove</command> by setting
   the <command>make</command> variable <varname>PROVE_FLAGS</varname>, for example:
<screen>
make -C src/bin check PROVE_FLAGS='--timer'
</screen>
   See the manual page of <command>prove</command> for more information.
  </para>
  <para>
   The <command>make</command> variable <varname>PROVE_TESTS</varname>
   can be used to define a whitespace-separated list of paths relative
   to the <filename>Makefile</filename> invoking <command>prove</command>
   to run the specified subset of tests instead of the default
   <filename>t/*.pl</filename>.  For example:
<screen>
make check PROVE_TESTS='t/001_test1.pl t/003_test3.pl'
</screen>
  </para>
  <para>
   The TAP tests require the Perl module <literal>IPC::Run</literal>.
   This module is available from
   <ulink url="https://metacpan.org/dist/IPC-Run">CPAN</ulink>
   or an operating system package.
   They also require <productname>PostgreSQL</productname> to be
   configured with the option <option>--enable-tap-tests</option>.
  </para>
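  <para>
   For example, the module can typically be installed either from an
   operating system package or from CPAN (package and command names vary by
   platform; the Debian package name is shown here):
<screen>
apt-get install libipc-run-perl
cpan IPC::Run
</screen>
  </para>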
  <para>
   Generically speaking, the TAP tests will test the executables in a
   previously-installed installation tree if you say <literal>make
   installcheck</literal>, or will build a new local installation tree from
   current sources if you say <literal>make check</literal>.  In either
   case they will initialize a local instance (data directory) and
   transiently run a server in it.  Some of these tests run more than one
   server.  Thus, these tests can be fairly resource-intensive.
  </para>
  <para>
   It's important to realize that the TAP tests will start test server(s)
   even when you say <literal>make installcheck</literal>; this is unlike
   the traditional non-TAP testing infrastructure, which expects to use an
   already-running test server in that case.  Some PostgreSQL
   subdirectories contain both traditional-style and TAP-style tests,
   meaning that <literal>make installcheck</literal> will produce a mix of
   results from temporary servers and the already-running test server.
  </para>
  <sect2 id="regress-tap-vars">
   <title>Environment Variables</title>

   <para>
    Data directories are named according to the test filename, and will be
    retained if a test fails.  If the environment variable
    <varname>PG_TEST_NOCLEAN</varname> is set, data directories will be
    retained regardless of test status.  For example, retaining the data
    directory regardless of test results when running the
    <application>pg_dump</application> tests:
<screen>
PG_TEST_NOCLEAN=1 make -C src/bin/pg_dump check
</screen>
    This environment variable also prevents the test's temporary directories
    from being removed.
   </para>
   <para>
    Many operations in the test suites use a 180-second timeout, which on slow
    hosts may lead to load-induced timeouts.  Setting the environment variable
    <varname>PG_TEST_TIMEOUT_DEFAULT</varname> to a higher number will change
    the default to avoid this.
   </para>
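   <para>
    For example, to triple the default timeout while running the recovery
    tests (the value is in seconds, and the suite chosen here is only an
    illustration):
<screen>
PG_TEST_TIMEOUT_DEFAULT=540 make -C src/test/recovery check
</screen>
   </para>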
  </sect2>
 </sect1>

 <sect1 id="regress-coverage">
  <title>Test Coverage Examination</title>

  <para>
   The PostgreSQL source code can be compiled with coverage testing
   instrumentation, so that it becomes possible to examine which
   parts of the code are covered by the regression tests or any other
   test suite that is run with the code.  This is currently supported
   when compiling with GCC, and it requires the <literal>gcov</literal>
   and <literal>lcov</literal> packages.
  </para>

  <sect2 id="regress-coverage-configure">
   <title>Coverage with Autoconf and Make</title>

   <para>
    A typical workflow looks like this:
<screen>
./configure --enable-coverage ... OTHER OPTIONS ...
make
make check # or other test suite
make coverage-html
</screen>
    Then point your HTML browser
    to <filename>coverage/index.html</filename>.
   </para>
   <para>
    If you don't have <command>lcov</command> or prefer text output over an
    HTML report, you can run
<screen>
make coverage
</screen>
    instead of <literal>make coverage-html</literal>, which will
    produce <filename>.gcov</filename> output files for each source file
    relevant to the test.  (<literal>make coverage</literal> and <literal>make
    coverage-html</literal> will overwrite each other's files, so mixing them
    might be confusing.)
   </para>
   <para>
    You can run several different tests before making the coverage report;
    the execution counts will accumulate.  If you want
    to reset the execution counts between test runs, run:
<screen>
make coverage-clean
</screen>
   </para>
   <para>
    You can run the <literal>make coverage-html</literal> or <literal>make
    coverage</literal> command in a subdirectory if you want a coverage
    report for only a portion of the code tree.
   </para>
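   <para>
    For example, to produce an HTML report covering only the planner code
    (any subdirectory can be substituted):
<screen>
make -C src/backend/optimizer coverage-html
</screen>
   </para>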
   <para>
    Use <literal>make distclean</literal> to clean up when done.
   </para>
  </sect2>

  <sect2 id="regress-coverage-meson">
   <title>Coverage with Meson</title>

   <para>
    A typical workflow looks like this:
<screen>
meson setup -Db_coverage=true ... OTHER OPTIONS ... builddir/
meson compile -C builddir/
meson test -C builddir/
ninja -C builddir/ coverage-html
</screen>
    Then point your HTML browser
    to <filename>./meson-logs/coveragereport/index.html</filename>.
   </para>

   <para>
    You can run several different tests before making the coverage report;
    the execution counts will accumulate.
   </para>
  </sect2>
 </sect1>
</chapter>