1. About
--------

Piglit is a collection of automated tests for OpenGL implementations.
The goal of Piglit is to help improve the quality of open source
OpenGL drivers by providing developers with a simple means to
perform regression tests.

The original tests have been taken from
- Glean ( http://glean.sf.net/ ) and
- Mesa ( http://www.mesa3d.org/ )


2. Setup
--------

First of all, you need to make sure that the following are installed:

- Python 2.4 or greater
- cmake (http://www.cmake.org)
- GL, GLU and GLUT libraries and development packages (i.e. headers)
- X11 libraries and development packages (i.e. headers)
- libpng, libtiff and related development packages (i.e. headers)
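
On a Debian-style system, for example, a command along these lines may
cover most of them (the package names are distribution-specific and
given here only as a guess):

  $ apt-get install cmake python libgl1-mesa-dev libglu1-mesa-dev \
        freeglut3-dev libx11-dev libpng-dev libtiff-dev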

Now configure the build system:
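
  $ ccmake .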

This will start cmake's configuration tool. Just follow the onscreen
instructions. The default settings should be fine, but I recommend you:
- Press 'c' once (this will also check for dependencies), and then
- Set "CMAKE_BUILD_TYPE" to "Debug"
Now you can press 'c' again and then 'g' to generate the build system.
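
Now build everything:

  $ make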


3. How to run tests
-------------------

Make sure that everything is set up correctly:

  $ ./piglit-run.py tests/sanity.tests results/sanity.results

This will run some minimal tests. Use
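
  $ ./piglit-run.py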

to learn more about the command's syntax. Have a look into the tests/
directory to see what test profiles are available:
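
  $ ls tests/*.tests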

To create nicely formatted test summaries, run

  $ ./piglit-summary-html.py summary/sanity results/sanity.results

Hint: You can combine multiple test results into a single summary.
During development, you can use this to watch for regressions:

  $ ./piglit-summary-html.py summary/compare results/baseline.results results/current.results

You can combine as many test runs as you want this way (in theory;
in practice, the HTML layout becomes awkward as the number of test
runs increases).

Have a look at the results with a browser:

  $ xdg-open summary/sanity/index.html

The summary shows the 'status' of a test:

pass    This test has completed successfully.

warn    The test completed successfully, but something unexpected
        happened. Look at the details for more information.

fail    The test failed.

skip    The test was skipped.

[Note: Once performance tests are implemented, 'fail' will mean that the
test rendered incorrectly or didn't complete, while 'warn' will indicate
a performance regression.]

[Note: For performance tests, result and status will be different concepts.
While status is always restricted to one of the four values above, the
result can contain a performance number like frames per second.]


4. Available test sets
----------------------

Test sets are specified as Python scripts in the tests directory.
The following test sets are currently available:

sanity.tests
    This suite contains minimal sanity tests. These tests must pass;
    otherwise, the other tests will not generate reliable results.

all.tests
    This suite contains all tests.

quick.tests
    Run all tests, but cut down significantly on their runtime
    (and thus on the number of problems they can find).
    In particular, this runs Glean with the --quick option, which
    reduces the number of visuals and state combinations tested.

radeon.tests
r300.tests
    These test suites are adaptations of all.tests, with some tweaks
    to account for hardware limitations in Radeon chips.
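
Each profile is run with the same syntax as the sanity example above,
for example:

  $ ./piglit-run.py tests/quick.tests results/quick.results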


5. How to write tests
---------------------

Every test is run as a separate process. This minimizes the impact that
severe bugs like memory corruption have on the testing process.

Therefore, tests can be implemented in an arbitrary standalone language.
I recommend C, C++ and Python, as these are the languages that are
already used in Piglit.

All new tests must be added to the all.tests profile. The test profiles
are simply Python scripts. There are currently two supported test types:

PlainExecTest
    This test starts a new process and watches the process output (stdout
    and stderr). Lines that start with "PIGLIT:" are collected and
    interpreted as a Python dictionary that contains test result details.
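
    For illustration, a minimal standalone test could report its result
    like this (a sketch only; the exact dictionary keys the framework
    expects are an assumption here):

      #!/usr/bin/env python
      # Hypothetical minimal test: emit one result line on stdout using
      # the "PIGLIT:" prefix described above. The 'result' key is an
      # assumption based on the status values listed earlier.
      print "PIGLIT: {'result': 'pass'}"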

GleanTest
    This is a test type that is only used to integrate Glean tests.
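
Adding a new test to a profile script might then look like this (a
sketch only; the 'tests' dictionary and the PlainExecTest constructor
arguments are assumptions, so check the existing entries in all.tests
for the actual conventions):

  # Hypothetical: register a standalone executable as a new test.
  tests['my-new-test'] = PlainExecTest(['my-new-test'])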

Additional test types (e.g. for automatic image comparison) would have
to be added to the test framework.

Test processes that exit with a nonzero return code are considered to
have failed.

Output on stderr causes a warning.


6. Todo
-------

Get automated tests into widespread use ;)

Automate and integrate tests and demos from Mesa

Add code that automatically tests whether the test has rendered correctly

Performance regression tests
    Ideally, this should be done by summarizing / comparing a history of
    test results.
    Note that while some small artificial micro-benchmarks could be added
    to Piglit, the Phoronix Test Suite is probably a better place for
    realistic performance testing.