============================
Writing tests for Subversion
============================

* Differences between on-disk and status trees

Tests start with a clean repository and working copy.  For the
purpose of testing, we use a standard versioned tree known as the
'greek tree'.  See subversion/tests/greek-tree.txt for more.

This tree is then modified into the state in which we want to
test our program.  This can involve changing the working copy
as well as the repository.  Several commands (add, rm, update,
commit) may be required to bring the repository/working copy
into the required state.

Once the working copy and repository satisfy the required
pre-conditions, the command to be tested is executed.  After
execution, the output (stdout, stderr), the on-disk state and
'svn status' are checked to verify that the command worked as
expected.

If you need several commands to construct the working copy and
repository state, the checks described above apply to each of the
intermediate commands just as they do to the final command.  That
way, failure of the final command can be narrowed down to just that
command, because the working copy/repository combination is known
to be in the correct state beforehand.

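As an illustration, here is a minimal sketch of that flow.  It assumes
the usual svntest conventions (an 'sbox' fixture providing the clean
repository and working copy, and the run_and_verify_* helpers); the
exact run_and_verify_* signatures have shifted between releases, so
treat this as a sketch rather than a drop-in test:

    import os
    import svntest

    def commit_a_file_change(sbox):
      "commit a simple text change"

      # Start from a clean greek-tree repository and working copy.
      sbox.build()
      wc_dir = sbox.wc_dir

      # Modify the working copy into the state we want to test.
      mu_path = os.path.join(wc_dir, 'A', 'mu')
      svntest.main.file_append(mu_path, 'a new line\n')

      # Describe the output, status and on-disk state we expect.
      expected_output = svntest.wc.State(wc_dir, {
        'A/mu' : svntest.wc.StateItem(verb='Sending'),
        })
      expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
      expected_status.tweak('A/mu', wc_rev=2)

      # Run the command under test and verify output and status.
      svntest.actions.run_and_verify_commit(wc_dir, expected_output,
                                            expected_status, None,
                                            wc_dir)
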
Tests can produce two results:

 - Success, signalled by normal function termination
 - Failure, signalled by raising an exception:
     in Python tests, an exception of type SVNFailure;
     in C tests, returning an svn_error_t * != SVN_NO_ERROR

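In Python, that convention looks roughly like the sketch below.  The
condition check is a hypothetical placeholder, and in current code the
base failure exception is spelled svntest.Failure:

    import svntest

    def my_test(sbox):
      "illustrate success/failure signalling"

      sbox.build()

      # some_condition_holds() is a hypothetical helper for this sketch.
      if not some_condition_holds(sbox.wc_dir):
        # Failure: raise the framework's failure exception.
        raise svntest.Failure('working copy not in expected state')

      # Success: simply return normally.
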
Sometimes it's necessary to write tests which are expected to fail,
because Subversion should behave a certain way but does not yet.
Tests like these are marked XFail (eXpected-to-FAIL).  If the program
is changed to support the tested behaviour, but the test is not
adjusted, it will XPASS (uneXpectedly-PASS).

Besides normal and XFail tests, there's also conditional execution
of tests, by marking them Skip().  A condition can be given for
which the skip should take effect; the test is executed only when
that condition does not hold.

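In the command-line test suites this is expressed when registering
tests in the test_list.  A sketch, assuming the classic test_list
style; the three test function names are placeholders:

    import os
    import svntest

    # Convenient abbreviations, as used throughout the cmdline tests.
    Skip = svntest.testcase.Skip
    XFail = svntest.testcase.XFail

    test_list = [ None,
                  working_test,                     # expected to PASS
                  XFail(not_yet_implemented_test),  # expected to FAIL, for now
                  Skip(unix_only_test, os.name == 'nt'),  # skipped on Windows
                 ]

    if __name__ == '__main__':
      svntest.main.run_tests(test_list)
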
(Could someone fill in this section please?!)

The Python tests abstract away ordering problems by storing status
information in trees.  Comparing expected and actual status means
comparing trees; there are routines to do the comparison for you.

Every command you issue should use the
svntest.actions.run_and_verify_* API.  If there's no such function
for the operation you want to execute, you can use
svntest.main.run_svn.  Note that this is an escape route only:
the results of this command are not checked, meaning you must
include any relevant checks in your test yourself.

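For example, a sketch of both routes.  run_and_verify_svn's argument
list, and whether run_svn returns two or three values, differ between
framework versions; the forms below follow one common layout:

    # Preferred: stdout and stderr are verified for you.
    svntest.actions.run_and_verify_svn(None,
                                       ["At revision 1.\n"], [],
                                       'update', wc_dir)

    # Escape route only: nothing is checked automatically.
    exit_code, output, errput = svntest.main.run_svn(None, 'info', wc_dir)
    if not output:
      raise svntest.Failure('svn info produced no output')
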
On-disk state objects, describing the actual state on disk, can be
generated with the svntest.tree.build_tree_from_wc() function.  If
you need an object which describes the unchanged (virginal) state,
you can use svntest.actions.get_virginal_state().

Testing for on-disk states is required in several instances, among
which (see the sketch below):

 - Checking for specific file contents (after a merge, for example)
 - Checking for properties and their values

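A sketch of such a check, assuming the pristine greek_state helper;
note that the exact compare_trees signature (with or without a leading
label argument) differs between versions:

    # Describe the expected on-disk state after the merge...
    expected_disk = svntest.main.greek_state.copy()
    expected_disk.tweak('A/mu', contents='merged contents of mu\n')

    # ...then compare it against what is actually on disk.
    actual_disk = svntest.tree.build_tree_from_wc(wc_dir)
    svntest.tree.compare_trees('disk', actual_disk,
                               expected_disk.old_tree())
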
Normally any change is validated, both before and after it is made,
by running run_and_verify_status, or by passing an expected_status
to one of the other run_and_verify_* methods.

A clean expected_status can be obtained by calling
svntest.actions.get_virginal_state(<wc_dir>, <revision>).

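For instance, to assert that the working copy is pristine at
revision 1 except for a local modification of A/mu (a sketch):

    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('A/mu', status='M ')

    svntest.actions.run_and_verify_status(wc_dir, expected_status)
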
Differences between on-disk and status trees
============================================

Both on-disk and status information are recorded in the same kind
of structure, but there are some differences in the elements that
are assigned to each field:

  Fieldname       On-disk              Status

###Note: maybe others?

Most methods in the run_and_verify_* API take an expected_output
parameter.  This parameter describes which actions the command-line
client should report taking on each target.  So far there are:

* Minimize the use of 'run_command' and 'run_svn'

  The output of these commands is not checked by the test suite
  itself, so if you really need to use them, be sure to check
  any relevant output yourself, as in the sketch below.

  If you have any choice at all, please don't use them.

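A sketch of checking run_svn output by hand.  The three-value return
is assumed (older versions returned only the output and error line
lists), 'nonexistent_path' is a placeholder, and the error text shown
is only illustrative:

    # Tell run_svn an error is expected, then verify it is the right one.
    exit_code, output, errput = svntest.main.run_svn(1, 'delete',
                                                     nonexistent_path)
    for line in errput:
      if line.find('not under version control') != -1:
        break
    else:
      raise svntest.Failure('expected error message not found')
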
* Tests which check for failure as expected behaviour should PASS

  The XFAIL test status is *only* meant for tests which check for
  behaviour that is expected to be supported, but is not yet.

* File accesses can't use hardcoded '/' characters

  Because the tests also need to run on platforms with different
  path separators (e.g. MS Windows), you need to use the
  os.path.join() function to concatenate path strings.

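For example:

    # Yields 'A/mu' or 'A\mu' as appropriate for the platform.
    mu_path = os.path.join(wc_dir, 'A', 'mu')
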
* Paths within status structures *do* use '/' characters

  Paths within expected_status or expected_disk structures use '/'
  characters as path separators.

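So the two conventions sit side by side in a typical test,
for example:

    # On-disk access: platform-specific separators via os.path.join().
    lambda_path = os.path.join(wc_dir, 'A', 'B', 'lambda')

    # Status/disk structures: always '/' separators.
    expected_status.tweak('A/B/lambda', status='M ')
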
* Don't forget to check the output for correctness

  You need to check not only whether a command generated output, but
  also whether that output meets your expectations:

  - If the program is supposed to generate an error, check
    that it generates the error you expect it to (see the
    sketch below).
  - If the program does not generate an error, check that
    it gives you the confirmation you expect it to.

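A sketch of pinning down a specific error.  Newer versions of
run_and_verify_svn accept a regular expression for the expected
stderr; the error text and 'bogus_url' are placeholders:

    # Expect this copy to fail with a particular error, not just any error.
    svntest.actions.run_and_verify_svn(None,
                                       [],                  # no stdout expected
                                       '.*non-existent.*',  # stderr must match
                                       'copy', bogus_url, wc_dir)
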
* Don't forget to check pre- and post-command conditions

  You need to verify that the status and on-disk structures are
  actually what you think they are before invoking the command
  you're testing.  Likewise, you need to verify that the command
  resulted in the expected output, status and on-disk structure.

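Put together, a sketch of that bracketing; rho_path and the
expected_status tree are assumed to be set up as shown earlier:

    # Pre-condition: the working copy is exactly as expected.
    svntest.actions.run_and_verify_status(wc_dir, expected_status)

    # The command under test.
    svntest.actions.run_and_verify_svn(None, None, [],
                                       'revert', rho_path)

    # Post-condition: the revert restored the pristine status.
    svntest.actions.run_and_verify_status(wc_dir, expected_status)
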
* Don't forget to check!

  Yes, just check anything you can check.  If you don't, your test
  may be passing for all the wrong reasons.