The LLDB test suite consists of three different kinds of test:

* **Unit tests**: written in C++ using the googletest unit testing library.
* **Shell tests**: Integration tests that test the debugger through the command
  line. These tests interact with the debugger either through the command line
  driver or through ``lldb-test``, which is a tool that exposes the internal
  data structures in an easy-to-parse way for testing. Most people will know
  these as *lit tests* in LLVM, although lit is the test driver and ShellTest
  is the test format that uses ``RUN:`` lines. `FileCheck
  <https://llvm.org/docs/CommandGuide/FileCheck.html>`_ is used to verify
  the output.
* **API tests**: Integration tests that interact with the debugger through the
  SB API. These are written in Python and use LLDB's ``dotest.py`` testing
  framework on top of Python's `unittest2
  <https://docs.python.org/2/library/unittest.html>`_.

All three test suites use ``lit`` (the `LLVM Integrated Tester
<https://llvm.org/docs/CommandGuide/lit.html>`_) as the test driver. The test
suites can be run as a whole or separately.

Unit Tests
----------

Unit tests are located under ``lldb/unittests``. If it's possible to test
something in isolation or as a single unit, you should make it a unit test.

Often you need instances of the core objects, such as a debugger, target or
process, in order to test something meaningful. We already have a handful of
tests with the necessary boilerplate, but this is something we could abstract
away and make more user friendly.

Shell Tests
-----------

Shell tests are located under ``lldb/test/Shell``. These tests are generally
built around checking the output of ``lldb`` (the command line driver) or
``lldb-test`` using ``FileCheck``. Shell tests are generally small and fast to
write because they require little boilerplate.

``lldb-test`` is a relatively new addition to the test suite. It was the first
tool added specifically for testing. Since then it has been continuously
extended with new subcommands, improving our test coverage. Among other things
you can use it to query lldb for symbol files, for object files and for
breakpoints.

Obviously, shell tests are great for testing the command line driver itself or
the subcomponents already exposed by ``lldb-test``. But when it comes to LLDB's
vast functionality, most things can be tested both through the driver as well
as the Python API. For example, to test setting a breakpoint, you could do it
from the command line driver with ``b main`` or you could use the SB API and do
something like ``target.BreakpointCreateByName`` [#]_.

A good rule of thumb is to prefer shell tests when what is being tested is
relatively simple. Expressivity is limited compared to the API tests, which
means that you have to have a well-defined test scenario that you can easily
match with ``FileCheck``.

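As a rough illustration of that matching model, here is a simplified,
hypothetical sketch in Python. This is not FileCheck itself (which also
supports regexes, ``CHECK-NEXT``, ``CHECK-NOT``, pattern variables and much
more); it only shows the core idea that each pattern must match on a line at
or after the line matched by the previous pattern:

```python
# Illustrative sketch only: a tiny subset of FileCheck's behavior.
def filecheck(output, checks):
    """Return True if every pattern in `checks` matches, in order."""
    lines = output.splitlines()
    start = 0
    for pattern in checks:
        for i in range(start, len(lines)):
            if pattern in lines[i]:
                start = i + 1  # later patterns must match on later lines
                break
        else:
            return False  # pattern not found after the previous match
    return True

output = "(lldb) b main\nBreakpoint 1: where = a.out`main"
print(filecheck(output, ["b main", "Breakpoint 1"]))  # ordered: matches
print(filecheck(output, ["Breakpoint 1", "b main"]))  # out of order: fails
```

The ordering requirement is what makes this kind of check well suited to
small, deterministic output, and hard to apply to complex scenarios.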
Another thing to consider are the binaries being debugged, which we call
inferiors. For shell tests, they have to be relatively simple. The
``dotest.py`` test framework has extensive support for complex build scenarios
and different variants, which is described in more detail below, while shell
tests are limited to single lines of shell commands with compiler and linker
invocations.

On the same topic, another interesting aspect of the shell tests is that you
can often get away with a broken or incomplete binary, whereas the API tests
almost always require a fully functional executable. This enables testing of
(some) aspects of handling of binaries with non-native architectures or
operating systems.

Finally, the shell tests always run in batch mode. You start with some input
and the test verifies the output. The debugger can be sensitive to its
environment, such as the platform it runs on. It can be hard to express that
the same test might behave slightly differently on macOS and Linux.
Additionally, the debugger is an interactive tool, and the shell tests provide
no good way of testing those interactive aspects, such as tab completion, for
example.

API Tests
---------

API tests are located under ``lldb/test/API``. They are run with
``dotest.py``. Tests are written in Python and test binaries (inferiors) are
compiled with Make. The majority of API tests are end-to-end tests that compile
programs from source, run them, and debug the processes.

As mentioned before, ``dotest.py`` is LLDB's testing framework. The
implementation is located under ``lldb/packages/Python/lldbsuite``. We have
several extensions and custom test primitives on top of what's offered by
`unittest2 <https://docs.python.org/2/library/unittest.html>`_. Those can be
found in
`lldbtest.py <https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/lldbtest.py>`_.

Below is the directory layout of the `example API test
<https://github.com/llvm/llvm-project/tree/main/lldb/test/API/sample_test>`_.
The test directory will always contain a python file, starting with ``Test``.
Most of the tests are structured as a binary being debugged, so there will be
one or more source files and a ``Makefile``.

::

   sample_test
   ├── Makefile
   ├── TestSampleTest.py
   └── main.c

Let's start with the Python test file. Every test is its own class and can have
one or more test methods that start with ``test_``. Many tests define multiple
test methods and share a bunch of common code. For example, for a fictive test
that makes sure we can set breakpoints we might have one test method that
ensures we can set a breakpoint by address, one that sets a breakpoint by name
and another that sets the same breakpoint by file and line number. The setup,
teardown and everything else other than setting the breakpoint could be shared.

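The shape of such a test class can be sketched with plain ``unittest``. This
is a hypothetical illustration: the class and helper names are made up, and
the debugger interaction is stubbed out rather than going through the real SB
API, so the snippet runs anywhere:

```python
import unittest

# Hypothetical sketch: one class, several test_ methods, shared logic in a
# helper. In a real API test the helper would create the breakpoint through
# the SB API and verify that it resolved; here it is stubbed out.
class BreakpointTestSketch(unittest.TestCase):
    def _assert_breakpoint(self, spec):
        # Shared setup/verification code lives in one place.
        self.assertTrue(spec, "breakpoint spec should not be empty")

    def test_set_by_address(self):
        self._assert_breakpoint({"address": 0x1000})

    def test_set_by_name(self):
        self._assert_breakpoint({"name": "main"})

    def test_set_by_file_and_line(self):
        self._assert_breakpoint({"file": "main.c", "line": 4})

suite = unittest.defaultTestLoader.loadTestsFromTestCase(BreakpointTestSketch)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

Each ``test_`` method is discovered and run independently, while the shared
helper keeps the actual breakpoint logic in one place.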
Our testing framework also has a bunch of utilities that abstract common
operations, such as creating targets, setting breakpoints, etc. When code is
shared across tests, we extract it into a utility in ``lldbutil``. It's always
worth taking a look at `lldbutil
<https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/lldbutil.py>`_
to see if there's a utility to simplify some of the testing boilerplate.
Because we can't always audit every existing test, this is doubly true when
looking at an existing test for inspiration.

It's possible to skip or `XFAIL
<https://ftp.gnu.org/old-gnu/Manuals/dejagnu-1.3/html_node/dejagnu_6.html>`_
tests using decorators. You'll see them a lot. The debugger can be sensitive to
things like the architecture, the host and target platform, the compiler
version, etc. LLDB comes with a range of predefined decorators for these
purposes.

.. code-block:: python

   @expectedFailureAll(archs=["aarch64"], oslist=["linux"])

Another great thing about these decorators is that they're very easy to
extend; it's even possible to define a function in a test case that determines
whether the test should be run or not.

.. code-block:: python

   @expectedFailure(checking_function_name)

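A simplified model of how a checking function can drive such a decorator
might look like this. This is an illustrative sketch only, not LLDB's actual
decorator implementation; the names ``expectedFailureIf`` and ``Example`` are
made up:

```python
import unittest

# Simplified model of a condition-driven XFAIL decorator: the checking
# function returns True when a failure is expected under the current
# configuration, in which case an AssertionError is swallowed.
def expectedFailureIf(checking_function):
    def decorator(test_method):
        def wrapper(self):
            if checking_function():
                try:
                    test_method(self)
                except AssertionError:
                    return  # the failure was expected; count as pass
                raise AssertionError("unexpectedly passed")
            return test_method(self)
        return wrapper
    return decorator

class Example(unittest.TestCase):
    @expectedFailureIf(lambda: True)  # pretend the condition matched
    def test_known_bug(self):
        self.assertEqual(1, 2)  # fails, but the failure is expected

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Example)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # the expected failure does not fail the run
```

Because the checking function runs at test time, it can inspect anything
about the environment, which is what makes this pattern so flexible.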
In addition to providing a lot more flexibility when it comes to writing the
test, the API tests also allow for much more complex scenarios when it comes to
building inferiors. Every test has its own ``Makefile``, most of them only a
few lines long. A shared ``Makefile`` (``Makefile.rules``) with about a
thousand lines of rules takes care of most if not all of the boilerplate, while
individual make files can be used to build more advanced tests.

Here's an example of a simple ``Makefile`` used by the example test.

.. code-block:: make

   C_SOURCES := main.c
   CFLAGS_EXTRAS := -std=c99

   include Makefile.rules

Finding the right variables to set can be tricky. You can always take a look at
`Makefile.rules <https://github.com/llvm/llvm-project/blob/main/lldb/packages/Python/lldbsuite/test/make/Makefile.rules>`_,
but often it's easier to find an existing ``Makefile`` that does something
similar to what you want to do.

Another thing this enables is having different variants for the same test case.
By default, we run every test for two debug info formats: once with DWARF from
the object files and once with a dSYM on macOS or split DWARF (DWO) on Linux.
But there are many more things we can test that are orthogonal to the test
itself. On GreenDragon we have a matrix bot that runs the test suite under
different configurations, with older host compilers and different DWARF
versions.

As you can imagine, this quickly leads to a combinatorial explosion in the
number of variants. It's very tempting to add more variants because it's an
easy way to increase test coverage, but it doesn't scale. It's easy to set up,
but it increases the runtime of the tests and has a large ongoing cost.

The test variants are most useful when developing a larger feature (e.g.,
support for a new DWARF version). The test suite contains a large number of
fairly generic tests, so running the test suite with the feature enabled is a
good way to gain confidence that you haven't missed an important aspect.
However, this genericness makes them poor regression tests. Because it's not
clear what a specific test covers, a random modification to the test case can
make it start (or stop) testing a completely different part of your feature.
And since these tests tend to look very similar, it's easy for a simple bug to
cause hundreds of tests to fail in the same way.

For this reason, we recommend using test variants only while developing a new
feature. This can often be done by running the test suite with different
arguments -- without any modifications to the code. You can create a focused
test for any bug found that way. Often, there will be many tests failing, but a
lot of them will have the same root cause. These tests will be easier to debug
and will not put undue burden on all other bots and developers.

In conclusion, you'll want to opt for an API test to test the API itself or
when you need the expressivity, either for the test case itself or for the
program being debugged. The fact that the API tests work with different
variants means that more general tests should be API tests, so that they can be
run against the different variants.

Guidelines for API tests
^^^^^^^^^^^^^^^^^^^^^^^^

API tests are expected to be fast, reliable and maintainable. To achieve this
goal, API tests should conform to the following guidelines in addition to
normal good testing practices.

**Don't unnecessarily launch the test executable.**
  Launching a process and running to a breakpoint can often be the most
  expensive part of a test and should be avoided if possible. A large part
  of LLDB's functionality is available directly after creating an `SBTarget`
  of the test executable.

  The part of the SB API that can be tested with just a target includes
  everything that represents information about the executable and its
  debug information (e.g., `SBTarget`, `SBModule`, `SBSymbolContext`,
  `SBFunction`, `SBInstruction`, `SBCompileUnit`, etc.). For test executables
  written in languages with a type system that is mostly defined at compile
  time (e.g., C and C++) there is also usually no process necessary to test
  the `SBType`-related parts of the API. With those languages it's also
  possible to test `SBValue` by running expressions with
  `SBTarget.EvaluateExpression` or the ``expect_expr`` testing utility.

  Functionality that always requires a running process is everything that
  tests the `SBProcess`, `SBThread`, and `SBFrame` classes. The same is true
  for tests that exercise breakpoints, watchpoints and sanitizers.
  Languages such as Objective-C that have a dependency on a runtime
  environment also always require a running process.

**Don't unnecessarily include system headers in test sources.**
  Including external headers slows down the compilation of the test executable
  and makes reproducing test failures on other operating systems or
  configurations harder.

**Avoid specifying test-specific compiler flags when including system headers.**
  If a test requires including a system header (e.g., a test for a libc++
  formatter includes a libc++ header), try to avoid specifying custom compiler
  flags if possible. Certain debug information formats such as ``gmodules``
  use a cache that is shared between all API tests and that contains
  precompiled system headers. If you add or remove a specific compiler flag
  in your test (e.g., adding ``-DFOO`` to the ``Makefile`` or ``self.build``
  arguments), then the test will not use the shared precompiled header cache
  and will expensively recompile all system headers from scratch. If you
  depend on a specific compiler flag for the test, you can avoid this issue by
  either removing all system header includes or decorating the test function
  with ``@no_debug_info_test`` (which will avoid running all debug information
  variants including ``gmodules``).

**Test programs should be kept simple.**
  Test executables should do the minimum amount of work to bring the process
  into the state that is required for the test. Simulating a 'real' program
  that actually tries to do some useful task rarely helps with catching bugs
  and makes the test much harder to debug and maintain. The test programs
  should always be deterministic (i.e., do not generate and check against
  random test data).

**Identifiers in tests should be simple and descriptive.**
  Often test programs need to declare functions and classes which require
  choosing some form of identifier for them. These identifiers should always
  either be kept simple for small tests (e.g., ``A``, ``B``, ...) or have some
  descriptive name (e.g., ``ClassWithTailPadding``, ``inlined_func``, ...).
  Never choose identifiers that are already used anywhere else in LLVM or
  other programs (e.g., don't name a class ``VirtualFileSystem``, a function
  ``llvm_unreachable``, or a namespace ``rapidxml``) as this will mislead
  people ``grep``'ing the LLVM repository for those strings.

**Prefer LLDB testing utilities over directly working with the SB API.**
  The ``lldbutil`` module and the ``TestBase`` class come with a large amount
  of utility functions that can do common test setup tasks (e.g., starting a
  test executable and running the process to a breakpoint). Using these
  functions not only keeps the test shorter and free of duplicated code, but
  they also follow best test suite practices and usually give much clearer
  error messages if something goes wrong. The test utilities also contain
  custom asserts and checks that should preferably be used (e.g.,
  ``self.assertSuccess``).

**Prefer calling the SB API over checking command output.**
  Avoid writing your tests on top of ``self.expect(...)`` calls that check
  the output of LLDB commands and instead try calling into the SB API. Relying
  on LLDB commands makes changing (and improving) the output/syntax of
  commands harder and the resulting tests are often prone to accepting
  incorrect test results. Especially improved error messages that contain
  more information might cause these ``self.expect`` calls to unintentionally
  find the required ``substrs``. For example, the following ``self.expect``
  check will unexpectedly pass if it's run as the first expression in a test:

  .. code-block:: python

     self.expect("expr 2 + 2", substrs=["0"])

  When running the same command in LLDB the reason for the unexpected success
  is that '0' is found in the name of the implicitly created result variable:

  ::

     (lldb) expr 2 + 2
     (int) $0 = 4
            ^ The '0' substring is found here.

  A better way to write the test above would be to use LLDB's testing function
  ``expect_expr``, which will only pass if the expression produces a value of
  0:

  .. code-block:: python

     self.expect_expr("2 + 2", result_value="0")

**Prefer using specific asserts over the generic assertTrue/assertFalse.**
  The ``self.assertTrue``/``self.assertFalse`` functions should always be your
  last option as they give non-descriptive error messages. The test class has
  several expressive asserts such as ``self.assertIn`` that automatically
  generate an explanation of how the received values differ from the expected
  ones. Check the documentation of Python's ``unittest`` module to see what
  asserts are available. LLDB also has a few custom asserts that are tailored
  to our own data types.

  +-----------------------------------------------+-----------------------------------------------------------------+
  | **Assert**                                    | **Description**                                                 |
  +-----------------------------------------------+-----------------------------------------------------------------+
  | ``assertSuccess``                             | Assert that an ``lldb.SBError`` is in the "success" state.      |
  +-----------------------------------------------+-----------------------------------------------------------------+
  | ``assertState``                               | Assert that two states (``lldb.eState*``) are equal.            |
  +-----------------------------------------------+-----------------------------------------------------------------+
  | ``assertStopReason``                          | Assert that two stop reasons (``lldb.eStopReason*``) are equal. |
  +-----------------------------------------------+-----------------------------------------------------------------+

  If you can't find a specific assert that fits your needs and you fall back
  to a generic assert, make sure you put useful information into the assert's
  ``msg`` argument that helps explain the failure.

  .. code-block:: python

     # Bad. Will print a generic error such as 'False is not True'.
     self.assertTrue(expected_string in list_of_results)
     # Good. Will print expected_string and the contents of list_of_results.
     self.assertIn(expected_string, list_of_results)

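The difference is easy to demonstrate with plain ``unittest`` outside of
LLDB. The ``_MessageDemo`` class and the sample values below are made up for
illustration; the error messages come from Python's standard library:

```python
import unittest

class _MessageDemo(unittest.TestCase):
    def runTest(self):  # needed so the class can be instantiated directly
        pass

tc = _MessageDemo()
results = ["breakpoint set", "process launched"]

try:
    tc.assertTrue("process exited" in results)
except AssertionError as e:
    generic = str(e)  # just "False is not true" -- the values are lost

try:
    tc.assertIn("process exited", results)
except AssertionError as e:
    specific = str(e)  # names the missing value and prints the list

print(generic)
print(specific)
```

The second message tells you immediately what was expected and what was
actually there, without re-running the test under a debugger.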
**Do not use hard-coded line numbers in your test case.**
  Instead, try to tag the line with some distinguishing pattern, and use the
  function ``line_number()`` defined in ``lldbtest.py``, which takes
  ``filename`` and ``string_to_match`` as arguments and returns the line
  number.

  As an example, take a look at
  ``test/API/functionalities/breakpoint/breakpoint_conditions/main.c``, which
  has these comments:

  .. code-block:: c

     return c(val); // Find the line number of c's parent call here.

     ...

     return val + 3; // Find the line number of function "c" here.

  The Python test case ``TestBreakpointConditions.py`` uses the comment
  strings to find the line numbers during ``setUp(self)`` and uses them later
  on to verify that the correct breakpoint is being stopped on and that its
  parent frame also has the correct line number, as intended through the
  breakpoint condition.

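A standalone sketch of what such a helper boils down to (the real
``line_number`` lives in ``lldbtest.py``; this version is illustrative only
and the sample source file is a throwaway temporary file):

```python
import tempfile

# Standalone sketch of a line_number()-style helper: scan a source file for
# a marker comment and return its 1-based line number, so the test never
# hard-codes line numbers that break whenever the source is edited.
def line_number(filename, string_to_match):
    with open(filename) as f:
        for number, line in enumerate(f, start=1):
            if string_to_match in line:
                return number
    raise ValueError("%r not found in %s" % (string_to_match, filename))

# Usage: find the tagged line in a (temporary) source file.
with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as src:
    src.write("int main() {\n  return 0; // Set break point at this line.\n}\n")

print(line_number(src.name, "// Set break point at this line."))  # prints 2
```

Because the marker travels with the line it tags, inserting or deleting code
above it does not invalidate the test.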
**Take advantage of the unittest framework's decorator features.**
  These features can be used to properly mark your test class or method as
  platform-specific, compiler-specific or version-specific.

  As an example, take a look at
  ``test/API/lang/c/forward/TestForwardDeclaration.py``, which has these
  lines:

  .. code-block:: python

     @skipIf(compiler=no_match("clang"))
     @skipIf(compiler_version=["<", "8.0"])
     @expectedFailureAll(oslist=["windows"])
     def test_debug_names(self):
         """Test that we are able to find complete types when using DWARF v5
         accelerator tables"""
         self.do_test(dict(CFLAGS_EXTRAS="-gdwarf-5 -gpubnames"))

  This tells the test harness to skip the test unless the compiler is clang
  with a version of at least 8.0, and to expect the test to fail on Windows.

**Class-wise cleanup after yourself.**
  ``TestBase.tearDownClass(cls)`` provides a mechanism to invoke the
  platform-specific cleanup after finishing with a test class. A test class
  can have more than one test method, so the ``tearDownClass(cls)`` method
  gets run after all the test methods have been executed by the test harness.

  The default cleanup action performed by the
  ``packages/Python/lldbsuite/test/lldbtest.py`` module invokes the
  "make clean" os command.

  If this default cleanup is not enough, an individual class can provide an
  extra cleanup hook with a class method named ``classCleanup``, for example,
  in ``test/API/terminal/TestSTTYBeforeAndAfter.py``:

  .. code-block:: python

     @classmethod
     def classCleanup(cls):
         """Cleanup the test byproducts."""
         cls.RemoveTempFile("child_send1.txt")

  The ``child_send1.txt`` file gets generated during the test run, so it makes
  sense to explicitly spell out the cleanup action in the same
  ``TestSTTYBeforeAndAfter.py`` file, instead of artificially adding it as
  part of the default cleanup action which serves to clean up those
  intermediate and a.out files.


Buildbots
---------

LLVM Buildbot is the place where volunteers provide machines for building and
testing. Everyone can `add a buildbot for LLDB
<https://llvm.org/docs/HowToAddABuilder.html>`_.

An overview of all LLDB builders can be found here:
`https://lab.llvm.org/buildbot/#/builders?tags=lldb <https://lab.llvm.org/buildbot/#/builders?tags=lldb>`_

Building and testing for macOS uses a different platform called GreenDragon. It
has a dedicated tab for LLDB: `https://green.lab.llvm.org/green/view/LLDB/
<https://green.lab.llvm.org/green/view/LLDB/>`_

Running The Tests
-----------------

.. note::

   On Windows any invocations of python should be replaced with python_d, the
   debug interpreter, when running the test suite against a debug version of
   LLDB.

.. note::

   On NetBSD you must export ``LD_LIBRARY_PATH=$PWD/lib`` in your environment.
   This is due to lack of the ``$ORIGIN`` linker feature.

Running the Full Test Suite
```````````````````````````

The easiest way to run the LLDB test suite is to use the ``check-lldb`` build
target.

By default, the ``check-lldb`` target builds the test programs with the same
compiler that was used to build LLDB. To build the tests with a different
compiler, you can set the ``LLDB_TEST_COMPILER`` CMake variable.

It is possible to customize the architecture of the test binaries and the
compiler used by appending the ``-A`` and ``-C`` options respectively to the
CMake variable ``LLDB_TEST_USER_ARGS``. For example, to test LLDB against
32-bit binaries built with a custom version of clang, do:

.. code-block:: bash

   $ cmake -DLLDB_TEST_USER_ARGS="-A i386 -C /path/to/custom/clang" -G Ninja

Note that multiple ``-A`` and ``-C`` flags can be specified to
``LLDB_TEST_USER_ARGS``.

Running a Single Test Suite
```````````````````````````

Each test suite can be run separately, similar to running the whole test suite
with ``check-lldb``.

* Use ``check-lldb-unit`` to run just the unit tests.
* Use ``check-lldb-api`` to run just the SB API tests.
* Use ``check-lldb-shell`` to run just the shell tests.

You can run specific subdirectories by appending the directory name to the
target. For example, to run all the tests in ``ObjectFile``, you can use the
target ``check-lldb-shell-objectfile``. However, because the unit tests and API
tests don't actually live under ``lldb/test``, this convenience is only
available for the shell tests.

Running a Single Test
`````````````````````

The recommended way to run a single test is by invoking the lit driver with a
filter. This ensures that the test is run with the same configuration as when
run as part of a test suite.

.. code-block:: bash

   $ ./bin/llvm-lit -sv tools/lldb/test --filter <test>

Because lit automatically scans a directory for tests, it's also possible to
pass a subdirectory to run a specific subset of the tests.

.. code-block:: bash

   $ ./bin/llvm-lit -sv tools/lldb/test/Shell/Commands/CommandScriptImmediateOutput

For the SB API tests it is possible to forward arguments to ``dotest.py`` by
passing ``--param`` to lit and setting a value for ``dotest-args``.

.. code-block:: bash

   $ ./bin/llvm-lit -sv tools/lldb/test --param dotest-args='-C gcc'

Below is an overview of running individual tests in the unit and API test
suites without going through the lit driver.

Running a Specific Test or Set of Tests: API Tests
``````````````````````````````````````````````````

In addition to running all the LLDB test suites with the ``check-lldb`` CMake
target above, it is possible to run individual LLDB tests. If you have a CMake
build you can use the ``lldb-dotest`` binary, which is a wrapper around
``dotest.py`` that passes all the arguments configured by CMake.

Alternatively, you can use ``dotest.py`` directly, if you want to run a test
one-off with a different configuration.

For example, to run the test cases defined in TestInferiorCrashing.py, run:

.. code-block:: bash

   $ ./bin/lldb-dotest -p TestInferiorCrashing.py

or with ``dotest.py`` directly:

.. code-block:: bash

   $ python dotest.py --executable <path-to-lldb> -p TestInferiorCrashing.py ../packages/Python/lldbsuite/test

If the test is not specified by name (e.g. if you leave the ``-p`` argument
off), all tests in that directory will be executed:

.. code-block:: bash

   $ ./bin/lldb-dotest functionalities/data-formatter

or:

.. code-block:: bash

   $ python dotest.py --executable <path-to-lldb> functionalities/data-formatter

Many more options are available. To see a list of all of them, run:

.. code-block:: bash

   $ python dotest.py -h

Running a Specific Test or Set of Tests: Unit Tests
```````````````````````````````````````````````````

The unit tests are simple executables, located in the build directory under
``tools/lldb/unittests``.

To run them, just run the test binary. For example, to run all the Host tests:

.. code-block:: bash

   $ ./tools/lldb/unittests/Host/HostTests

To run a specific test, pass a filter, for example:

.. code-block:: bash

   $ ./tools/lldb/unittests/Host/HostTests --gtest_filter=SocketTest.DomainListenConnectAccept

Running the Test Suite Remotely
```````````````````````````````

Running the test suite remotely is similar to the process of running a local
test suite, but there are two things to keep in mind:

1. You must have the lldb-server running on the remote system, ready to accept
   multiple connections. For more information on how to set up remote debugging
   see the Remote debugging page.
2. You must tell the test suite how to connect to the remote system. This is
   achieved using the ``--platform-name``, ``--platform-url`` and
   ``--platform-working-dir`` parameters to ``dotest.py``. These parameters
   correspond to the platform select and platform connect LLDB commands. You
   will usually also need to specify the compiler and architecture for the
   remote system.

Currently, running the remote test suite is supported only with ``dotest.py``
(or ``dosep.py`` with a single thread), but we expect this issue to be
addressed in the near future.

Running tests in QEMU System Emulation Environment
``````````````````````````````````````````````````

QEMU can be used to test LLDB in an emulation environment in the absence of
actual hardware. :doc:`/use/qemu-testing` describes how to set up an
emulation environment using the QEMU helper scripts found in
``llvm-project/lldb/scripts/lldb-test-qemu``. These scripts currently work
with Arm or AArch64, but support for other architectures can be added easily.

Debugging Test Failures
-----------------------

On non-Windows platforms, you can use the ``-d`` option of ``dotest.py``, which
will cause the script to print out the pid of the test and wait for a while
until a debugger is attached. Then run ``lldb -p <pid>`` to attach.

To instead debug a test's python source, edit the test and insert
``import pdb; pdb.set_trace()`` at the point where you want to start debugging.
In addition to pdb's debugging facilities, lldb commands can be executed with
the help of a pdb alias, for example ``lldb bt`` and ``lldb v some_var``. Add
this line to your ``~/.pdbrc``:

::

   alias lldb self.dbg.HandleCommand("%*")

Debugging Test Failures on Windows
``````````````````````````````````

On Windows, it is strongly recommended to use Python Tools for Visual Studio
(PTVS) for debugging test failures. It can seamlessly step between native and
managed code, which is very helpful when you need to step through the test
itself, and then into the LLDB code that backs the operations the test is
performing.

A quick guide to getting started with PTVS is as follows:

#. Create a Visual Studio Project for the Python code.

   #. Go to File -> New -> Project -> Python -> From Existing Python Code.
   #. Choose llvm/tools/lldb as the directory containing the Python code.
   #. When asked where to save the .pyproj file, choose the folder
      ``llvm/tools/lldb/pyproj``. This is a special folder that is ignored by
      the ``.gitignore`` file, since it is not checked in.

#. Set test/dotest.py as the startup file.
#. Make sure there is a Python Environment installed for your distribution. For
   example, if you installed Python to ``C:\Python35``, PTVS needs to know that
   this is the interpreter you want to use for running the test suite.

   #. Go to Tools -> Options -> Python Tools -> Environment Options.
   #. Click Add Environment, and enter Python 3.5 Debug for the name. Fill out
      the values correctly.

#. Configure the project to use this debug interpreter.

   #. Right click the Project node in Solution Explorer.
   #. In the General tab, make sure Python 3.5 Debug is the selected
      Interpreter.
   #. In Debug/Search Paths, enter the path to your ninja/lib/site-packages
      directory.
   #. In Debug/Environment Variables, enter
      ``VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\``.
   #. If you want to enable mixed mode debugging, check Enable native code
      debugging (this slows down debugging, so enable it only on an as-needed
      basis).

#. Set the command line for the test suite to run.

   #. Right click the project in Solution Explorer and choose the Debug tab.
   #. Enter the arguments to dotest.py.
   #. Example command options:

      ::

         # Path to debug lldb.exe
         --executable D:/src/llvmbuild/ninja/bin/lldb.exe
         # Directory to store log files
         -s D:/src/llvmbuild/ninja/lldb-test-traces
         -u CXXFLAGS -u CFLAGS
         # If a test crashes, show JIT debugging dialog.
         --enable-crash-dialog
         # Path to release clang.exe
         -C d:\src\llvmbuild\ninja_release\bin\clang.exe
         # Path to the particular test you want to debug.
         -p TestPaths.py
         D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test

All combined into a single command line:

::

   --arch=i686 --executable D:/src/llvmbuild/ninja/bin/lldb.exe -s D:/src/llvmbuild/ninja/lldb-test-traces -u CXXFLAGS -u CFLAGS --enable-crash-dialog -C d:\src\llvmbuild\ninja_release\bin\clang.exe -p TestPaths.py D:\src\llvm\tools\lldb\packages\Python\lldbsuite\test --no-multiprocess

.. [#] `https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName <https://lldb.llvm.org/python_reference/lldb.SBTarget-class.html#BreakpointCreateByName>`_