# MBR - MLIR Benchmark Runner

MBR is a tool for running benchmarks. It measures the compilation and running
times of benchmark programs, and it uses MLIR's Python bindings for MLIR
benchmarks.

## Installation

To build and enable MLIR benchmarks, pass `-DMLIR_ENABLE_PYTHON_BENCHMARKS=ON`
while building MLIR. If you make changes to the `mbr` files themselves, rebuild
with `-DMLIR_ENABLE_PYTHON_BENCHMARKS=ON`.

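For example, a configure-and-build step might look like the sketch below. The
source path, generator, and the Python bindings flag are assumptions about a
typical MLIR setup, not something MBR prescribes.

```bash
# Illustrative only; adjust paths and flags to your build layout.
cmake -G Ninja llvm-project/llvm \
    -DLLVM_ENABLE_PROJECTS=mlir \
    -DMLIR_ENABLE_BINDINGS_PYTHON=ON \
    -DMLIR_ENABLE_PYTHON_BENCHMARKS=ON
cmake --build .
```
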
## Writing benchmarks

As mentioned in the intro, this tool measures compilation and running times.
An MBR benchmark is a Python function that returns two callables, a compiler
and a runner. Here's an outline of a benchmark; we explain how it works after
the example code.

```python
def benchmark_something():
    # Preliminary setup.
    def compiler():
        # Compiles a program and creates an "executable object" that can be
        # called to invoke the compiled program.
        ...

    def runner(executable_object):
        # Sets up arguments for executable_object and calls it. The
        # executable_object is returned by the compiler.
        # Returns an integer representing running time in nanoseconds.
        ...

    return compiler, runner
```

The benchmark function's name must be prefixed with `benchmark_`, and benchmarks
must be in Python files whose names are prefixed with `benchmark_` for them to
be discoverable. The file and function prefixes are configurable using the
configuration file `mbr/config.ini`, relative to this README's directory.

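As a rough sketch, such a configuration could look like the snippet below. The
section and key names here are illustrative assumptions; consult the actual
`mbr/config.ini` for the real ones.

```ini
; Hypothetical layout; the real section/key names may differ.
[discovery]
filename_prefix = benchmark_
function_prefix = benchmark_
```
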
A benchmark returns two functions, a `compiler` and a `runner`. The `compiler`
returns a callable, which is accepted as an argument by the runner function.
So the two functions work like this:
1. `compiler`: configures and returns a callable.
2. `runner`: takes that callable as input, sets up its arguments, and calls
   it. Returns an int representing running time in nanoseconds (see the
   sketch after this list).

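Here is a minimal, self-contained sketch of that contract. The benchmark name
and the trivial "program" are hypothetical, and timing via
`time.perf_counter_ns` is just one way to produce the expected nanosecond
integer.

```python
import time


def benchmark_trivial_addition():
    def compiler():
        # Stand-in for a real compilation step: the returned callable plays
        # the role of the compiled "executable object".
        def compiled_program(a, b):
            return a + b

        return compiled_program

    def runner(executable_object):
        # Set up arguments, invoke the executable object, and return the
        # running time in nanoseconds as an int.
        start = time.perf_counter_ns()
        executable_object(1, 2)
        return time.perf_counter_ns() - start

    return compiler, runner
```
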
The `compiler` callable is optional if there is no compilation step, for
example, for benchmarks involving numpy. In that case, the benchmarks look
like this.

```python
def benchmark_something():
    # Preliminary setup.
    def runner():
        # Run the program and return the running time in nanoseconds.
        ...

    return None, runner
```

In this case, the runner does not take any input as there is no compiled
object to invoke.

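A hypothetical numpy benchmark in this style might look as follows; the
matrix sizes and the use of `np.matmul` are illustrative.

```python
import time

import numpy as np


def benchmark_numpy_matmul():
    def runner():
        # No compilation step: set up inputs, run, and time the call.
        a = np.random.rand(128, 128)
        b = np.random.rand(128, 128)
        start = time.perf_counter_ns()
        np.matmul(a, b)
        return time.perf_counter_ns() - start

    return None, runner
```
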
## Running benchmarks

MLIR benchmarks can be run like this:

```bash
PYTHONPATH=<path_to_python_mlir_core> <other_env_vars> python <llvm-build-path>/bin/mlir-mbr --machine <machine_identifier> --revision <revision_string> --result-stdout <path_to_start_search_for_benchmarks>
```

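As one filled-in example, an invocation could look like the following; the
PYTHONPATH value and build paths are assumptions about a common build layout,
not something MBR mandates.

```bash
# Illustrative values; adjust to your build tree and naming scheme.
PYTHONPATH=build/tools/mlir/python_packages/mlir_core \
python build/bin/mlir-mbr \
    --machine my-laptop --revision "$(git rev-parse HEAD)" --result-stdout \
    mlir/benchmark/python
```
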
For a description of command line arguments, run

```bash
python mlir/utils/mbr/mbr/main.py -h
```

To learn more about the other arguments, check out LNT's
documentation page [here](https://llvm.org/docs/lnt/concepts.html).

If you want to run only specific benchmarks, you can use the positional argument
`top_level_path` appropriately.

1. If you want to run benchmarks in a specific directory or file, set
   `top_level_path` to that directory or file.
2. If you want to run a specific benchmark function, set `top_level_path` to
   the file containing that benchmark function, followed by `::` and then the
   benchmark function name, for example,
   `mlir/benchmark/python/benchmark_sparse.py::benchmark_sparse_mlir_multiplication`.
   A concrete invocation of this form is shown below.

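For instance, to run just that one benchmark function (the machine and
revision values here are placeholders):

```bash
python <llvm-build-path>/bin/mlir-mbr --machine my-laptop --revision HEAD --result-stdout \
    mlir/benchmark/python/benchmark_sparse.py::benchmark_sparse_mlir_multiplication
```
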
## Configuration

Various aspects of the framework can be configured using the configuration
file `mbr/config.ini`, relative to the directory of this README.