.. SPDX-License-Identifier: GPL-2.0

============================
Tips For Running KUnit Tests
============================

Using ``kunit.py run`` ("kunit tool")
=====================================

Running from any directory
--------------------------

It can be handy to create a bash function like:

.. code-block:: bash

	function run_kunit() {
	  ( cd "$(git rev-parse --show-toplevel)" && ./tools/testing/kunit/kunit.py run "$@" )
	}

.. note::
	Early versions of ``kunit.py`` (before 5.6) didn't work unless run from
	the kernel root, hence the use of a subshell and ``cd``.
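
With this function defined, the tool can be invoked from anywhere in the tree,
and any arguments are passed straight through to ``kunit.py run``
(``--timeout`` below is just one example of such an option):

.. code-block:: bash

	# From any subdirectory of the kernel tree:
	$ run_kunit
	# Arguments are forwarded to kunit.py run unchanged:
	$ run_kunit --timeout=60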

Running a subset of tests
-------------------------

``kunit.py run`` accepts an optional glob argument to filter tests. The format
is ``"<suite_glob>[.test_glob]"``.

Say we wanted to run the sysctl tests; we could do so via:

.. code-block:: bash

	$ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
	$ ./tools/testing/kunit/kunit.py run 'sysctl*'

We can filter down to just the "write" tests via:

.. code-block:: bash

	$ echo -e 'CONFIG_KUNIT=y\nCONFIG_KUNIT_ALL_TESTS=y' > .kunit/.kunitconfig
	$ ./tools/testing/kunit/kunit.py run 'sysctl*.*write*'

We're paying the cost of building more tests than we need this way, but it's
easier than fiddling with ``.kunitconfig`` files or commenting out
``kunit_suite`` definitions.

However, if we wanted to define a set of tests in a less ad hoc way, the next
tip is useful.

Defining a set of tests
-----------------------

``kunit.py run`` (along with ``build`` and ``config``) supports a
``--kunitconfig`` flag. So if you have a set of tests that you want to run on a
regular basis (especially if they have other dependencies), you can create a
specific ``.kunitconfig`` for them.

E.g. kunit has one for its tests:

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit/.kunitconfig

Alternatively, if you're following the convention of naming your
file ``.kunitconfig``, you can just pass in the dir, e.g.

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit

.. note::
	This is a relatively new feature (5.12+) so we don't have any
	conventions yet about which files should be checked in versus just
	kept around locally. It's up to you and your maintainer to decide if a
	config is useful enough to submit (and therefore have to maintain).

.. note::
	Having ``.kunitconfig`` fragments in a parent and child directory is
	iffy. There's discussion about adding an "import" statement in these
	files to make it possible to have a top-level config run tests from all
	child directories. But that would mean ``.kunitconfig`` files are no
	longer just simple .config fragments.

	One alternative would be to have kunit tool recursively combine configs
	automagically, but tests could theoretically depend on incompatible
	options, so handling that would be tricky.

Setting kernel commandline parameters
-------------------------------------

You can use ``--kernel_args`` to pass arbitrary kernel arguments, e.g.

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --kernel_args=param=42 --kernel_args=param2=false


Generating code coverage reports under UML
------------------------------------------

.. note::
	TODO(brendanhiggins@google.com): There are various issues with UML and
	versions of gcc 7 and up. You're likely to run into missing ``.gcda``
	files or compile errors.

This is different from the "normal" way of getting coverage information that is
documented in Documentation/dev-tools/gcov.rst.

Instead of enabling ``CONFIG_GCOV_KERNEL=y``, we can set these options:

.. code-block:: none

	CONFIG_DEBUG_KERNEL=y
	CONFIG_DEBUG_INFO=y
	CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
	CONFIG_GCOV=y


Putting it together into a copy-pastable sequence of commands:

.. code-block:: bash

	# Append coverage options to the current config
	$ ./tools/testing/kunit/kunit.py run --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config
	# Extract the coverage information from the build dir (.kunit/)
	$ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/

	# From here on, it's the same process as with CONFIG_GCOV_KERNEL=y
	# E.g. can generate an HTML report in a tmp dir like so:
	$ genhtml -o /tmp/coverage_html coverage.info


If your installed version of gcc doesn't work, you can tweak the steps:

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py run --make_options=CC=/usr/bin/gcc-6
	$ lcov -t "my_kunit_tests" -o coverage.info -c -d .kunit/ --gcov-tool=/usr/bin/gcov-6

Alternatively, LLVM-based toolchains can also be used:

.. code-block:: bash

	# Build with LLVM and append coverage options to the current config
	$ ./tools/testing/kunit/kunit.py run --make_options LLVM=1 --kunitconfig=.kunit/ --kunitconfig=tools/testing/kunit/configs/coverage_uml.config
	$ llvm-profdata merge -sparse default.profraw -o default.profdata
	$ llvm-cov export --format=lcov .kunit/vmlinux -instr-profile default.profdata > coverage.info
	# The coverage.info file is in lcov-compatible format and it can be used to e.g. generate HTML report
	$ genhtml -o /tmp/coverage_html coverage.info


Running tests manually
======================

Running tests without using ``kunit.py run`` is also an important use case.
Currently it's your only option if you want to test on architectures other than
UML.

As running the tests under UML is fairly straightforward (configure and compile
the kernel, run the ``./linux`` binary), this section will focus on testing
non-UML architectures.


Running built-in tests
----------------------

When tests are set to ``=y``, they will run as part of boot and print results
to dmesg in TAP format. So you just need to add your tests to your ``.config``,
then build and boot your kernel as normal.

So if we compiled our kernel with:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=y

Then we'd see output like this in dmesg signaling the test ran and passed:

.. code-block:: none

	TAP version 14
	1..1
	    # Subtest: example
	    1..1
	    # example_simple_test: initializing
	    ok 1 - example_simple_test
	ok 1 - example

Running tests as modules
------------------------

Depending on the tests, you can build them as loadable modules.

For example, we'd change the config options from before to

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=m

Then after booting into our kernel, we can run the test via

.. code-block:: none

	$ modprobe kunit-example-test

This will then cause it to print TAP output to stdout.

.. note::
	The ``modprobe`` will *not* have a non-zero exit code if any test
	failed (as of 5.13). But ``kunit.py parse`` would, see below.

.. note::
	You can set ``CONFIG_KUNIT=m`` as well, however, some features will not
	work and thus some tests might break. Ideally tests would specify they
	depend on ``KUNIT=y`` in their ``Kconfig`` entries, but this is an edge case
	most test authors won't think about.
	As of 5.13, the only difference is that ``current->kunit_test`` will
	not exist.

Pretty-printing results
-----------------------

You can use ``kunit.py parse`` to parse dmesg for test output and print out
results in the same familiar format that ``kunit.py run`` does.

.. code-block:: bash

	$ ./tools/testing/kunit/kunit.py parse /var/log/dmesg
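
If no file is given, ``kunit.py parse`` reads from standard input, so on the
test machine the kernel log can be piped straight into it (assuming your
version of the tool supports reading from stdin; recent versions do):

.. code-block:: bash

	$ dmesg | ./tools/testing/kunit/kunit.py parse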


Retrieving per suite results
----------------------------

Regardless of how you're running your tests, you can enable
``CONFIG_KUNIT_DEBUGFS`` to expose per-suite TAP-formatted results:

.. code-block:: none

	CONFIG_KUNIT=y
	CONFIG_KUNIT_EXAMPLE_TEST=m
	CONFIG_KUNIT_DEBUGFS=y

The results for each suite will be exposed under
``/sys/kernel/debug/kunit/<suite>/results``.
So using our example config:

.. code-block:: bash

	$ modprobe kunit-example-test > /dev/null
	$ cat /sys/kernel/debug/kunit/example/results
	... <TAP output> ...

	# After removing the module, the corresponding files will go away
	$ modprobe -r kunit-example-test
	$ cat /sys/kernel/debug/kunit/example/results
	/sys/kernel/debug/kunit/example/results: No such file or directory

Generating code coverage reports
--------------------------------

See Documentation/dev-tools/gcov.rst for details on how to do this.

The only vaguely KUnit-specific advice here is that you probably want to build
your tests as modules. That way you can isolate the coverage of your tests from
that of other code executed during boot, e.g.

.. code-block:: bash

	# Reset coverage counters before running the test.
	$ echo 0 > /sys/kernel/debug/gcov/reset
	$ modprobe kunit-example-test
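
From there, capturing and reporting coverage works as described in
Documentation/dev-tools/gcov.rst. As a rough sketch, assuming lcov can read
the counters directly from debugfs (the path mirrors your build directory, so
it will differ on your system; otherwise copy the data out first as described
in gcov.rst):

.. code-block:: bash

	# Capture the counters exported via debugfs and build an HTML report
	$ lcov -c -d /sys/kernel/debug/gcov/path/to/your/build/dir -o coverage.info
	$ genhtml -o /tmp/coverage_html coverage.info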


Test Attributes and Filtering
=============================

Test suites and cases can be marked with test attributes, such as the speed of
the test. These attributes will later be printed in the test output and can be
used to filter test execution.

Marking Test Attributes
-----------------------

Tests are marked with an attribute by including a ``kunit_attributes`` object
in the test definition.

Test cases can be marked using the ``KUNIT_CASE_ATTR(test_name, attributes)``
macro to define the test case instead of ``KUNIT_CASE(test_name)``.

.. code-block:: c

	static const struct kunit_attributes example_attr = {
		.speed = KUNIT_VERY_SLOW,
	};

	static struct kunit_case example_test_cases[] = {
		KUNIT_CASE_ATTR(example_test, example_attr),
	};

.. note::
	To mark a test case as slow, you can also use ``KUNIT_CASE_SLOW(test_name)``.
	This is a helpful macro as the slow attribute is the most commonly used.

Test suites can be marked with an attribute by setting the "attr" field in the
suite definition.

.. code-block:: c

	static const struct kunit_attributes example_attr = {
		.speed = KUNIT_VERY_SLOW,
	};

	static struct kunit_suite example_test_suite = {
		...,
		.attr = example_attr,
	};

.. note::
	Not all attributes need to be set in a ``kunit_attributes`` object. Unset
	attributes will remain uninitialized and act as though the attribute is set
	to 0 or NULL. Thus, if an attribute is set to 0, it is treated as unset.
	These unset attributes will not be reported and may act as a default value
	for filtering purposes.

Reporting Attributes
--------------------

When a user runs tests, attributes will be present in the raw kernel output (in
KTAP format). Note that attributes are hidden by default in the ``kunit.py``
output for passing tests, but the raw kernel output can be accessed using the
``--raw_output`` flag. This is an example of how test attributes for test cases
will be formatted in kernel output:

.. code-block:: none

	# example_test.speed: slow
	ok 1 example_test

This is an example of how test attributes for test suites will be formatted in
kernel output:

.. code-block:: none

	  KTAP version 2
	  # Subtest: example_suite
	  # module: kunit_example_test
	  1..3
	  ...
	ok 1 example_suite

Additionally, users can output a full report of tests and their attributes
using the command line flag ``--list_tests_attr``:

.. code-block:: bash

	kunit.py run "example" --list_tests_attr

.. note::
	This report can be accessed when running KUnit manually by passing in the
	module_param ``kunit.action=list_attr``.
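
For example, when booting a UML kernel by hand (see "Running tests manually"
above), requesting this report might look like the following sketch, with any
other boot parameters your setup needs omitted:

.. code-block:: bash

	$ ./linux kunit.action=list_attr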

Filtering
---------

Users can filter tests using the ``--filter`` command line flag when running
tests. As an example:

.. code-block:: bash

	kunit.py run --filter speed=slow


You can also use the following operations on filters: "<", ">", "<=", ">=",
"!=", and "=". Example:

.. code-block:: bash

	kunit.py run --filter "speed>slow"

This example will run all tests with speeds faster than slow. Note that the
characters < and > are often interpreted by the shell, so they may need to be
quoted or escaped, as above.

Additionally, you can use multiple filters at once. Simply separate filters
using commas. Example:

.. code-block:: bash

	kunit.py run --filter "speed>slow, module=kunit_example_test"

.. note::
	You can use this filtering feature when running KUnit manually by passing
	the filter as a module param: ``kunit.filter="speed>slow, speed<=normal"``.

Filtered tests will not run or show up in the test output. You can use the
``--filter_action=skip`` flag to skip filtered tests instead. These tests will
be shown as skipped in the test output but will not run. To use this feature
when running KUnit manually, use the module param ``kunit.filter_action=skip``.
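
For example, to run only the tests faster than "slow" while still listing the
slow ones as skipped in the output, the two flags can be combined:

.. code-block:: bash

	kunit.py run --filter "speed>slow" --filter_action=skip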

Rules of Filtering Procedure
----------------------------

Since both suites and test cases can have attributes, there may be conflicts
between attributes during filtering. The process of filtering follows these
rules:

- Filtering always operates at a per-test level.

- If a test has an attribute set, then the test's value is filtered on.

- Otherwise, the value falls back to the suite's value.

- If neither is set, the attribute has a global "default" value, which is used.

List of Current Attributes
--------------------------

``speed``

This attribute indicates the speed of a test's execution (how slow or fast the
test is).

This attribute is saved as an enum with the following categories: "normal",
"slow", or "very_slow". The assumed default speed for tests is "normal". This
indicates that the test takes a relatively trivial amount of time (less than
1 second), regardless of the machine it is running on. Any test slower than
this could be marked as "slow" or "very_slow".

The macro ``KUNIT_CASE_SLOW(test_name)`` can be easily used to set the speed
of a test case to "slow".

``module``

This attribute indicates the name of the module associated with the test.

This attribute is automatically saved as a string and is printed for each suite.
Tests can also be filtered using this attribute.

``is_init``

This attribute indicates whether the test uses init data or functions.

This attribute is automatically saved as a boolean and tests can also be
filtered using this attribute.