Description
The Kernel Testing & Dependability Micro-Conference (a.k.a. Testing MC) focuses on advancing the current state of testing of the Linux Kernel and its related infrastructure.
Building upon the momentum from previous years, the Testing MC's main purpose is to promote collaboration between all communities and individuals involved with kernel testing and dependability. We aim to create connections between folks working on related projects in the wider ecosystem and foster their development. This should serve applications and products that require predictability and trust in the kernel.
We ask that all discussions focus on identified issues, with the aim of finding potential solutions or alternatives. The Testing MC is open to all topics related to testing on Linux, not necessarily confined to kernel space.
In particular, here are some popular topics from past editions:
- KernelCI: Rolling out new infrastructure with a new web dashboard - see also strategic updates
- KCIDB: integrating more data sources
- Better sanitizers: KFENCE, improving KCSAN
- Using Clang for better testing coverage: Now that the kernel fully supports building with Clang, how can all that work be leveraged into using Clang's features?
- Consolidating toolchains: reference collection for increased reproducibility and quality control.
- How to spread KUnit throughout the kernel?
- Building and testing in-kernel Rust code.
- Identifying missing features that will provide assurance in safety-critical systems.
- Which test coverage infrastructures are most effective at providing evidence for kernel quality assurance? How should coverage be measured?
- Exploring ways to improve the testing frameworks and tests in the kernel, with the specific goal of increasing traceability and code coverage.
- Regression testing for safety: Prioritizing the configurations and tests that are critical for quality and dependability.
- Transitioning to test-driven kernel release cycles for mainline and stable: How to start relying on passing tests before releasing a new tag?
- How do SBOMs (Software Bills of Materials) figure into dependability?
Accomplishments since last year:
- Storing and outputting test information: KUnit attributes and KTAPv2 have been upstreamed.
- KUnit APIs for managing devices have been upstreamed.
We have been working on an LLVM-based toolchain for measuring the test adequacy of existing kernel tests from test suites including KUnit [1], kselftest [2], LTP [3], test suites from RHEL [4], and more in KCIDB [5]. We measure several adequacy metrics, including the basic metrics of statement coverage and branch coverage as well as the more advanced Modified Condition/Decision Coverage (MC/DC) [6].
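To make the distinction between these metrics concrete, the sketch below checks whether a test suite achieves MC/DC for a single decision. It is an illustrative model only, not part of the toolchain described above: MC/DC requires that each condition be shown to independently affect the decision's outcome, i.e. for every condition there must be a pair of tests that differ only in that condition and produce different decision results. The example decision `(a and b) or c` and the test vectors are assumptions chosen for illustration.

```python
def decision(a, b, c):
    # Example decision with three boolean conditions: (a and b) or c.
    return (a and b) or c

def achieves_mcdc(tests):
    """Return True if `tests` achieves MC/DC for `decision`: every
    condition has an independence pair, i.e. two test vectors that
    differ only in that condition and flip the decision outcome."""
    num_conds = 3
    for i in range(num_conds):
        has_pair = any(
            t1[i] != t2[i]
            and all(t1[j] == t2[j] for j in range(num_conds) if j != i)
            and decision(*t1) != decision(*t2)
            for t1 in tests for t2 in tests
        )
        if not has_pair:
            return False
    return True

# A minimal MC/DC suite for this decision needs N+1 = 4 vectors,
# whereas 100% branch coverage can be reached with fewer.
suite = [
    (True, True, False),   # decision is True  (via a and b)
    (False, True, False),  # independence pair for a
    (True, False, False),  # independence pair for b
    (False, True, True),   # independence pair for c
]
print(achieves_mcdc(suite))  # prints True
```

Statement and branch coverage only require every statement and branch direction to be exercised; MC/DC additionally demands these independence pairs, which is why it is the stricter metric used in safety-critical standards.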
This talk...
A large percentage of the functionality provided by the kernel to userspace
comes from the different devices in the system. For that reason, having a proper
common approach in mainline to test devices and detect regressions is of the
utmost importance for the kernel's reliability.
Devices are exposed through a diverse set of interfaces (uAPIs) and fully
testing them requires just as...
CI systems can generate a large volume of test results, so processing and interacting with that data in a timely, efficient manner is paramount. At KernelCI, we are investing heavily in improving the quality of the test results through automatic post-processing, grouping, and filtering to find common patterns and surface the most important test failures to the kernel community.
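One common form of such post-processing is clustering failures by a normalized log signature, so that the same crash reported across many tests and trees surfaces as a single group. The sketch below is a minimal illustration of that idea under assumed data: the record fields and log strings are hypothetical and do not reflect KernelCI's actual schema or pipeline; only the general technique (strip volatile parts such as code offsets, then group by the remaining signature) is being shown.

```python
import re
from collections import defaultdict

# Hypothetical failure records; field names and logs are illustrative,
# not KernelCI's real data model.
failures = [
    {"test": "kselftest.net.tcp",
     "log": "BUG: KASAN: use-after-free in tcp_close+0x1a4/0x2b0"},
    {"test": "ltp.net.tcp4",
     "log": "BUG: KASAN: use-after-free in tcp_close+0x2c8/0x2b0"},
    {"test": "kunit.list",
     "log": "ASSERTION FAILED at lib/list-test.c:119"},
]

def signature(log):
    """Normalize a failure log into a grouping key by stripping
    volatile offsets (e.g. +0x1a4/0x2b0) so that identical crashes
    from different builds cluster together."""
    return re.sub(r"\+0x[0-9a-f]+/0x[0-9a-f]+", "", log)

groups = defaultdict(list)
for f in failures:
    groups[signature(f["log"])].append(f["test"])

# Surface the most widespread failures first.
for sig, tests in sorted(groups.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(tests)}x {sig}")
```

In a real pipeline the signature function would be far richer (symbolized stack traces, error-class taxonomies, known-issue matching), but the grouping step keeps the same shape: reduce each failure to a stable key, then rank groups by impact.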
In this...