18–20 Sept 2024
Europe/Vienna timezone

Session

Kernel Testing & Dependability MC

20 Sept 2024, 10:00

Description

The Kernel Testing & Dependability Micro-Conference (a.k.a. Testing MC) focuses on advancing the current state of testing of the Linux Kernel and its related infrastructure.

Building upon the momentum from previous years, the Testing MC's main purpose is to promote collaboration between all communities and individuals involved with kernel testing and dependability. We aim to create connections between folks working on related projects in the wider ecosystem and foster their development. This should serve applications and products that require predictability and trust in the kernel.

We ask that all discussions focus on identified issues, with the aim of finding potential solutions or alternatives for resolving them. The Testing MC is open to all topics related to testing on Linux, not necessarily only in kernel space.

In particular, here are some popular topics from past editions:

  • KernelCI: Rolling out new infrastructure with a new web dashboard (see also strategic updates)
  • KCIDB: integrating more data sources
  • Better sanitizers: KFENCE, improving KCSAN
  • Using Clang for better testing coverage: Now that the kernel fully supports building with Clang, how can that work be leveraged to make use of Clang's features?
  • Consolidating toolchains: reference collection for increased reproducibility and quality control.
  • How to spread KUnit throughout the kernel?
  • Building and testing in-kernel Rust code.
  • Identify missing features that will provide assurance in safety critical systems.
  • Which test coverage infrastructures are most effective at providing evidence for kernel quality assurance? How should coverage be measured?
  • Explore ways to improve testing framework and tests in the kernel with a specific goal to increase traceability and code coverage.
  • Regression testing for safety: Prioritize configurations and tests that are critical for quality and dependability.
  • Transitioning to test-driven kernel release cycles for mainline and stable: How to start relying on passing tests before releasing a new tag?
  • Explore how SBOMs figure into dependability.

Accomplishments since last year:

  • Storing and Outputting Test Information: KUnit Attributes and KTAP v2 have been upstreamed.
  • KUnit APIs for managing devices have been upstreamed.
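To illustrate the kind of output the KUnit attributes and KTAP v2 work enables, here is a hand-written sketch (not captured from a real kernel run; suite and test names are invented, and exact formatting varies by kernel version). Attributes such as test speed are reported as diagnostic lines alongside the nested KTAP results:

```
KTAP version 2
1..1
  KTAP version 2
  # Subtest: example_suite
  # module: kunit_example_test
  1..1
  # example_test.speed: normal
  ok 1 example_test
ok 1 example_suite
```

Tooling that parses KTAP can use such attribute lines to, for example, filter out slow tests or group results by module.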

Presentation materials

Tingxu Ren, Wentao Zhang, Darko Marinov, Jinghao Jia, Tianyin Xu (University of Illinois Urbana-Champaign)
20 Sept 2024, 12:00

We have been working on an LLVM-based toolchain for measuring the test adequacy of existing kernel tests from test suites including KUnit [1], kselftest [2], LTP [3], test suites from RHEL [4], and more in KCIDB [5]. We measure several adequacy metrics: the basic metrics of statement coverage and branch coverage, and the more advanced metric of Modified Condition/Decision Coverage (MC/DC) [6].

This talk...
