18–20 Sept 2024
Europe/Vienna timezone

Session

Kernel Testing & Dependability MC

20 Sept 2024, 10:00

Description

The Kernel Testing & Dependability Micro-Conference (a.k.a. Testing MC) focuses on advancing the current state of testing of the Linux Kernel and its related infrastructure.

Building upon the momentum from previous years, the Testing MC's main purpose is to promote collaboration between all communities and individuals involved with kernel testing and dependability. We aim to create connections between folks working on related projects in the wider ecosystem and foster their development. This should serve applications and products that require predictability and trust in the kernel.

We ask that all discussions focus on identified issues, with the aim of finding potential solutions or alternatives for resolving them. The Testing MC is open to all topics related to testing on Linux, not necessarily in the kernel space.

In particular, here are some popular topics from past editions:

  • KernelCI: Rolling out new infrastructure with a new web dashboard (see also strategic updates)
  • KCIDB: integrating more data sources
  • Better sanitizers: KFENCE, improving KCSAN
  • Using Clang for better testing coverage: Now that the kernel fully supports building with Clang, how can that work be leveraged to take advantage of Clang's features?
  • Consolidating toolchains: reference collection for increased reproducibility and quality control.
  • How to spread KUnit throughout the kernel?
  • Building and testing in-kernel Rust code.
  • Identify missing features that will provide assurance in safety-critical systems.
  • Which test coverage infrastructures are most effective at providing evidence for kernel quality assurance? How should coverage be measured?
  • Explore ways to improve the testing framework and tests in the kernel, with the specific goal of increasing traceability and code coverage.
  • Regression testing for safety: Prioritize the configurations and tests that are critical for quality and dependability.
  • Transitioning to test-driven kernel release cycles for mainline and stable: How to start relying on passing tests before releasing a new tag?
  • Explore how SBOMs figure into dependability.

Things accomplished since last year:

  • Storing and outputting test information: KUnit Attributes and KTAPv2 have been upstreamed.
  • KUnit APIs for managing devices have been upstreamed.

Presentation materials

  1. Tim Bird (Sony)
    20/09/2024, 10:00

    Benchmark test results are difficult to interpret in an automated fashion. They often require human interpretation to detect regressions because they depend on a number of variables, including configuration, CPU count, processor speed, storage speed, memory size, and other factors. Tim proposes a new system for managing benchmark data and interpretation in kselftest. It consists of three parts:...
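    The kind of automated interpretation described above can be sketched in a few lines. This is an illustration of the general idea only, not the proposed kselftest system; the function name, the "higher is better" assumption, and the 10% tolerance are all hypothetical:

    ```python
    # Sketch of automated benchmark interpretation: compare a measured value
    # against a stored per-machine reference, flagging results that fall more
    # than a fixed tolerance below it. Hypothetical names and threshold, not
    # the actual kselftest proposal; assumes higher numbers are better
    # (e.g. throughput).

    def evaluate_benchmark(name, measured, reference, tolerance=0.10):
        """Return ('pass' | 'regression', fractional deviation from reference)."""
        deviation = (measured - reference) / reference
        status = "regression" if deviation < -tolerance else "pass"
        return status, deviation

    # A ~20% throughput drop against the reference is flagged as a regression.
    status, dev = evaluate_benchmark("iperf3-tcp", measured=800.0, reference=1000.0)
    ```

    In practice the hard part is maintaining trustworthy reference values per machine, since results depend on configuration, CPU count, and storage speed; that variability is exactly what makes automated interpretation difficult.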

  2. David Gow (Google)
    20/09/2024, 10:30

    There are several different testing frameworks for kernel and kernel-adjacent code, but KUnit is one of the most consistent and user-friendly. This means that KUnit is being used for things beyond its nominal scope of 'unit tests'. This includes stress tests, integration tests, and performance tests.

    On the flipside, there are unit tests in the kernel tree for which KUnit's in-kernel nature...

  3. Rae Moar
    20/09/2024, 11:00

    Currently, kunit.py provides its own KTAP parser (in kunit_parser.py), specifically for KUnit use. While it can be used to parse KTAP from other sources, this is rarely done. This may be due to KUnit-specific features or difficulty accessing the parser. Unfortunately, this can lead to developers coding and maintaining other KTAP parsers that heavily overlap with this existing tooling.

    We...
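    For context, the flat result lines of a KTAP document can be classified with very little code, which is part of why ad-hoc parsers proliferate. The sketch below is a deliberately simplified illustration, not kunit_parser.py itself; it ignores the nesting, diagnostics, and directives (such as `# SKIP`) that the real parser handles:

    ```python
    import re

    # Minimal KTAP result-line classifier (illustration only). Real KTAP
    # also has a version line, test plans ("1..N"), nested subtests,
    # diagnostic lines, and directives; kunit_parser.py handles all of
    # those, which is what makes a shared parser worth having.
    RESULT_RE = re.compile(r"^(ok|not ok) (\d+)(?: - | )?(.*)$")

    def parse_ktap(text):
        results = []
        for line in text.splitlines():
            m = RESULT_RE.match(line.strip())
            if m:
                status, num, desc = m.groups()
                results.append((int(num), desc, status == "ok"))
        return results

    ktap = """KTAP version 1
    1..2
    ok 1 example_test_pass
    not ok 2 example_test_fail
    """
    # parse_ktap(ktap) -> [(1, 'example_test_pass', True),
    #                      (2, 'example_test_fail', False)]
    ```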

  4. Tingxu Ren (University of Illinois Urbana-Champaign), Wentao Zhang (University of Illinois Urbana-Champaign), Darko Marinov (University of Illinois Urbana-Champaign), Jinghao Jia (University of Illinois Urbana-Champaign), Tianyin Xu (University of Illinois Urbana-Champaign)
    20/09/2024, 12:00

    We have been working on an LLVM-based toolchain for measuring the test adequacy of existing kernel tests from test suites including KUnit [1], kselftest [2], LTP [3], test suites from RHEL [4], and more in KCIDB [5]. We measure several adequacy metrics, including the basic metrics of statement coverage and branch coverage, as well as the advanced metric Modified Condition/Decision Coverage (MC/DC) [6].

    This talk...
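    To make the metric concrete: MC/DC requires that every condition in a decision be shown to independently affect the decision's outcome. The following toy illustration (not the LLVM-based toolchain from the talk) enumerates such "independence pairs" for a boolean decision:

    ```python
    from itertools import product

    # Toy MC/DC illustration (not the talk's toolchain): for each
    # condition, find pairs of inputs that differ ONLY in that condition
    # yet produce different decision outcomes. A test suite achieves
    # MC/DC on the decision if it contains such a pair for every
    # condition.

    def mcdc_pairs(decision, n_conditions):
        inputs = list(product([False, True], repeat=n_conditions))
        pairs = {i: [] for i in range(n_conditions)}
        for i in range(n_conditions):
            for t in inputs:
                flipped = list(t)
                flipped[i] = not flipped[i]
                if decision(*t) != decision(*flipped):
                    pairs[i].append((t, tuple(flipped)))
        return pairs

    # For `a and b`, flipping `a` only changes the outcome when b is
    # True, so an MC/DC-adequate suite must exercise both
    # (a=False, b=True) and (a=True, b=True).
    pairs = mcdc_pairs(lambda a, b: a and b, 2)
    ```

    Branch coverage of `a and b` is satisfiable with two tests, while MC/DC additionally pins down which condition caused each outcome change, which is why it is the stricter (and, for safety standards, more interesting) metric.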

  5. Nicolas Prado (Collabora)
    20/09/2024, 12:30

    A large percentage of the functionality provided by the kernel to userspace
    comes from the different devices in the system. For that reason, having a proper
    common approach in mainline to test devices and detect regressions is of the
    utmost importance for the kernel's reliability.

    Devices are exposed through a diverse set of interfaces (uAPIs) and fully
    testing them requires just as...

  6. Helen Koike (Collabora), Ricardo Cañuelo
    20/09/2024, 13:00

    CI systems can generate a large volume of test results, so processing and interacting with that data in a timely, efficient manner is paramount. At KernelCI, we are investing a lot into improving the quality of the test results through automatic post-processing, grouping, and filtering to find common patterns and surface the most important test failures to the kernel community.

    In this...
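    One common building block for this kind of post-processing is normalizing failure messages so that the same underlying failure, seen across many CI runs, collapses into a single group. This is a hypothetical sketch of the idea, not KernelCI's actual pipeline:

    ```python
    import re
    from collections import defaultdict

    # Hypothetical failure-grouping sketch (not KernelCI's actual code):
    # strip volatile parts of a message (hex addresses, numbers) so that
    # identical failures from different runs land in the same bucket.

    def normalize(msg):
        msg = re.sub(r"0x[0-9a-fA-F]+", "0xADDR", msg)
        msg = re.sub(r"\d+", "N", msg)
        return msg

    def group_failures(failures):
        """failures: iterable of (test_name, message) pairs."""
        groups = defaultdict(list)
        for test, msg in failures:
            groups[normalize(msg)].append(test)
        return dict(groups)

    groups = group_failures([
        ("kunit.example-1", "BUG: unable to handle page fault at 0xffff8880"),
        ("kunit.example-2", "BUG: unable to handle page fault at 0xffff9991"),
    ])
    # Both messages normalize identically, so they form one group.
    ```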
