Microconferences proposed for LPC 2023
- Build Systems
- Compute eXpress Link (CXL)
- Confidential Computing
- Containers and checkpoint/restore
- CPU Isolation
- Internet of Things
- Kernel Testing & Dependability
- Linux in Space
- Linux Kernel Debugging
- Live Patching
- Power Management and Thermal Control
- Real-time and Scheduling MC
The Android Microconference brings the upstream community and Android systems developers together to discuss issues and changes to the Android platform and its dependencies and interactions with the Linux kernel, allowing for collaboration on upstream solutions.
Currently planned discussion topics include:
- 16k Pages
- android-mainline on Pixel6
- Updates on Binder
- BPF usage w/ Android
- Kernel and platform integration testing
- Vendor Hook Usage
- Building Modules for Android GKI Kernels
- Resolving Priority Inversion w/ Proxy Execution
- AOSP Devboards
- And likely more...
MC leads: John Stultz, Karim Yaghmour, Amit Pundir, Sumit Semwal
In the Linux ecosystem there are many ways to build all the software used to put together a running system. Whether it’s building all the binary packages for a binary Linux distribution, using a source-based distribution, or building an embedded system from scratch, there are many shared challenges that each system solves in its own way.
This microconference is a way to get people who work on disparate build systems to discuss common problems and possible shared solutions across the entire problem space.
- Bootstrapping the build system
- Cross building software
- Make, autoconf, and other similar software build tools
- Package build systems, bitbake, emerge/portage, pacman, etc
- Packaging formats
- Managing software with language specific package managers
- Patch sharing
- License gathering and verification
- Security updates
- Software chain-of-trust
- Repeatable builds
- Documentation and education
- Finding the next generation of maintainers
- Build-system visibility within the wider Plumbers attendees
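Repeatable (reproducible) builds, one of the shared challenges above, come down to normalizing every source of nondeterminism in an artifact. Below is a minimal Python sketch, not any particular build system's implementation, of packing files with clamped timestamps, fixed ownership, and fixed ordering, in the spirit of SOURCE_DATE_EPOCH:

```python
import hashlib
import io
import tarfile

def deterministic_tar(files, source_date_epoch=0):
    """Pack {name: bytes} into a tar archive with non-deterministic
    metadata (mtime, uid/gid, member ordering) normalized away."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(files):            # fixed member ordering
            data = files[name]
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            info.mtime = source_date_epoch    # clamp timestamps
            info.uid = info.gid = 0           # drop builder identity
            info.uname = info.gname = "root"
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def artifact_digest(files, source_date_epoch=0):
    """Two builds of the same inputs now hash identically."""
    return hashlib.sha256(deterministic_tar(files, source_date_epoch)).hexdigest()
```

Given this, bit-for-bit comparison of independently produced artifacts becomes a simple digest check.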
Developers and maintainers in projects such as the following (not a definitive list):
- Arch Linux
- ChromeOS build
- Yocto Project
- Other traditional Binary Packaged distributions
MC leads: Behan Webster, Philip Balister
Compute Express Link is a cache coherent fabric that in recent years has been gaining momentum in the industry. CXL 3.0 launched just before Plumbers 2022 (where very early discussions were had), bringing new challenges such as dynamic capacity devices and large scale fabrics, two features that bring significant challenges to Linux. There has also been controversy and confusion in the Linux kernel community about the state and future of CXL, regarding its usage and integration into, for example, the core memory management subsystem. Many concerns have been put to rest through proper clarification and setting of expectations.
The Compute Express Link microconference focuses on how to evolve the Linux CXL kernel driver and userspace components for support of the CXL 2.0 spec (and beyond). The microconference provides a space to open the discussion, incorporate more perspectives, and grow the CXL community with a goal that the CXL Linux plumbing serves the needs of the CXL ecosystem while balancing the needs of the Linux project. Specifically, this microconference welcomes submissions detailing industry and academia use cases in order to develop usage model scenarios. Finally, it will be a good opportunity to have existing upstream CXL developers available in a forum to discuss current CXL support and to communicate areas that need additional involvement.
- Ecosystem & Architectural review
- Dynamic Capacity Devices
- Fabric Management
- QEMU support
- Security (i.e., IDE/SPDM)
- Managing vendor specificity
- Type 2 accelerator support (bias flip management)
- Coherence management of type2/3 memory (back-invalidation)
- P2P (UIO)
- RAS (GPF, AER)
- Hotplug (qos policies, daxctl)
- Hot remove
- Memory tiering topics that can relate to CXL (out of scope of the MM/performance MCs)
- Industry and academia use cases
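As one illustration of the memory-tiering topic, consider a toy placement policy that keeps the hottest pages in DRAM and demotes the rest to CXL-attached memory. This is a deliberate simplification for discussion, not how the kernel's tiering code actually works:

```python
def place_pages(access_counts, dram_capacity):
    """Toy two-tier placement: the hottest pages stay in the fast
    DRAM tier, the rest are demoted to CXL-attached memory.
    access_counts maps page id -> observed access frequency."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    dram = set(ranked[:dram_capacity])   # hot set fits in DRAM
    cxl = set(ranked[dram_capacity:])    # cold pages demoted
    return dram, cxl
```

The interesting open questions (hysteresis, migration cost, NUMA interaction) are exactly what the MC discussion would cover.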
MC Leads: Dan Williams, Adam Manzanares, Jonathan Cameron, Davidlohr Bueso
The Confidential Computing microconferences in past years brought together developers working on secure execution features in hypervisors, firmware, the Linux kernel, and low-level user space up to container runtimes. A broad range of topics was discussed, ranging from enablement of hardware features to generic attestation workflows.
Over the last year there was progress on the development of Confidential Computing in the Linux kernel and user-space. The patch-sets for Intel TDX guest support and AMD SEV-SNP guest support were merged into the Linux kernel. Support for running as a CVM under Hyper-V has also been partially merged.
But there is still some way to go and problems to solve before a secure Confidential Computing stack with open source software and Linux as the hypervisor becomes a reality. The most pressing problems right now are:
- Support for restricted memory (Unmapped Private Memory patch-set, UPM) is still under development and discussion
- AMD SEV-SNP and Intel TDX host support
Other potential problems to discuss are:
- Support for un-accepted memory
- Confidential Computing support for ARM64
- Attestation workflows
- Secure VM service module (SVSM) and paravisor architecture and implementation (LINUX-SVSM vs. COCONUT-SVSM)
- Confidential Computing threat model
- Secure IO and device attestation
- Intel TDX Connect
- AMD SEV-TIO
- RISC-V CoVE (Task group and patches)
- Debuggability and live migration of confidential virtual machines
The Confidential Computing Microconference wants to bring developers working on confidential computing together again to discuss these and other open problems.
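To ground the attestation-workflow topic: many confidential-computing attestation flows build on a hash chain of boot-time measurements, in the style of TPM PCR extension. A minimal sketch (function names are illustrative, not any specific attestation API):

```python
import hashlib

def extend(register, measurement):
    """Extend a measurement register as a TPM PCR or a CoCo runtime
    measurement register is extended: new = H(old || H(event))."""
    return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components):
    """Chain measurements of each boot component, in order."""
    reg = b"\x00" * 32            # register starts zeroed
    for blob in components:
        reg = extend(reg, blob)
    return reg

def verify(reported, golden_components):
    """The relying party recomputes the chain from reference values
    and compares it against the (quoted) reported value."""
    return reported == measure_boot(golden_components)
```

Because each step hashes over the previous register value, both the contents and the order of the measured components are bound into the final value.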
MC Leads: Dhaval Giani, Joerg Roedel
The usual containers and checkpoint/restore micro-conference.
We will be discussing recent advancements in container technologies with some of the usual candidates being:
- CGroupV2 feature parity with CGroupV1
- Emulation of various files and system calls through FUSE and/or Seccomp
- Dealing with the eBPF-ification of the world
- Making user namespaces more accessible
- VFS idmap improvements
On the checkpoint/restore front, some of the potential topics include:
- Restoring FUSE services
- Handling GPUs
- Dealing with restartable sequences
And quite likely a variety of other container and checkpoint/restore topics as things evolve between now and the event.
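As a conceptual model of the seccomp-based syscall emulation topic above: a filter forwards selected syscalls to a userspace supervisor, which emulates them, while everything else takes the normal kernel path. A toy Python sketch of that control flow (the names are illustrative; the real mechanism is the seccomp user-notification API):

```python
def make_supervisor(emulated, passthrough):
    """Toy model of the seccomp user-notification flow: syscalls whose
    name appears in `emulated` are answered by a userspace supervisor;
    everything else continues down the normal kernel path."""
    def handle(name, args):
        if name in emulated:
            return emulated[name](args)   # emulated in userspace
        return passthrough(name, args)    # forwarded to the kernel
    return handle
```

A container runtime can use this shape to, e.g., emulate mknod inside an unprivileged container while leaving ordinary syscalls untouched.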
MC Leads: Christian Brauner, Stéphane Graber, Mike Rapoport, Adrian Reber
CPU isolation is currently a loosely defined infrastructure for running a userspace task on a CPU without suffering any disturbance from the kernel.
A lot of problems need to be solved in this area:
- How to deal with vmstat?
- Do we still need a cpusets interface?
- Do we want to optimize power consumption? (sysidle)
- Do we want a quiescing interface?
- Status of deferred IPIs?
- Other topics?
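Many of these interfaces start from a CPU list in the kernel's usual textual format. A small sketch of parsing an `isolcpus=`/`nohz_full=`-style list into a CPU set:

```python
def parse_cpulist(spec):
    """Parse a kernel-style CPU list such as "3-5,7" (the format used
    by isolcpus=, nohz_full=, and cpuset files) into a set of CPUs."""
    cpus = set()
    for part in spec.split(","):
        if not part:
            continue
        lo, _, hi = part.partition("-")       # "3-5" or bare "7"
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus
```

Userspace tooling around CPU isolation (affinity pinning, housekeeping-mask setup) typically begins with exactly this parse.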
MC Lead: Frederic Weisbecker
The IoT Microconference is back for its fifth year and there is a lot to discuss, as usual.
Since last year, there have been a number of technical topics with significant updates.
- Opportunities in IoT and Edge computing with the Linux /dev/accel API
- Using the Thrift RPC framework between Linux and Zephyr
- Zephyr’s new HTTP Server (a GSoC project)
- RISC-V support in Zephyr and the LTS backport
- Rust in the Zephyr RTOS: Benefits, Challenges and Missing Pieces
- BeagleConnect Freedom Updates, Greybus, and the Linux Interface
- Linux-wpan updates on 6lowpan, 802.15.4 PAN coordinators and UWB
On a slightly less technical note:
- Reflections after Two Years of Zephyr LTSv2
We hope you will join us either in-person or remote for what is shaping up to be another great event full of collaboration, discussion, and interesting perspectives.
MC Leads: Christopher Friedt, Stefan Schmidt
The Linux Plumbers 2023 Kernel Testing & Dependability track focuses on advancing the current state of testing of the Linux Kernel and its related infrastructure. The main purpose is to improve software quality and dependability for applications that require predictability and trust. We aim to create connections between folks working on similar projects, and help individual projects make progress.
This track is intended to promote collaboration between all the communities and people interested in the Kernel testing & dependability. This will help move the conversation forward from where we left off at the LPC 2022 Kernel Testing & Dependability MC.
We ask that any topic discussions focus on issues/problems they are facing and possible alternatives to resolving them. The Microconference is open to all topics related to testing on Linux, not necessarily in the kernel space.
Potential testing and dependability topics:
- KernelCI: Topics on improvements and enhancements for test coverage
- Growing KCIDB, integrating more sources (https://kernelci.org/docs/kcidb/)
- Better sanitizers: KFENCE, improving KCSAN.
- Using Clang for better testing coverage: now that the kernel fully supports building with Clang, how can all that work be leveraged into using Clang's features?
- How to spread KUnit throughout the kernel?
- Building and testing in-kernel Rust code.
- Identifying missing features that will provide assurance in safety.
- Which test coverage infrastructures are most effective to provide evidence for kernel quality assurance? How should it be measured?
- Exploring ways to improve testing frameworks and tests in the kernel, with a specific goal to increase traceability and code coverage.
- Regression testing for safety: prioritizing configurations and tests critical and important for quality and dependability.
- Transitioning to test-driven kernel release cycles for mainline and stable: how to start relying on passing tests before releasing a new version.
- Exploring how SBOMs figure into dependability.
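One recurring chore behind several of these topics is comparing results across runs and CI systems. A minimal sketch of regression detection between two result sets (illustrative only, not KCIDB's actual logic or schema):

```python
def find_regressions(baseline, current):
    """Given {test_name: "pass" | "fail"} maps from two runs, report
    tests that went from pass to fail (regressions) and from fail to
    pass (fixes) - a common step when merging results from many CI
    systems into one report."""
    regressions = sorted(t for t, r in current.items()
                         if r == "fail" and baseline.get(t) == "pass")
    fixes = sorted(t for t, r in current.items()
                   if r == "pass" and baseline.get(t) == "fail")
    return regressions, fixes
```

Tests new in the current run are deliberately ignored here; how to classify them (and known flaky tests) is itself a discussion topic.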
Things accomplished from last year:
- Developing a new, modern API for KernelCI with Pub/Sub interface
- Adding Rust coverage in KernelCI https://linux.kernelci.org/job/rust-for-linux/branch/rust/
- KCIDB is continuing to gather results from many test systems: KernelCI, Red Hat's CKI, syzbot, ARM, Gentoo, Linaro's TuxSuite etc. The current focus is on generating common email reports based on this data and dealing with known issues.
- KFENCE is continuing to aid in detecting out-of-bounds (OOB) accesses, use-after-free (UAF) errors, double frees, invalid frees, and so on.
- Clang: CFI, weeding out issues upstream, etc.
- Kselftest continues to add coverage for new and existing features and subsystems.
- KUnit is continuing to act as the standard for some drivers and a de facto unit testing framework in the kernel. (https://www.youtube.com/watch?v=78gioY7VYxc)
- The Runtime Verification (RV) interface from Daniel Bristot de Oliveira was successfully merged.
MC Leads: Sasha Levin, Guillaume Tucker, Shuah Khan
KVM (Kernel-based Virtual Machine) enables the use of hardware features to improve the efficiency, performance, and security of virtual machines created and managed by userspace. KVM was originally developed to host and accelerate "full" virtual machines running a traditional kernel and operating system, but has long since expanded to cover a wide array of use cases, e.g. hosting real-time workloads, sandboxing untrusted workloads, deprivileging third party code, reducing the trusted computing base of security sensitive workloads, etc. As KVM's use cases have grown, so too have the requirements placed on KVM and the interactions between it and other kernel subsystems.
The KVM Microconference will focus on how to evolve KVM and adjacent subsystems in order to satisfy new and upcoming requirements. Potential topics include:
- Serving inaccessible/unmappable memory for KVM guests (protected VMs); fine-grain permission updates of IOMMU and MMU page tables
- Optimizing mmu_notifiers, e.g. reducing TLB flushes and spurious zapping
- Improving and hardening KVM+perf interactions
- Implementing arch-agnostic abstractions in KVM (e.g. MMU)
- Utilizing "fault" injection to increase test coverage of edge cases
- KVM vs VFIO (e.g. memory types, a rather hot topic on the ARM side)
- Persistence of guest memory and kernel data structure (e.g. IOMMU page tables) across kexec for live update
Key attendees:
- Paolo Bonzini, KVM Maintainer
- Sean Christopherson, KVM x86 Co-Maintainer
- Alexander Graf
- James Gowans
- Mickaël Salaün
MC Leads: Paolo Bonzini, Sean Christopherson
Linux is now everywhere in the space programs of every nation in the world, most famously in the Mars Helicopter, and also in many satellites, ground systems, spacecraft, and data processing pipelines. Simply enormous numbers of new microsats have Linux in them. But there have been very few discussions or code sharing back and forth between these end users of Linux and the actual plumbers/developers. Some of this lack of communication stems from difficulties coping with ITAR, some from the historically long, but now accelerating, pace of space hardware development. What are the future use cases and demands on Linux, and Linux-based networking systems, in these unique space-bound environments? What can we all do better, scientists and engineers, governments and FOSS folk, working together, to produce an operating system better suited to going where no OS has gone before?
MC Lead: Dave Taht
When things go wrong, we need to debug the kernel. There are about as many ways to do that as you can imagine: printk, kdb/kgdb over serial, tracing, attaching debuggers to /proc/kcore, and post-mortem debugging using core dumps, just to name a few. Frequently, tools and approaches used by userspace debuggers aren't enough for the requirements of the kernel, so special tools are created to handle them: crash, drgn, makedumpfile, libkdumpfile, and many, many others.
With the variety of tools and approaches available, it's important to collaborate on whatever shared problems we may have. This microconference is an opportunity to discuss these problems and come up with shared approaches to resolve them. Some examples of potential topic areas:
- Many debuggers understand core subsystems such as tasks, slab caches, mm_structs, etc, and provide information about them. But as the kernel evolves, code changes can break these tools, which need to contain decades of cruft to handle a variety of versions. How can we improve processes and tools so that the future decades of evolution can be handled without crippling our debuggers with more technical debt? How can we share logic between debuggers to reduce duplicate effort in interpreting core kernel data structures? Please see Philipp Rudo's excellent talk from LPC 2022 regarding this very topic.
- Kernel core dumps can come from a variety of sources: some are generated via kexec and /proc/vmcore, then makedumpfile. Others may be created by a variety of hypervisors including Qemu, Xen, and Hyper-V. The core dumps can use ELF, or more commonly, the compressed diskdump family of formats. With the variety of core dump producers and consumers, along with the variation in formats, it's not uncommon to encounter "broken" core dumps which need tweaks or additional tools to be read. How can we build tools to handle the diversity of core dumps, more easily fix broken ones, and guide the community to a better documented standard?
- Kernel debuggers rely on debuginfo such as DWARF, which can be bulky and is not commonly distributed alongside the kernel. How can we enable lightweight debugging options that run everywhere?
- When debugging kernel-related issues on live systems, stack unwinding of both kernel and userspace tasks is important. As it is, stack unwinding in the kernel can be done via frame pointers and ORC on x86_64, but userspace stack unwinding is more difficult, since many applications and libraries are compiled without frame pointers, and the kernel lacks a DWARF-based unwinder. What can the kernel debugging and tracing community do to improve this situation?
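To make the frame-pointer discussion concrete, here is a toy unwinder over a word-addressed "memory" where each frame stores the caller's frame pointer and a return address. The layout is simplified; on x86_64 these live at [rbp] and [rbp+8] in byte-addressed memory:

```python
def unwind(memory, fp):
    """Walk a frame-pointer chain in a toy word-addressed memory:
    memory[fp] holds the caller's saved frame pointer and
    memory[fp + 1] holds the return address; fp == 0 terminates."""
    trace = []
    while fp != 0:
        saved_fp, ret_addr = memory[fp], memory[fp + 1]
        trace.append(ret_addr)
        fp = saved_fp                 # hop to the caller's frame
    return trace
```

The catch the bullet above describes is precisely that userspace binaries built without frame pointers leave no such chain to walk, pushing unwinders toward DWARF, ORC-like side tables, or sframes.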
MC Lead: Stephen Brennan
The Live Patching microconference at Linux Plumbers 2023 aims to gather stakeholders and interested parties to discuss proposed features and outstanding issues in live patching.
Live patching is a critical tool for maintaining system uptime and security by enabling fixes to be applied to running systems without the need for a reboot. The development of the infrastructure is an ongoing effort and while many problems have been resolved and features implemented, there are still open questions, some with already submitted patch sets, which need to be discussed.
Live Patching microconferences at previous Linux Plumbers conferences proved to be useful in this regard and helped us find final solutions, or at least promising directions to push the development forward. Examples include support for several architectures (ppc64le and s390x were added after x86_64), late module patching and module dependencies, and user-space live patching.
Currently proposed topics follow. The list is open though and more will be added during the regular Call for Topics.
- klp-convert (as a means to fix CET IBT limitations)
- shadow variables, global state transition
- kselftests and the future direction of development
- arm64 live patching
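As background for the shadow-variables topic, here is a toy model of the in-kernel klp_shadow_* API, which lets a live patch attach new state to existing objects without changing their layout, keyed by (object identity, shadow id):

```python
# Toy model of livepatch shadow variables. In the kernel this is a
# hashtable keyed by (object pointer, numeric id); here we key on
# Python object identity instead.
_shadow = {}

def shadow_alloc(obj, shadow_id, data):
    """Attach new per-object state without touching obj's layout."""
    _shadow[(id(obj), shadow_id)] = data
    return data

def shadow_get(obj, shadow_id):
    """Look up previously attached state; None if absent."""
    return _shadow.get((id(obj), shadow_id))

def shadow_free(obj, shadow_id):
    """Detach the state when the patched code no longer needs it."""
    _shadow.pop((id(obj), shadow_id), None)
```

The point of the mechanism is that a patched function can grow "new fields" on long-lived structures without an incompatible layout change.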
Key attendees:
- Josh Poimboeuf
- Jiri Kosina
- Miroslav Benes
- Petr Mladek
- Joe Lawrence
- Nicolai Stange
- Marcos Paulo de Souza
- Mark Rutland
- Mark Brown
We encourage all attendees to actively participate in the microconference by sharing their ideas, experiences, and insights.
MC Leads: Miroslav Beneš, Joe Lawrence
The Power Management and Thermal Control microconference focuses on power management and thermal control infrastructure, CPU and device power-management mechanisms, and thermal control methods. In particular, we are interested in improving the thermal control infrastructure in the kernel to cover more use cases and utilizing energy-saving opportunities offered by modern hardware in new ways.
The goal is to facilitate cross-framework and cross-platform discussions that can help improve energy-awareness and thermal control in Linux.
- Idle injection and soft IRQs (Srinivas Pandruvada).
- Thermal sysfs/API update: are we happy with the current framework? (Srinivas Pandruvada)
- A way to define additional private attributes for a thermal zone (Srinivas Pandruvada).
- intel_lpmd (Intel Low Power Mode Daemon) (Zhang Rui).
- Thermal infrastructure for debugfs + clean up the sysfs debug-related information (Daniel Lezcano).
- New thermal trip types (Daniel Lezcano).
- Thermal management with the time dimension taken into account (Daniel Lezcano).
- Step-wise thermal governor improvements (Daniel Lezcano).
- ACPI extensions for device DVFS (Sudeep Holla).
More topics will be added based on CfP for this microconference.
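To illustrate the step-wise governor topic above, here is one decision step of a toy governor that raises the cooling state while the temperature is above the trip point and relaxes it once it falls below. This is a simplification of the idea behind the kernel's step_wise governor, not its actual algorithm:

```python
def step_wise(temp, trip, state, max_state):
    """One decision of a toy step-wise thermal governor: move the
    cooling-device state one step toward more cooling above the trip
    point, one step toward less cooling below it, clamped to range."""
    if temp > trip:
        return min(state + 1, max_state)   # too hot: throttle harder
    if temp < trip:
        return max(state - 1, 0)           # cool enough: back off
    return state                           # at the trip: hold
```

The improvements on the agenda (trend awareness, the time dimension) are about replacing exactly this memoryless single-step decision with something smarter.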
MC Lead: Rafael Wysocki
The real-time and scheduling micro-conference joins these two intrinsically connected communities to discuss the next steps together.
Over the past decade, many parts of PREEMPT_RT have been included in the official Linux codebase. Examples include real-time mutexes, high-resolution timers, lockdep, ftrace, RCU_PREEMPT, threaded interrupt handlers, and more. The number of patches that need integration has been significantly reduced, and the rest is mature enough to make its way into mainline Linux.
The scheduler is the core of Linux performance. With different topologies and workloads, giving the user the best experience possible is challenging, from low latency to high throughput and from small power-constrained devices to HPC, where CPU isolation is critical.
The following accomplishments have been made as a result of last year’s microconference:
- Progress on rtla/osnoise to support any workload 
- Progress on adding tracepoints for IPI 
- Improvements in RCU to reduce noise
- Progress on the latency-nice patch set 
This year’s topics to be discussed include:
- Improve responsiveness for CFS tasks - e.g., latency-nice patch
- The new EEVDF scheduler proposal
- Improvements in CPU Isolation
- The status of PREEMPT_RT Locking improvements - e.g., proxy execution 
- Improvements on SCHED_DEADLINE
- Tooling for debugging scheduling and real-time 
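To sketch the EEVDF idea: a task is eligible when its virtual runtime is not ahead of the (weighted) average, and among eligible tasks the one with the earliest virtual deadline runs. The following toy model illustrates the selection rule only; the real implementation tracks lag and uses an augmented rbtree:

```python
def pick_next(tasks):
    """Toy EEVDF pick. Each task is {"name", "vruntime", "slice",
    "weight"}. Eligible: vruntime <= weighted-average vruntime.
    Among eligible tasks, choose the earliest virtual deadline,
    vruntime + slice / weight."""
    total = sum(t["weight"] for t in tasks)
    avg = sum(t["vruntime"] * t["weight"] for t in tasks) / total
    eligible = [t for t in tasks if t["vruntime"] <= avg]
    return min(eligible, key=lambda t: t["vruntime"] + t["slice"] / t["weight"])
```

Note how a task requesting a shorter slice gets an earlier deadline, which is exactly the hook that lets latency-sensitive tasks be served sooner without extra nice levels.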
MC Leads: Daniel Bristot de Oliveira, Juri Lelli, Vincent Guittot, Steven Rostedt
- Hopefully not hwprobe…
- Do we even bother with generic optimized lib routines, or just go vendor-specific?
- When can we start deprecating stuff? rv32, nommu, xip… (old toolchains?)
- Time to give up on profiles and just set a base ourselves?
- CI: Hosting PW-NIPA (current Conor/Microchip), hosting “upstream kernel ci” on Github w/ sponsored runners?
- Confidential Computing in RISC-V (It may be suitable to submit it to Confidential Computing MC though)
- Hardware assisted control-flow integrity on RISC-V CPUs (Deepak Gupta) CFI ?
- Any MM topics ?
- Text patching (Björn Töpel)
MC Leads: Palmer Dabbelt, Atish Patra
Rust is a systems programming language that is making great strides in becoming the next big one in the domain.
Rust for Linux is the project adding support for the Rust language to the Linux kernel. Rust has a key property that makes it very interesting as the second language in the kernel: it guarantees no undefined behavior takes place (as long as unsafe code is sound). This includes no use-after-free mistakes, no double frees, no data races, etc. It also provides other important benefits, such as improved error handling, stricter typing, sum types, pattern matching, privacy, closures, generics, etc.
This microconference intends to cover talks and discussions on both Rust for Linux as well as other non-kernel Rust topics.
Possible Rust for Linux topics:
- Rust in the kernel (e.g. status update, next steps...).
- Use cases for Rust around the kernel (e.g. subsystems, drivers, other modules...).
- Discussions on how to abstract existing subsystems safely, on API design, on coding guidelines...
- Integration with kernel systems and other infrastructure (e.g. build system, documentation, testing and CIs, maintenance, unstable features, architecture support, stable/LTS releases, Rust versioning, third-party crates...).
- Updates on its subprojects (e.g. ...)
Possible Rust topics:
- Language and standard library (e.g. upcoming features, stabilization of the remaining features the kernel needs, memory model...).
- Compilers and codegen (e.g. rustc improvements, LLVM and Rust, rustc_codegen_gcc, Rust GCC...).
- Other tooling and new ideas (bindgen, Cargo, Miri, Clippy, Compiler Explorer, Coccinelle for Rust...).
- Educational material.
- Any other Rust topic within the Linux ecosystem.
Last year was the first edition of the Rust MC and the focus was on showing the ongoing efforts by different parties (compilers, Rust for Linux, CI, eBPF...). Shortly after the Rust MC, Rust got merged into the Linux kernel. Abstractions are getting upstreamed, with the first major drivers looking to be merged soon: Android Binder, the Asahi GPU driver and the NVMe driver (presented in that MC).
MC Leads: Wedson Almeida Filho, Miguel Ojeda
The Linux kernel has grown in complexity over the years. Complete understanding of how it works via code inspection has become virtually impossible. Today, tracing is used to follow the kernel as it performs its complex tasks. Tracing is used today for much more than simply debugging. Its framework has become the way for other parts of the Linux kernel to enhance and even make possible new features. Live kernel patching is based on the infrastructure of function tracing, as well as BPF. It is now even possible to model the behavior and correctness of the system via runtime verification which attaches to trace points. There is still much more that is happening in this space, and this microconference will be the forum to explore current and new ideas.
Results and accomplishments from the last time (2021):
- User events were introduced, and have finally made it into the kernel
- The discussion around trace events to handle user faults initiated the event probe work as a way around the problem: adding probes on existing trace events to change their types. This works with synthetic events, which can pass the user space file name from the entry of a system call to the exit of the system call, which would have faulted in the file, making it available to the trace event.
- Dynamically creating the events directory is currently being worked on with the eventfs patch set. This will save memory as the dentries and inodes will only be allocated when accessed.
- The discussion about function tracing with arguments has helped inspire both fprobes and function graph return value tracing.
- There’s still an ongoing effort to merge the return path tracers of function graph, kretprobes, and fprobes.
Topics for this year:
- Use of sframes. How to get user space stack traces without requiring frame pointers.
- Updating perf and ftrace to extract user space stack frames from a schedulable context (e.g., when the request arrives in NMI context).
- Extending user events. Now that they are in the kernel, how to make them more accessible to users and applications.
- Getting more use cases with the runtime verifier. Now that the runtime verifier is in the kernel (uses tracepoints to model against), what else can it be used for.
- Wider use of ftrace_regs in fprobes, and in the rethook used by fprobes, because rethook may not fill all registers in pt_regs either. How BPF handles this will also be discussed.
- Removing kretprobes from kprobes so that kprobes can focus on handling software breakpoints.
- Object tracing (following a variable throughout each function call). This has had several patches out, but has stopped due to hard issues to overcome. A live discussion could possibly come up with a proper solution.
- Hardware breakpoints and tracing memory changes. Object tracing follows a variable when it changes between function calls. But if the hardware supports it, tracing a variable when it actually changes would be more useful, albeit more complex. Discussion around this may come up with an easier answer.
- MMIO tracer being used in SMP. Currently the MMIO tracer does not handle race conditions. Instead, it offlines all but one CPU when it is enabled. It would be great if this could be used in normal SMP environments. There’s nothing technically preventing that from happening. It only needs some clever thinking to come up with a design to do so.
- Getting perf counters onto the ftrace ring buffer. Ftrace is designed for fast tracing, and perf is a great profiler. Over the years it has been asked to have perf counters alongside ftrace trace events. Perhaps it's time to finally accomplish that. It could be that each function can show the perf cache misses of that function.
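As an illustration of how runtime verification consumes trace events: a deterministic monitor follows a model automaton as events stream in, and any event with no transition from the current state is a violation. The two-state wakeup model below is made up for illustration; it is not one of the in-kernel RV models:

```python
class Monitor:
    """Toy runtime-verification monitor: a deterministic automaton
    driven by trace events. Any event with no transition defined from
    the current state is recorded as a violation."""
    def __init__(self, transitions, start):
        self.transitions = transitions   # {(state, event): next_state}
        self.state = start
        self.violations = []

    def feed(self, event):
        nxt = self.transitions.get((self.state, event))
        if nxt is None:
            self.violations.append((self.state, event))
        else:
            self.state = nxt

# Illustrative model: a task must be woken before it can be switched in.
wip = Monitor({("sleeping", "wakeup"): "runnable",
               ("runnable", "switch_in"): "running",
               ("running", "switch_out"): "sleeping"}, "sleeping")
```

The in-kernel RV subsystem does essentially this, with models attached to tracepoints and reactors (log, panic) triggered on violation.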
Key attendees:
- Steven Rostedt
- Masami Hiramatsu
- Mathieu Desnoyers
- Alexei Starovoitov
- Peter Zijlstra
- Mark Rutland
- Beau Belgrave
- Daniel Bristot de Oliveira
- Florent Revest
- Jiri Olsa
- Tom Zanussi
MC Leads: Masami Hiramatsu, Steven Rostedt
The PCI interconnect specification, the devices that implement it, and the system IOMMUs that provide memory and access control to them are nowadays a de-facto standard for connecting high-speed components, incorporating more and more features such as:
- Address Translation Service (ATS)/Page Request Interface (PRI)
- Single-root I/O Virtualization (SR-IOV)/Process Address Space ID (PASID)
- Shared Virtual Addressing (SVA)
- Remote Direct Memory Access (RDMA)
- Peer-to-Peer DMA (P2PDMA)
- Cache Coherent Interconnect for Accelerators (CCIX)
- Compute Express Link (CXL)
- Data Object Exchange (DOE)
- Component Measurement and Authentication (CMA)
- Integrity and Data Encryption (IDE)
- Security Protocol and Data Model (SPDM)
These features are aimed at high-performance systems, server and desktop computing, embedded and SoC platforms, virtualisation, and ubiquitous IoT devices.
The kernel code that enables these new system features focuses on coordination between the PCI devices, the IOMMUs they are connected to, and the VFIO layer used to manage them (for userspace access and device passthrough) with related kernel interfaces and userspace APIs to be designed in-sync and in a clean way for all three sub-systems.
The VFIO/IOMMU/PCI MC focuses on the kernel code that enables these new system features, often requiring coordination between the VFIO, IOMMU and PCI sub-systems.
Following the success of LPC 2017, 2019, 2020, 2021, and 2022 VFIO/IOMMU/PCI MC, the Linux Plumbers Conference 2023 VFIO/IOMMU/PCI track will focus on promoting discussions on the PCI core but also current kernel patches aimed at VFIO/IOMMU/PCI sub-systems with specific sessions targeting discussions requiring the three sub-systems coordination.
See the following video recordings from 2022: LPC 2022 - VFIO/IOMMU/PCI MC
Older recordings can be accessed through our official YouTube channel at @linux-pci and the archived LPC 2017 VFIO/IOMMU/PCI MC web page at Linux Plumbers Conference 2017, where the audio recordings from the MC track and links to presentation materials are available.
The tentative schedule will provide an update on the current state of VFIO/IOMMU/PCI kernel sub-systems, followed by a discussion of current issues in the proposed topics.
The following was a result of last year's successful Linux Plumbers MC:
- Support for the /dev/iommufd device has been merged into the mainline kernel
- A discussion has been kicked off around the topic of the Instant Detection of Virtual Devices
- The work on the PCIe Endpoint Notifier has been completed and merged into the mainline kernel
Tentative topics that are under consideration for this year include (but are not limited to):
- Cache Coherent Interconnect for Accelerators (CCIX)/Compute Express Link (CXL) expansion memory and accelerators management
- Data Object Exchange (DOE)
- Integrity and Data Encryption (IDE)
- Component Measurement and Authentication (CMA)
- Security Protocol and Data Model (SPDM)
- I/O Address Space ID Allocator (IOASID)
- INTX/MSI IRQ domain consolidation
- Gen-Z interconnect fabric
- ARM64 architecture and hardware
- PCI native host controllers/endpoints drivers current challenges and improvements (e.g., state of PCI quirks, etc.)
- PCI error handling and management, e.g., Advanced Error Reporting (AER), Downstream Port Containment (DPC), ACPI Platform Error Interface (APEI) and Error Disconnect Recover (EDR)
- Power management and devices supporting Active-state Power Management (ASPM)
- Peer-to-Peer DMA (P2PDMA)
- Resources claiming/assignment consolidation
- Probing of native PCIe controllers and general reset implementation
- Prefetchable vs non-prefetchable BAR address mappings
- Untrusted/external devices management
- DMA ownership models
- Thunderbolt, DMA, RDMA and USB4 security
- Write-combine on non-x86 architectures
- I/O Page Fault (IOPF) for passthrough devices
- Shared Virtual Addressing (SVA) interface
- Single-root I/O Virtualization (SR-IOV)/Process Address Space ID (PASID) integration
- PASID in SR-IOV virtual functions
- Device assignment/sub-assignment
- /dev/iommufd development
- IOMMU virtualisation
- IOMMU drivers SVA interface
- DMA-API layer interactions and the move towards generic dma-ops for IOMMU drivers
- Possible IOMMU core changes (e.g., better integration with the device-driver core, etc.)
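Much of the tooling in this space starts from PCI addresses in extended BDF (domain:bus:device.function) form, as seen in sysfs and lspci output. A small helper to split one into its components:

```python
def parse_bdf(addr):
    """Split a PCI address in extended BDF form, e.g. "0000:03:00.1",
    into (domain, bus, device, function) as integers. All fields are
    hexadecimal in the textual form."""
    domain, bus, devfn = addr.split(":")
    dev, fn = devfn.split(".")
    return int(domain, 16), int(bus, 16), int(dev, 16), int(fn, 16)
```

For example, "0000:03:00.1" is function 1 of the device at bus 3, device 0, in domain 0 - the form used for device assignment and /sys/bus/pci paths.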
If you are interested in participating in this MC and have topics to propose, please use the Call for Proposals (CfP) process. More topics might be added based on CfP for this MC.
Otherwise, join us to discuss helping Linux keep up with the new features added to the PCI interconnect specification. We hope to see you there!
Key attendees:
- Alex Williamson
- Arnd Bergmann
- Ashok Raj
- Benjamin Herrenschmidt
- Bjorn Helgaas
- Dan Williams
- Eric Auger
- Jacob Pan
- Jason Gunthorpe
- Jean-Philippe Brucker
- Jonathan Cameron
- Jörg Rödel
- Kevin Tian
- Lorenzo Pieralisi
- Lu Baolu
- Marc Zyngier
- Pali Rohár
- Peter Zijlstra
- Thomas Gleixner
MC Leads: Bjorn Helgaas, Lorenzo Pieralisi, Joerg Roedel, Krzysztof Wilczyński, Alex Williamson