Conveners
VFIO/IOMMU/PCI MC
- Lorenzo Pieralisi
- Krzysztof Wilczyński
- Joerg Roedel (AMD)
- Bjorn Helgaas (Google)
- Alex Williamson
Description
The PCI interconnect specification, the devices that implement it, and the system IOMMUs that provide memory and access control to them are nowadays a de-facto standard for connecting high-speed components, incorporating more and more features such as:
- Address Translation Service (ATS)/Page Request Interface (PRI)
- Single-root I/O Virtualization (SR-IOV)/Process Address Space ID (PASID)
- Shared Virtual Addressing (SVA)
- Remote Direct Memory Access (RDMA)
- Peer-to-Peer DMA (P2PDMA)
- Cache Coherent Interconnect for Accelerators (CCIX)
- Compute Express Link (CXL)/Data Object Exchange (DOE)
- Component Measurement and Authentication (CMA)
- Integrity and Data Encryption (IDE)
- Security Protocol and Data Model (SPDM)
These features are aimed at high-performance systems, server and desktop computing, embedded and SoC platforms, virtualisation, and ubiquitous IoT devices.
The kernel code that enables these features requires coordination between the PCI devices, the IOMMUs they are connected to, and the VFIO layer used to manage them (for userspace access and device passthrough), with the related kernel interfaces and userspace APIs designed in sync and cleanly across all three subsystems. The VFIO/IOMMU/PCI MC focuses on this kernel code and on the work that requires coordination between the VFIO, IOMMU and PCI subsystems.
Following the success of the LPC 2017, 2019, 2020, 2021, 2022, 2023 and 2024 VFIO/IOMMU/PCI MCs, the Linux Plumbers Conference 2025 VFIO/IOMMU/PCI track will focus on promoting discussions on the PCI core and on current kernel patches aimed at the VFIO/IOMMU/PCI subsystems. Specific sessions will focus on discussions that require coordination between the three subsystems.
See the following video recordings from 2024: LPC 2024 - VFIO/IOMMU/PCI MC.
Older recordings are available through the official YouTube channel of the Linux Plumbers Conference and the archived LPC 2017 VFIO/IOMMU/PCI MC web page at Linux Plumbers Conference 2017, where the audio recordings from the MC track and links to presentation materials are available.
The tentative schedule will provide an update on the current state of VFIO/IOMMU/PCI kernel subsystems, followed by a discussion of current issues related to the proposed topics.
The following was a result of last year's successful Linux Plumbers MC:
- The first version of the work on secure device assignment, a complex and pressing issue that spans the PCI, IOMMU, and CXL subsystems, was completed, and a patch series was posted for review to spark further discussion and debate on how to solve this challenging problem.
Tentative topics that are under consideration for this year include (but are not limited to):
- PCI
- Cache Coherent Interconnect for Accelerators (CCIX)/Compute Express Link (CXL) expansion memory and accelerators management
- Data Object Exchange (DOE)
- Integrity and Data Encryption (IDE)
- Component Measurement and Authentication (CMA)
- Security Protocol and Data Model (SPDM)
- I/O Address Space ID Allocator (IOASID)
- INTx/MSI IRQ domain consolidation
- Gen-Z interconnect fabric
- PCI error handling and management, e.g., Advanced Error Reporting (AER), Downstream Port Containment (DPC), ACPI Platform Error Interface (APEI) and Error Disconnect Recovery (EDR)
- Power management and devices supporting Active-state Power Management (ASPM)
- Peer-to-Peer DMA (P2PDMA)
- Resources claiming/assignment consolidation
- DMA ownership models
- Thunderbolt, DMA, RDMA and USB4 security
- VFIO
- I/O Page Fault (IOPF) for passthrough devices
- Shared Virtual Addressing (SVA) interface
- Single-root I/O Virtualization (SR-IOV)/Process Address Space ID (PASID) integration
- PASID in SR-IOV virtual functions
- TDISP/TSM Device assignment/sub-assignment
- IOMMU
- /dev/iommufd development
- IOMMU virtualisation
- IOMMU drivers SVA interface
- DMA-API layer interactions and the move towards generic dma-ops for IOMMU drivers
- Possible IOMMU core changes (e.g., better integration with the device-driver core, etc.)
If you are interested in participating in this MC and have topics to propose, please use the Call for Proposals (CfP) process. More topics might be added based on CfP for this MC.
Otherwise, join us in discussing how to help Linux keep up with the new features added to the PCI interconnect specification. We hope to see you there!
Key Attendees:
- Alex Williamson
- Benjamin Herrenschmidt
- Bjorn Helgaas
- Dan Williams
- Ilpo Järvinen
- Jacob Pan
- James Gowans
- Jason Gunthorpe
- Jonathan Cameron
- Jörg Rödel
- Kevin Tian
- Lorenzo Pieralisi
- Lu Baolu
- Manivannan Sadhasivam
Contacts:
- Alex Williamson (alex.williamson@redhat.com)
- Bjorn Helgaas (helgaas@kernel.org)
- Jörg Roedel (joro@8bytes.org)
- Lorenzo Pieralisi (lpieralisi@kernel.org)
- Krzysztof Wilczyลski (kwilczynski@kernel.org)
Dan Williams (Intel) - 13/12/2025, 10:00
With required updates to the PCI core, device core, CPU arch, KVM, VFIO, IOMMUFD, and DMABUF, the TEE I/O effort has a significant amount of work to do to reach the starting line of the race to address Confidential Device use cases. Then come the mechanisms for devices to enter the locked state, the attestation and policy infrastructure for deploying secrets to TEE VMs, and the ability to recover a...
Jason Gunthorpe (NVIDIA Networking) - 13/12/2025, 10:30
Review the current state of the page table consolidation project.
Depending on progress over the next months, this may be a primer on the design of the consolidated page table system to help reviewers, or a discussion of the next steps needed to land the project.
https://patch.msgid.link/r/0-v5-116c4948af3d+68091-iommu_pt_jgg@nvidia.com
Additionally, any iommufd-related topics that people...
Alex Mastro (Meta) - 13/12/2025, 11:00
Hello, I'm planning to attend LPC in person this year, and am interested in presenting what we have learned from running user space drivers built on top of VFIO in production, specifically from orchestrating access to VFIO-bound devices from multiple processes.
The presentation would cover:
- Our current usage patterns.
- Benefits of being able to deploy updates to device policy by...
Hubertus Franke (IBM Research) - 13/12/2025, 12:00
Cloud workloads with strict performance needs (AI, HPC, large-scale data processing) frequently use PCIe device passthrough (e.g., via VFIO in Linux/KVM) to reduce latency and improve bandwidth. While effective for performance, this approach also exposes low-level device configuration interfaces directly to guest workloads, which may be malicious or running untrusted software.
In our...
Yu Zhang - 13/12/2025, 12:20
We present a Hyper-V based pvIOMMU implementation for Linux guests, built upon the community-driven Generic I/O Page Table framework. Our approach leverages stage-1 page tables in the guest (with nested translation) to drive DMA remapping (including vSVA). This also eliminates the need for complex device-specific emulation and map/unmap overhead, while staying scalable across...
Wei Huang - 13/12/2025, 12:40
The Smart Data Accelerator Interface (SDXI) is a new SNIA standard that extends traditional DMA engines with support for multiple address spaces, user-space ownership, and extensible offloads such as memory data movement. This talk reports on the progress of Linux enablement in two phases: an initial DMA-engine integration already posted upstream for review, and a full SDXI 1.0 implementation with a...
Wei Huang - 13/12/2025, 13:00
AMD's Smart Data Cache Injection (SDCI) leverages PCIe TLP Processing Hints (TPH) to steer DMA write data directly into the target CPU's L2 cache to reduce latency, improve throughput, and reduce DRAM bandwidth. This talk covers the details of AMD SDCI design, outlines the Linux kernel support we have developed - including a new ACPI _DSM interface in the PCI root complex and extensions to...
Manivannan Sadhasivam - 13/12/2025, 13:15
On non-ACPI systems, such as those using DeviceTree for hardware description, the PCI host bridge drivers were responsible for managing the endpoint power supplies. While this worked for simple cases like endpoints requiring 12V or 3.3V supplies, it didn't work for the more complex supplies required by some endpoint devices, such as integrated WLAN/BT devices.
The PCI Pwrctrl framework...
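As a hypothetical illustration of the kind of hardware description involved (node names, the compatible string, and supply names here are made up for the sketch; the authoritative bindings live under Documentation/devicetree/bindings/pci/ in the kernel tree), a DeviceTree fragment describing a WLAN endpoint with its own regulators might look like:

```dts
&pcie0 {
	/* Child node of the root port: the WLAN endpoint at slot 0.
	 * The supplies below belong to the endpoint, not the host
	 * bridge, which is what the pwrctrl framework handles. */
	wifi@0 {
		compatible = "pci17cb,1101";	/* illustrative vendor/device ID */
		reg = <0x0 0x0 0x0 0x0 0x0>;
		vddpe-3v3-supply = <&wlan_en>;	/* illustrative supply names */
		vddio-supply = <&vreg_l2c>;
	};
};
```

With such a description, a pwrctrl driver bound to the endpoint node can sequence the regulators before the device is enumerated, instead of burying that logic in the host bridge driver.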