The PCI interconnect specification and the devices implementing it are incorporating more and more features aimed at high-performance systems (e.g. RDMA, peer-to-peer, CCIX, PCI ATS (Address Translation Service)/PRI (Page Request Interface), enabling Shared Virtual Addressing (SVA) between devices and CPUs). These features require the kernel to coordinate the PCI devices, the IOMMUs they are connected to, and the VFIO layer used to manage them (for userspace access and device passthrough), with kernel interfaces that have to be designed in sync across all three subsystems.
The kernel code that enables these new system features requires coordination between the VFIO, IOMMU and PCI subsystems, so that kernel interfaces and userspace APIs can be designed cleanly.
Following up on the successful LPC 2017 VFIO/IOMMU/PCI microconference, the Linux Plumbers 2019 VFIO/IOMMU/PCI track will focus on discussing the current kernel patches aimed at the VFIO/IOMMU/PCI subsystems. Specific sessions will target patches that enable technologies (e.g. device/sub-device assignment, peer-to-peer PCI, IOMMU enhancements) requiring coordination among the three subsystems; the microconference will also cover subsystem-specific tracks to debate the status of patches for the respective subsystems' plumbing.
Tentative topics for discussion:
Shared Virtual Addressing (SVA) interface
IOMMU drivers SVA interface consolidation
IOMMU-API enhancements for mediated devices/SVA
Possible IOMMU core changes (like splitting up iommu_ops, better integration with device-driver core)
DMA-API layer interactions and how to get towards generic dma-ops for IOMMU drivers
Resource claiming/assignment consolidation
PCI error management
PCI endpoint subsystem
Prefetchable vs non-prefetchable BAR address mappings (cacheability)
Kernel NoSnoop TLP attribute handling
CCIX and accelerators management
If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on CfP for this microconference.
Bjorn Helgaas email@example.com, Lorenzo Pieralisi firstname.lastname@example.org, Joerg Roedel email@example.com, and Alex Williamson firstname.lastname@example.org
This topic will discuss 1) why we need a per-group default domain type, 2) how it solves real problems in IOMMU drivers, and 3) the user interfaces.
Since August 2018 I have been working on SMMUv3 nested-stage integration
at the IOMMU/VFIO levels, to allow virtual SMMUv3/VFIO integration.
This work shares some APIs with the Intel and ARM SVA series (cache
invalidation, fault reporting) but also introduces specific ones to pass
information about the guest stage 1 configuration and MSI bindings.
In this session I would like to discuss the...
PASID (Process Address Space ID) is a PCIe capability that enables sharing a single device across multiple isolated address domains. It has become a hot topic in I/O technology evolution; for example, it is the foundation of SVM and SIOV. The different usages of PASID, combined with the configuration differences arising from architectural differences across vendors, make for an interesting topic on PASID...
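As background for the capability mentioned above: PASID lives in PCIe extended configuration space (extended capability ID 0x001B). The following is a minimal, hedged Python sketch of how such a capability is located by walking the extended capability list in a config-space snapshot; the function name and the synthetic buffer are illustrative assumptions, not kernel code.

```python
# Sketch: locate the PASID extended capability (ID 0x001B) in a
# config-space snapshot. Extended capabilities start at offset 0x100;
# each 32-bit header holds the cap ID in bits 15:0 and the next-pointer
# in bits 31:20 (DWORD aligned). The buffer below is synthetic.

PASID_EXT_CAP_ID = 0x001B

def find_ext_cap(cfg: bytes, cap_id: int) -> int:
    """Return the offset of the extended capability, or 0 if absent."""
    offset = 0x100
    while offset and offset + 4 <= len(cfg):
        header = int.from_bytes(cfg[offset:offset + 4], "little")
        if header & 0xFFFF == cap_id:
            return offset
        offset = (header >> 20) & 0xFFC  # next pointer, DWORD aligned
    return 0

# Synthetic 4 KiB config space: one capability at 0x100 chaining to a
# PASID capability at 0x140 (terminating next pointer of 0).
cfg = bytearray(4096)
cfg[0x100:0x104] = (0x0001 | (0x140 << 20)).to_bytes(4, "little")
cfg[0x140:0x144] = PASID_EXT_CAP_ID.to_bytes(4, "little")
print(hex(find_ext_cap(bytes(cfg), PASID_EXT_CAP_ID)))  # → 0x140
```

The same walk applies to any extended capability; only the ID changes.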
While x86 is probably the most prominent platform for VFIO/IOMMU development and usage, other architectures also see quite a bit of movement. These architectures are similar to x86 in some parts and quite different in others; therefore, issues sometimes come up that may be surprising to folks mostly working on more common platforms.
For example, PCI on s390 uses special instructions. QEMU...
Modern PCI graphics devices may contain several gigabytes of memory mapped in their BARs. This trend is continuing into storage, with NVMe devices containing large Controller Memory Buffers and Persistent Memory Regions.
Some PCI hierarchies are resource constrained and cannot fit as many devices as desired. In NVMe's case, it's preferable to enumerate and attach all devices rather than use the...
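To make the BAR-size point above concrete, here is a hedged Python sketch that computes region sizes from text in the format of the per-device sysfs "resource" file (one "start end flags" hex triple per line, unused BARs reported as all zeros); the sample data and helper name are illustrative assumptions, not taken from a real device.

```python
# Sketch: derive BAR sizes from sysfs "resource"-style text, where each
# line is "0x<start> 0x<end> 0x<flags>". Sample data below is synthetic.

def bar_sizes(resource_text: str) -> list:
    """Return the size in bytes of each listed region (0 = unused BAR)."""
    sizes = []
    for line in resource_text.strip().splitlines():
        start, end, _flags = (int(field, 16) for field in line.split())
        sizes.append(end - start + 1 if end else 0)
    return sizes

sample = (
    "0x00000000f0000000 0x00000000f3ffffff 0x000000000014220c\n"  # 64 MiB BAR
    "0x0000000000000000 0x0000000000000000 0x0000000000000000\n"  # unused
)
print(bar_sizes(sample))  # → [67108864, 0]
```

On a real system the same text would come from /sys/bus/pci/devices/&lt;bdf&gt;/resource; devices with multi-gigabyte BARs make the resource-constraint problem described above immediately visible.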
This is meant to be a rather open discussion on PCI resource assignment policies. I plan to discuss a bit what the different arch/platforms do today and how I've tried to consolidate it; then we can debate the pros/cons of the different approaches and decide where to go from there.
A PCI Express non-transparent bridge (NTB) is a point-to-point PCIe bus
connecting two host systems. NTB functionality can also be achieved on a
platform with two endpoint instances: each endpoint instance is connected
to an independent host, and the hosts can communicate with each other
using the endpoints as a bridge. The endpoint framework and the "new" NTB EP
function driver should...
The Thunderbolt vulnerabilities are now public under the name Thunderclap (https://thunderclap.io/). This topic will introduce the vulnerabilities we have identified on Linux and how we are fixing them.