Description
The PCI interconnect specification and the devices implementing it are incorporating more and more features aimed at high-performance systems (e.g. RDMA, peer-to-peer, CCIX, PCI ATS (Address Translation Service)/PRI (Page Request Interface), enabling Shared Virtual Addressing (SVA) between devices and CPUs). These features require the kernel to coordinate the PCI devices, the IOMMUs they are connected to, and the VFIO layer used to manage them (for userspace access and device passthrough), with kernel interfaces that have to be designed in sync for all three subsystems.
The kernel code that enables these new system features therefore requires coordination between the VFIO, IOMMU, and PCI subsystems, so that kernel interfaces and userspace APIs can be designed cleanly.
Following up on the successful LPC 2017 VFIO/IOMMU/PCI microconference, the Linux Plumbers 2019 VFIO/IOMMU/PCI track will focus on the current kernel patches aimed at the VFIO/IOMMU/PCI subsystems, with dedicated sessions for patches that enable technologies (e.g. device/sub-device assignment, peer-to-peer PCI, IOMMU enhancements) requiring coordination among all three subsystems. The microconference will also cover subsystem-specific tracks to debate the status of patches for the respective subsystems' plumbing.
Tentative topics for discussion:
VFIO
Shared Virtual Addressing (SVA) interface
SR-IOV/PASID integration
Device assignment/sub-assignment
IOMMU
IOMMU drivers SVA interface consolidation
IOMMU virtualization
IOMMU-API enhancements for mediated devices/SVA
Possible IOMMU core changes (like splitting up iommu_ops, better integration with device-driver core)
DMA-API layer interactions and how to get towards generic dma-ops for IOMMU drivers
PCI
Resource claiming/assignment consolidation
Peer-to-Peer
PCI error management
PCI endpoint subsystem
Prefetchable vs non-prefetchable BAR address mappings (cacheability)
Kernel NoSnoop TLP attribute handling
CCIX and accelerators management
If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added based on the CfP submissions for this microconference.
MC leads
Bjorn Helgaas bjorn@helgaas.com, Lorenzo Pieralisi lorenzo.pieralisi@arm.com, Joerg Roedel joro@8bytes.org, and Alex Williamson alex.williamson@redhat.com
This topic will discuss 1) why we need a per-group default domain type, 2) how it solves the problems seen in real IOMMU drivers, and 3) the user interfaces.
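To make the problem concrete, here is a minimal, hypothetical sketch (not mainline code) of how the IOMMU core could resolve the default domain type per group instead of relying only on the global iommu_def_domain_type; the per-group default_domain_type field and the helper name are assumptions for illustration.

#include <linux/iommu.h>

/*
 * Hypothetical sketch only: the real struct iommu_group is private to
 * drivers/iommu/iommu.c, so a trimmed-down stand-in is used here.
 */
struct iommu_group_sketch {
	int default_domain_type;	/* per-group override, 0 = none (assumed field) */
};

static int iommu_def_domain_type;	/* global default, 0 = unset */

/* Resolve the domain type for one group: per-group override first,
 * global default second, DMA domain as the fallback. */
static int group_default_domain_type(const struct iommu_group_sketch *group)
{
	if (group->default_domain_type)
		return group->default_domain_type;

	return iommu_def_domain_type ?: IOMMU_DOMAIN_DMA;
}

A user interface could then expose the per-group override, for example through a writable attribute per IOMMU group; which interface to expose is exactly one of the points to debate.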
This is meant to be a rather open discussion on PCI resource assignment policies. I plan to discuss a bit of what the different arches/platforms do today and how I've tried to consolidate it; then we can debate the pros/cons of the different approaches and decide where to go from there.
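As a strawman for that discussion, here is a hedged sketch, built on existing drivers/pci helpers (pci_claim_resource() and pci_assign_unassigned_root_bus_resources()), of what a consolidated claim-then-assign policy could look like; the function name and the exact failure handling are assumptions.

#include <linux/pci.h>

/*
 * Sketch of a consolidated policy (illustrative, not mainline code):
 * claim whatever firmware already assigned, then let the generic code
 * assign everything that could not be claimed.
 */
static void sketch_pci_resource_survey(struct pci_bus *root_bus)
{
	struct pci_dev *dev;
	int i;

	list_for_each_entry(dev, &root_bus->devices, bus_list) {
		for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) {
			struct resource *r = &dev->resource[i];

			if (!r->flags || r->parent)
				continue;	/* unused or already claimed */

			if (pci_claim_resource(dev, i)) {
				/* Claim failed: clear it for reassignment. */
				r->end -= r->start;
				r->start = 0;
			}
		}
	}

	pci_assign_unassigned_root_bus_resources(root_bus);
}

A real implementation would recurse down the bus hierarchy; the sketch only walks the root bus's immediate devices to keep the policy visible.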
A PCI Express non-transparent bridge (NTB) is a point-to-point PCIe bus
connecting two host systems. NTB functionality can be achieved on a platform
that has two endpoint instances: each endpoint instance is connected to an
independent host, and the hosts can communicate with each other using the
endpoint as a bridge. The endpoint framework and the "new" NTB EP
function driver should...
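For reference, here is a minimal skeleton of such a function driver, using the registration interface the endpoint framework already provides (struct pci_epf_driver and module_pci_epf_driver()); the epf_ntb_* names are hypothetical, and a real driver would implement the BAR, memory-window, and doorbell setup in bind().

#include <linux/module.h>
#include <linux/pci-epf.h>

/* Hypothetical skeleton; the epf_ntb_* names are illustrative only. */
static int epf_ntb_probe(struct pci_epf *epf)
{
	/* Per-instance state allocation would go here. */
	return 0;
}

static int epf_ntb_bind(struct pci_epf *epf)
{
	/*
	 * Called once the function is bound to an endpoint controller:
	 * set up BARs, the shared memory window, and the doorbells used
	 * for host-to-host communication.
	 */
	return 0;
}

static void epf_ntb_unbind(struct pci_epf *epf)
{
	/* Tear down whatever bind() set up. */
}

static const struct pci_epf_device_id epf_ntb_ids[] = {
	{ .name = "pci_epf_ntb" },
	{ },
};

static struct pci_epf_ops epf_ntb_ops = {
	.bind	= epf_ntb_bind,
	.unbind	= epf_ntb_unbind,
};

static struct pci_epf_driver epf_ntb_driver = {
	.driver.name	= "pci_epf_ntb",
	.probe		= epf_ntb_probe,
	.id_table	= epf_ntb_ids,
	.ops		= &epf_ntb_ops,
	.owner		= THIS_MODULE,
};
module_pci_epf_driver(epf_ntb_driver);

MODULE_DESCRIPTION("Sketch of an NTB PCI endpoint function driver");
MODULE_LICENSE("GPL v2");

Two such function instances, one per endpoint controller, would then be configured and bound (e.g. via configfs, as the framework already does for existing endpoint functions), one towards each host.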