Description
Following the success of the past three years at LPC, we would like to hold a fourth RDMA (Remote Direct Memory Access networking) microconference this year. The meetings at past conferences have led to significant improvements being merged into the RDMA subsystem over the years: a new user API, container support, testability/syzkaller, system bootup, Soft-iWARP, and more.
In Vancouver, the RDMA track hosted some core kernel discussions on get_user_pages, which are now starting to see their solutions merged. We expect that RDMA will again be the natural microconference at LPC in which to hold these quasi-mm discussions.
This year there remain difficult open issues that need resolution:
- RDMA and PCI peer-to-peer for GPU and NVMe applications, including HMM and DMABUF topics
- RDMA and DAX (carried over from LSF/MM)
- Final pieces to complete the container work
- Contiguous system memory allocations for userspace (unresolved from 2017)
- Shared protection domains and memory registrations
- NVMe offload
- Integration of HMM and ODP
And several newly developing areas of interest:
- Multi-vendor virtualized 'virtio' RDMA
- Non-standard driver features and their impact on the design of the subsystem
- Encrypted RDMA traffic
- Rework and simplification of the driver API
Previous years:
2018, 2017 (2nd RDMA mini-summit summary), and 2016 (1st RDMA mini-summit summary)
If you are interested in participating in this microconference and have topics to propose, please use the CfP process. More topics will be added to this microconference based on the CfP submissions.
MC leads
Leon Romanovsky <leon@leon.nu>, Jason Gunthorpe <jgg@mellanox.com>
P2P
- Suggestion with VFIO (Don)
- RDMA as the importer, VFIO as the exporter
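
As a strawman for the importer/exporter split above, here is a minimal sketch of an RDMA driver importing a DMA-BUF that a VFIO-style exporter could hand out for a GPU or NVMe BAR. It uses only the generic in-kernel dma-buf API (dma_buf_get/attach/map_attachment); the helper name rdma_import_dmabuf() and the surrounding error handling are illustrative assumptions, not an existing RDMA core interface.

```c
/*
 * Hedged sketch: RDMA driver as a DMA-BUF importer. The exporter
 * (e.g. VFIO fronting a device BAR) hands out an fd; the importer
 * attaches and maps it to obtain a device-addressable sg_table
 * without ever touching struct page.
 */
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static struct sg_table *rdma_import_dmabuf(struct device *rdma_dev, int fd,
					    struct dma_buf **dmabuf_out,
					    struct dma_buf_attachment **attach_out)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(fd);		/* take a reference on the exporter's buffer */
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	attach = dma_buf_attach(dmabuf, rdma_dev); /* register ourselves as an importer */
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return ERR_CAST(attach);
	}

	/* The exporter builds the DMA mapping; P2P vs. system memory is its decision. */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return ERR_CAST(sgt);
	}

	*dmabuf_out = dmabuf;
	*attach_out = attach;
	return sgt;	/* program the NIC's MR translation from this sg_table */
}
```

The appeal of this split is that the RDMA side never needs struct pages or get_user_pages for the peer's memory; revocation and movement stay the exporter's problem.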
get_user_pages() and friends
- Discussion on future GUP, required to support P2P
- GUP to SGL?
- Non struct page based GUP
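
For context on the "GUP to SGL" question, below is a simplified sketch of the pattern most RDMA drivers implement today: pin the user pages with get_user_pages_fast() and then build a scatterlist with sg_alloc_table_from_pages(). The helper name pin_user_range_to_sgl() is hypothetical, and real umem code pins in chunks and charges RLIMIT_MEMLOCK; the point is that this two-step dance is what a combined GUP-to-SGL (or non-struct-page) interface would replace.

```c
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int pin_user_range_to_sgl(unsigned long uaddr, size_t len,
				 struct sg_table *sgt)
{
	unsigned long npages = DIV_ROUND_UP(offset_in_page(uaddr) + len,
					    PAGE_SIZE);
	struct page **pages;
	int pinned, ret;

	pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* Pin the pages so they cannot move underneath the NIC. */
	pinned = get_user_pages_fast(uaddr & PAGE_MASK, npages, FOLL_WRITE,
				     pages);
	if (pinned < 0) {
		ret = pinned;
		goto out_free;
	}
	if (pinned != npages) {
		ret = -EFAULT;
		goto out_put;
	}

	/* Build an sg_table, coalescing physically contiguous pages. */
	ret = sg_alloc_table_from_pages(sgt, pages, npages,
					offset_in_page(uaddr), len,
					GFP_KERNEL);
	if (ret)
		goto out_put;

	kvfree(pages);
	return 0;	/* next step: dma_map_sg() and program the MR */

out_put:
	while (pinned > 0)
		put_page(pages[--pinned]);
out_free:
	kvfree(pages);
	return ret;
}
```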
hmm_range_fault()
- Integrating RDMA ODP with HMM
- 'DMA fault' for ZONE_DEVICE pages
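
To frame the ODP/HMM integration, here is a rough sketch of a driver page-fault path built on hmm_range_fault() and the mmu_interval_notifier retry loop. The interface has shifted across kernel versions, so treat the exact fields and flags as approximate; odp_fault_range() and the caller-supplied pfns array are assumptions for illustration only.

```c
#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/sched/mm.h>

/* pfns[] has one entry per page in [start, end); HMM fills it in. */
static int odp_fault_range(struct mmu_interval_notifier *notifier,
			   unsigned long start, unsigned long end,
			   unsigned long *pfns, bool write)
{
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = end,
		.hmm_pfns = pfns,
		.default_flags = HMM_PFN_REQ_FAULT |
				 (write ? HMM_PFN_REQ_WRITE : 0),
	};
	struct mm_struct *mm = notifier->mm;
	int ret;

	if (!mmget_not_zero(mm))
		return -EINVAL;

	do {
		range.notifier_seq = mmu_interval_read_begin(notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(mm);
		if (ret) {
			if (ret == -EBUSY)
				continue;	/* collided with an invalidation; retry */
			break;
		}
		/*
		 * Here the driver would take the lock that serializes
		 * against its invalidate callback, let the loop condition
		 * re-check the sequence, and then convert pfns[] into DMA
		 * addresses for the NIC's page tables.
		 */
	} while (mmu_interval_read_retry(notifier, range.notifier_seq));

	mmput(mm);
	return ret;
}
```

The open question is how much of this loop can be shared between ODP and other HMM users, and how ZONE_DEVICE pages should report a "DMA fault" through the same path.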
We are on the fifth iteration of upstreaming IBNBD/IBTRS; the latest effort is here: https://lwn.net/Articles/791690/.
We would like an open discussion of the unique features of the driver and the library, whether and how they benefit the RDMA ecosystem, and what the next steps should be to get them upstream.
A face-to-face discussion about action items...
As memory sizes grow, so do the sizes of the data transferred between RDMA devices. Generally, the operating system needs to track the state of each piece of memory at page granularity, which on Intel x86 is 4 KB. This is also tied to the hardware's memory-management features: the processor's page tables as well as the MMU of the RDMA NIC.
The overhead of the...
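
As a concrete illustration of that per-page overhead, the userspace sketch below registers a hugetlb-backed buffer with ibv_reg_mr(), so both the CPU page tables and, hardware permitting, the NIC's MMU track far fewer translation entries than with 4 KB pages. Whether the provider actually collapses the registration into larger NIC page-table entries is device- and driver-dependent, and the helper shown (reg_hugepage_buffer()) is hypothetical.

```c
#include <infiniband/verbs.h>
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

#define BUF_SIZE (1UL << 30)	/* 1 GiB: 262144 x 4 KiB pages vs. 512 x 2 MiB pages */

static struct ibv_mr *reg_hugepage_buffer(struct ibv_pd *pd)
{
	/* Requires huge pages to be reserved, e.g. via vm.nr_hugepages. */
	void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	struct ibv_mr *mr;

	if (buf == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return NULL;
	}

	mr = ibv_reg_mr(pd, buf, BUF_SIZE,
			IBV_ACCESS_LOCAL_WRITE |
			IBV_ACCESS_REMOTE_READ |
			IBV_ACCESS_REMOTE_WRITE);
	if (!mr) {
		perror("ibv_reg_mr");
		munmap(buf, BUF_SIZE);
		return NULL;
	}
	return mr;	/* caller: ibv_dereg_mr(), then munmap() */
}
```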