Description
The Linux Plumbers 2019 Scheduler Microconference covers all scheduler topics that are not real-time-specific.
Potential topics:
- Load Balancer Rework - prototype
- Idle Balance optimizations
- Flattening the group scheduling hierarchy
- Core scheduling
- Proxy Execution for CFS
- Improving scheduling latency with SCHED_IDLE tasks
- Scheduler tunables - Mobile vs Server
- nohz
- LISA for scheduler verification
We plan to continue the discussions that started at OSPM in May '19 and to reach a wider audience beyond the core scheduler developers at LPC.
Potential attendees:
Juri Lelli
Vincent Guittot
Subhra Mazumdar
Daniel Bristot
Dhaval Giani
Peter Zijlstra
Paul Turner
Rik van Riel
Patrick Bellasi
Morten Rasmussen
Dietmar Eggemann
Steven Rostedt
Thomas Gleixner
Viresh Kumar
Phil Auld
Waiman Long
Josef Bacik
Joel Fernandes
Paul McKenney
Alessio Balsini
Frederic Weisbecker
This microconference covers scheduler topics that are not RT-specific, so it should take place either immediately before or after the RT microconference.
MC leads:
Juri Lelli juri.lelli@redhat.com, Vincent Guittot vincent.guittot@linaro.org, Daniel Bristot de Oliveira bristot@redhat.com, Subhra Mazumdar subhra.mazumdar@oracle.com, Dhaval Giani dhaval.giani@gmail.com
There have been two different approaches to core scheduling proposed on the LKML over the past year. One was the coscheduling approach by Jan Schönherr, originally posted at https://lkml.org/lkml/2018/9/7/1521, with the next version posted at https://lkml.org/lkml/2018/10/19/859
Upstream chose a different route and decided to modify CFS to do only "core scheduling". Vineeth picked up the...
Dmitry Vyukov's testing work identified some (ab)uses of sched_setattr() that can result in SCHED_DEADLINE tasks starving RCU's kthreads for extended time periods: not milliseconds, not seconds, not minutes, not even hours, but days. Given that RCU CPU stall warnings are issued whenever an RCU grace period fails to complete within a few tens of seconds, the system did not suffer silently. ...
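For background, here is a minimal userspace sketch of the kind of sched_setattr() call under discussion; it is illustrative only, not Dmitry's actual reproducer, and the 900ms/1s parameters are assumptions chosen to fit under the default admission-control limit. A SCHED_DEADLINE reservation whose runtime nearly fills its period leaves very little CPU bandwidth for everything else, including RCU's kthreads.

/*
 * Illustrative sketch only: create a SCHED_DEADLINE task that
 * reserves ~90% of one CPU (900 ms of runtime every 1 s period).
 * The parameters are hypothetical; they merely show the shape of
 * the calls involved.
 */
#include <linux/sched.h>      /* SCHED_DEADLINE */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

struct sched_attr {           /* no glibc wrapper, so define it here */
        uint32_t size;
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;
        uint32_t sched_priority;
        uint64_t sched_runtime;
        uint64_t sched_deadline;
        uint64_t sched_period;
};

int main(void)
{
        struct sched_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.sched_policy   = SCHED_DEADLINE;
        attr.sched_runtime  = 900000000ULL;   /* 900 ms */
        attr.sched_deadline = 1000000000ULL;  /*   1 s  */
        attr.sched_period   = 1000000000ULL;  /*   1 s  */

        if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
                perror("sched_setattr");
                return 1;
        }

        for (;;)
                ;       /* spin: consume the reserved bandwidth */
}

A task that exceeds its runtime is throttled until the next period, so the spin loop consumes exactly the reserved bandwidth; the problematic cases involve combinations of parameters and calls that defeat this containment.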
The CFS load_balance code has become more and more complex over the years and has reached the point where its policy sometimes can't be explained. Furthermore, the available metrics have evolved, and load balancing doesn't always take full advantage of them when calculating the imbalance. It's probably a good time to rework the load-balance code as proposed in this...
There is a presentation in the refereed track on flattening the CPU controller runqueue hierarchy, but it may be useful to have a discussion on the same topic in the scheduler microconference.
The Linux kernel scheduler represents a system's topology by means of
scheduler domains. In the common case, these domains map to the cache topology
of the system.
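For reference, a small sketch (assuming a kernel built with CONFIG_SCHED_DEBUG, which exposes the hierarchy under /proc/sys/kernel/sched_domain/) that prints the domain levels seen by CPU 0:

/*
 * Walk /proc/sys/kernel/sched_domain/cpu0/domain*/name and print
 * each scheduler-domain level, from the smallest span upward.
 */
#include <stdio.h>

int main(void)
{
        char path[128], name[64];
        FILE *f;

        for (int level = 0; ; level++) {
                snprintf(path, sizeof(path),
                         "/proc/sys/kernel/sched_domain/cpu0/domain%d/name",
                         level);
                f = fopen(path, "r");
                if (!f)
                        break;  /* no more domain levels */
                if (fgets(name, sizeof(name), f))
                        printf("domain%d: %s", level, name);
                fclose(f);
        }
        return 0;
}

On a machine like the ThunderX described below, one would expect something like an MC level spanning each node and a NUMA level spanning the two nodes.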
The Cavium ThunderX is an ARMv8-A 2-node NUMA system, each node containing
48 CPUs (no hyperthreading). Each CPU has its own L1 cache, and CPUs within
the same node share the same L2 cache.
Running some memory-intensive...