
ML-Augmented CPU Scheduling: RR vs CFS

Following my last post on ML in LRU, I wanted to explore a somewhat novel idea:

What if we applied machine learning inside the CPU scheduler?

I built a Python simulation and fed it two classic workloads to test how ML-enhanced Round-Robin scheduling compares to the Linux default, CFS.


Workloads

Two classic workloads drive the simulation, matching the plots in the Results section below: a bursty workload and a heavy workload.
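
Here is a minimal sketch of a task model and workload generators in the spirit of the simulation. The `Task` fields, task counts, burst spacing, and duration ranges are illustrative assumptions, not the values the simulation actually used.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Task:
    pid: int
    arrival: int              # arrival time (ms)
    duration: int             # total CPU time needed (ms)
    remaining: int = field(init=False)
    consumed: int = 0         # CPU time received so far (ms)
    slices: int = 0           # number of time slices received

    def __post_init__(self):
        self.remaining = self.duration

def bursty_workload(n_bursts=10, tasks_per_burst=8, gap=100):
    """Clumps of short tasks arriving together (illustrative parameters)."""
    tasks, pid = [], 0
    for burst in range(n_bursts):
        t = burst * gap
        for _ in range(tasks_per_burst):
            tasks.append(Task(pid, t, random.randint(2, 15)))
            pid += 1
    return tasks

def heavy_workload(n_tasks=20):
    """Fewer, longer-running tasks spread out over time (illustrative)."""
    return [Task(i, i * 10, random.randint(50, 200)) for i in range(n_tasks)]
```
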
Schedulers Compared

| Scheduler | Description |
| --- | --- |
| CFS | Linux default — picks the task with the least CPU time so far (fair-share) and automatically shrinks time slices as the queue grows |
| Round-Robin (RR) | First-in, first-out with fixed time slices (tested q = 5, 10, 20 ms) |
| RR + ML | Uses an online classifier to move likely-to-finish tasks to the front of the queue |
| RR + 2×ML | Adds a regressor that predicts task duration and adjusts the quantum on the fly (clamped to 2–20 ms) |
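
To make the two ML variants concrete, here is a minimal sketch of how they could be wired together, reusing the `Task` class from the workload sketch above. The feature set, the choice of scikit-learn's `SGDClassifier`/`SGDRegressor` as the online learners, and the training signals are my assumptions about a reasonable implementation, not the exact code behind the plots.

```python
import numpy as np
from collections import deque
from sklearn.linear_model import SGDClassifier, SGDRegressor

Q_MIN, Q_MAX = 2, 20  # quantum bounds in ms, per the table above

class MLRoundRobin:
    """Round-robin queue with two online models (assumed design, see above).

    Classifier (RR + ML): tasks predicted to finish within one slice are
    promoted to the front of the queue.
    Regressor (RR + 2×ML): predicts how much CPU time a task still needs
    and sizes its quantum to match, clamped to [Q_MIN, Q_MAX].
    """

    def __init__(self, base_quantum=10):
        self.queue = deque()
        self.clf = SGDClassifier()  # online binary model: finishes this slice?
        self.reg = SGDRegressor()   # online model: remaining CPU time (ms)
        self.base_quantum = base_quantum
        self.clf_ready = False      # becomes True after the first partial_fit
        self.reg_ready = False

    @staticmethod
    def _features(task):
        # Only observable signals: CPU time consumed so far and slice count.
        return np.array([[task.consumed, task.slices]], dtype=float)

    def _promote_likely_finishers(self):
        # Move tasks the classifier expects to finish soon to the queue front.
        if not self.clf_ready:
            return
        likely = [t for t in self.queue
                  if self.clf.predict(self._features(t))[0] == 1]
        for t in reversed(likely):  # reversed() preserves their relative order
            self.queue.remove(t)
            self.queue.appendleft(t)

    def run_one_slice(self):
        # Assumes a non-empty queue; the simulation loop handles arrivals.
        self._promote_likely_finishers()
        task = self.queue.popleft()
        x = self._features(task)

        # 2×ML variant: let the regressor size the quantum.
        if self.reg_ready:
            quantum = int(np.clip(self.reg.predict(x)[0], Q_MIN, Q_MAX))
        else:
            quantum = self.base_quantum

        ran = min(quantum, task.remaining)
        task.remaining -= ran
        task.consumed += ran
        task.slices += 1
        finished = task.remaining == 0

        # Online updates, labelled with what actually happened this slice.
        self.clf.partial_fit(x, [int(finished)], classes=[0, 1])
        self.clf_ready = True
        if finished:
            # The task's true remaining time at dispatch is now known: `ran`.
            self.reg.partial_fit(x, [float(ran)])
            self.reg_ready = True
        else:
            self.queue.append(task)
        return task, ran, finished
```

One design note on this sketch: training the regressor only on slices where the task actually finished keeps the target observable, at the cost of biasing it toward short remaining times; the classifier, by contrast, gets a label after every slice.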

Results

Bursty Workload (Lower is Better)

[Figure: bursty-schedulers]

Heavy Workload (Lower is Better)

[Figure: heavy-schedulers]

Bursty Workload — RR vs RR + 2×ML

[Figure: bursty-compare]

Heavy Workload — RR vs RR + 2×ML

[Figure: heavy-compare]


Takeaways

Bursty Workload:

Heavy Workload:


Final Lessons

CFS is extremely well-designed — a solid default.
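
For contrast, the fair-share rule described in the table above is simple to sketch, reusing the `Task` objects from the workload sketch: always run the task with the least CPU time so far, with a slice that shrinks as the run queue grows. The period and floor constants here are illustrative stand-ins, not the kernel's real tunables.

```python
def cfs_pick_and_slice(runqueue, period=48, min_slice=2):
    """CFS-flavoured selection (illustrative constants, not kernel tunables).

    Picks the task with the least CPU time consumed so far and grants a
    time slice that shrinks as the queue grows, mirroring the table above.
    """
    task = min(runqueue, key=lambda t: t.consumed)      # fair-share pick
    slice_ms = max(min_slice, period // len(runqueue))  # auto-shrinking slice
    return task, slice_ms
```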