OS-research-lab

ML-Augmented LRU vs Traditional Page Replacement Strategies

While working with virtual memory systems, I began to wonder:

Could a lightweight machine learning model actually out-evict traditional strategies like LRU?

With the rise of ML, it seemed natural to explore smarter page eviction informed by historical access patterns. This simulation compares four page replacement policies:
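Before comparing policies, it helps to pin down the baseline. A minimal LRU trace simulator (my own sketch, not the repo's code; the `lru_hit_rate` name and the toy trace are illustrative) can be written with an `OrderedDict`:

```python
from collections import OrderedDict

def lru_hit_rate(trace, capacity):
    """Replay a page-reference trace through an LRU cache; return the hit rate."""
    cache = OrderedDict()                  # keys kept in recency order, oldest first
    hits = 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # refresh: now most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return hits / len(trace)

# Heavy reuse: LRU captures the working set.
print(lru_hit_rate([0, 1, 2, 0, 1, 2, 3, 0, 1, 2], capacity=4))  # -> 0.6
```

Every policy below plugs into the same loop; only the eviction rule on the miss path changes.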


Policies Tested


Simulation Setup


Results

Cache Size: 64

[figure: Cache 64]

Cache Size: 128

[figure: Cache 128]

Cache Size: 256

[figure: Cache 256]

Cache Size: 512

[figure: Cache 512]

Hit Rate Comparison

[figure: Hit Rate Summary]


Key Observations


Takeaways (from an OS Perspective)


Limitations


Final Thought

A model trained on yesterday’s access pattern can become obsolete the moment the user opens a different app.
Without a dominant reuse pattern, ML rarely beats a robust LRU.
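That staleness argument can be demonstrated with a toy experiment (a sketch of my own, not the simulator behind the results above): "train" a frequency model on yesterday's trace, use its scores to pick eviction victims today, and compare against plain LRU after the working set shifts.

```python
from collections import OrderedDict

def lru_hits(trace, capacity):
    """Plain LRU: evict the least recently used resident page."""
    cache, hits = OrderedDict(), 0
    for p in trace:
        if p in cache:
            hits += 1
            cache.move_to_end(p)
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)
            cache[p] = None
    return hits

def stale_model_hits(trace, capacity, score):
    """Evict the resident page the 'model' scores lowest (least likely to be reused)."""
    cache, hits = set(), 0
    for p in trace:
        if p in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(min(cache, key=lambda q: score.get(q, 0)))
            cache.add(p)
    return hits

# Yesterday: pages 0-3 were hot, so the frequency "model" learned to keep them.
yesterday = [0, 1, 2, 3] * 250
score = {}
for p in yesterday:
    score[p] = score.get(p, 0) + 1

# Today: the user opened a different app. Pages 4-7 are hot now,
# with only an occasional touch of yesterday's pages.
today = []
for i in range(200):
    today.append(4 + i % 4)
    if i % 25 == 0:
        today.append(i % 4)

cap = 4
print(lru_hits(today, cap), stale_model_hits(today, cap, score))
```

On a trace like this the stale model keeps yesterday's high-scoring pages pinned and thrashes on today's working set, while LRU adapts within a few references; that is exactly the mechanism behind the final thought above.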