
LRU Cache

Jaewoong Eum (skydoves) · 10 min read

An LRU (Least Recently Used) cache is a fixed-capacity data structure that evicts the least recently accessed entry when the capacity is exceeded. It provides O(1) time complexity for both retrieval and insertion by combining a hash map for key lookups with a doubly linked list for tracking access order. LRU caches are fundamental to image-loading libraries, database query caching, and memory management throughout the Android platform. By the end of this lesson, you will be able to:

  • Explain the eviction strategy and data structure design of an LRU cache.
  • Describe how Android's LruCache class works and how it is used.
  • Implement a basic LRU cache using LinkedHashMap.
  • Identify real-world uses of LRU caching in Android libraries and the framework.
  • Distinguish between capacity-based eviction and time-based expiration in caching.

How an LRU Cache Works

The cache maintains two invariants. First, every entry can be looked up by key in constant time through a hash map. Second, entries are ordered by access recency through a doubly linked list. When an entry is accessed or inserted, it moves to the head of the list. When the cache exceeds capacity, the entry at the tail of the list is evicted because it is the least recently used.

This combination gives O(1) performance for get, put, and eviction. The hash map provides direct access to the node, and the linked list provides constant-time reordering because moving a node only requires updating a fixed number of pointers.
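
The hash-map-plus-doubly-linked-list design can be sketched directly in Kotlin. This is a minimal illustration rather than production code; the names NodeLruCache and Node are hypothetical, and sentinel head and tail nodes are used to avoid null checks at the list boundaries:

```kotlin
class NodeLruCache<K, V>(private val capacity: Int) {
    private class Node<K, V>(val key: K?, var value: V?) {
        var prev: Node<K, V>? = null
        var next: Node<K, V>? = null
    }

    // Hash map gives O(1) lookup of the node holding each key.
    private val map = HashMap<K, Node<K, V>>()

    // Sentinel nodes: head side = most recently used,
    // tail side = least recently used.
    private val head = Node<K, V>(null, null)
    private val tail = Node<K, V>(null, null)

    init {
        head.next = tail
        tail.prev = head
    }

    private fun unlink(node: Node<K, V>) {
        node.prev?.next = node.next
        node.next?.prev = node.prev
    }

    private fun moveToHead(node: Node<K, V>) {
        node.prev = head
        node.next = head.next
        head.next?.prev = node
        head.next = node
    }

    fun get(key: K): V? {
        val node = map[key] ?: return null
        // Accessing an entry makes it the most recently used.
        unlink(node)
        moveToHead(node)
        return node.value
    }

    fun put(key: K, value: V) {
        map[key]?.let { existing ->
            existing.value = value
            unlink(existing)
            moveToHead(existing)
            return
        }
        if (map.size >= capacity) {
            // The node just before the tail sentinel is the LRU entry.
            val lru = tail.prev!!
            unlink(lru)
            lru.key?.let { map.remove(it) }
        }
        val node = Node(key, value)
        map[key] = node
        moveToHead(node)
    }
}
```

With capacity 2, putting a and b, touching a, then putting c evicts b, because b is the entry that was accessed least recently.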

The eviction policy is deterministic: the entry that has not been accessed for the longest time is always the one removed. This makes LRU caches predictable and easy to reason about, which is important for debugging cache-related performance issues. The trade-off is that LRU does not consider access frequency, so an entry accessed once a long time ago and an entry accessed a thousand times are treated the same way once a more recent access displaces them.
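
That frequency-blindness shows up in a short trace. The sketch below uses an access-ordered LinkedHashMap directly; the function name evictionCandidate is illustrative:

```kotlin
fun evictionCandidate(): String {
    // accessOrder = true: iteration runs least → most recently used.
    val map = LinkedHashMap<String, Int>(16, 0.75f, true)
    map["hot"] = 1
    repeat(1000) { map["hot"] }   // read a thousand times
    map["cold"] = 2               // read once, but more recently
    // The first key in iteration order is the one LRU would evict next.
    return map.keys.first()
}
```

Despite its thousand reads, "hot" is the eviction candidate, because "cold" was touched more recently.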

Implementation with LinkedHashMap

On the JVM, LinkedHashMap (which Kotlin exposes directly) supports access-order tracking when constructed with its accessOrder parameter set to true. This makes it a natural foundation for an LRU cache:

class LruCache<K, V>(private val capacity: Int) {
    // accessOrder = true keeps iteration order from least to most
    // recently used, so the eldest entry is always first.
    private val map = LinkedHashMap<K, V>(
        capacity, 0.75f, true
    )

    fun get(key: K): V? = map[key]

    fun put(key: K, value: V) {
        // Evict only when inserting a new key would exceed capacity;
        // updating an existing key never triggers eviction.
        if (map.size >= capacity && !map.containsKey(key)) {
            val eldest = map.entries.iterator().next().key
            map.remove(eldest)
        }
        map[key] = value
    }
}

When accessOrder is true, every get or put call moves the accessed entry to the end of the iteration order. The entry at the beginning of the iterator is the least recently used. On insertion, if the capacity is exceeded, the cache removes that entry.
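
An equivalent construction relies on LinkedHashMap's removeEldestEntry hook, which the map invokes after each insertion; returning true tells the map to drop its least recently used entry itself, so put needs no manual eviction logic. The class name HookedLruCache is illustrative:

```kotlin
class HookedLruCache<K, V>(private val capacity: Int) :
    LinkedHashMap<K, V>(capacity, 0.75f, true) {

    // Called by LinkedHashMap after every insertion; returning true
    // removes the eldest (least recently used) entry automatically.
    override fun removeEldestEntry(
        eldest: MutableMap.MutableEntry<K, V>
    ): Boolean {
        return size > capacity
    }
}
```

Because the class is itself a map, reads and writes go through the normal indexing operators, and eviction happens as a side effect of put.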

Android's LruCache Class

The Android SDK provides android.util.LruCache, which wraps a LinkedHashMap with thread-safe access and a size-based eviction policy. Instead of counting entries, it tracks total size through an overridable sizeOf method:

val bitmapCache = object : LruCache<String, Bitmap>(
    maxSizeBytes
) {
    // Measure entries in bytes so maxSize acts as a byte budget.
    override fun sizeOf(key: String, value: Bitmap): Int {
        return value.byteCount
    }

    override fun entryRemoved(
        evicted: Boolean, key: String,
        oldValue: Bitmap, newValue: Bitmap?
    ) {
        // Optional cleanup when an entry is evicted
    }
}
