Multi Level Image Caching for Continuous Scrolling
Webtoon reader applications present one of the most demanding image-loading challenges on mobile platforms. Users scroll through vertically continuous strips of high-resolution artwork, often spanning dozens of individual bitmap tiles per chapter, and they expect the experience to feel as seamless as flipping a physical page. Any visible loading delay, dropped frame, or blank placeholder breaks the illusion of a single continuous canvas. The caching architecture behind a webtoon reader must therefore solve two problems simultaneously: delivering decoded bitmaps to the UI thread fast enough to prevent jank, and avoiding redundant network downloads when users revisit previously read content.

By the end of this lesson, you will be able to:
- Explain the lookup hierarchy of a two-tier caching system and why each layer exists.
- Describe how an LRU memory cache stores decoded bitmaps and how its eviction policy interacts with scroll position.
- Identify the role of a disk cache in reducing network consumption and enabling offline access.
- Trace the full lifecycle of an image tile from network fetch through disk persistence, memory promotion, and eventual eviction.
- Apply BitmapPool and tile-based decoding techniques to reduce allocation pressure during fast scrolling.
The Caching Hierarchy and Lookup Order
A production webtoon reader resolves every image tile through a strict three-level lookup. When the rendering pipeline determines that a tile is about to enter the visible viewport, it queries each level in order and stops at the first hit:
- Memory cache -- an in-process LRU map of fully decoded Bitmap objects keyed by tile identifier. A hit here costs a hash lookup and a reference copy; the bitmap is already in a format the canvas can draw directly.
- Disk cache -- a bounded directory of compressed image files (JPEG or WebP) stored on the device's internal or external cache partition. A hit avoids a network round-trip but still requires a background decode step before the bitmap can be drawn.
- Network -- the origin server or CDN. This is the slowest path and consumes the user's data budget. After a successful fetch, the raw bytes are written into the disk cache, decoded into a bitmap, and inserted into the memory cache, so that all subsequent requests for the same tile resolve at a faster layer.
This hierarchy means that the first time a user reads a chapter, every tile travels the full network-to-memory path. The second time they scroll past the same tile within the same session, it resolves from memory in microseconds. If they reopen the chapter in a later session after the memory cache has been cleared, the tile resolves from disk in milliseconds rather than requiring another download.
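The lookup-and-promote flow described above can be expressed as a minimal sketch in plain Kotlin. Eviction is omitted here (it is covered in the memory cache section below), and the decode and fetch lambdas are hypothetical stand-ins for the real bitmap decoder and HTTP client; the point is only the order of the three lookups and the write-back on each miss.

```kotlin
// Sketch of the three-level lookup. On a disk or network miss, the result is
// written back into every faster layer so later requests resolve sooner.
class TieredImageCache<V : Any>(
    private val decode: (ByteArray) -> V,          // compressed bytes -> drawable bitmap
    private val fetch: (String) -> ByteArray,      // network fetch by tile key
    private val memory: MutableMap<String, V> = mutableMapOf(),
    private val disk: MutableMap<String, ByteArray> = mutableMapOf()
) {
    fun resolve(key: String): V {
        // 1. Memory: already decoded, cheapest possible hit.
        memory[key]?.let { return it }
        // 2. Disk: decode the compressed bytes, then promote to memory.
        disk[key]?.let { bytes ->
            val decoded = decode(bytes)
            memory[key] = decoded
            return decoded
        }
        // 3. Network: slowest path; persist to disk, decode, promote to memory.
        val bytes = fetch(key)
        disk[key] = bytes
        val decoded = decode(bytes)
        memory[key] = decoded
        return decoded
    }
}
```

A second resolve of the same key within a session never reaches fetch again, which is exactly the behavior the hierarchy is designed to guarantee.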
Memory Cache: Preventing UI Jank
The memory cache is the most important layer for scroll performance. Its job is to ensure that every tile within and immediately adjacent to the visible viewport is already decoded and available for the draw pass, with zero background thread involvement.
The standard implementation is an LruCache sized as a fraction of the application's heap. Because bitmap tiles vary in resolution, the cache must measure capacity in bytes, not entry count. Overriding sizeOf to return the bitmap's footprint in kilobytes (bitmap.byteCount / 1024, matching the kilobyte-denominated cache size) ensures that a single high-resolution panel does not silently consume the entire budget:
```kotlin
// Budget the cache in kilobytes: here, one-eighth of the app's max heap.
val maxMemory = (Runtime.getRuntime().maxMemory() / 1024).toInt()
val cacheSize = maxMemory / 8

val memoryCache = object : LruCache<String, Bitmap>(cacheSize) {
    // Measure each entry by its byte footprint (in KB), not by entry count.
    override fun sizeOf(key: String, bitmap: Bitmap): Int {
        return bitmap.byteCount / 1024
    }
}
```
The LRU eviction policy is a natural fit for vertical scrolling. As the user scrolls downward, tiles near the top of the viewport age out of the recently-used set and are evicted first, while tiles just below the viewport are fetched and inserted. The cache effectively maintains a sliding window of decoded bitmaps centered on the current scroll position. Sizing the cache to hold roughly two to three screens worth of tiles provides enough headroom for predictive prefetch without monopolizing the heap.
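The "two to three screens" guideline translates into a concrete byte budget once the display dimensions are known. A rough sizing sketch, assuming a hypothetical 1080x2400 display and ARGB_8888 bitmaps at 4 bytes per pixel:

```kotlin
// Back-of-envelope budget for a sliding window of decoded screens.
// bytesPerPixel = 4 assumes ARGB_8888; RGB_565 would halve this.
fun cacheBudgetBytes(
    screenWidth: Int,
    screenHeight: Int,
    screens: Int,
    bytesPerPixel: Int = 4
): Long = screenWidth.toLong() * screenHeight * bytesPerPixel * screens

val budget = cacheBudgetBytes(1080, 2400, screens = 3)
// One 1080x2400 ARGB_8888 screen is ~10.4 MB, so three screens is ~31 MB.
```

Comparing this figure against the heap-fraction budget from the snippet above is a quick sanity check: if one-eighth of the heap cannot hold even two screens of decoded pixels, the prefetch window will thrash and scrolling past freshly evicted tiles will force repeated decodes.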