Exploring the Internal Mechanisms of Landscapist Core
Landscapist Core is a standalone image loading engine built from scratch for Kotlin Multiplatform. Unlike Landscapist's wrappers around Coil, Glide, and Fresco,
Landscapist Core handles fetching, caching, decoding, and transformations internally. This eliminates platform dependencies and provides fine-grained control over every aspect of image loading.
In this article, you'll explore the internal architecture of Landscapist Core. You'll examine how the Landscapist class orchestrates the loading pipeline; how TwoTierMemoryCache gives evicted items a second chance through weak references; how DecodeScheduler prioritizes visible images over background loads; how progressive decoding improves perceived performance; and how memory pressure handling keeps the app responsive under constrained conditions.
The Landscapist orchestrator
The Landscapist class is the main entry point for image loading. It coordinates fetching, caching, decoding, and transformation into a unified pipeline:
public class Landscapist private constructor(
    public val config: LandscapistConfig,
    private val memoryCache: MemoryCache,
    private val diskCache: DiskCache?,
    private val fetcher: ImageFetcher,
    private val decoder: ImageDecoder,
    private val dispatcher: CoroutineDispatcher,
    public val requestManager: RequestManager = RequestManager(),
    public val memoryPressureManager: MemoryPressureManager = MemoryPressureManager(),
)
Each component has a single responsibility. The memoryCache stores decoded images in memory. The diskCache persists raw image data to storage. The fetcher retrieves images from network or local sources. The decoder converts raw bytes into displayable images. The requestManager tracks active requests for cancellation. The memoryPressureManager responds to system memory warnings.
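The single-responsibility split can be sketched with minimal stand-in interfaces. Everything below (the `*Sketch` names, string-based "decoded images") is illustrative and not the library's real API; the real types carry request context and are suspend-based:

```kotlin
// Illustrative stand-ins for the orchestrator's collaborators.
interface MemoryCacheSketch { fun get(key: String): String?; fun put(key: String, image: String) }
interface FetcherSketch { fun fetch(url: String): ByteArray }
interface DecoderSketch { fun decode(bytes: ByteArray): String }

// The orchestrator owns no logic beyond wiring the pieces together,
// so each collaborator can be swapped or tested in isolation.
class LoaderSketch(
    val memoryCache: MemoryCacheSketch,
    val fetcher: FetcherSketch,
    val decoder: DecoderSketch,
)

fun main() {
    val loader = LoaderSketch(
        memoryCache = object : MemoryCacheSketch {
            private val map = mutableMapOf<String, String>()
            override fun get(key: String) = map[key]
            override fun put(key: String, image: String) { map[key] = image }
        },
        fetcher = object : FetcherSketch {
            override fun fetch(url: String) = url.encodeToByteArray()
        },
        decoder = object : DecoderSketch {
            override fun decode(bytes: ByteArray) = "decoded:${bytes.size}B"
        },
    )
    // fetch -> decode, each step behind its own seam.
    println(loader.decoder.decode(loader.fetcher.fetch("https://example.com/cat.jpg")))
}
```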
The loading pipeline
The load function implements a three stage lookup with progressive enhancement:
public fun load(request: ImageRequest): Flow<ImageResult> = flow {
    emit(ImageResult.Loading)

    val cacheKey = CacheKey.create(
        model = request.model,
        transformationKeys = request.transformations.map { it.key },
        width = request.targetWidth,
        height = request.targetHeight,
    )

    // 1. Check memory cache (instant)
    if (request.memoryCachePolicy.readEnabled) {
        memoryCache[cacheKey]?.let { cached ->
            emit(ImageResult.Success(data = cached.data, dataSource = DataSource.MEMORY))
            return@flow
        }
    }

    // 2. Check disk cache
    if (request.diskCachePolicy.readEnabled && diskCache != null) {
        diskCache.get(cacheKey)?.use { snapshot ->
            val bytes = snapshot.data().buffer().readByteArray()
            // Decode and emit...
        }
    }

    // 3. Fetch from network
    val fetchResult = fetcher.fetch(request)
    // Process result...
}.flowOn(dispatcher)
The pipeline follows a predictable order: memory cache first (instant), disk cache second (fast I/O), network last (slow). Each stage can be enabled or disabled through CachePolicy, allowing fine-grained control for special cases like forcing a refresh or skipping caching entirely.
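The staged lookup and the CachePolicy switches can be sketched in plain Kotlin. Everything here (`PipelineSketch`, the simplified `CachePolicy`, string stand-ins for image data) is illustrative, not the library's real API:

```kotlin
// Simplified policy: the real ImageRequest policies also cover cache writes.
data class CachePolicy(val readEnabled: Boolean = true, val writeEnabled: Boolean = true)

class PipelineSketch(
    private val memory: MutableMap<String, String> = mutableMapOf(),
    private val disk: MutableMap<String, String> = mutableMapOf(),
    private val fetch: (String) -> String,
) {
    /** Returns the image data plus which tier satisfied the request. */
    fun load(
        key: String,
        memoryPolicy: CachePolicy = CachePolicy(),
        diskPolicy: CachePolicy = CachePolicy(),
    ): Pair<String, String> {
        // 1. Memory tier: instant.
        if (memoryPolicy.readEnabled) memory[key]?.let { return it to "MEMORY" }
        // 2. Disk tier: fast I/O; backfill the faster tier on a hit.
        if (diskPolicy.readEnabled) disk[key]?.let { cached ->
            if (memoryPolicy.writeEnabled) memory[key] = cached
            return cached to "DISK"
        }
        // 3. Network: slow; populate both caches for next time.
        val fetched = fetch(key)
        if (diskPolicy.writeEnabled) disk[key] = fetched
        if (memoryPolicy.writeEnabled) memory[key] = fetched
        return fetched to "NETWORK"
    }
}

fun main() {
    val pipeline = PipelineSketch(fetch = { url -> "bytes-of-$url" })
    println(pipeline.load("a.png").second) // NETWORK
    println(pipeline.load("a.png").second) // MEMORY
    // Forcing a refresh: disable cache reads so the fetch runs again.
    val noRead = CachePolicy(readEnabled = false)
    println(pipeline.load("a.png", noRead, noRead).second) // NETWORK
}
```

Disabling only the read side of a policy is what makes "refresh but keep caching" possible: the lookup is skipped, yet the freshly fetched result is still written back.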
Cache key generation
The CacheKey uniquely identifies a cached image based on all factors that affect its appearance:
val cacheKey = CacheKey.create(
    model = request.model,
    transformationKeys = request.transformations.map { it.key },
    width = request.targetWidth,
    height = request.targetHeight,
)
Two requests for the same URL but different sizes produce different cache keys. Two requests with the same URL and size but different transformations also produce different keys. This ensures the cache returns the exact image variant requested, avoiding incorrect results when transformations or sizes differ.
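A minimal sketch makes the idea concrete: every factor that changes the decoded output participates in the key. The field names and string-concatenation scheme below are assumptions for illustration; the real CacheKey may hash its components instead:

```kotlin
// Illustrative cache key: model + transformations + target size.
data class CacheKeySketch(
    val model: String,
    val transformationKeys: List<String>,
    val width: Int,
    val height: Int,
) {
    /** A stable string key derived from every appearance-affecting input. */
    fun memoryKey(): String =
        "$model|${transformationKeys.joinToString(",")}|${width}x$height"
}

fun main() {
    val url = "https://example.com/cat.jpg"
    val thumb = CacheKeySketch(url, listOf("circle"), 128, 128)
    val full = CacheKeySketch(url, listOf("circle"), 1024, 1024)
    val plain = CacheKeySketch(url, emptyList(), 128, 128)

    // Same URL, but different sizes or transformations -> different keys.
    println(thumb.memoryKey() == full.memoryKey())  // false
    println(thumb.memoryKey() == plain.memoryKey()) // false
}
```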
Two-tier memory cache
The TwoTierMemoryCache addresses a common caching problem: when the cache is full, evicted items are lost even if they're still referenced elsewhere in the app. The two-tier approach provides a "second chance" through weak references:
public class TwoTierMemoryCache(
    private var _maxSize: Long,
    private val weakReferencesEnabled: Boolean = true,
) : MemoryCache {

    private val strongCache = linkedMapOf<String, CachedImage>()
    private val weakCache = mutableMapOf<String, WeakRef<CachedImage>>()
    private val currentSize = atomic(0L)
The strong cache holds images within the memory budget using LRU eviction. The weak cache holds references to recently evicted images without counting against the budget.
How eviction works
When an item is evicted from the strong cache, it moves to the weak cache:
private fun evictOldest() {
    val iterator = strongCache.entries.iterator()
    if (iterator.hasNext()) {
        val eldest = iterator.next()
        iterator.remove()
        currentSize.addAndGet(-eldest.value.sizeBytes)

        // Move to weak cache if enabled
        if (weakReferencesEnabled) {
            weakCache[eldest.key] = WeakRef(eldest.value)
        }
    }
}
The weak reference does not prevent garbage collection. If the system needs memory, the GC reclaims the image. But if the image is still in memory (perhaps held by a UI component), the weak reference can still retrieve it.
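On the JVM this behavior maps directly onto java.lang.ref.WeakReference; on other Kotlin Multiplatform targets the library's WeakRef would need a platform-specific implementation. A JVM-only sketch of the idea:

```kotlin
import java.lang.ref.WeakReference

fun main() {
    // Stands in for a decoded image that the UI still holds strongly.
    val image = ByteArray(1024)
    val weak = WeakReference(image)

    // While any strong reference exists (here: the local `image` variable),
    // the weak reference still resolves to the object.
    println(weak.get() != null) // true

    // Once all strong references are gone, the GC is free to reclaim the
    // object and weak.get() may return null. That is exactly the point:
    // the weak tier never blocks reclamation; it only recovers objects
    // that happen to have survived.
}
```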
Cache hit path
When looking up a key, the cache checks both tiers:
override fun get(key: CacheKey): CachedImage? = synchronized(lock) {
    val memoryKey = key.memoryKey

    // First check strong cache
    strongCache.remove(memoryKey)?.let { image ->
        strongCache[memoryKey] = image // Re-insert to update access order
        return@synchronized image
    }

    // Check weak cache if enabled
    if (weakReferencesEnabled) {
        weakCache[memoryKey]?.get()?.let { image ->
            // Promote back to strong cache
            weakCache.remove(memoryKey)
            evictIfNeeded(image.sizeBytes)
            strongCache[memoryKey] = image
            currentSize.addAndGet(image.sizeBytes)
            return@synchronized image
        }
        // Clean up null weak reference
        weakCache.remove(memoryKey)
    }

    null
}
If found in the weak cache, the item is promoted back to the strong cache. This ensures frequently accessed items stay in the strong tier even under memory pressure. The promotion triggers eviction if necessary, maintaining the memory budget.
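The evict-then-promote cycle can be demonstrated end to end with a deliberately tiny strong tier. This sketch is JVM-only and omits byte-size accounting and locking; `MiniTwoTier` and its entry-count capacity are illustrative stand-ins, not the real TwoTierMemoryCache:

```kotlin
import java.lang.ref.WeakReference

class MiniTwoTier(private val strongCapacity: Int) {
    private val strong = LinkedHashMap<String, ByteArray>() // insertion-ordered LRU
    private val weak = HashMap<String, WeakReference<ByteArray>>()

    fun put(key: String, value: ByteArray) {
        strong[key] = value
        while (strong.size > strongCapacity) {
            val (evictedKey, evictedValue) = strong.entries.first().toPair()
            strong.remove(evictedKey)
            weak[evictedKey] = WeakReference(evictedValue) // second chance
        }
    }

    fun get(key: String): ByteArray? {
        strong.remove(key)?.let { hit ->
            strong[key] = hit // re-insert to refresh LRU order
            return hit
        }
        weak.remove(key)?.get()?.let { revived ->
            put(key, revived) // promote back to the strong tier
            return revived
        }
        return null
    }

    fun inStrongTier(key: String): Boolean = key in strong
}

fun main() {
    val cache = MiniTwoTier(strongCapacity = 1)
    val a = ByteArray(8); val b = ByteArray(8) // local refs keep weak entries alive here
    cache.put("a", a)
    cache.put("b", b)                 // evicts "a" into the weak tier
    println(cache.inStrongTier("a"))  // false
    println(cache.get("a") === a)     // true: recovered via the weak tier
    println(cache.inStrongTier("a"))  // true: promoted back (evicting "b")
}
```

Note that the promotion itself can trigger an eviction, just as in the real implementation: recovering "a" pushes "b" into the weak tier, preserving the budget invariant at all times.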