Android & Kotlin Technical Articles
Detailed articles on Android development, Jetpack Compose internals, Kotlin coroutines, and open source library design by skydoves, Google Developer Expert and maintainer of Android libraries with 40M+ annual downloads. Read practical guides on Retrofit, Compose Preview, BottomSheet UI, coroutine compilation, and more.
This is a collection of private, subscriber-first articles written for the Dove Letter by skydoves (Jaewoong). These articles may be published elsewhere, such as Medium, in the future, but they will always be revealed to Dove Letter members first.
This book is designed for Kotlin developers who want to dive deep into Kotlin's fundamentals and internal mechanisms, and leverage that knowledge in their daily work right away.
Kotlin Coroutines have become the standard for asynchronous programming on the JVM, offering developers a way to write sequential, readable code that can pause and resume without blocking threads. Most developers interact with coroutines through familiar APIs like launch, async, and Flow, treating suspend as a language keyword that "just works." But coroutines are not simply a library feature layered on top of the language. They are a compiler level solution, built through the Kotlin compiler's IR lowering pipeline and bytecode generation, that transforms your sequential code into resumable state machines. The suspend keyword triggers a series of compiler transformations that rewrite your function's structure, signature, and control flow before it ever reaches the JVM.

In this article, you'll dive deep into the Kotlin compiler's coroutine machinery, exploring the six stage transformation pipeline that converts a suspend function into a state machine. You'll trace through how the compiler injects hidden continuation parameters through CPS transformation, how it generates continuation classes with the clever sign bit trick for distinguishing fresh calls from resumptions, how the bytecode level transformer collects suspension points and inserts a TABLESWITCH dispatch, how local variables are "spilled" into continuation fields to survive across suspension, and how tail call optimization lets the compiler skip the entire state machine when it can prove every suspension point is a tail call.

## The fundamental problem: How do you make a function resumable?

Consider this suspend function:

```kotlin
suspend fun fetchUserData(): UserData {
  val user = fetchUser()
  val profile = fetchProfile(user.id)
  return UserData(user, profile)
}
```

This looks like ordinary sequential code, but both fetchUser and fetchProfile might perform network requests that take hundreds of milliseconds.
The function must be able to pause at each call, release the thread entirely, and later resume execution at the exact point where it left off, with all local variables intact.

The JVM provides no native mechanism for this. A JVM method is a stack frame, and when a method returns, its stack frame is gone. There is no way to "freeze" a stack frame, release the thread, and later restore it. The function must return to release the thread, but returning destroys the local state.

The Kotlin compiler solves this by transforming each suspend function into a state machine. The function's body is split into segments between suspension points. Local variables are saved into fields of a continuation object before each suspension, and restored after resumption. A label field tracks which segment to execute next, and a TABLESWITCH at the function entry dispatches to the correct segment. The developer writes linear code; the compiler generates the machinery to break it apart and reassemble it on demand.

## The six stage pipeline: From suspend to state machine

The transformation happens across six distinct phases in the JVM backend. Understanding the full pipeline is essential to understanding why each phase exists and what it contributes.

1. SuspendLambdaLowering: Converts suspend lambda expressions into anonymous continuation classes
2. TailCallOptimizationLowering: Identifies suspend calls in tail position and marks them with IrReturn wrappers
3. AddContinuationLowering: The central IR lowering; generates continuation classes, injects $completion parameters, creates static suspend implementations
4. Code generation: Lowers IR to JVM bytecode, placing BeforeSuspendMarker/AfterSuspendMarker instructions around each suspension point
5. CoroutineTransformerMethodVisitor: The bytecode level state machine engine; inserts the TABLESWITCH, spills variables, generates resume paths
6. Tail call optimization check: If all suspension points are tail calls, the state machine is skipped entirely

Let's trace through each phase.

## CPS transformation: The invisible parameter

The foundation of coroutine compilation is Continuation Passing Style (CPS) transformation. Every suspend function, when compiled, receives a hidden additional parameter: the continuation. This continuation represents "what happens next" after the function completes or suspends.

When you write:

```kotlin
suspend fun fetchUser(): User {
  // ...
}
```

The compiler transforms the signature to:

```kotlin
fun fetchUser($completion: Continuation<User>): Any?
```

Two changes happen. First, a $completion parameter of type Continuation is appended. Second, the return type becomes Any?, because the function can now return either the actual result or the special sentinel COROUTINE_SUSPENDED, indicating that the function has paused and will deliver its result later through the continuation.

Looking at how AddContinuationLowering performs this injection:

```kotlin
val continuationParameter = buildValueParameter(function) {
  kind = IrParameterKind.Regular
  name = Name.identifier(SUSPEND_FUNCTION_COMPLETION_PARAMETER_NAME) // "$completion"
  type = continuationType(context).substitute(substitutionMap) // Continuation<RetType>
  origin = JvmLoweredDeclarationOrigin.CONTINUATION_CLASS
}
```

The parameter is inserted before any default argument masks but after all regular parameters. This is invisible in source code but always present in the bytecode. Every call site of a suspend function is also rewritten to pass the current continuation as this extra argument.

## The continuation class: Where state lives

The central artifact of coroutine compilation is the continuation class. For each named suspend function, the compiler generates an inner class that extends ContinuationImpl and holds all the state needed to suspend and resume.
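Before looking at the generated class, the CPS calling convention can be simulated in plain Kotlin. Everything below is an illustrative stand-in with hypothetical names (no kotlinx.coroutines, and not the compiler's output): the function either returns a real value directly, or stashes the continuation and returns a sentinel. These are exactly the two outcomes the Any? return type encodes.

```kotlin
// Illustrative stand-ins for the real machinery (hypothetical names).
object COROUTINE_SUSPENDED

fun interface Continuation<T> {
  fun resumeWith(value: T)
}

// A hand-CPS'd version of "suspend fun fetchUser(): String".
// Returns either a real result or the suspension sentinel.
class FakeUserApi {
  var pending: Continuation<String>? = null
  var cached: String? = null

  fun fetchUser(completion: Continuation<String>): Any {
    cached?.let { return it }      // Fast path: a real result, no suspension.
    pending = completion           // Slow path: remember "what happens next"...
    return COROUTINE_SUSPENDED     // ...and tell the caller we suspended.
  }

  fun deliver(user: String) {
    pending?.resumeWith(user)      // Resume the stored continuation later.
    pending = null
  }
}
```

A caller checks the return value against the sentinel to decide whether the result is already available or will arrive through the continuation, which is the same check the generated bytecode performs after every suspending call.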
Looking at generateContinuationClassForNamedFunction in AddContinuationLowering.kt:

```kotlin
context.irFactory.buildClass {
  name = Name.special("<Continuation>")
  origin = JvmLoweredDeclarationOrigin.CONTINUATION_CLASS
}.apply {
  superTypes += context.symbols.continuationImplClass.owner.defaultType
```
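To see what this generated class buys, here is a hand-rolled approximation, in plain Kotlin, of the state machine the compiler produces for fetchUserData from earlier. It is illustrative only: the real class extends ContinuationImpl, overrides invokeSuspend, and dispatches via a bytecode TABLESWITCH rather than a when, but the label field and the "spilled" local are the same idea.

```kotlin
// Hand-written sketch of a generated state machine (names are illustrative).
class FetchUserDataStateMachine(
  private val fetchUser: () -> String,
  private val fetchProfile: (String) -> String,
) {
  private var label = 0             // which segment runs next
  private var user: String? = null  // "spilled" local: survives across resumptions
  var result: String? = null

  // Each call models one resumption: run the segment selected by `label`,
  // then advance the label, as the generated dispatch would.
  fun resume() {
    when (label) {
      0 -> {
        user = fetchUser()          // segment before the first suspension point
        label = 1
      }
      1 -> {
        val profile = fetchProfile(user!!)  // segment after the first resumption
        result = "UserData($user, $profile)"
        label = 2
      }
      else -> error("Already completed")
    }
  }
}
```

Calling resume() twice walks through both segments, with the user value restored from its field rather than from a (long gone) stack frame.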
Google Maps popularized a bottom sheet pattern that most Android developers recognize immediately: a small panel peeking from the bottom of the screen, expandable to a mid height for quick details, and draggable to full screen for comprehensive information. The user interacts with the map behind the sheet at all times. This pattern looks simple on the surface, but implementing it correctly requires solving several problems: multi state anchoring, non modal interaction, dynamic content adaptation, and nested scroll coordination. Jetpack Compose's standard ModalBottomSheet only supports two states (expanded and hidden) and blocks background interaction with a scrim, making it unsuitable for this use case.

In this article, you'll explore how to build a Google Maps style bottom sheet using [FlexibleBottomSheet](https://github.com/skydoves/flexiblebottomsheet), covering how to configure three expansion states with custom height ratios, how to enable non modal mode so users can interact with the content behind the sheet, how to adapt your UI dynamically based on the sheet's current state, how to control state transitions programmatically, how to handle nested scrolling inside the sheet, and how to wrap content dynamically for variable height sheets.

## Why ModalBottomSheet falls short

Consider the standard Material 3 bottom sheet:

```kotlin
@Composable
fun StandardBottomSheet() {
  ModalBottomSheet(
    onDismissRequest = { /* dismiss */ },
    sheetState = rememberModalBottomSheetState(),
  ) {
    Text("Content here")
  }
}
```

This gives you two states: expanded and hidden. The sheet covers the background with a scrim, blocking all interaction behind it. For a confirmation dialog or action menu, this is fine. But for a Google Maps style experience, you need:

1. Three visible states: A peek height showing a summary, a mid height for details, and a full height for comprehensive content.
2. No scrim: The map behind the sheet must remain fully interactive.
3. Dynamic content: The content should adapt based on the current expansion state.
4. Nested scrolling: Scrollable content inside the fully expanded sheet should scroll naturally, and dragging down from the top of the scroll should collapse the sheet.

[FlexibleBottomSheet](https://github.com/skydoves/flexiblebottomsheet) addresses all of these.

## Setting up a three state bottom sheet
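Before reaching for the composable APIs, it helps to make the anchoring model concrete. The sketch below is plain Kotlin, not FlexibleBottomSheet's code: three hypothetical height ratios are mapped to the top offset the sheet would settle at in each state, which is the core of multi state anchoring.

```kotlin
// Illustrative model of three-state anchoring (not FlexibleBottomSheet's API).
enum class SheetValue { SLIGHTLY_EXPANDED, INTERMEDIATELY_EXPANDED, FULLY_EXPANDED }

data class SheetAnchors(
  val slightlyExpandedRatio: Float = 0.15f,       // peek height
  val intermediatelyExpandedRatio: Float = 0.5f,  // mid height
  val fullyExpandedRatio: Float = 0.9f,           // full height
) {
  // Offset from the top of the screen, in pixels, for each state.
  // A larger ratio means the sheet occupies more of the screen,
  // so its top edge sits closer to y = 0.
  fun offsetFor(value: SheetValue, screenHeightPx: Float): Float = when (value) {
    SheetValue.SLIGHTLY_EXPANDED -> screenHeightPx * (1f - slightlyExpandedRatio)
    SheetValue.INTERMEDIATELY_EXPANDED -> screenHeightPx * (1f - intermediatelyExpandedRatio)
    SheetValue.FULLY_EXPANDED -> screenHeightPx * (1f - fullyExpandedRatio)
  }
}
```

Dragging the sheet then becomes a matter of snapping the current offset to the nearest of these three anchors, which the library handles for you.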
Android's WorkManager has become the recommended solution for persistent, deferrable background work. Unlike transient background operations that live and die with your app process, WorkManager guarantees that enqueued work eventually executes, even if the user force-stops the app, the device reboots, or constraints aren't met yet. While the API appears simple on the surface, the internal machinery reveals sophisticated design decisions around work persistence, dual-scheduler coordination, constraint tracking, process resilience, and state management that span a Room database, multiple scheduler backends, and a carefully orchestrated execution pipeline.

In this article, you'll dive deep into how Jetpack WorkManager works internally, exploring how the singleton is initialized and bootstrapped through AndroidX Startup, how WorkSpec entities persist work metadata in a Room database, how the dual-scheduler system coordinates between GreedyScheduler and SystemJobScheduler, how Processor and WorkerWrapper orchestrate the actual execution of work, how ConstraintTracker monitors system state for constraint satisfaction, how ForceStopRunnable detects app force stops and reschedules work, and how work chaining creates dependency graphs through the Dependency table.

## The fundamental problem: Reliable background execution

Background execution on Android is fundamentally unreliable. The system aggressively kills processes to reclaim memory, Doze mode restricts background activity, and app standby buckets throttle work for rarely-used apps.

A naive approach to background work:

```kotlin
class SyncActivity : AppCompatActivity() {
  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    Thread {
      // Sync data with server
      api.syncAllData()
    }.start()
  }
}
```

This fails in multiple ways. The thread dies when the process is killed. There's no retry mechanism if the network fails. The work doesn't survive device reboots.
There's no way to specify constraints like "only on Wi-Fi" or "only when charging."

You might try using a Service:

```kotlin
class SyncService : Service() {
  override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
    Thread { api.syncAllData() }.start()
    return START_REDELIVER_INTENT
  }

  override fun onBind(intent: Intent?): IBinder? = null // required override, unused here
}
```

This is better: START_REDELIVER_INTENT ensures the Intent is redelivered if the process is killed. But you still have no constraint support, no work chaining, no persistence across reboots, and no observability of work status. You'd need to build all of that yourself.

WorkManager solves this by providing a complete infrastructure for persistent, constraint-aware, observable, chainable background work with guaranteed execution.

## Initialization: The bootstrap sequence

WorkManager initializes itself automatically before your Application.onCreate runs. The entry point is WorkManagerInitializer, which implements AndroidX Startup's Initializer interface:

```java
public final class WorkManagerInitializer implements Initializer<WorkManager> {
  @Override
  public WorkManager create(Context context) {
    Logger.get().debug(TAG, "Initializing WorkManager with default configuration.");
    WorkManager.initialize(context, new Configuration.Builder().build());
    return WorkManager.getInstance(context);
  }

  @Override
  public List<Class<? extends Initializer<?>>> dependencies() {
    return Collections.emptyList();
  }
}
```

AndroidX Startup uses a ContentProvider to trigger initialization before Application.onCreate. This is critical because it ensures WorkManager is ready before any application code runs. The dependencies method returns an empty list, meaning WorkManager has no initialization dependencies on other Startup initializers.
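The Startup contract itself is tiny and can be modeled without the framework. The sketch below is illustrative only; the real Initializer receives a Context in create and declares dependencies as classes rather than instances:

```kotlin
// Simplified model of the AndroidX Startup contract (illustrative, not the real API).
interface Initializer<T : Any> {
  fun create(): T
  fun dependencies(): List<Initializer<*>> = emptyList()
}

class AppInitializer {
  private val components = mutableMapOf<Initializer<*>, Any>()

  // Dependencies are initialized depth-first, each exactly once.
  fun initialize(initializer: Initializer<*>): Any =
    components.getOrPut(initializer) {
      initializer.dependencies().forEach { initialize(it) }
      initializer.create()
    }
}
```

An initializer with an empty dependency list, like WorkManagerInitializer, runs as soon as it is requested; one with dependencies transitively initializes them first.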
## The singleton with dual-lock pattern

WorkManager.initialize delegates to WorkManagerImpl.initialize, which uses a synchronized dual-instance pattern:

```java
public static void initialize(Context context, Configuration configuration) {
  synchronized (sLock) {
    if (sDelegatedInstance != null && sDefaultInstance != null) {
      throw new IllegalStateException("WorkManager is already initialized.");
    }
    if (sDelegatedInstance == null) {
      context = context.getApplicationContext();
      if (sDefaultInstance == null) {
        sDefaultInstance = createWorkManager(context, configuration);
      }
      sDelegatedInstance = sDefaultInstance;
    }
  }
}
```

Two static fields serve different purposes. sDefaultInstance holds the real singleton. sDelegatedInstance enables testing by allowing test code to inject a mock via setDelegate. The sLock object provides thread-safe access. The explicit check for double initialization throws an IllegalStateException with a helpful message guiding developers to disable WorkManagerInitializer in the manifest if they want custom initialization.

## On-demand initialization via Configuration.Provider

When getInstance(Context) is called and no instance exists, WorkManager falls back to on-demand initialization:

```java
public static WorkManagerImpl getInstance(Context context) {
  synchronized (sLock) {
    WorkManagerImpl instance = getInstance();
    if (instance == null) {
      Context appContext = context.getApplicationContext();
      if (appContext instanceof Configuration.Provider) {
        initialize(
            appContext,
            ((Configuration.Provider) appContext).getWorkManagerConfiguration());
        instance = getInstance(appContext);
      } else {
        throw new IllegalStateException("WorkManager is not initialized properly.");
      }
    }
    return instance;
  }
}
```

If your Application class implements Configuration.Provider, WorkManager lazily initializes with that configuration. This pattern allows developers to disable automatic initialization and provide custom configuration without calling initialize explicitly in Application.onCreate.
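The same dual-instance idea can be modeled in a few lines of plain Kotlin. This is an illustrative sketch rather than WorkManager's code: the default instance is created once under a lock, and a delegate slot lets tests swap in a fake.

```kotlin
// Illustrative model of the dual-instance singleton pattern (not WorkManager's code).
class ManagerHolder<T : Any> {
  private val lock = Any()
  private var defaultInstance: T? = null
  private var delegatedInstance: T? = null

  fun initialize(create: () -> T) {
    synchronized(lock) {
      if (delegatedInstance != null && defaultInstance != null) {
        throw IllegalStateException("Manager is already initialized.")
      }
      if (delegatedInstance == null) {
        if (defaultInstance == null) {
          defaultInstance = create()
        }
        delegatedInstance = defaultInstance
      }
    }
  }

  // Tests can swap in a fake without touching the real instance.
  fun setDelegate(delegate: T?) {
    synchronized(lock) { delegatedInstance = delegate }
  }

  fun getInstance(): T = synchronized(lock) {
    delegatedInstance ?: defaultInstance ?: error("Manager is not initialized.")
  }
}
```

Reads always prefer the delegate, so production code that calls getInstance() transparently picks up the test double.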
## The createWorkManager factory

The actual WorkManagerImpl construction wires together all the internal components:

```kotlin
fun WorkManagerImpl(
  context: Context,
  configuration: Configuration,
  workTaskExecutor: TaskExecutor = WorkManagerTaskExecutor(configuration.taskExecutor),
  workDatabase: WorkDatabase = WorkDatabase.create(
    context.applicationContext,
    workTaskExecutor.serialTaskExecutor,
    configuration.clock,
    context.resources.getBoolean(R.bool.workmanager_test_configuration),
  ),
  trackers: Trackers = Trackers(context.applicationContext, workTaskExecutor),
  processor: Processor = Processor(
    context.applicationContext, configuration, workTaskExecutor, workDatabase,
  ),
  schedulersCreator: SchedulersCreator = ::createSchedulers,
): WorkManagerImpl {
  val schedulers = schedulersCreator(
    context, configuration, workTaskExecutor, workDatabase, trackers, processor,
  )
  return WorkManagerImpl(
    context.applicationContext,
    configuration,
    workTaskExecutor,
    workDatabase,
    schedulers,
    processor,
    trackers,
  )
}
```
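One detail worth highlighting in this factory: every collaborator is a parameter with a production default, so a test can replace exactly one piece while keeping the rest real. A generic sketch of the same pattern, with hypothetical names:

```kotlin
// Illustrative sketch of the "defaults as injection points" factory style
// used above (all names here are hypothetical).
class Database(val name: String)
class Executor(val label: String)
class Manager(val database: Database, val executor: Executor)

fun createManager(
  database: Database = Database("production.db"), // production default
  executor: Executor = Executor("background"),    // overridable in tests
): Manager = Manager(database, executor)
```

Callers in production code pass nothing; a test overrides only the collaborator it cares about, such as an in-memory database.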
Modern Android applications commonly adopt multi layered architectures such as MVVM or MVI, where data flows through distinct layers: a data source, a repository, and a ViewModel or presentation layer. Each layer has a specific responsibility, and network responses must propagate through all of them before reaching the UI. While this separation produces clean, testable code, it introduces a real challenge: how do you handle API responses, including errors and exceptions, as they cross each layer boundary?

Most developers solve this by wrapping API calls in try-catch blocks and returning fallback values. This works for small projects, but as the number of API calls grows, the approach creates ambiguous results, scattered boilerplate, and lost context that downstream layers need. You end up with ViewModels that cannot tell whether an empty list means "no data" or "network failure," repositories that swallow important error details, and data sources that repeat the same error handling pattern dozens of times.

In this article, you'll explore the problems that emerge when handling Retrofit API calls across layered architectures, why conventional approaches break down at scale, and how [Sandwich](https://github.com/skydoves/sandwich) provides a type safe, composable solution that simplifies response handling from the network layer all the way to the UI. You'll also walk through the full set of Sandwich APIs, from basic response handling to advanced patterns like sequential composition, response merging, global error mapping, and Flow integration, each with real world use cases that show when and why you would reach for them.

## Retrofit API calls with coroutines

Most Android projects use [Retrofit](https://github.com/square/retrofit) with [Kotlin coroutines](https://github.com/Kotlin/kotlinx.coroutines) for network communication.
A typical service interface looks like this:

```kotlin
interface PosterService {
  @GET("DisneyPosters.json")
  suspend fun fetchPosterList(): List<Poster>
}
```

The service returns a List<Poster> directly. Retrofit deserializes the JSON response body and gives you the data. This works perfectly when the request succeeds, but it gives you no structured way to handle failures. Retrofit throws an HttpException for non 2xx status codes and various IO exceptions for network problems. The responsibility of catching these falls entirely on the caller.

When you consume this service in a data source, the conventional approach looks like this:

```kotlin
class PosterRemoteDataSource(
  private val posterService: PosterService,
) {
  suspend fun fetchPosterList(): List<Poster> {
    return try {
      posterService.fetchPosterList()
    } catch (e: HttpException) {
      emptyList()
    } catch (e: Throwable) {
      emptyList()
    }
  }
}
```

The data source catches every possible exception and returns emptyList() as a fallback. From the caller's perspective, this function always succeeds: it always returns a List<Poster>. If we create a flow from the code above, it will look like this:

![](https://velog.velcdn.com/images/skydoves/post/cc3deaea-7244-4091-88d3-744d297112cc/image.png)

But that apparent simplicity hides a serious problem. This compiles and runs. But once you trace the data flow through a full architecture, where the data source feeds a repository that feeds a ViewModel that drives the UI, the problems become clear.

## The problems with conventional response handling

The code above has three major issues that compound as your project grows and the number of API endpoints increases.

### Ambiguous results

The data source returns emptyList() for both HTTP errors and network exceptions. Downstream layers (the repository, the ViewModel) receive a List<Poster> with no way to distinguish between three completely different scenarios:

1. The request succeeded and the server returned an empty list.
2. The request failed with a 401 Unauthorized error.
3. The device had no network connectivity.

All three produce the same result: an empty list. The repository cannot decide whether to show an error message, redirect to a login screen, or display "no data" content. The ViewModel might show an empty state when it should be showing a "please log in" dialog. The response has lost its context, and once that context is gone, no amount of downstream logic can recover it.

You might try to work around this by returning null for failures instead of emptyList(). But that introduces its own ambiguity: does null mean "error" or "no data"? You end up needing a wrapper type anyway, which leads to the next problem. That is just one more implicit convention to keep in your head.

### Boilerplate error handling

Every API call requires its own try-catch block. If you have 20 service methods, you write 20 nearly identical try-catch blocks. Each one catches HttpException, catches Throwable, and returns some fallback value. This repetition creates maintenance overhead and increases the surface area for mistakes, like forgetting to handle a specific exception type in one of the 20 call sites.

Consider a data source with multiple methods:

```kotlin
class UserRemoteDataSource(private val userService: UserService) {

  suspend fun fetchUser(id: String): User? {
    return try {
      userService.fetchUser(id)
    } catch (e: HttpException) {
      null
    } catch (e: Throwable) {
      null
    }
  }

  suspend fun fetchFollowers(id: String): List<User> {
    return try {
      userService.fetchFollowers(id)
    } catch (e: HttpException) {
      emptyList()
    } catch (e: Throwable) {
      emptyList()
    }
  }

  suspend fun updateProfile(profile: Profile): Boolean {
    return try {
      userService.updateProfile(profile)
      true
    } catch (e: HttpException) {
      false
    } catch (e: Throwable) {
      false
    }
  }
}
```

The pattern is identical every time: try the call, catch HttpException, catch Throwable, return a fallback. The only thing that changes is the fallback value (null, emptyList(), false). This is textbook boilerplate that should not exist in every data source class.
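Both problems point toward the same fix: return a type that preserves the outcome instead of a bare value. As a simplified, illustrative model of the idea (Sandwich's real ApiResponse type is richer, carrying status codes, headers, and error bodies, and distinguishes HTTP errors from exceptions for you), a small sealed hierarchy already removes the ambiguity and the repeated try-catch:

```kotlin
// Simplified model of a type-safe response wrapper (illustrative, not Sandwich's API).
sealed interface ApiResult<out T> {
  data class Success<T>(val data: T) : ApiResult<T>
  data class Error(val code: Int, val message: String?) : ApiResult<Nothing>
  data class Exception(val throwable: Throwable) : ApiResult<Nothing>
}

// One wrapper replaces every per-method try-catch block.
inline fun <T> apiResultOf(block: () -> T): ApiResult<T> = try {
  ApiResult.Success(block())
} catch (e: Throwable) {
  ApiResult.Exception(e)
}

inline fun <T> ApiResult<T>.onSuccess(action: (T) -> Unit): ApiResult<T> {
  if (this is ApiResult.Success) action(data)
  return this
}

inline fun <T> ApiResult<T>.onError(action: (ApiResult.Error) -> Unit): ApiResult<T> {
  if (this is ApiResult.Error) action(this)
  return this
}
```

A real implementation would also map Retrofit's HttpException into the Error case; that mapping is omitted here to keep the sketch dependency-free.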
### One dimensional response processing
Kotlin Coroutines introduced structured concurrency as a fundamental principle, ensuring that coroutines are properly scoped and cancelled when their parent scope completes. At the heart of this mechanism lies CancellationException, a special exception that signals cancellation and must be handled with care. While most developers know they shouldn't catch this exception, the deeper question remains: why is CancellationException special, and what happens when you accidentally swallow it?

In this article, you'll dive deep into the internal mechanisms of CancellationException, exploring why it must be re-thrown, how runCatching can break structured concurrency, the proposals for safer alternatives, and the design decisions that make cancellation propagation both correct and performant.

## The fundamental problem: Catching cancellation breaks structured concurrency

Consider this seemingly innocent code:

```kotlin
suspend fun processData(): Result<Data> = runCatching {
  val user = fetchUser()
  val profile = fetchProfile(user.id)
  Data(user, profile)
}
```

This looks reasonable. You're wrapping a suspend operation in runCatching to convert exceptions into Result values for safer error handling. But there's a subtle bug: if the coroutine is cancelled during fetchUser or fetchProfile, the CancellationException is caught by runCatching and wrapped in Result.failure. The cancellation signal never propagates to the parent scope, breaking structured concurrency.

The core issue is that runCatching is implemented like this:

```kotlin
public inline fun <R> runCatching(block: () -> R): Result<R> {
  return try {
    Result.success(block())
  } catch (e: Throwable) {
    Result.failure(e)
  }
}
```

Notice the catch (e: Throwable) clause. This catches everything, including CancellationException. When cancellation occurs, instead of propagating up the coroutine hierarchy, it's captured in a Result object, and the coroutine continues executing as if nothing happened.
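A common fix, and the shape of the safer alternatives that have been proposed, is a runCatching variant that rethrows cancellation before the generic catch. Here is a minimal sketch; it uses java.util.concurrent.CancellationException directly, which is what kotlinx.coroutines aliases CancellationException to on the JVM:

```kotlin
import java.util.concurrent.CancellationException

// A runCatching variant that preserves cancellation (illustrative sketch).
inline fun <R> runSuspendCatching(block: () -> R): Result<R> = try {
  Result.success(block())
} catch (e: CancellationException) {
  throw e                // Let cancellation propagate up the hierarchy.
} catch (e: Throwable) {
  Result.failure(e)      // Wrap only genuine failures.
}
```

Ordinary failures still become Result.failure, but a cancellation signal passes straight through to the caller, keeping steps 3 through 5 of the cancellation contract intact.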
## Understanding CancellationException: Not just another exception

CancellationException is fundamentally different from other exceptions in Kotlin Coroutines. Let's examine its definition:

```kotlin
public actual open class CancellationException(
  message: String?,
  cause: Throwable?,
) : IllegalStateException(message, cause)
```

It extends IllegalStateException, but its purpose is not to signal an error; it's to signal intentional cancellation. This distinction is crucial for understanding why it must be handled specially.

### The cancellation contract

When a coroutine is cancelled, the cancellation mechanism works through these steps:

1. Cancellation signal: The parent scope or job calls cancel on the coroutine's Job
2. CancellationException thrown: At the next suspension point, the coroutine throws a CancellationException
3. Propagation: The exception propagates up the coroutine hierarchy
4. Cleanup: Each coroutine in the chain can run cleanup logic in finally blocks
5. Parent notification: The parent scope is notified that the child completed due to cancellation

If you catch CancellationException and don't re-throw it, steps 3-5 never happen. The parent scope thinks the child is still running, resource cleanup might not occur, and the entire structured concurrency guarantee breaks down.

### The invisibility principle
[Landscapist](https://github.com/skydoves/landscapist) Core is a standalone image loading engine built from scratch for Kotlin Multiplatform. Unlike Landscapist's wrappers around Coil, Glide, and Fresco, [Landscapist Core](https://skydoves.github.io/landscapist/landscapist/landscapist-core/) handles fetching, caching, decoding, and transformations internally. This eliminates platform dependencies and provides fine grained control over every aspect of image loading.

In this article, you'll explore the internal architecture of Landscapist Core, examining how the Landscapist class orchestrates the loading pipeline, how TwoTierMemoryCache provides a second chance for evicted items through weak references, how DecodeScheduler prioritizes visible images over background loads, how progressive decoding improves perceived performance, and how memory pressure handling keeps the app responsive under constrained conditions.

## The Landscapist orchestrator

The Landscapist class is the main entry point for image loading. It coordinates fetching, caching, decoding, and transformation into a unified pipeline:

```kotlin
public class Landscapist private constructor(
  public val config: LandscapistConfig,
  private val memoryCache: MemoryCache,
  private val diskCache: DiskCache?,
  private val fetcher: ImageFetcher,
  private val decoder: ImageDecoder,
  private val dispatcher: CoroutineDispatcher,
  public val requestManager: RequestManager = RequestManager(),
  public val memoryPressureManager: MemoryPressureManager = MemoryPressureManager(),
)
```

Each component has a single responsibility. The memoryCache stores decoded images in memory. The diskCache persists raw image data to storage. The fetcher retrieves images from network or local sources. The decoder converts raw bytes into displayable images. The requestManager tracks active requests for cancellation. The memoryPressureManager responds to system memory warnings.
## The loading pipeline

The load function implements a three stage lookup with progressive enhancement:

```kotlin
public fun load(request: ImageRequest): Flow<ImageResult> = flow {
  emit(ImageResult.Loading)

  val cacheKey = CacheKey.create(
    model = request.model,
    transformationKeys = request.transformations.map { it.key },
    width = request.targetWidth,
    height = request.targetHeight,
  )

  // 1. Check memory cache (instant)
  if (request.memoryCachePolicy.readEnabled) {
    memoryCache[cacheKey]?.let { cached ->
      emit(ImageResult.Success(data = cached.data, dataSource = DataSource.MEMORY))
      return@flow
    }
  }

  // 2. Check disk cache
  if (request.diskCachePolicy.readEnabled && diskCache != null) {
    diskCache.get(cacheKey)?.use { snapshot ->
      val bytes = snapshot.data.buffer.readByteArray()
      // Decode and emit...
    }
  }

  // 3. Fetch from network
  val fetchResult = fetcher.fetch(request)
  // Process result...
}.flowOn(dispatcher)
```

The pipeline follows a predictable order: memory cache first (instant), disk cache second (fast I/O), network last (slow). Each stage can be enabled or disabled through CachePolicy, allowing fine grained control for special cases like forcing a refresh or skipping caching entirely.

## Cache key generation

The CacheKey uniquely identifies a cached image based on all factors that affect its appearance:

```kotlin
val cacheKey = CacheKey.create(
  model = request.model,
  transformationKeys = request.transformations.map { it.key },
  width = request.targetWidth,
  height = request.targetHeight,
)
```
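The rule behind the key is simple: anything that changes the decoded pixels must change the key. A plain Kotlin sketch of that idea (illustrative; Landscapist Core's actual CacheKey implementation may differ in detail):

```kotlin
// Illustrative cache key: equal inputs produce equal keys,
// and any input change produces a distinct key.
data class CacheKey(
  val model: String,
  val transformationKeys: List<String>,
  val width: Int,
  val height: Int,
) {
  companion object {
    fun create(
      model: Any,
      transformationKeys: List<String> = emptyList(),
      width: Int = 0,
      height: Int = 0,
    ): CacheKey = CacheKey(model.toString(), transformationKeys, width, height)
  }
}
```

Because it is a data class, structural equality and hashCode come for free, which is exactly what a map-backed cache needs to look entries up.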
Android's ViewModel has become an essential component of modern Android development, providing a lifecycle-aware container for UI-related data that survives configuration changes. While the API appears simple on the surface, the internal machinery reveals sophisticated design decisions around lifecycle management, multiplatform abstraction, resource cleanup, and thread-safe caching. Understanding how ViewModel works under the hood helps you make better architectural decisions and avoid subtle bugs.

In this article, you'll dive deep into how Jetpack ViewModel works internally, exploring how the ViewModelStore retains instances across configuration changes, how ViewModelProvider orchestrates creation and caching, how the factory pattern enables flexible instantiation, how CreationExtras enables stateless factories, how resource cleanup is managed through the Closeable pattern, and how viewModelScope integrates coroutines with ViewModel lifecycle. This isn't a guide on using ViewModel; it's an exploration of the internal machinery that makes lifecycle-aware state management possible.

## The fundamental problem: Surviving configuration changes

Configuration changes present a fundamental challenge for Android development. When a user rotates their device, changes language settings, or triggers any configuration change, the system destroys and recreates the Activity. Any data stored in the Activity is lost:

```kotlin
class MyActivity : ComponentActivity() {
  private var userData: User? = null // Lost on rotation!

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    // Must reload data after every rotation
    loadUserData()
  }
}
```

The naive approach is to use onSaveInstanceState:

```kotlin
override fun onSaveInstanceState(outState: Bundle) {
  super.onSaveInstanceState(outState)
  outState.putParcelable("user", userData)
}

override fun onCreate(savedInstanceState: Bundle?) {
  super.onCreate(savedInstanceState)
  userData = savedInstanceState?.getParcelable("user")
}
```

This works for small, serializable data. But what about large datasets, network connections, or objects that can't be serialized? What about ongoing operations like network requests? The Bundle approach fails for these cases, both because of size limitations and because serialization/deserialization is expensive.

ViewModel solves this by providing a lifecycle-aware container that survives configuration changes through a retained object pattern, not serialization.

## The ViewModelStore: The retention mechanism

At the heart of ViewModel's configuration-change survival is ViewModelStore, a simple key-value store that holds ViewModel instances:

```kotlin
public open class ViewModelStore {

  private val map = mutableMapOf<String, ViewModel>()

  @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
  public fun put(key: String, viewModel: ViewModel) {
    val oldViewModel = map.put(key, viewModel)
    oldViewModel?.clear()
  }

  @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
  public operator fun get(key: String): ViewModel? {
    return map[key]
  }

  @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
  public fun keys(): Set<String> {
    return HashSet(map.keys)
  }

  public fun clear() {
    for (vm in map.values) {
      vm.clear()
    }
    map.clear()
  }
}
```

The implementation is remarkably straightforward, just a MutableMap<String, ViewModel>. The magic isn't in the store itself; it's in how the store is retained.

### Key replacement behavior

Notice the put method's behavior:

```kotlin
public fun put(key: String, viewModel: ViewModel) {
  val oldViewModel = map.put(key, viewModel)
  oldViewModel?.clear()
}
```

If a ViewModel already exists with the same key, the old ViewModel is immediately cleared. This ensures proper cleanup when a ViewModel is replaced.
You might wonder when this happens: it occurs when you request a ViewModel with the same key but a different type:

```kotlin
// First request creates TestViewModel1 with key "mykey"
val vm1: TestViewModel1 = viewModelProvider["mykey", TestViewModel1::class]

// Second request with same key but different type
val vm2: TestViewModel2 = viewModelProvider["mykey", TestViewModel2::class]

// vm1.onCleared() has been called, vm1 is no longer valid
```

This behavior is validated in the test suite:

```kotlin
@Test
fun twoViewModelsWithSameKey() {
  val key = "thekey"
  val vm1 = viewModelProvider[key, TestViewModel1::class]
  assertThat(vm1.cleared).isFalse()
  val vw2 = viewModelProvider[key, TestViewModel2::class]
  assertThat(vw2).isNotNull()
  assertThat(vm1.cleared).isTrue()
}
```

## The ViewModelStoreOwner contract

The ViewModelStoreOwner interface defines who owns the store:

```kotlin
public interface ViewModelStoreOwner {
  public val viewModelStore: ViewModelStore
}
```

This simple interface is implemented by ComponentActivity, Fragment, and NavBackStackEntry. The owner's responsibility is twofold:
Image loading is one of the most critical yet complex aspects of Android development. While libraries like Glide and Picasso have served developers for years, Coil emerged as a modern, Kotlin-first solution built from the ground up with coroutines. But the power of Coil goes far beyond its clean API: it lies in the solid internal machinery that makes it both performant and memory-efficient.

In this article, you'll dive deep into the internal mechanisms of Coil, exploring how image requests flow through an interceptor chain, how the two-tier memory cache achieves high hit rates while preventing memory leaks, how bitmap sampling uses bit manipulation for optimal memory usage, and the subtle optimizations that make it production-ready.

## Understanding the core abstraction

At its heart, Coil is an image loading library that transforms data sources (URLs, files, resources) into decoded images displayed in views. What distinguishes Coil from other image loaders is its adherence to two fundamental principles: coroutine-native design and composable interceptor architecture.

The coroutine-native design means everything in Coil is built around suspend functions. Image loading naturally fits the structured concurrency model: requests have lifecycles, can be cancelled, and should respect scopes. Traditional image loaders use callback chains, but Coil embraces coroutines:

```kotlin
// Traditional callback approach
imageLoader.load(url) { bitmap ->
    imageView.setImageBitmap(bitmap)
}

// Coil's coroutine approach
val result = imageLoader.execute(
    ImageRequest.Builder(context)
        .data(url)
        .target(imageView)
        .build()
)
```

The composable interceptor architecture means the entire request pipeline is a chain of interceptors, similar to OkHttp. Each interceptor can observe, transform, or short-circuit the request. This makes the library extensible without modifying core code.
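To make the interceptor idea concrete, here is a minimal, framework-free sketch of such a chain. It is synchronous for brevity (Coil's real `Interceptor` is a suspend interface), and all names are illustrative stand-ins rather than Coil's actual API:

```kotlin
// A minimal chain-of-responsibility pipeline in the style of Coil/OkHttp.
data class ImageRequest(val url: String)
data class ImageResult(val image: String)

fun interface Interceptor {
    fun intercept(chain: Chain): ImageResult
}

class Chain(
    val request: ImageRequest,
    private val interceptors: List<Interceptor>,
    private val index: Int = 0,
) {
    // Hand the request to the next interceptor in the list.
    fun proceed(request: ImageRequest): ImageResult =
        interceptors[index].intercept(Chain(request, interceptors, index + 1))
}

fun main() {
    val cache = mutableMapOf<String, ImageResult>()

    // Observes and short-circuits: returns a cached result when present,
    // otherwise lets the rest of the chain run and caches the outcome.
    val cacheInterceptor = Interceptor { chain ->
        cache.getOrPut(chain.request.url) { chain.proceed(chain.request) }
    }

    // The terminal "engine": performs the (fake) fetch and decode.
    val engineInterceptor = Interceptor { chain ->
        ImageResult("decoded:${chain.request.url}")
    }

    val interceptors = listOf(cacheInterceptor, engineInterceptor)
    val result = Chain(ImageRequest("https://example.com/a.png"), interceptors)
        .proceed(ImageRequest("https://example.com/a.png"))
    println(result.image) // decoded:https://example.com/a.png
}
```

Because each interceptor only sees a `Chain`, new behavior (caching, logging, request rewriting) slots in by adding an element to the list, without touching the engine.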
These properties aren't just conveniences; they're architectural decisions that enable better resource management, cleaner cancellation semantics, and powerful customization. Let's explore how these principles manifest in the implementation.

## The ImageLoader interface and RealImageLoader implementation

If you examine the `ImageLoader` interface, it defines two primary entry points:

```kotlin
interface ImageLoader {
    fun enqueue(request: ImageRequest): Disposable
    suspend fun execute(request: ImageRequest): ImageResult
}
```

Two methods for the same operation? This reflects Android's dual nature: some callers need fire-and-forget loading (`enqueue`, for views), while others need structured concurrency (`execute`, for repositories or composables). The `RealImageLoader` implementation handles both cases with a unified internal pipeline:

```kotlin
internal class RealImageLoader(
    val options: Options,
) : ImageLoader {

    private val scope = CoroutineScope(options.logger)
    private val systemCallbacks = SystemCallbacks(this)
    private val requestService = RequestService(this, systemCallbacks, options.logger)

    override fun enqueue(request: ImageRequest): Disposable {
        // Start executing the request on the main thread.
        val job = scope.async(options.mainCoroutineContextLazy.value) {
            execute(request, REQUEST_TYPE_ENQUEUE)
        }
        // Update the current request attached to the view and return a new disposable.
        return getDisposable(request, job)
    }

    override suspend fun execute(request: ImageRequest): ImageResult {
        if (!needsExecuteOnMainDispatcher(request)) {
            // Fast path: skip dispatching.
            return execute(request, REQUEST_TYPE_EXECUTE)
        } else {
            // Slow path: dispatch to the main thread.
            return coroutineScope {
                val job = async(options.mainCoroutineContextLazy.value) {
                    execute(request, REQUEST_TYPE_EXECUTE)
                }
                getDisposable(request, job).job.await()
            }
        }
    }
}
```

Notice the fast-path optimization in `execute`: if the request doesn't need main-thread dispatch (no target view), it executes immediately without the overhead of launching a coroutine.
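The fast-path decision can be illustrated in isolation. This toy sketch (not Coil code; all names are hypothetical) runs the pipeline inline when no main-thread work is required and only pays for a dispatch otherwise:

```kotlin
// Toy model of the fast-path check (illustrative names only, not Coil's API).
class FakeRequest(val hasViewTarget: Boolean, val url: String)

var dispatches = 0 // counts how many times we "hopped" to another dispatcher

fun needsMainDispatcher(request: FakeRequest) = request.hasViewTarget

fun decode(request: FakeRequest) = "decoded:${request.url}"

fun execute(request: FakeRequest): String {
    return if (!needsMainDispatcher(request)) {
        // Fast path: no view to touch, so run inline with zero dispatch cost.
        decode(request)
    } else {
        // Slow path: a real loader would hop to the main dispatcher here.
        dispatches++
        decode(request)
    }
}

fun main() {
    execute(FakeRequest(hasViewTarget = false, url = "a.png")) // repository-style load
    execute(FakeRequest(hasViewTarget = true, url = "b.png"))  // view-bound load
    println(dispatches) // 1: only the view-bound request paid for a dispatch
}
```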
This is important for background image loading in repositories where you're just fetching the bitmap.

The scope is a `SupervisorJob` scope, meaning one failed request doesn't cancel other in-flight requests:

```kotlin
private fun CoroutineScope(logger: Logger?): CoroutineScope {
    val context = SupervisorJob() + CoroutineExceptionHandler { _, throwable ->
        logger?.log(TAG, throwable)
    }
    return CoroutineScope(context)
}
```

This isolation ensures that a network error loading one image doesn't affect other images currently loading. The `CoroutineExceptionHandler` logs uncaught exceptions rather than crashing, making the library resilient to unexpected errors.

## The request execution pipeline: Interceptors all the way down

The core of Coil's architecture is the interceptor chain. When you execute a request, it flows through a series of interceptors before reaching the `EngineInterceptor`, which performs the actual fetch and decode:

```kotlin
private suspend fun execute(initialRequest: ImageRequest, type: Int): ImageResult {
    val requestDelegate = requestService.requestDelegate(
        request = initialRequest,
        job = coroutineContext.job,
        findLifecycle = type == REQUEST_TYPE_ENQUEUE,
    ).apply { assertActive() }

    val request = requestService.updateRequest(initialRequest)
    val eventListener = options.eventListenerFactory.create(request)
```
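The SupervisorJob isolation described above is easy to verify with plain kotlinx.coroutines, independent of Coil: under a `SupervisorJob`, one child's failure is routed to the `CoroutineExceptionHandler` while its siblings keep running.

```kotlin
import kotlinx.coroutines.*

// Demonstrates SupervisorJob isolation: a failing "image request" does not
// cancel its sibling. Plain kotlinx.coroutines; nothing Coil-specific.
fun siblingSurvives(): Boolean = runBlocking {
    val handler = CoroutineExceptionHandler { _, t ->
        println("logged instead of crashing: ${t.message}")
    }
    val scope = CoroutineScope(SupervisorJob() + handler)

    val failing = scope.launch { error("network error for image A") }
    val healthy = scope.async { delay(10); "image B decoded" }

    failing.join() // the failure stays confined to this child
    healthy.await() == "image B decoded"
}

fun main() {
    println(siblingSurvives()) // true
}
```

With a plain `Job` instead of `SupervisorJob`, the first child's failure would cancel the scope and `healthy.await()` would throw.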
In the modern Android development ecosystem, the synergy between Kotlin and Java remains important, since many long-established projects are still written in Java. A prime example of this great interoperability is Square's Retrofit library. Despite being written entirely in Java, Retrofit seamlessly supports Kotlin's suspend functions, allowing developers to write clean, idiomatic asynchronous code for network requests. This capability is not magic; it is a sophisticated illusion built upon a cooperative understanding between the Kotlin compiler and Retrofit's dynamic, reflection-based architecture. This study examines the internal mechanisms that make this interoperability possible, revealing how a Java library can interact with a language feature it has no native concept of.

## The Foundation: Continuation-Passing Style (CPS) Transformation
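The essence of CPS can be sketched with nothing beyond the standard library: the compiler erases `suspend` and appends a hidden `Continuation` parameter, so a declaration like `suspend fun fetchGreeting(): String` compiles to roughly the shape below. This is a hand-written approximation of that shape, not actual compiler output:

```kotlin
import kotlin.coroutines.*

// What you write:   suspend fun fetchGreeting(): String = "hello"
// What the compiler roughly emits: the suspend modifier is gone, a hidden
// Continuation<String> parameter is added, and the return type widens to
// Any? so the function can return either a value or COROUTINE_SUSPENDED.
fun fetchGreeting(completion: Continuation<String>): Any? {
    // This function never actually suspends, so it just returns its value.
    // A suspending path would instead stash `completion` for later resumption
    // and return kotlin.coroutines.intrinsics.COROUTINE_SUSPENDED.
    return "hello"
}

fun main() {
    // This is how a Java caller such as Retrofit sees the function:
    // a plain method taking a Continuation argument.
    val result = fetchGreeting(object : Continuation<String> {
        override val context: CoroutineContext = EmptyCoroutineContext
        override fun resumeWith(result: Result<String>) {
            println("resumed with: $result")
        }
    })
    println(result) // hello
}
```

This erased signature is exactly what makes Java interop possible: from the JVM's point of view there is no `suspend` at all, only an extra parameter of type `kotlin.coroutines.Continuation`.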