Android & Kotlin Technical Articles

Detailed articles on Android development, Jetpack Compose internals, Kotlin coroutines, and open source library design by skydoves, Google Developer Expert and maintainer of Android libraries with 40M+ annual downloads. Read practical guides on Retrofit, Compose Preview, BottomSheet UI, coroutine compilation, and more.

Exclusive Articles

This is a collection of private, subscriber-first articles written by skydoves (Jaewoong) for the Dove Letter. These articles may be released elsewhere, such as Medium, in the future, but they will always be revealed to Dove Letter members first.

Deep Dive into Kotlin Data Classes, Coroutines, Flow, and K2 Compiler

This book is designed for Kotlin developers who want to dive deep into Kotlin fundamentals and internal mechanisms, and leverage that knowledge in their daily work right away.

Kotlin, Coroutines
Wednesday, January 15, 2025
How Navigation 3 Works Under the Hood

Navigation 3 is a ground-up redesign of Jetpack navigation for Compose. Unlike Navigation 2, which adapted the Fragment-based navigation model to Compose through NavController and XML graph definitions, Navigation 3 is built entirely on Compose primitives. The back stack is a SnapshotStateList, entries are immutable data classes, and the rendering pipeline uses AnimatedContent for transitions. The result is a navigation library that feels native to Compose rather than layered on top of it.

In this article, you'll explore how NavBackStack integrates with the snapshot system to make navigation reactive, how NavEntry uses contentKey-based state scoping to preserve UI state per destination, the NavEntryDecorator pattern that enables state persistence and lifecycle management, how rememberDecoratedNavEntries tracks entry lifecycle across animations, the NavDisplay rendering pipeline with scene strategies and AnimatedContent, and how predictive back gestures drive seekable transitions.

The fundamental problem: Navigation as Compose state

Navigation 2 carried baggage from the Fragment era. NavController managed internal state imperatively: you called navigate() and popBackStack(), and the controller figured out what to show. This worked, but it conflicted with Compose's declarative model where UI is a function of state. Developers had to bridge two mental models: Compose's "state drives UI" and Navigation 2's "call methods to change screens."

Navigation 3 solves this by making the back stack itself Compose state. There is no NavController. The back stack is a SnapshotStateList that triggers recomposition when mutated. Navigation becomes a list operation:

```kotlin
// Navigate
backStack.add(DetailScreen(itemId = "123"))

// Pop
backStack.removeLast()

// Replace
backStack[backStack.lastIndex] = EditScreen(itemId = "123")
```

This is the design principle that shapes every other decision in the library.
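The "navigation as a list operation" idea can be sketched in plain Kotlin. The destination types below are hypothetical stand-ins, and a plain MutableList stands in for the real SnapshotStateList-backed NavBackStack; in Navigation 3, each mutation would also trigger recomposition:

```kotlin
// Hypothetical destination types; in Navigation 3 these would be immutable
// data classes pushed onto a SnapshotStateList-backed back stack.
object HomeScreen
data class DetailScreen(val itemId: String)
data class EditScreen(val itemId: String)

fun demoBackStack(): List<Any> {
    val backStack = mutableListOf<Any>(HomeScreen)
    backStack.add(DetailScreen(itemId = "123"))        // navigate
    backStack[backStack.lastIndex] = EditScreen("123") // replace the top entry
    backStack.removeAt(backStack.lastIndex)            // pop
    return backStack
}
```

Because every operation is an ordinary list mutation, there is no separate navigation API surface to learn: push, pop, and replace are exactly the list operations you already know.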
Because the back stack is just a list in Compose state, the framework can observe it, serialize it, and react to changes automatically.

NavKey and NavBackStack: State you can serialize

Compose, Kotlin
Tuesday, April 21, 2026
How the Kotlin Compiler Knows You Covered Every Sealed Subclass in `when`

You write a when expression on a sealed class, cover every subclass, and the compiler lets you skip the else branch. Then a teammate adds a new subclass in another file, and every when in the project turns red with "'when' expression must be exhaustive." The compiler caught the missing case at compile time, before any test could fail.

But sealed subclasses can live in different files across the module. How does the compiler know them all, and how does it verify that your when branches cover every one? In this article, you'll trace through the compiler's sealed subclass collection phase, the exhaustiveness checking algorithm that compares your when branches against the full subclass set, the special handling for enums, booleans, and nullable types, and how the final bytecode represents a sealed when at runtime.

The fundamental problem: Subclasses are scattered across the module

With enums, exhaustiveness is simple. An enum class declares all its entries in one place, and the compiler can read them directly from the declaration. Sealed classes are different. Their subclasses can be declared in separate files, as long as they're in the same module:

```kotlin
// Shape.kt
sealed interface Shape

// Circle.kt
data class Circle(val radius: Double) : Shape

// Rectangle.kt
data class Rectangle(val width: Double, val height: Double) : Shape

// Triangle.kt
data class Triangle(val base: Double, val height: Double) : Shape
```

When you write `when (shape) { is Circle -> ... is Rectangle -> ... }`, the compiler needs to know that Triangle exists in a different file and that you missed it. This requires a global collection pass before any exhaustiveness check can happen.
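Once every subclass has a branch, the when compiles without an else. A self-contained sketch using the same hierarchy:

```kotlin
// The Shape hierarchy with every subclass covered. Because the compiler can
// enumerate all Shape subclasses in the module, no else branch is required,
// and adding a fourth subclass would make this when fail to compile.
sealed interface Shape
data class Circle(val radius: Double) : Shape
data class Rectangle(val width: Double, val height: Double) : Shape
data class Triangle(val base: Double, val height: Double) : Shape

fun area(shape: Shape): Double = when (shape) {
    is Circle -> Math.PI * shape.radius * shape.radius
    is Rectangle -> shape.width * shape.height
    is Triangle -> 0.5 * shape.base * shape.height
    // No else: exhaustiveness is verified at compile time.
}
```

Deleting any one branch reproduces the "'when' expression must be exhaustive" error described above.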

Kotlin
Tuesday, April 7, 2026
Kotlin KSP Internals: How Your Annotations Become Generated Code

If you use Jetpack Room, every @Dao interface turns into a full database implementation. If you use Hilt, every @Inject constructor gets wired into a dependency graph. If you use Moshi, every @JsonClass generates a JSON adapter. You add one annotation, hit Build, and new source files appear in your build/generated/ksp directory. The engine behind all of these is KSP, Kotlin Symbol Processing.

In this article, you'll start from a practical processor that you'd write yourself, then trace inward through the KSP pipeline: how Gradle discovers your processor, how the Resolver lets you query the entire codebase as a symbol tree, how the multi-round processing loop handles dependencies between generated files, and how KSP tracks which files need reprocessing on incremental builds.

The fundamental problem: Why KAPT was slow

Before KSP, the only way to do annotation processing in Kotlin was KAPT (the Kotlin Annotation Processing Tool). KAPT works by generating Java stub files from your Kotlin source code, then feeding those stubs to the standard javac annotation processing pipeline. This means the Kotlin compiler has to generate a complete set of Java declarations for every Kotlin class, interface, and function in your project, even if only a handful of them carry annotations. For a project with hundreds of Kotlin files, this stub generation can add 20 to 30 seconds to each build. The stubs are thrown away after processing, so the work is purely overhead.

KSP takes a different approach. Instead of generating Java stubs and running through javac, KSP reads the Kotlin compiler's own symbol tree directly. Your processor receives KSClassDeclaration, KSFunctionDeclaration, and KSPropertyDeclaration objects that represent the actual Kotlin program structure, including Kotlin-specific features like nullable types, extension functions, sealed classes, and default parameter values that get lost in Java stubs.
The result is that KSP processors run roughly twice as fast as equivalent KAPT processors, and they see a more accurate representation of the source code.
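The multi-round loop mentioned above can be modeled in a few lines of plain Kotlin. This is a conceptual toy, not the real SymbolProcessor/Resolver API: each symbol optionally depends on a generated file, symbols whose dependency doesn't exist yet are deferred, and files generated in one round become visible in the next.

```kotlin
// Toy model of multi-round processing: (symbol name, required generated file
// or null). Ready symbols are processed; the rest are deferred to the next
// round, after their dependencies have been "generated".
fun processRounds(symbols: List<Pair<String, String?>>): List<List<String>> {
    val generated = mutableSetOf<String>()
    var deferred = symbols
    val rounds = mutableListOf<List<String>>()
    while (deferred.isNotEmpty()) {
        val (ready, notReady) = deferred.partition { (_, dep) ->
            dep == null || dep in generated
        }
        check(ready.isNotEmpty()) { "unresolvable symbols: $notReady" }
        rounds += ready.map { (name, _) ->
            generated += "${name}Impl" // pretend the processor generated a file
            name
        }
        deferred = notReady
    }
    return rounds
}
```

A symbol that needs a file produced by another symbol's processing simply lands in a later round, which is exactly why processors must tolerate unresolved references on early rounds.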

Architecture, Kotlin
Wednesday, April 1, 2026
Build Your Own Landscapist Image Plugin in Jetpack Compose

Landscapist (https://github.com/skydoves/landscapist) provides a composable image loading library for Jetpack Compose and Kotlin Multiplatform. Among its image composables, LandscapistImage stands out as the recommended choice: it uses Landscapist's own standalone loading engine built from scratch for Jetpack Compose and Kotlin Multiplatform, with no dependency on platform-specific loaders like Glide or Coil. It handles fetching, caching, decoding, and display internally, and it works identically across Android, iOS, Desktop, and Web. On top of that, LandscapistImage exposes a plugin system through the ImagePlugin sealed interface, giving you five distinct hook points into the image loading lifecycle where you can inject custom behavior without modifying the loader itself.

In this article, you'll explore the ImagePlugin architecture, examining each of the five plugin types and why they exist, how ImagePluginComponent collects and dispatches plugins through a DSL, and how built-in plugins like PlaceholderPlugin (https://skydoves.github.io/landscapist/placeholder/placeholderplugin), ShimmerPlugin (https://skydoves.github.io/landscapist/placeholder/shimmerplugin), CircularRevealPlugin (https://skydoves.github.io/landscapist/animation/circular-reveal-animation), PalettePlugin (https://skydoves.github.io/landscapist/palette/), and ZoomablePlugin (https://skydoves.github.io/landscapist/zoomable/zoomableplugin) implement these interfaces in practice.

Why LandscapistImage for plugins

Before diving into the plugin system, it is worth understanding why LandscapistImage is the best foundation for plugin-based image loading. LandscapistImage uses its own standalone engine, landscapist-core, rather than delegating to Glide, Coil, or Fresco. This means every stage of the image loading pipeline, from network fetching through memory caching to bitmap decoding, is controlled by a single Kotlin Multiplatform implementation.
The benefit for plugins is direct: when LandscapistImage transitions from loading to success, it knows the exact moment the bitmap becomes available. It passes that bitmap directly to PainterPlugin and SuccessStatePlugin without any adapter layer or platform-specific conversion. The plugin receives a real ImageBitmap, not a wrapped platform object. This also means LandscapistImage works on every Compose Multiplatform target. A ShimmerPlugin you write for Android runs identically on iOS and Desktop. There is no "this plugin only works with Glide" problem, because there is no Glide in the pipeline.

If you look at the LandscapistImage composable signature, you can see where plugins fit in:

```kotlin
@Composable
public fun LandscapistImage(
    imageModel: () -> Any?,
    modifier: Modifier = Modifier,
    component: ImageComponent = rememberImageComponent {},
    imageOptions: ImageOptions = ImageOptions(),
    loading: (@Composable BoxScope.(LandscapistImageState.Loading) -> Unit)? = null,
    success: (@Composable BoxScope.(LandscapistImageState.Success, Painter) -> Unit)? = null,
    failure: (@Composable BoxScope.(LandscapistImageState.Failure) -> Unit)? = null,
)
```

The component parameter is the entry point for the plugin system. When you pass a rememberImageComponent { ... } block, every plugin you add inside that block gets dispatched at the correct lifecycle stage automatically. You can still use loading, success, and failure lambdas for one-off customization, but plugins are the reusable, composable alternative.

The fundamental problem: Extending image loading without modifying it
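The hook-point idea itself can be sketched in plain Kotlin. The names and shapes below are illustrative, not Landscapist's real ImagePlugin signatures: a sealed interface defines the hook points, and a component dispatches only the plugins registered for a given lifecycle stage.

```kotlin
// Conceptual sketch of a plugin system with lifecycle hook points, in the
// spirit of ImagePlugin/ImagePluginComponent (illustrative names only).
sealed interface MiniPlugin {
    fun interface OnLoading : MiniPlugin { fun compose(log: MutableList<String>) }
    fun interface OnSuccess : MiniPlugin { fun compose(log: MutableList<String>) }
    fun interface OnFailure : MiniPlugin { fun compose(log: MutableList<String>) }
}

class MiniComponent(private val plugins: List<MiniPlugin>) {
    // Each dispatch runs only the plugins registered for that stage,
    // preserving registration order.
    fun dispatchLoading(log: MutableList<String>) =
        plugins.filterIsInstance<MiniPlugin.OnLoading>().forEach { it.compose(log) }

    fun dispatchSuccess(log: MutableList<String>) =
        plugins.filterIsInstance<MiniPlugin.OnSuccess>().forEach { it.compose(log) }
}
```

The loader never knows which plugins exist; it only announces lifecycle transitions, which is what lets new behavior be injected without touching the loading engine.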

Compose, Android, Kotlin
Saturday, March 28, 2026
How Compose's Drawing System Works Under the Hood

Every Compose app draws images. Whether you call Image(painterResource(R.drawable.photo)) to display a bitmap, render a Material icon with Icon(Icons.Default.Search), or load a vector drawable, the same underlying abstraction handles the actual drawing: the Painter class. Painter is to Compose what Drawable is to the View system, a layer that knows how to draw content into a bounded area while handling alpha, color filters, and layout direction.

In this article, you'll explore the full drawing pipeline from the abstract Painter class through its concrete implementations, BitmapPainter for raster images and VectorPainter for vector graphics, the immutable ImageVector data structure and the mutable VectorComponent render tree that draws it, the DrawCache that caches rendered vectors as bitmaps for performance, painterResource which dispatches between bitmap and vector formats, and the Image and Icon composables that connect painters to the layout system.

The fundamental problem: One API for many image formats

Android has two fundamentally different kinds of image assets. Bitmaps (PNG, JPG, WEBP) are grids of pixels. Vector drawables are XML files containing mathematical path descriptions: lines, curves, and fills expressed as coordinate instructions. A bitmap stores the exact color value of every pixel, while a vector stores instructions for how to draw the image at any size. Compose needs a single abstraction that layout composables like Image and Icon can use without knowing the underlying image format. A composable that displays a photo and one that displays a search icon should both work through the same interface. The Painter abstraction solves this problem. It defines two things every image format must provide: an intrinsicSize reporting the image's natural dimensions and an onDraw method that knows how to render it. Each format provides its own implementation.
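A minimal stand-in for this contract (toy types, not the real androidx Painter) shows how two formats can sit behind one interface:

```kotlin
// Toy version of the Painter contract: every format reports a natural size
// and knows how to render itself; consumers never see the format.
interface MiniPainter {
    val intrinsicSize: Pair<Int, Int>? // null plays the role of Size.Unspecified
    fun draw(canvas: MutableList<String>)
}

class MiniBitmapPainter(private val w: Int, private val h: Int) : MiniPainter {
    override val intrinsicSize get() = w to h // pixel dimensions of the bitmap
    override fun draw(canvas: MutableList<String>) { canvas += "blit ${w}x$h pixels" }
}

class MiniColorPainter(private val color: String) : MiniPainter {
    override val intrinsicSize: Pair<Int, Int>? get() = null // fills any area
    override fun draw(canvas: MutableList<String>) { canvas += "fill $color" }
}
```

A consumer holding a `List<MiniPainter>` can draw photos and solid fills identically, which is exactly the decoupling Image and Icon rely on.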
Painter: The drawing abstraction

Think of Painter like a print shop that accepts any original, whether it is a photograph, an illustration, or a piece of vector art, and produces output at the requested size. The consumer hands over the original and specifies dimensions, and the shop handles the rest. The consumer does not need to know whether the source was a JPEG or an SVG file.

The Painter abstract class defines the contract that all image sources must implement. If you look at its core structure (simplified):

```kotlin
abstract class Painter {
    private var layerPaint: Paint? = null
    private var useLayer = false
    private var alpha: Float = DefaultAlpha
    private var colorFilter: ColorFilter? = null

    abstract val intrinsicSize: Size

    protected abstract fun DrawScope.onDraw()

    protected open fun applyAlpha(alpha: Float): Boolean = false
    protected open fun applyColorFilter(colorFilter: ColorFilter?): Boolean = false
    protected open fun applyLayoutDirection(layoutDirection: LayoutDirection): Boolean = false
}
```

Each Painter subclass reports its natural dimensions through intrinsicSize. A BitmapPainter returns the pixel dimensions of its image. A VectorPainter returns the dp-based default size. If a painter has no intrinsic size, like ColorPainter which fills any area with a solid color, it returns Size.Unspecified.

The optimization hooks applyAlpha and applyColorFilter are where the design gets interesting. These methods return a Boolean. If the subclass returns true, it means "I'll handle this effect directly." If it returns false, the base class falls back to rendering into an offscreen layer using withSaveLayer, which works universally but costs an extra buffer allocation. This opt-in pattern lets simple painters like BitmapPainter avoid the offscreen layer entirely.

The draw method ties everything together. It configures alpha and color filter, insets the drawing area to the requested size, then decides whether to use a layer or call onDraw directly:

Compose, Kotlin
Saturday, March 21, 2026
The Seven Group Types in Compose

Every @Composable function you write produces invisible scaffolding. The Compose compiler wraps each Kotlin construct in a "group" that tells the runtime what it can do during recomposition. A conditional branch gets one type of group. A function body gets another. A key call gets yet another. These decisions happen at compile time, and they determine whether the runtime can skip, replace, move, or recycle each piece of your UI.

In this article, you'll explore the seven group types in Compose's runtime, examining how replace groups handle conditional branches, how restart groups enable targeted recomposition, how movable groups preserve state across reordering, how node groups bridge the slot table and the UI tree, how reusable groups recycle composition structure, how defaults groups isolate default parameter calculations, and how all seven funnel into a single core function with just three GroupKind values.

The fundamental problem: Why multiple group types?

Consider a composable that mixes several Kotlin constructs together:

```kotlin
@Composable
fun UserCard(user: User, showBio: Boolean) {
    key(user.id) {
        Text(user.name)
        if (showBio) {
            val bio = remember { loadBio(user.id) }
            Text(bio)
        }
    }
}
```

Each construct here needs different treatment from the runtime. The if branch might disappear entirely when showBio becomes false, so the runtime needs to delete everything inside it. The key(user.id) block might move to a different position in a list, so the runtime needs to find it and relocate it instead of destroying it. The UserCard function itself needs to restart independently when its parameters change. The Text calls need to emit actual nodes into the UI tree.

One group type cannot handle all these cases efficiently. A group that searches for moved children would waste time on a simple if/else branch that will never move. A group that immediately deletes mismatches would destroy state that could have been preserved through a reorder.
So the compiler classifies each construct into the group type that gives the runtime exactly the capabilities it needs, and nothing more.

The funnel: Seven entry points, three GroupKind values

All group types funnel into a single core mechanism. The GroupKind value class defines just three distinct representations:

```kotlin
@JvmInline
internal value class GroupKind private constructor(val value: Int) {
    inline val isNode get() = value != Group.value
    inline val isReusable get() = value != Node.value

    companion object {
        val Group = GroupKind(0)
        val Node = GroupKind(1)
        val ReusableNode = GroupKind(2)
    }
}
```

Three values, but seven group types. Five of those seven use GroupKind.Group. The behavioral difference between them is not in how the slot table stores them, but in the logic each start method runs before or after calling the core start function. Here is the routing:

Compose, Kotlin
Tuesday, March 17, 2026
How Compose Preview Works Under the Hood

Every Android developer using Compose has written @Preview above a composable and watched it appear in the Studio design panel. But what actually happens between that annotation and the rendered pixels? The answer involves annotation metadata, XML layout inflation, fake Android lifecycle objects, reflection-based composable invocation, and a JVM-based rendering engine, all collaborating to make a composable believe it is running inside a real Activity.

In this article, you'll explore the full pipeline that transforms a @Preview annotation into a rendered image, tracing the journey from the annotation definition itself, through ComposeViewAdapter (the FrameLayout that orchestrates the render), ComposableInvoker (which calls your composable via reflection while respecting the Compose compiler's ABI), Inspectable (which enables inspection mode and records composition data), and the ViewInfo tree that maps rendered pixels back to source code lines.

The fundamental problem: Rendering the uncallable

A @Composable function is not a regular function. The Compose compiler transforms every @Composable function to accept a Composer parameter and synthetic $changed and $default integers. Beyond the function signature, composables expect to run inside an environment that provides lifecycle owners, a ViewModelStore, a SavedStateRegistry, and other Android framework objects. These dependencies come for free inside a running Activity, but Studio needs to render your composable without a running emulator or device. The tooling must reconstruct enough of the Android runtime for the composable to believe it is inside a real Activity, call the composable through reflection while matching the compiler's transformed signature exactly, and then extract the rendered layout information so Studio can map pixels to source code. This is the challenge the ui-tooling library solves.

The @Preview annotation: Metadata, not behavior

The @Preview annotation itself does nothing at runtime.
It is purely metadata that Studio reads to configure the rendering environment. Looking at the annotation definition:

```kotlin
@MustBeDocumented
@Retention(AnnotationRetention.BINARY)
@Target(AnnotationTarget.ANNOTATION_CLASS, AnnotationTarget.FUNCTION)
@Repeatable
annotation class Preview(
    val name: String = "",
    val group: String = "",
    @IntRange(from = 1) val apiLevel: Int = -1,
    val widthDp: Int = -1,
    val heightDp: Int = -1,
    val locale: String = "",
    @FloatRange(from = 0.01) val fontScale: Float = 1f,
    val showSystemUi: Boolean = false,
    val showBackground: Boolean = false,
    val backgroundColor: Long = 0,
    @AndroidUiMode val uiMode: Int = 0,
    @Device val device: String = Devices.DEFAULT,
    @Wallpaper val wallpaper: Int = Wallpapers.NONE,
)
```

Three meta-annotations define how this annotation behaves:

Compose, Android, Kotlin
Sunday, March 15, 2026
Five Algorithms and Data Structures Hidden Inside Jetpack Compose

Jetpack Compose is a UI toolkit on the surface, but its internals draw from decades of computer science research. The runtime uses a data structure borrowed from text editors to store composition state. The modifier system applies an algorithm from version control to diff node chains. The state management layer implements a concurrency model from database engines. These are not theoretical exercises. They are practical solutions to problems that Compose solves every frame.

In this article, you'll explore five algorithms and data structures embedded in Compose's internals: the gap buffer that powers the slot table, Myers' diff algorithm applied to modifier chains, snapshot isolation borrowed from database MVCC, bit packing used to compress flags and parameters, and positional memoization that makes remember work without explicit keys.

Gap Buffer: The text editor trick inside the slot table

When you type in a text editor, characters are inserted at the cursor position. The naive approach, shifting every subsequent character one position to the right, costs O(n) per keystroke. Text editors solved this decades ago with the gap buffer: an array that maintains a block of unused space (the "gap") at the cursor position. Inserting a character fills one slot of the gap at O(1) cost. Moving the cursor shifts the gap to a new location, but sequential edits at nearby positions are fast because the gap is already there.

Compose's slot table faces the same problem. During composition, the runtime inserts groups and their associated data slots as composable functions execute. The SlotWriter maintains two parallel gap buffers: one for groups (structural metadata) and one for slots (stored values like remembered state). Each buffer tracks a gapStart position and a gapLen count. When the writer needs to insert at a position away from the current gap, it moves the gap there first.
The operation shifts array elements to reposition the empty space (simplified):

```kotlin
private fun moveGroupGapTo(index: Int) {
    if (groupGapStart == index) return
```
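To make the gap mechanics concrete, here is a complete toy gap buffer over characters. It is illustrative only: Compose's SlotWriter applies the same idea to its group and slot arrays rather than to text.

```kotlin
// A toy gap buffer: content lives on both sides of an empty region (the gap),
// inserts at the gap are O(1), and moving the gap copies the elements between
// the old and new positions.
class GapBuffer(capacity: Int = 16) {
    private var buf = CharArray(capacity)
    private var gapStart = 0
    private var gapLen = capacity

    private fun moveGapTo(index: Int) {
        if (gapStart == index) return
        if (index < gapStart) {
            // Shift [index, gapStart) right, to just past the gap.
            buf.copyInto(buf, index + gapLen, index, gapStart)
        } else {
            // Shift [gapStart + gapLen, index + gapLen) left, to the gap start.
            buf.copyInto(buf, gapStart, gapStart + gapLen, index + gapLen)
        }
        gapStart = index
    }

    fun insert(index: Int, c: Char) {
        if (gapLen == 0) grow()
        moveGapTo(index)
        buf[gapStart++] = c
        gapLen--
    }

    private fun grow() {
        val bigger = CharArray(buf.size * 2)
        buf.copyInto(bigger, 0, 0, gapStart) // prefix before the gap
        val tail = buf.size - gapStart - gapLen
        buf.copyInto(bigger, bigger.size - tail, gapStart + gapLen, buf.size) // suffix
        gapLen += bigger.size - buf.size
        buf = bigger
    }

    override fun toString(): String =
        String(buf, 0, gapStart) + String(buf, gapStart + gapLen, buf.size - gapStart - gapLen)
}
```

Sequential inserts never move the gap; a jump back to index 0 triggers exactly the element copying the article describes.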

Compose, Android, Kotlin
Wednesday, March 11, 2026
How Compose Remembers: The Positional Memoization Behind remember and State

Every Compose developer has written remember { mutableStateOf(0) }. The value survives recomposition without any explicit storage reference. No ViewModel, no map, no key. Compose knows where the value belongs based on where the remember call appears in the source code. This mechanism is called positional memoization: values are identified not by name, but by their position in the execution trace of the composition.

In this article, you'll dive deep into the positional memoization system that powers remember, exploring how the compiler transforms remember calls into Composer.cache invocations, how cache reads from and writes to the slot table using a sequential cursor, how the changed function advances through stored keys to detect invalidation, why or is used instead of || when combining key checks, how RememberObserver values receive lifecycle callbacks, and how the skipping property determines whether the runtime re-executes a group or reuses stored data.

The fundamental problem: State without storage references

Consider a simple counter composable:

```kotlin
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Count: $count")
    }
}
```

Where does count live? There's no field in a class, no entry in a map, no unique identifier passed to remember. If you call Counter from two different places, each call gets its own independent count. The runtime distinguishes them purely by position: the first Counter call occupies one position in the composition tree, the second call occupies another.

Think of the slot table as a filing cabinet with numbered drawers. Each time composition runs, the runtime opens drawers in the same order: drawer 0, drawer 1, drawer 2, and so on. As long as your composable functions execute in the same order, each remember call opens the same drawer it opened last time and finds the same value waiting inside.

The remember API: Surface and overloads

The simplest remember overload takes no keys.
It calls currentComposer.cache(false, calculation), passing false to indicate the cached value is never invalidated by key changes:

```kotlin
@Composable
inline fun <T> remember(
    crossinline calculation: @DisallowComposableCalls () -> T
): T = currentComposer.cache(false, calculation)
```

The single-key variant passes the result of currentComposer.changed(key1) as the invalid flag. If the key changed since the last composition, the cached value is recalculated:

```kotlin
@Composable
inline fun <T> remember(
    key1: Any?,
    crossinline calculation: @DisallowComposableCalls () -> T
): T {
    return currentComposer.cache(currentComposer.changed(key1), calculation)
}
```

The two-key variant combines checks with or:
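Stepping back from the overloads, the cursor-and-slots mechanism itself can be modeled with a small toy. This is conceptual only; the real Composer and slot table are far richer, and the names here are invented.

```kotlin
// Toy positional memoization: a flat slot list plus a cursor that is rewound
// before every "recomposition". Each cache() call claims the next slot, so
// values are identified purely by call position.
class ToyComposer {
    private val slots = mutableListOf<Any?>()
    private var cursor = 0
    var calculations = 0
        private set

    fun startComposition() { cursor = 0 }

    // Mirrors the cache(invalid, calculation) shape: recompute only when the
    // slot is empty (first composition) or explicitly invalidated.
    @Suppress("UNCHECKED_CAST")
    fun <T> cache(invalid: Boolean, calculation: () -> T): T {
        val empty = cursor >= slots.size
        if (empty) slots.add(null)
        if (empty || invalid) {
            calculations++
            slots[cursor] = calculation()
        }
        return slots[cursor++] as T
    }
}
```

Running the same sequence of cache() calls twice recomputes nothing on the second pass, which is the whole point: identical execution order means identical slot positions.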

Compose, Kotlin
Sunday, March 8, 2026
From Gap Buffer to Linked List: How Compose Rewrote Its SlotTable for Faster Recomposition

Jetpack Compose stores your entire composition tree in a data structure called the SlotTable. Every composable call, every remembered value, every key is recorded as groups and slots in this table. For years, the SlotTable used a gap buffer, the same data structure that powers text editors. It worked well for sequential operations, but as applications grew more dynamic with lists, animations, and conditional content, one limitation became painful: moving or reordering groups required copying large portions of memory. The Compose team rewrote the SlotTable as a linked list, and operations like list reordering now recompose over twice as fast.

In this article, you'll dive deep into this architectural shift, exploring how the gap buffer stores groups in contiguous arrays, how the link buffer replaces positions with pointer-based navigation, how the SlotTableEditor achieves O(1) moves and deletes, how the free list and slot buffering manage memory, and how GroupHandle enables lazy position resolution. This isn't a guide on using Compose's APIs. It's an exploration of the data structure rewrite that makes recomposition fundamentally faster.

The gap buffer: From text editors to composition trees

A gap buffer is a data structure originally designed for text editing. Consider the problem a text editor faces: a document is a sequence of characters, and the user can insert or delete at any position. Storing characters in a flat array means inserting in the middle requires shifting every character after the insertion point. For a 100,000-character document, inserting near the beginning means moving nearly 100,000 characters. The gap buffer solves this by keeping an empty region (the "gap") right at the cursor position. Inserting at the cursor fills in the gap without shifting anything. Deleting at the cursor expands the gap. Most text editing is sequential: you type at one spot, and the gap stays right where you need it.
Editors like Emacs have used gap buffers for decades because of this property. The trade-off appears when you jump to a distant position. The gap must slide to the new cursor, copying every element between the old and new positions. As long as edits cluster near one location, the gap rarely moves and performance stays excellent. This is why gap buffers work well beyond text editors too: any workload where modifications happen sequentially in a large, ordered data set benefits from the same principle. The data stays in a flat, contiguous array, which is cache friendly, and insertions at the working position are O(1).

How Compose applied the gap buffer

The original Compose SlotTable stores composition data in two flat arrays:

```kotlin
internal class SlotTable : SlotStorage, CompositionData {
    var groups = IntArray(0)
    var groupsSize = 0
    var slots = Array<Any?>(0) { null }
    var slotsSize = 0
}
```

The groups array stores group metadata as inline structs of 5 integers each (GroupFieldsSize = 5). Each group contains a key, group size, node count, parent anchor, and a data anchor pointing into the slots array. The slots array holds the actual remembered values, composable node references, and other data associated with each group.

The groups are ordered linearly: a parent's group fields are followed immediately by all its children's fields, forming a depth-first layout. This makes linear scanning fast because you can read the tree from start to finish without jumping around. But it also means that a group's identity is tied to its position in the array. If you want to move a group, you have to physically relocate its data.

The gap buffer maintains its "gap" in each array. Insertions at the gap are O(1). But insertions elsewhere require moving the gap first, which copies every element between the old gap position and the new one. For a table with 10,000 groups (50,000 integers), moving the gap from the end to the beginning copies all 50,000 integers. Deletions work similarly.
When you remove a group, its space becomes part of the gap. But if the gap isn't adjacent to the deleted group, it must be moved there first. This worked well for initial composition, which proceeds sequentially through the tree. But recomposition can touch any part of the tree in any order, and list reordering moves groups across large distances.

The fundamental problem: Array copies that scale with composition size

Imagine a Column rendering 1,000 items, and the data source reorders item 999 to position 0. The runtime must move that item's group and all its slots from the end of the SlotTable to the beginning. In the gap buffer, this requires a 9-step process:

Compose, Kotlin
Sunday, March 1, 2026
The Machinery Behind the Magic: How Kotlin Turns suspend into State Machines

Kotlin Coroutines have become the standard for asynchronous programming on the JVM, offering developers a way to write sequential, readable code that can pause and resume without blocking threads. Most developers interact with coroutines through familiar APIs like launch, async, and Flow, treating suspend as a language keyword that "just works." But coroutines are not simply a library feature layered on top of the language. They are a compiler-level solution, built through the Kotlin compiler's IR lowering pipeline and bytecode generation, that transforms your sequential code into resumable state machines. The suspend keyword triggers a series of compiler transformations that rewrite your function's structure, signature, and control flow before it ever reaches the JVM.

In this article, you'll dive deep into the Kotlin compiler's coroutine machinery, exploring the six-stage transformation pipeline that converts a suspend function into a state machine. You'll trace through how the compiler injects hidden continuation parameters through CPS transformation, how it generates continuation classes with the clever sign-bit trick for distinguishing fresh calls from resumptions, how the bytecode-level transformer collects suspension points and inserts a TABLESWITCH dispatch, how local variables are "spilled" into continuation fields to survive across suspension, and how tail call optimization lets the compiler skip the entire state machine when it can prove every suspension point is a tail call.

The fundamental problem: How do you make a function resumable?

Consider this suspend function:

```kotlin
suspend fun fetchUserData(): UserData {
    val user = fetchUser()
    val profile = fetchProfile(user.id)
    return UserData(user, profile)
}
```

This looks like ordinary sequential code, but both fetchUser and fetchProfile might perform network requests that take hundreds of milliseconds.
The function must be able to pause at each call, release the thread entirely, and later resume execution at the exact point where it left off, with all local variables intact. The JVM provides no native mechanism for this. A JVM method is a stack frame, and when a method returns, its stack frame is gone. There is no way to "freeze" a stack frame, release the thread, and later restore it. The function must return to release the thread, but returning destroys the local state.

The Kotlin compiler solves this by transforming each suspend function into a state machine. The function's body is split into segments between suspension points. Local variables are saved into fields of a continuation object before each suspension, and restored after resumption. A label field tracks which segment to execute next, and a TABLESWITCH at the function entry dispatches to the correct segment. The developer writes linear code; the compiler generates the machinery to break it apart and reassemble it on demand.

The six-stage pipeline: From suspend to state machine

The transformation happens across six distinct phases in the JVM backend. Understanding the full pipeline is essential to understanding why each phase exists and what it contributes.

1. SuspendLambdaLowering: Converts suspend lambda expressions into anonymous continuation classes
2. TailCallOptimizationLowering: Identifies suspend calls in tail position and marks them with IrReturn wrappers
3. AddContinuationLowering: The central IR lowering; generates continuation classes, injects $completion parameters, creates static suspend implementations
4. Code generation: Lowers IR to JVM bytecode, placing BeforeSuspendMarker/AfterSuspendMarker instructions around each suspension point
5. CoroutineTransformerMethodVisitor: The bytecode-level state machine engine; inserts the TABLESWITCH, spills variables, generates resume paths
6. Tail call optimization check: If all suspension points are tail calls, the state machine is skipped entirely

Let's trace through each phase.

CPS transformation: The invisible parameter

The foundation of coroutine compilation is Continuation Passing Style (CPS) transformation. Every suspend function, when compiled, receives a hidden additional parameter: the continuation. This continuation represents "what happens next" after the function completes or suspends. When you write:

```kotlin
suspend fun fetchUser(): User {
    // ...
}
```

The compiler transforms the signature to:

```kotlin
fun fetchUser($completion: Continuation<User>): Any?
```

Two changes happen. First, a $completion parameter of type Continuation is appended. Second, the return type becomes Any?, because the function can now return either the actual result or the special sentinel COROUTINE_SUSPENDED, indicating that the function has paused and will deliver its result later through the continuation. Looking at how AddContinuationLowering performs this injection:

```kotlin
val continuationParameter = buildValueParameter(function) {
    kind = IrParameterKind.Regular
    name = Name.identifier(SUSPEND_FUNCTION_COMPLETION_PARAMETER_NAME) // "$completion"
    type = continuationType(context).substitute(substitutionMap) // Continuation<RetType>
    origin = JvmLoweredDeclarationOrigin.CONTINUATION_CLASS
}
```

The parameter is inserted before any default argument masks but after all regular parameters. This is invisible in source code but always present in the bytecode. Every call site of a suspend function is also rewritten to pass the current continuation as this extra argument.

The continuation class: Where state lives

The central artifact of coroutine compilation is the continuation class. For each named suspend function, the compiler generates an inner class that extends ContinuationImpl and holds all the state needed to suspend and resume.
Looking at generateContinuationClassForNamedFunction in AddContinuationLowering.kt:

```kotlin
context.irFactory.buildClass {
    name = Name.special("<Continuation>")
    origin = JvmLoweredDeclarationOrigin.CONTINUATION_CLASS
}.apply {
    superTypes += context.symbols.continuationImplClass.owner.defaultType
```
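To make the transformation concrete, here is a hand-written sketch of the shape of code the compiler generates for fetchUserData. Everything here is a simplified stand-in, not the real kotlin.coroutines machinery: Cont replaces Continuation, the fetchUser/fetchProfile stubs complete synchronously, and a while/when loop stands in for the TABLESWITCH. The essential structure matches the article: a continuation class that spills locals, a label that selects the next segment, and a COROUTINE_SUSPENDED check after each suspension point.

```kotlin
// Stand-in for kotlin.coroutines.Continuation (assumed simplification).
interface Cont { fun resumeWith(result: Any?) }

val COROUTINE_SUSPENDED = Any()

// The generated continuation class: holds the label, the last result,
// and the spilled local `user` so state survives across suspension.
class FetchUserDataContinuation(val completion: Cont) : Cont {
    var label = 0
    var result: Any? = null
    var user: String? = null // spilled local variable

    override fun resumeWith(result: Any?) {
        this.result = result
        fetchUserData(this) // re-enter the function with the saved state
    }
}

// Synchronous stubs; a real implementation could return COROUTINE_SUSPENDED
// and call cont.resumeWith(...) later from another thread.
fun fetchUser(cont: Cont): Any? = "user-1"
fun fetchProfile(id: String, cont: Cont): Any? = "profile-of-$id"

fun fetchUserData(completion: Cont): Any? {
    // Fresh call creates a continuation; a resumption reuses the existing one.
    val cont = completion as? FetchUserDataContinuation
        ?: FetchUserDataContinuation(completion)
    while (true) {
        when (cont.label) { // stands in for the TABLESWITCH dispatch
            0 -> {
                cont.label = 1
                val r = fetchUser(cont)
                if (r === COROUTINE_SUSPENDED) return COROUTINE_SUSPENDED
                cont.result = r
            }
            1 -> {
                cont.user = cont.result as String // restore spilled state
                cont.label = 2
                val r = fetchProfile(cont.user!!, cont)
                if (r === COROUTINE_SUSPENDED) return COROUTINE_SUSPENDED
                cont.result = r
            }
            2 -> return "UserData(${cont.user}, ${cont.result})"
        }
    }
}
```

Because the stubs never suspend, calling fetchUserData runs all three segments in one pass; with a truly asynchronous stub, the function would return COROUTINE_SUSPENDED and the loop would be re-entered via resumeWith.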

Coroutines · Kotlin
Tuesday, February 24, 2026
What Is a Snapshot? Understanding Compose's Isolated State World

Jetpack Compose manages UI state through a system called Snapshots, a concept borrowed from database theory that enables isolated, concurrent access to shared mutable state. When you write var count by mutableStateOf(0), the runtime doesn't just store a value in a field. It creates a snapshot-aware state object that participates in an isolation system where multiple threads can read and write state without interfering with each other. While most developers interact with snapshots implicitly through mutableStateOf and recomposition, the deeper question remains: what exactly is a snapshot, how does it provide isolation, and what happens when you "enter" one?

In this article, you'll dive deep into the Snapshot abstraction itself, exploring how the class hierarchy provides different levels of isolation, how the thread-local mechanism makes snapshots invisible to the developer, how the GlobalSnapshot serves as the always-present default, how advanceGlobalSnapshot makes changes visible, how nested snapshots enable hierarchical isolation, and how TransparentObserverSnapshot achieves zero-cost observation. This isn't a guide on using mutableStateOf or snapshotFlow. It's an exploration of the isolation architecture that makes Compose's reactive state management possible.

The fundamental problem: Concurrent access to shared mutable state

Consider a typical Compose application:

```kotlin
@Composable
fun UserProfile() {
    var name by remember { mutableStateOf("") }
    var email by remember { mutableStateOf("") }

    Column {
        Text("Name: $name")
        Text("Email: $email")
        Button(onClick = {
            name = "Jaewoong"
            email = "jaewoong@example.com"
        }) {
            Text("Load User")
        }
    }
}
```

This looks simple, but several problems lurk beneath the surface:

1. Torn reads: If composition reads name after the button click updates it but before email is updated, the UI shows inconsistent state.
2. Concurrent composition: Compose may run composition on a background thread while the main thread is modifying state.
3. Observation: The system must know that UserProfile read name and email so it can schedule recomposition when they change.
4. Batching: Multiple state changes from a single gesture should result in one recomposition, not one per change.

The naive solution of locking every read and write would kill performance. Compose solves all four problems with a single abstraction: snapshots. A snapshot is an isolated view of mutable state at a specific point in time. Reads within a snapshot always see a consistent view, writes are invisible to other snapshots until explicitly applied, and observers track exactly which state each composable depends on.

The Snapshot abstraction: An isolated view of state

At its core, a Snapshot is a sealed class that encapsulates a unique ID and a set of snapshot IDs that should be considered invisible:

```kotlin
// simplified
public sealed class Snapshot(
    snapshotId: SnapshotId,
    internal open var invalid: SnapshotIdSet,
) {
    public open var snapshotId: SnapshotId = snapshotId

    public abstract val root: Snapshot
    public abstract val readOnly: Boolean

    internal abstract val readObserver: ((Any) -> Unit)?
    internal abstract val writeObserver: ((Any) -> Unit)?
}
```

Three properties define how a snapshot sees the world:
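The article's "enter" question can be illustrated with a tiny thread-local sketch. MiniSnapshot and its enter function are hypothetical simplifications, not the real androidx.compose.runtime.snapshots types, but they show the mechanism the article describes: entering a snapshot installs it in a ThreadLocal, runs the block, and restores the previous snapshot, so any read inside the block can resolve against the current snapshot without explicit parameter passing.

```kotlin
// Hypothetical stand-in for Compose's thread-local current-snapshot mechanism.
class MiniSnapshot(val id: Int) {
    fun <T> enter(block: () -> T): T {
        val previous = current.get()
        current.set(this) // make this snapshot "current" for the calling thread
        try {
            return block()
        } finally {
            current.set(previous) // restore on exit, even if block throws
        }
    }

    companion object {
        // One current snapshot per thread, invisible to calling code.
        val current = ThreadLocal<MiniSnapshot?>()
    }
}
```

Nesting falls out for free: an inner enter shadows the outer snapshot and restores it afterward, which mirrors how nested snapshots layer on top of the global one.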

Compose · Kotlin
Sunday, February 22, 2026
The Snapshot System: How Compose Tracks and Batches State Changes

Jetpack Compose revolutionized Android UI development with its declarative approach, but what makes it truly powerful is the sophisticated machinery underneath. At the heart of Compose's reactivity lies the Snapshot System, a multi-version concurrency control (MVCC) implementation that enables isolated state changes, automatic recomposition, and conflict-free concurrent updates. When you write var count by mutableStateOf(0), you're interacting with one of the most elegant concurrent systems in modern Android development.

In this article, you'll dive deep into the internal mechanisms of the Snapshot System, exploring how snapshots provide isolation through MVCC, how StateRecord chains track multiple versions of state, how the system decides which version to read, how writes create new StateRecords without blocking readers, how state observations trigger recomposition, and how the apply mechanism detects and resolves conflicts. This isn't a guide on using mutableStateOf; it's an exploration of the compiler and runtime machinery that makes reactive state management possible.

The fundamental problem: How do you track state changes safely?

Consider this simple Compose code:

```kotlin
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Count: $count")
    }
}
```

This looks deceptively simple, but several complex problems need solving:

1. Isolation: When count changes, the new value must be visible to recomposition but not affect in-progress compositions.
2. Observation: The system must know that this composable read count so it can recompose when count changes.
3. Concurrency: Multiple threads might read and write state simultaneously.
4. Memory: Old state versions must eventually be garbage collected.

The naive approach would use locks everywhere, but that would kill performance. Compose solves this elegantly with snapshots: isolated views of mutable state that enable lock-free reads and conflict detection.
Understanding the core abstraction: What makes Snapshot special

At its heart, a Snapshot is an isolated view of mutable state at a specific point in time. The Snapshot class is a sealed class that encapsulates a unique snapshot ID and tracks which concurrent snapshots should be considered invalid for isolation purposes:

```kotlin
public sealed class Snapshot(
    snapshotId: SnapshotId,
    internal open var invalid: SnapshotIdSet,
) {
    public open var snapshotId: SnapshotId = snapshotId

    public abstract val root: Snapshot
    public abstract val readOnly: Boolean

    internal abstract val readObserver: ((Any) -> Unit)?
    internal abstract val writeObserver: ((Any) -> Unit)?
}
```

Three critical properties define snapshot isolation:

Snapshot IDs are monotonically increasing

Every snapshot gets a unique ID from nextSnapshotId, an atomically incremented counter. This creates a total ordering of snapshots. When you create a snapshot, it gets the next available ID:

```kotlin
val nextSnapshotId: SnapshotId
    get() = sync { currentGlobalSnapshot.get().snapshotId + 1 }
```

This monotonic ID is the foundation of version selection: newer snapshots can see changes from older snapshots, but not vice versa.

Invalid sets track concurrent snapshots

Each snapshot maintains a SnapshotIdSet called invalid that contains the IDs of snapshots that were active but not yet applied when this snapshot was created. This is crucial for isolation:

```kotlin
// From the test suite demonstrating isolation
var state by mutableStateOf("0")
val snapshot = takeSnapshot()
state = "1"
assertEquals("1", state)                    // Global sees "1"
assertEquals("0", snapshot.enter { state }) // Snapshot still sees "0"
```

The snapshot can't see changes made by concurrent snapshots because their IDs are in its invalid set. This is how MVCC provides snapshot isolation.

Observers enable reactive behavior

The readObserver is called whenever state is read, allowing the system to track dependencies. The writeObserver is called on writes, enabling batched notifications.
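The version-selection rule can be sketched in a few lines of plain Kotlin. Record and VersionedState are hypothetical stand-ins, not the real StateRecord chain (which also consults the invalid set of unapplied concurrent snapshots), but the core MVCC idea is the same: every write appends a record tagged with the writer's snapshot ID, and a reader picks the newest record whose ID is no greater than its own and not in its invalid set.

```kotlin
// Hypothetical MVCC sketch, not the real Compose StateRecord implementation.
class Record(val snapshotId: Int, val value: String)

class VersionedState(initial: String) {
    private val records = mutableListOf(Record(0, initial))

    // A write never mutates an existing record; it appends a new version.
    fun write(snapshotId: Int, value: String) {
        records += Record(snapshotId, value)
    }

    // A reader sees the newest record it is allowed to see.
    fun read(snapshotId: Int, invalid: Set<Int>): String =
        records
            .filter { it.snapshotId <= snapshotId && it.snapshotId !in invalid }
            .maxByOrNull { it.snapshotId }!!
            .value
}
```

Replaying the test-suite example above with this sketch: a snapshot taken at ID 1 keeps reading "0" even after the global state (writing at a newer ID) has moved on to "1", because the newer record's ID is greater than the snapshot's own.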
These observers are the bridge between snapshots and recomposition.

Global vs Mutable: Two kinds of snapshots

Compose uses two snapshot types for different purposes.

GlobalSnapshot: The current state of the world

There's a single GlobalSnapshot that represents the "current" global state:

```kotlin
internal class GlobalSnapshot(snapshotId: SnapshotId, invalid: SnapshotIdSet) :
    MutableSnapshot(
        snapshotId,
        invalid,
        null,
        { state -> sync { globalWriteObservers.fastForEach { it(state) } } },
    )
```

The global snapshot is special:

- It's the default snapshot when you're not inside any other snapshot
- Writes to the global snapshot are immediately visible
- It has a write observer that notifies all registered globalWriteObservers
- When mutable snapshots are applied, they merge into the global snapshot

Compose · Kotlin
Sunday, February 22, 2026
Compose Stability Analyzer 0.7.0: Recomposition Cascade and Live Heatmap

Jetpack Compose's stability system determines whether a composable function can be skipped during recomposition. When all parameters are stable, Compose can compare them and skip the function entirely if nothing changed. When even one parameter is unstable, the composable must re-execute every time its parent recomposes. Understanding which composables are stable and which are not is the first step toward optimizing Compose performance, but it's not the whole picture.

Compose Stability Analyzer (https://github.com/skydoves/compose-stability-analyzer) has been providing real-time stability analysis directly in Android Studio through gutter icons, hover tooltips, inline hints, and code inspections. These features answer the question "is this composable stable?" at a glance.
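As a minimal illustration of why class shape matters (these two classes are examples of mine, not analyzer output): skipping relies on comparing the previous and current parameter values, so a data class of immutable vals supports meaningful equality, while a class with a mutable var is inferred unstable and, as a plain class, only has identity equality anyway.

```kotlin
// Inferred stable: all-val properties of primitive/stable types,
// and data-class equals() gives value-based comparison for skipping.
data class StableUser(val name: String, val age: Int)

// Inferred unstable: the mutable `var` means the value can change
// without Compose being notified, so the composable can't be skipped.
class UnstableUser(var name: String)
```

Two StableUser instances with the same fields compare equal, which is exactly the comparison Compose performs before skipping; two structurally identical UnstableUser instances do not.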

Compose · Kotlin
Friday, February 13, 2026
WorkManager Internals: How Guaranteed Background Work Actually Works, and Why Service Can't

Android's WorkManager has become the recommended solution for persistent, deferrable background work. Unlike transient background operations that live and die with your app process, WorkManager guarantees that enqueued work eventually executes, even if the user force-stops the app, the device reboots, or constraints aren't met yet. While the API appears simple on the surface, the internal machinery reveals sophisticated design decisions around work persistence, dual-scheduler coordination, constraint tracking, process resilience, and state management that span a Room database, multiple scheduler backends, and a carefully orchestrated execution pipeline.

In this article, you'll dive deep into how Jetpack WorkManager works internally, exploring how the singleton is initialized and bootstrapped through AndroidX Startup, how WorkSpec entities persist work metadata in a Room database, how the dual-scheduler system coordinates between GreedyScheduler and SystemJobScheduler, how Processor and WorkerWrapper orchestrate the actual execution of work, how ConstraintTracker monitors system state for constraint satisfaction, how ForceStopRunnable detects app force stops and reschedules work, and how work chaining creates dependency graphs through the Dependency table.

The fundamental problem: Reliable background execution

Background execution on Android is fundamentally unreliable. The system aggressively kills processes to reclaim memory, Doze mode restricts background activity, and app standby buckets throttle work for rarely-used apps. A naive approach to background work:

```kotlin
class SyncActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        Thread {
            // Sync data with server
            api.syncAllData()
        }.start()
    }
}
```

This fails in multiple ways. The thread dies when the process is killed. There's no retry mechanism if the network fails. The work doesn't survive device reboots.
There's no way to specify constraints like "only on Wi-Fi" or "only when charging."

You might try using a Service:

```kotlin
class SyncService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        Thread { api.syncAllData() }.start()
        return START_REDELIVER_INTENT
    }
}
```

This is better: START_REDELIVER_INTENT ensures the Intent is redelivered if the process is killed. But you still have no constraint support, no work chaining, no persistence across reboots, and no observability of work status. You'd need to build all of that yourself. WorkManager solves this by providing a complete infrastructure for persistent, constraint-aware, observable, chainable background work with guaranteed execution.

Initialization: The bootstrap sequence

WorkManager initializes itself automatically before your Application.onCreate runs. The entry point is WorkManagerInitializer, which implements AndroidX Startup's Initializer interface:

```java
public final class WorkManagerInitializer implements Initializer<WorkManager> {
    @Override
    public WorkManager create(Context context) {
        Logger.get().debug(TAG, "Initializing WorkManager with default configuration.");
        WorkManager.initialize(context, new Configuration.Builder().build());
        return WorkManager.getInstance(context);
    }

    @Override
    public List<Class<? extends Initializer<?>>> dependencies() {
        return Collections.emptyList();
    }
}
```

AndroidX Startup uses a ContentProvider to trigger initialization before Application.onCreate. This is critical because it ensures WorkManager is ready before any application code runs. The dependencies method returns an empty list, meaning WorkManager has no initialization dependencies on other Startup initializers.
The singleton with dual-lock pattern

WorkManager.initialize delegates to WorkManagerImpl.initialize, which uses a synchronized dual-instance pattern:

```java
public static void initialize(Context context, Configuration configuration) {
    synchronized (sLock) {
        if (sDelegatedInstance != null && sDefaultInstance != null) {
            throw new IllegalStateException("WorkManager is already initialized.");
        }
        if (sDelegatedInstance == null) {
            context = context.getApplicationContext();
            if (sDefaultInstance == null) {
                sDefaultInstance = createWorkManager(context, configuration);
            }
            sDelegatedInstance = sDefaultInstance;
        }
    }
}
```

Two static fields serve different purposes. sDefaultInstance holds the real singleton. sDelegatedInstance enables testing by allowing test code to inject a mock via setDelegate. The sLock object provides thread-safe access. The explicit check for double initialization throws an IllegalStateException with a helpful message guiding developers to disable WorkManagerInitializer in the manifest if they want custom initialization.

On-demand initialization via Configuration.Provider

When getInstance(Context) is called and no instance exists, WorkManager falls back to on-demand initialization:

```java
public static WorkManagerImpl getInstance(Context context) {
    synchronized (sLock) {
        WorkManagerImpl instance = getInstance();
        if (instance == null) {
            Context appContext = context.getApplicationContext();
            if (appContext instanceof Configuration.Provider) {
                initialize(appContext,
                        ((Configuration.Provider) appContext).getWorkManagerConfiguration());
                instance = getInstance(appContext);
            } else {
                throw new IllegalStateException(
                        "WorkManager is not initialized properly.");
            }
        }
        return instance;
    }
}
```

If your Application class implements Configuration.Provider, WorkManager lazily initializes with that configuration. This pattern allows developers to disable automatic initialization and provide custom configuration without calling initialize explicitly in Application.onCreate.
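The delegated-singleton shape is easy to reproduce in isolation. The sketch below uses plain Strings in place of WorkManagerImpl and invented names (WorkManagerHolder), so it is an illustration of the pattern rather than the real implementation, but it preserves the three behaviors described above: a lock guarding both fields, a default instance created once, and a delegate slot that tests can override.

```kotlin
// Pattern sketch only: String stands in for the real WorkManagerImpl.
object WorkManagerHolder {
    private val lock = Any()
    private var defaultInstance: String? = null
    private var delegatedInstance: String? = null

    fun initialize(config: String) {
        synchronized(lock) {
            // Mirrors the double-initialization guard in the Java code above.
            check(!(delegatedInstance != null && defaultInstance != null)) {
                "already initialized"
            }
            if (delegatedInstance == null) {
                if (defaultInstance == null) defaultInstance = "WorkManager($config)"
                delegatedInstance = defaultInstance
            }
        }
    }

    // Tests can swap in a fake without touching the default instance.
    fun setDelegate(delegate: String?) = synchronized(lock) { delegatedInstance = delegate }

    fun getInstance(): String? = synchronized(lock) { delegatedInstance ?: defaultInstance }
}
```

The key design point is that production code only ever reads through the delegate slot, so injecting a test double is a one-line operation that never mutates the real singleton.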
The createWorkManager factory

The actual WorkManagerImpl construction wires together all the internal components:

```kotlin
fun WorkManagerImpl(
    context: Context,
    configuration: Configuration,
    workTaskExecutor: TaskExecutor = WorkManagerTaskExecutor(configuration.taskExecutor),
    workDatabase: WorkDatabase = WorkDatabase.create(
        context.applicationContext,
        workTaskExecutor.serialTaskExecutor,
        configuration.clock,
        context.resources.getBoolean(R.bool.workmanager_test_configuration),
    ),
    trackers: Trackers = Trackers(context.applicationContext, workTaskExecutor),
    processor: Processor = Processor(
        context.applicationContext, configuration, workTaskExecutor, workDatabase,
    ),
    schedulersCreator: SchedulersCreator = ::createSchedulers,
): WorkManagerImpl {
    val schedulers = schedulersCreator(
        context, configuration, workTaskExecutor, workDatabase, trackers, processor,
    )
    return WorkManagerImpl(
        context.applicationContext,
        configuration,
        workTaskExecutor,
        workDatabase,
        schedulers,
        processor,
        trackers,
    )
}
```

Coroutines · Architecture · Kotlin
Thursday, February 12, 2026
DerivedState: Hash-Based Invalidation Without Tracking Dependencies

Compose's derivedStateOf provides a way to create computed state that only triggers recomposition when the computed result actually changes. When you write val fullName by remember { derivedStateOf { "${firstName.value} ${lastName.value}" } }, Compose tracks which state objects were read during calculation and intelligently determines when recalculation is necessary. While most developers know that derivedStateOf helps avoid unnecessary recompositions from intermediate state changes, the deeper question remains: how does Compose know when to recalculate without explicitly tracking dependencies, and what makes this different from a simple remember { computed value }?

In this article, you'll dive deep into the internal mechanisms of derivedStateOf, exploring how the Snapshot.observe mechanism captures dependencies during calculation, how the nesting level system distinguishes direct from indirect reads, how hash-based validation determines invalidation without value comparison, how the ResultRecord structure caches results across snapshots, and how equivalence policies enable allocation-free updates when values haven't changed. This isn't a guide on using derivedStateOf. It's an exploration of the runtime machinery that makes intelligent state derivation possible.

The fundamental problem: Computed values that recompose too often

Imagine a search screen with a filter:

```kotlin
@Composable
fun SearchScreen() {
    var searchQuery by remember { mutableStateOf("") }
    var selectedCategory by remember { mutableStateOf<Category?>(null) }
    var items by remember { mutableStateOf(listOf<Item>()) }

    val filteredItems = items.filter { item ->
        item.name.contains(searchQuery, ignoreCase = true) &&
            (selectedCategory == null || item.category == selectedCategory)
    }

    LazyColumn {
        items(filteredItems) { item -> ItemCard(item) }
    }
}
```

Every time any state changes, filteredItems is recalculated.
Worse, even if the filter produces the same result, the recomposition still happens because Compose sees a new list object. With 10,000 items, this becomes a performance problem.

The naive solution is memoization with remember:

```kotlin
val filteredItems = remember(searchQuery, selectedCategory, items) {
    items.filter { ... }
}
```

This helps, but you must manually specify all dependencies. Miss one, and you get stale results. Add an unnecessary one, and you get extra recalculations. derivedStateOf solves both problems:

```kotlin
val filteredItems by remember {
    derivedStateOf {
        items.filter { item ->
            item.name.contains(searchQuery, ignoreCase = true) &&
                (selectedCategory == null || item.category == selectedCategory)
        }
    }
}
```
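The hash-based validation idea can be sketched independently of Compose. DerivedValue below is hypothetical, not the real ResultRecord: the real runtime discovers dependencies implicitly via Snapshot.observe during calculation, whereas this sketch receives them explicitly as a map. The invalidation rule is the shared idea: cache the result together with a combined hash of every dependency, and recalculate only when that hash changes, never by comparing the computed value itself.

```kotlin
// Hypothetical hash-validated cache, not the real Compose ResultRecord.
class DerivedValue<T>(private val calculation: (Map<String, Any?>) -> T) {
    private var cachedHash: Int? = null
    private var cached: T? = null
    var recalculations = 0
        private set

    fun value(deps: Map<String, Any?>): T {
        // Combine the hashes of all dependency values into one fingerprint.
        var hash = 7
        for ((key, v) in deps) hash = 31 * hash + (key.hashCode() xor (v?.hashCode() ?: 0))
        if (hash != cachedHash) {
            cached = calculation(deps) // only recalculate on a hash mismatch
            cachedHash = hash
            recalculations++
        }
        @Suppress("UNCHECKED_CAST")
        return cached as T
    }
}
```

Reading the value twice with identical dependencies hits the cache; changing any dependency changes the fingerprint and forces exactly one recalculation.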

Compose · Kotlin
Tuesday, February 10, 2026
Compose Identity Mechanisms: How key() Transforms Into Movable Groups

Jetpack Compose manages UI state through a sophisticated identity system that determines when composables should be reused versus recreated. When you wrap content in key(userId) { UserCard(user) }, you're providing Compose with identity information that survives recomposition, reordering, and structural changes. While most developers understand that key helps preserve state when list items move, the deeper question remains: how does Compose actually track identity, and what happens at the compiler and runtime level when you use key?

In this article, you'll dive deep into Compose's identity mechanisms, exploring how the compiler transforms key calls into movable group instructions, how the runtime distinguishes between replaceable, movable, and restart groups, how the two-level identity system combines source location keys with object keys, how JoinedKey combines multiple keys with special enum handling, and how the slot table stores and retrieves identity information during recomposition. This isn't a guide on using key. It's an exploration of the compiler and runtime machinery that makes stable identity possible.

The fundamental problem: Positional identity breaks with structural changes

Consider a simple list that can be reordered:

```kotlin
@Composable
fun UserList(users: List<User>) {
    Column {
        for (user in users) {
            UserCard(user)
        }
    }
}

@Composable
fun UserCard(user: User) {
    var expanded by remember { mutableStateOf(false) }
    // ...
}
```

When users is [Alice, Bob, Charlie] and Alice's card is expanded, Compose remembers the expanded state. But what happens when the list becomes [Bob, Alice, Charlie]? Without explicit identity, Compose uses positional memoization: the first UserCard call maps to position 0, the second to position 1, and so on. When the list reorders, position 0 now contains Bob, but the expanded = true state from position 0 is still there. Bob's card incorrectly appears expanded.
The naive solution is recreating all state on every structural change, but this destroys the user experience. Scroll positions reset, animations restart, and text field contents vanish. Compose needs a way to track identity that survives positional changes. The key composable solves this by providing explicit identity:

```kotlin
for (user in users) {
    key(user.id) {
        UserCard(user)
    }
}
```
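The Alice/Bob failure mode can be modeled without any Compose machinery. The two functions below are illustrative stand-ins (not how the slot table actually stores state): one looks up "expanded" state by slot index, mimicking positional memoization, and the other looks it up by a stable id, mimicking what key(user.id) provides.

```kotlin
data class User(val id: String, val name: String)

// Positional identity: state follows the slot, not the item.
fun expandedByPosition(order: List<User>, expanded: Map<Int, Boolean>): Map<String, Boolean> =
    order.mapIndexed { index, user -> user.name to (expanded[index] ?: false) }.toMap()

// Keyed identity: state follows the item's stable id, as key(user.id) provides.
fun expandedById(order: List<User>, expanded: Map<String, Boolean>): Map<String, Boolean> =
    order.associate { it.name to (expanded[it.id] ?: false) }
```

After Alice (originally first) is expanded and the list reorders to put Bob first, the positional lookup attributes the expansion to Bob, while the keyed lookup keeps it attached to Alice.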

Compose · Kotlin
Thursday, February 5, 2026
Shared Internals: Kotlin's New Proposal for Cross-Module Visibility

Kotlin's internal visibility modifier provides a useful mechanism for hiding implementation details within a module while exposing a clean public API. But as codebases grow and libraries modularize, a tension emerges: the logical boundaries of your API don't always align with the compilation boundaries of your modules. Test modules need access to production internals. Library families like kotlinx.coroutines want to share implementation details across artifacts without exposing them to consumers. The current workaround, "friend modules," is an undocumented compiler feature that lacks language-level design. KEEP-0451 proposes a solution: the shared internal visibility modifier. This new visibility level sits between internal and public, allowing modules to explicitly declare which internals they share and with whom.

In this article, you'll explore the motivation behind this proposal, the design decisions that shaped it, how transitive sharing simplifies complex dependency graphs, and the technical challenges of implementing cross-module visibility on the JVM.

The fundamental problem: Module boundaries vs. logical boundaries

Consider a typical library structure:

```
kotlinx-coroutines/
├── kotlinx-coroutines-core/
├── kotlinx-coroutines-test/
├── kotlinx-coroutines-reactive/
└── kotlinx-coroutines-android/
```

These artifacts form a cohesive library family. Internally, they share implementation details: dispatcher internals, continuation machinery, and testing utilities. But from Kotlin's perspective, each artifact is a separate module. The internal modifier in kotlinx-coroutines-core is invisible to kotlinx-coroutines-test, even though both are maintained by the same team and shipped together.

The current workarounds are unsatisfying:

Option 1: Make everything public. This works, but pollutes the API surface. Consumers see implementation details they shouldn't use, and maintainers lose the ability to change internals without breaking compatibility.
Option 2: Use the undocumented friend modules feature. The Kotlin compiler supports a -Xfriend-paths flag that grants one module access to another's internals. But this is a compiler implementation detail, not a language feature. It has no syntax, no IDE support, and no guarantees of stability. Option 3: Merge modules. You could combine related modules into a single compilation unit, then split them for distribution. But this complicates build configurations and doesn't scale to complex dependency graphs. KEEP-0451 addresses this gap by elevating friend modules to a first-class language feature with explicit syntax and clear semantics. The shared internal modifier The proposal introduces a new visibility modifier: shared internal. Declarations marked with this modifier are visible to designated dependent modules, but invisible to the general public.
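To make the intent concrete, here is a sketch of how such a declaration might read. The exact grammar, and how friend modules are declared in the build, are assumptions on my part based on the modifier's name; consult KEEP-0451 for the authoritative design.

```kotlin
// Hypothetical syntax sketch, not the finalized KEEP-0451 grammar.
// In kotlinx-coroutines-core: visible to declared friend modules
// (e.g. kotlinx-coroutines-test), but not to library consumers.
shared internal fun schedulerInternals() { /* ... */ }

// Ordinary internal stays confined to this module alone.
internal fun coreOnlyHelper() { /* ... */ }
```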

Architecture · Kotlin
Saturday, January 31, 2026
ViewModel: How Configuration Change Survival Actually Works

Android's ViewModel is one of the most widely used architecture components, yet its core survival mechanism remains a mystery to most developers. You annotate a class, call viewModels() in your Activity, and your state magically survives screen rotation. But what actually happens behind the scenes? The answer involves a retained in-memory object that is never serialized, a simple HashMap keyed by strings, a carefully ordered resource cleanup sequence, and a factory system that separates creation from retrieval.

In this article, you'll dive deep into the internal machinery that makes ViewModel survive configuration changes, exploring how ComponentActivity retains the ViewModelStore through Android's NonConfigurationInstances mechanism, how ViewModelProvider coordinates thread-safe retrieval and creation through ViewModelProviderImpl, how ViewModelImpl manages resource lifecycle with a deliberate clearing order, how CreationExtras enables stateless factory injection, and how fragments piggyback on this entire system through FragmentManagerViewModel. This isn't a guide on using ViewModel. It's an exploration of the retention, creation, and destruction machinery that makes configuration change survival possible.

The fundamental problem: State that outlives Activity instances

Consider this common scenario:

```kotlin
class CounterActivity : ComponentActivity() {
    private var count = 0

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            Button(onClick = { count++ }) {
                Text("Count: $count")
            }
        }
    }
}
```

Rotate the device, and count resets to zero. The Android framework destroys and recreates the Activity on configuration changes. Every field, every local variable, every reference is gone.
The Bundle approach works for small, serializable data:

```kotlin
override fun onSaveInstanceState(outState: Bundle) {
    super.onSaveInstanceState(outState)
    outState.putInt("count", count)
}
```

But Bundles have a strict 1MB transaction limit, can only hold primitive and parcelable types, and require manual serialization and deserialization. What about a list of 10,000 items fetched from a network request? A database cursor? A WebSocket connection? These cannot be serialized into a Bundle.

ViewModel solves this by retaining the object in memory across configuration changes. Not serialized. Not parceled. The exact same object instance, held in memory while the old Activity is destroyed and the new one is created.

ViewModelStore: The retention container

At the foundation of the system is ViewModelStore, a wrapper around a MutableMap:

```kotlin
public open class ViewModelStore {
    private val map = mutableMapOf<String, ViewModel>()

    @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
    public fun put(key: String, viewModel: ViewModel) {
        val oldViewModel = map.put(key, viewModel)
        oldViewModel?.clear()
    }

    @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
    public operator fun get(key: String): ViewModel? = map[key]

    public fun clear() {
        for (vm in map.values) {
            vm.clear()
        }
        map.clear()
    }
}
```
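The two clearing behaviors of ViewModelStore are easy to exercise with stand-in types. FakeViewModel and FakeViewModelStore below are invented for illustration, not the androidx classes, but they reproduce the contract shown above: putting a ViewModel under an existing key clears the replaced instance, and clear() clears everything before emptying the map.

```kotlin
// Stand-in with the clear() hook the real ViewModel exposes internally.
open class FakeViewModel {
    var cleared = false
        private set

    fun clear() { cleared = true }
}

class FakeViewModelStore {
    private val map = mutableMapOf<String, FakeViewModel>()

    fun put(key: String, viewModel: FakeViewModel) {
        map.put(key, viewModel)?.clear() // replacing a key clears the old ViewModel
    }

    operator fun get(key: String): FakeViewModel? = map[key]

    fun clear() {
        for (vm in map.values) vm.clear() // clear each ViewModel before dropping it
        map.clear()
    }
}
```

The replace-then-clear rule matters because a leaked, never-cleared ViewModel would keep its coroutine scopes and resources alive with no owner left to release them.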

Android · Kotlin
Wednesday, January 28, 2026
Runtime Saveable: How Compose Preserves State Across Process Death

Jetpack Compose introduced a declarative paradigm for Android UI, but declarative doesn't mean stateless. User interactions create state like scroll positions, text field contents, and expanded sections that must survive configuration changes and process death. While remember preserves state across recompositions, it's helpless against activity recreation. This is where the runtime saveable module enters: a sophisticated state persistence system that bridges Compose's reactive world with Android's saved instance state mechanism.

In this article, you'll dive deep into the internal mechanisms of Compose's saveable APIs, exploring how rememberSaveable tracks and restores state through composition position keys, how the Saver interface enables type-safe serialization of arbitrary objects, how SaveableStateRegistry manages multiple providers and preserves registration order, how SaveableStateHolder enables navigation patterns by scoping state to screen keys, and how all these components coordinate to seamlessly preserve UI state. This isn't a guide on using rememberSaveable. It's an exploration of the runtime machinery that makes state persistence invisible to developers.

The fundamental problem: State that survives process death

Consider this simple Compose code:

```kotlin
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Count: $count")
    }
}
```

This works perfectly for recomposition. Click the button, count increments, UI updates. But rotate the device, and count resets to zero. The activity was destroyed and recreated, and remember only survives within a single composition lifecycle.

The traditional Android solution is onSaveInstanceState:

```kotlin
class CounterActivity : ComponentActivity() {
    private var count = 0

    override fun onSaveInstanceState(outState: Bundle) {
        super.onSaveInstanceState(outState)
        outState.putInt("count", count)
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        count = savedInstanceState?.getInt("count") ?: 0
    }
}
```

But this approach doesn't compose well with Compose. The state lives in the Activity, not the composable. You need to manually thread state through your composition hierarchy. And if you have dozens of stateful composables, the boilerplate becomes unmanageable. Compose's saveable APIs solve this elegantly by integrating saved instance state directly into the composition model. Each rememberSaveable call automatically participates in the save/restore cycle, keyed by its position in the composition tree.

The Saver interface: Type safe state serialization

At the heart of the saveable system is the Saver interface, which defines how to convert between your domain types and Bundle-compatible representations.

The core abstraction

The Saver interface is elegantly minimal:

```kotlin
public interface Saver<Original, Saveable : Any> {
    public fun SaverScope.save(value: Original): Saveable?
    public fun restore(value: Saveable): Original?
}
```

Two methods handle the round trip:

1. save: Converts your type to something Bundle-compatible. Returning null means "don't save this value."
2. restore: Converts back to your original type. Returning null means "use the init lambda instead."

The SaverScope receiver on save provides access to canBeSaved(value: Any): Boolean, allowing savers to validate nested values before attempting serialization.

The factory function

For convenience, a factory function creates Saver implementations from lambdas:

```kotlin
public fun <Original, Saveable : Any> Saver(
    save: SaverScope.(value: Original) -> Saveable?,
    restore: (value: Saveable) -> Original?,
): Saver<Original, Saveable> {
    return object : Saver<Original, Saveable> {
        override fun SaverScope.save(value: Original) = save.invoke(this, value)
        override fun restore(value: Saveable) = restore.invoke(value)
    }
}
```

This enables concise saver definitions:
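As one possible example of such a concise definition (RgbColor and its saver are hypothetical, and the SaverScope/Saver declarations are repeated locally only to make the snippet self-contained outside a Compose project):

```kotlin
// Local stand-ins mirroring the interfaces shown above, for self-containment.
fun interface SaverScope { fun canBeSaved(value: Any): Boolean }

interface Saver<Original, Saveable : Any> {
    fun SaverScope.save(value: Original): Saveable?
    fun restore(value: Saveable): Original?
}

fun <Original, Saveable : Any> Saver(
    save: SaverScope.(value: Original) -> Saveable?,
    restore: (value: Saveable) -> Original?,
): Saver<Original, Saveable> = object : Saver<Original, Saveable> {
    override fun SaverScope.save(value: Original) = save.invoke(this, value)
    override fun restore(value: Saveable) = restore.invoke(value)
}

// Hypothetical domain type with no built-in Bundle support.
data class RgbColor(val r: Int, val g: Int, val b: Int)

// Concise saver: flatten to a Bundle-compatible List<Int>, rebuild by index.
val RgbColorSaver = Saver<RgbColor, List<Int>>(
    save = { listOf(it.r, it.g, it.b) },
    restore = { RgbColor(it[0], it[1], it[2]) },
)
```

The round trip is symmetric: saving produces the list, restoring the same list reproduces an equal RgbColor.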

Compose, Kotlin
Monday, January 26, 2026
Introducing the Experimental Styles API in Jetpack Compose

Jetpack Compose's Modifier system has been the primary way to apply visual properties to composables. You chain modifiers like background, padding, and border to build up the appearance and behavior of UI elements. While powerful, this approach has limitations when dealing with interactive states. When you want a button to change color when pressed, you need to manually track state, create animated values, and conditionally apply different modifiers. The new experimental Styles API (https://android-review.googlesource.com/c/platform/frameworks/support/+/3756487) aims to solve this by providing a declarative way to define state-dependent styling with automatic animations. In this article, you'll explore how the Styles API works, examining how Style objects encapsulate visual properties as composable lambdas, how StyleScope provides access to layout, drawing, and text properties, how StyleState exposes interaction states like pressed, hovered, and focused, how the system automatically animates between style states without manual Animatable management, and how the two-node modifier architecture efficiently applies styles while minimizing invalidation. This isn't a guide on basic Compose styling; it's an exploration of a new paradigm for defining interactive, stateful UI appearances.

The problem with stateful styling

Consider implementing a button that changes color when hovered and pressed.
With the current Modifier approach, you need to manage this manually:

```kotlin
@Composable
fun InteractiveButton(onClick: () -> Unit) {
    val interactionSource = remember { MutableInteractionSource() }
    val isPressed by interactionSource.collectIsPressedAsState()
    val isHovered by interactionSource.collectIsHoveredAsState()
    val backgroundColor by animateColorAsState(
        targetValue = when {
            isPressed -> Color.Red
            isHovered -> Color.Yellow
            else -> Color.Green
        }
    )
    Box(
        modifier = Modifier
            .clickable(interactionSource = interactionSource, indication = null) { onClick() }
            .background(backgroundColor)
            .size(150.dp)
    )
}
```

This pattern requires several pieces: an InteractionSource to track interactions, state derivations for each interaction type, animated values for smooth transitions, and conditional logic to determine the current appearance. The code is verbose and the concerns are scattered across multiple declarations. The Styles API consolidates this into a single declarative definition:

```kotlin
@Composable
fun InteractiveButton(onClick: () -> Unit) {
    ClickableStyleableBox(
        onClick = onClick,
        style = {
            backgroundColor(Color.Green)
            size(150.dp)
            hovered { animate { backgroundColor(Color.Yellow) } }
            pressed { animate { backgroundColor(Color.Red) } }
        }
    )
}
```

Compose, Android, Kotlin
Wednesday, January 21, 2026
CancellationException in Coroutines

Kotlin Coroutines introduced structured concurrency as a fundamental principle, ensuring that coroutines are properly scoped and cancelled when their parent scope completes. At the heart of this mechanism lies CancellationException, a special exception that signals cancellation and must be handled with care. While most developers know they shouldn't catch this exception, the deeper question remains: why is CancellationException special, and what happens when you accidentally swallow it? In this article, you'll dive deep into the internal mechanisms of CancellationException, exploring why it must be re-thrown, how runCatching can break structured concurrency, the proposals for safer alternatives, and the design decisions that make cancellation propagation both correct and performant.

The fundamental problem: Catching cancellation breaks structured concurrency

Consider this seemingly innocent code:

```kotlin
suspend fun processData(): Result<Data> = runCatching {
    val user = fetchUser()
    val profile = fetchProfile(user.id)
    Data(user, profile)
}
```

This looks reasonable. You're wrapping a suspend operation in runCatching to convert exceptions into Result values for safer error handling. But there's a subtle bug: if the coroutine is cancelled during fetchUser or fetchProfile, the CancellationException is caught by runCatching and wrapped in Result.failure. The cancellation signal never propagates to the parent scope, breaking structured concurrency. The core issue is that runCatching is implemented like this:

```kotlin
public inline fun <R> runCatching(block: () -> R): Result<R> {
    return try {
        Result.success(block())
    } catch (e: Throwable) {
        Result.failure(e)
    }
}
```

Notice the catch (e: Throwable) clause. This catches everything, including CancellationException. When cancellation occurs, instead of propagating up the coroutine hierarchy, it's captured in a Result object, and the coroutine continues executing as if nothing happened.
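One common remedy, sketched below under the assumption that you still want Result-style error handling, is a runCatching variant that re-throws CancellationException before the general catch clause. The name runCatchingCancellable is ours, not a stdlib or kotlinx.coroutines API; note that kotlinx.coroutines' CancellationException is a typealias for java.util.concurrent.CancellationException on the JVM, which is why the plain JDK type works here:

```kotlin
import java.util.concurrent.CancellationException

// A cancellation-safe runCatching sketch: ordinary failures are wrapped in
// Result.failure, but cancellation is re-thrown so it keeps propagating up
// the coroutine hierarchy and structured concurrency stays intact.
inline fun <R> runCatchingCancellable(block: () -> R): Result<R> {
    return try {
        Result.success(block())
    } catch (e: CancellationException) {
        throw e // never swallow the cancellation signal
    } catch (e: Throwable) {
        Result.failure(e)
    }
}
```

The ordering of the catch clauses is the whole trick: the more specific CancellationException clause wins before the Throwable clause can wrap it.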
Understanding CancellationException: Not just another exception

CancellationException is fundamentally different from other exceptions in Kotlin Coroutines. Let's examine its definition:

```kotlin
public actual open class CancellationException(
    message: String?,
    cause: Throwable?,
) : IllegalStateException(message, cause)
```

It extends IllegalStateException, but its purpose is not to signal an error; it's to signal intentional cancellation. This distinction is crucial for understanding why it must be handled specially.

The cancellation contract

When a coroutine is cancelled, the cancellation mechanism works through these steps:

1. Cancellation signal: The parent scope or job calls cancel() on the coroutine's Job
2. CancellationException thrown: At the next suspension point, the coroutine throws a CancellationException
3. Propagation: The exception propagates up the coroutine hierarchy
4. Cleanup: Each coroutine in the chain can run cleanup logic in finally blocks
5. Parent notification: The parent scope is notified that the child completed due to cancellation

If you catch CancellationException and don't re-throw it, steps 3-5 never happen. The parent scope thinks the child is still running, resource cleanup might not occur, and the entire structured concurrency guarantee breaks down.

The invisibility principle

Coroutines, Kotlin
Sunday, January 18, 2026
Exploring the Internal Mechanisms of Landscapist Core

Landscapist (https://github.com/skydoves/landscapist) Core is a standalone image loading engine built from scratch for Kotlin Multiplatform. Unlike Landscapist's wrappers around Coil, Glide, and Fresco, Landscapist Core (https://skydoves.github.io/landscapist/landscapist/landscapist-core/) handles fetching, caching, decoding, and transformations internally. This eliminates platform dependencies and provides fine-grained control over every aspect of image loading. In this article, you'll explore the internal architecture of Landscapist Core, examining how the Landscapist class orchestrates the loading pipeline, how TwoTierMemoryCache provides a second chance for evicted items through weak references, how DecodeScheduler prioritizes visible images over background loads, how progressive decoding improves perceived performance, and how memory pressure handling keeps the app responsive under constrained conditions.

The Landscapist orchestrator

The Landscapist class is the main entry point for image loading. It coordinates fetching, caching, decoding, and transformation into a unified pipeline:

```kotlin
public class Landscapist private constructor(
    public val config: LandscapistConfig,
    private val memoryCache: MemoryCache,
    private val diskCache: DiskCache?,
    private val fetcher: ImageFetcher,
    private val decoder: ImageDecoder,
    private val dispatcher: CoroutineDispatcher,
    public val requestManager: RequestManager = RequestManager(),
    public val memoryPressureManager: MemoryPressureManager = MemoryPressureManager(),
)
```

Each component has a single responsibility. The memoryCache stores decoded images in memory. The diskCache persists raw image data to storage. The fetcher retrieves images from network or local sources. The decoder converts raw bytes into displayable images. The requestManager tracks active requests for cancellation. The memoryPressureManager responds to system memory warnings.
The loading pipeline

The load function implements a three-stage lookup with progressive enhancement:

```kotlin
public fun load(request: ImageRequest): Flow<ImageResult> = flow {
    emit(ImageResult.Loading)

    val cacheKey = CacheKey.create(
        model = request.model,
        transformationKeys = request.transformations.map { it.key },
        width = request.targetWidth,
        height = request.targetHeight,
    )

    // 1. Check memory cache (instant)
    if (request.memoryCachePolicy.readEnabled) {
        memoryCache[cacheKey]?.let { cached ->
            emit(ImageResult.Success(data = cached.data, dataSource = DataSource.MEMORY))
            return@flow
        }
    }

    // 2. Check disk cache
    if (request.diskCachePolicy.readEnabled && diskCache != null) {
        diskCache.get(cacheKey)?.use { snapshot ->
            val bytes = snapshot.data.buffer.readByteArray()
            // Decode and emit...
        }
    }

    // 3. Fetch from network
    val fetchResult = fetcher.fetch(request)
    // Process result...
}.flowOn(dispatcher)
```

The pipeline follows a predictable order: memory cache first (instant), disk cache second (fast I/O), network last (slow). Each stage can be enabled or disabled through CachePolicy, allowing fine-grained control for special cases like forcing a refresh or skipping caching entirely.

Cache key generation

The CacheKey uniquely identifies a cached image based on all factors that affect its appearance:

```kotlin
val cacheKey = CacheKey.create(
    model = request.model,
    transformationKeys = request.transformations.map { it.key },
    width = request.targetWidth,
    height = request.targetHeight,
)
```
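To make the role of the cache key concrete, here is a simplified, self-contained sketch of key generation over the same factors: model, transformation keys, and target size. Landscapist's actual CacheKey implementation may differ; the names mirror the excerpt above, but the string-building logic is purely illustrative:

```kotlin
// A toy cache key: every factor that changes the decoded result must be part
// of the key, otherwise a transformed or resized image could be served for
// the wrong request.
data class CacheKey(val value: String) {
    companion object {
        fun create(
            model: Any,
            transformationKeys: List<String>,
            width: Int,
            height: Int,
        ): CacheKey {
            val key = buildString {
                append(model.toString())
                transformationKeys.forEach { append('#').append(it) }
                append('@').append(width).append('x').append(height)
            }
            return CacheKey(key)
        }
    }
}
```

Two requests for the same URL with different transformation lists therefore produce different keys, which is exactly what keeps a circle-cropped thumbnail from being returned for an untransformed full-size request.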

Coroutines, Architecture, Kotlin
Sunday, January 18, 2026
The Three Phases: Composition, Layout, and Drawing

Jetpack Compose transforms declarative UI code into pixels on screen through a pipeline of three distinct phases: Composition, Layout, and Drawing. When you change a state variable, Compose doesn't redraw everything; it determines which phases need to run and executes only the necessary work. A change that only affects drawing can skip composition and layout entirely, while a structural change might require all three phases. Understanding which phase your code triggers helps you write more efficient Compose applications. In this article, you'll explore how the three phases work internally, examining how the Composition phase builds and updates the UI tree through the SlotTable and Composer, how the Layout phase measures and positions nodes through LayoutNode and Constraints propagation, how the Drawing phase renders content through DrawScope and GraphicsLayer, and how invalidation propagates through the system. This isn't a guide on using Compose; it's an exploration of the execution pipeline that transforms your composable functions into rendered UI.

The execution pipeline: From state to pixels

When Compose needs to display UI, it executes three phases in strict order. Composition builds the UI tree by running your composable functions and recording what needs to be displayed. Layout takes that tree and determines the size and position of every element. Drawing takes the positioned elements and renders them to the screen. Each phase depends on the previous phase completing, but not every state change requires all three phases. Consider what happens when you animate an element's opacity. In a naive implementation, changing opacity would trigger composition (rebuild the tree), layout (remeasure and reposition), and drawing (render). But opacity doesn't affect tree structure or element positions; it's purely a visual property. Compose optimizes this by allowing opacity changes in GraphicsLayer to trigger only the drawing phase, skipping composition and layout entirely.
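The phase-skipping rule can be distilled into a toy model (a deliberate simplification for illustration, not Compose's real scheduler): classify each change by the first phase it affects, then re-run that phase and everything after it while skipping everything before it:

```kotlin
// The three phases, in the strict order they execute.
enum class Phase { COMPOSITION, LAYOUT, DRAW }

// Given the first phase a change affects, every later phase must also re-run,
// because each phase consumes the previous phase's output. Earlier phases are
// skipped entirely, which is the optimization described above.
fun phasesToRun(firstAffected: Phase): List<Phase> =
    Phase.values().filter { it.ordinal >= firstAffected.ordinal }
```

Under this model an opacity change classified as DRAW re-runs only drawing, while a structural change classified as COMPOSITION re-runs all three phases.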
This optimization is only possible because of the phase separation. The phase model also explains why certain patterns are problematic. Reading layout coordinates during composition forces the system to complete layout before finishing composition, breaking the normal phase ordering. Understanding the phases helps you write code that works with the system rather than against it.

Composition phase: Building the UI tree

The Composition phase is where your composable functions execute. The Composer walks through your code, tracks what you've called, compares it to the previous composition, and records changes. This phase doesn't produce pixels; it produces a tree of nodes that the subsequent phases will process.

The Composer's role

The Composer is the runtime engine that executes composable functions. Every composable function receives an implicit $composer parameter injected by the compiler:

```kotlin
// What you write
@Composable
fun Greeting(name: String) {
    Text("Hello, $name")
}

// What the compiler generates (simplified)
fun Greeting(name: String, $composer: Composer, $changed: Int) {
    $composer.startRestartGroup(1234)
    if ($composer.changed(name) || !$composer.skipping) {
        Text("Hello, $name", $composer, 0)
    } else {
        $composer.skipToGroupEnd()
    }
    $composer.endRestartGroup()?.updateScope { $composer ->
        Greeting(name, $composer, $changed or 1)
    }
}
```

The Composer serves three functions. First, it records positional information, tracking the results of remember lambdas, composable function parameters, and the structure of calls. Second, it detects changes by comparing current values against previous composition state. Third, it incrementally evaluates composition by only recomposing functions whose inputs have changed.

The SlotTable: Persistent memory

Composition state lives in the SlotTable, a data structure that stores the UI tree in a flattened format optimized for incremental updates. The SlotTable uses two arrays: one for group metadata and one for slot values. Each group in the table contains:

Compose, Kotlin
Sunday, January 11, 2026
Building complex layouts with Layout() and understanding measure/placement

Building complex user interfaces in Jetpack Compose often requires going beyond the standard Box, Row, and Column layouts. While these composables handle most common scenarios beautifully, there are times when you need complete control over how children are measured and positioned. This is where the Layout composable becomes essential—the fundamental building block that powers every layout in Compose, including the standard ones you use daily. In this article, you'll dive deep into the Layout composable, exploring how measurement and placement work under the hood. You'll examine real implementations from the Compose UI library, understand the constraint system, and learn patterns for building sophisticated custom layouts. This isn't a basic tutorial—it's an exploration of the layout system's internals and the design decisions that make it powerful. Understanding the core abstraction: What makes Layout special At its heart, the Layout composable is a function that takes content and a measurement policy, then produces a UI element with specific dimensions and child positions. What distinguishes it from higher-level layouts is its adherence to two fundamental principles: single-pass measurement and constraint-based sizing. Single-pass measurement Single-pass measurement means each child is measured exactly once per layout pass. This constraint exists for performance—measuring the same child multiple times would create exponential complexity as layout hierarchies deepen. The implication is significant: you must make all measurement decisions with the information available in a single pass. 
```kotlin
Layout(content) { measurables, constraints ->
    // Each measurable can only be measured ONCE
    val placeables = measurables.map { it.measure(constraints) }

    // After measurement, you work with Placeables, not Measurables
    layout(width, height) {
        placeables.forEach { it.place(x, y) }
    }
}
```

This differs fundamentally from traditional Android Views, where onMeasure could be called multiple times with different MeasureSpec configurations. Compose's single-pass model is faster but requires more upfront planning.

Constraint-based sizing

Constraint-based sizing means parents communicate size expectations to children through Constraints objects, and children respond with their chosen size through Placeable objects. This bidirectional communication enables flexible layouts that adapt to available space.

```
Parent
  │
  ├── Constraints(minWidth, maxWidth, minHeight, maxHeight) ──→ Child
  │
  └── Placeable(width, height) ←─────────────────────────────── Child
```

The Constraints class encapsulates four values: minWidth, maxWidth, minHeight, and maxHeight. A child must choose dimensions within these bounds. This is more expressive than Android's MeasureSpec, which could only communicate one dimension's constraints at a time. These properties aren't just implementation details; they're architectural constraints that enable predictable performance and composable layout logic.

The Layout function signature: Anatomy of a custom layout

Let's examine the Layout function signature to understand its components:

```kotlin
@Composable
inline fun Layout(
    content: @Composable () -> Unit,
    modifier: Modifier = Modifier,
    measurePolicy: MeasurePolicy,
)
```

The three parameters serve distinct roles:

1. content - A composable lambda that defines the children. These become Measurable objects during measurement.
2. modifier - Applied to the layout itself, affecting its measurement and drawing. Modifiers can intercept and transform constraints before they reach your measure policy.
3. measurePolicy - The brain of the layout.
It receives Measurable children and parent Constraints, then returns a MeasureResult containing the layout's size and placement logic. The MeasurePolicy interface is where the real work happens:

```kotlin
interface MeasurePolicy {
    fun MeasureScope.measure(
        measurables: List<Measurable>,
        constraints: Constraints,
    ): MeasureResult
}
```

The MeasureScope receiver provides density information and the layout function for creating results. The measurables list contains one entry per child composable. The constraints represent what the parent allows.

Real-world case study: Box implementation

Let's examine how Box is implemented in the Compose UI library. The source is located at foundation/foundation-layout/src/commonMain/kotlin/androidx/compose/foundation/layout/Box.kt:

```kotlin
@Composable
inline fun Box(
    modifier: Modifier = Modifier,
    contentAlignment: Alignment = Alignment.TopStart,
    propagateMinConstraints: Boolean = false,
    content: @Composable BoxScope.() -> Unit,
) {
    val measurePolicy = maybeCachedBoxMeasurePolicy(contentAlignment, propagateMinConstraints)
    Layout(
        content = { BoxScopeInstance.content() },
        measurePolicy = measurePolicy,
        modifier = modifier,
    )
}
```
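The measure/place contract can also be exercised outside Compose with toy stand-ins. In the sketch below, all types are simplified stand-ins that merely mirror the real Compose names (Constraints, Measurable, Placeable), and measureColumn is an illustrative Column-like policy; a measure counter makes the single-pass rule observable:

```kotlin
// Toy stand-ins for the layout types; not the real Compose classes.
data class Constraints(val minWidth: Int, val maxWidth: Int, val minHeight: Int, val maxHeight: Int)
data class Placeable(val width: Int, val height: Int, var x: Int = 0, var y: Int = 0)

class Measurable(private val desiredWidth: Int, private val desiredHeight: Int) {
    var measureCount = 0
        private set

    // A child must pick a size within the parent's constraints.
    fun measure(constraints: Constraints): Placeable {
        measureCount++ // single-pass rule: this should end at exactly 1 per pass
        return Placeable(
            width = desiredWidth.coerceIn(constraints.minWidth, constraints.maxWidth),
            height = desiredHeight.coerceIn(constraints.minHeight, constraints.maxHeight),
        )
    }
}

// A Column-like policy: measure every child exactly once, then stack vertically.
fun measureColumn(measurables: List<Measurable>, constraints: Constraints): List<Placeable> {
    val placeables = measurables.map { it.measure(constraints) }
    var y = 0
    placeables.forEach { placeable ->
        placeable.x = 0
        placeable.y = y
        y += placeable.height
    }
    return placeables
}
```

Note how placement only happens after all children are measured, mirroring the Measurable-to-Placeable transition in the real API.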

Compose, Kotlin
Tuesday, January 6, 2026
Recompose Scopes: How Compose Knows What to Update

Jetpack Compose's declarative UI paradigm promises simplicity: you describe your UI as a function of state, and the framework handles updates automatically. But behind this elegant abstraction lies a sophisticated selective recomposition system that makes Compose remarkably efficient. When a single state variable changes, Compose doesn't re-execute your entire UI tree; it surgically recomposes only the specific composable functions that read that state. This precision is enabled by Recompose Scopes, the runtime tracking mechanism that connects state reads to composable functions and orchestrates minimal UI updates. In this article, you'll dive deep into how Recompose Scopes work, exploring how RecomposeScopeImpl tracks which composables read which state, how invalidation propagates through the composition hierarchy, how the compiler-generated restart lambda enables precise recomposition, how the system determines when to skip recomposition entirely, and how bit-packed flags and token-based tracking optimize memory and performance. This isn't a guide on writing efficient composables; it's an exploration of the runtime machinery that makes selective recomposition possible.

The fundamental problem: How do you know what to recompose?

Consider this simple Compose code:

```kotlin
@Composable
fun UserProfile(userId: String) {
    val user by viewModel.userState.collectAsState()
    val settings by viewModel.settingsState.collectAsState()

    Column {
        UserHeader(user.name)
        UserAvatar(user.avatarUrl)
        SettingsPanel(settings)
    }
}
```

When user changes, only UserHeader and UserAvatar should recompose; SettingsPanel shouldn't, because it didn't read user. But how does Compose know this? The naive approach would be to re-execute everything and compare the results, but that would be expensive. Compose needs to track, at runtime, which composables read which state, so when state changes, only the affected composables are re-executed. This requires solving several complex problems:

1. Dependency tracking: Which composable functions read which state objects?
2. Invalidation: When state changes, which scopes should be marked for recomposition?
3. Precise restart: How do you re-execute just one composable function with the same parameters?
4. Skipping: How do you avoid re-executing functions when nothing they depend on changed?
5. Memory: How do you track dependencies without excessive memory overhead?

Recompose Scopes solve these problems through a combination of compiler cooperation and runtime tracking.

RecomposeScopeImpl: The tracking mechanism

Every composable function that might need to recompose gets an associated RecomposeScopeImpl instance. This class, defined in the Compose runtime, is the central bookkeeping structure for selective recomposition. The RecomposeScopeImpl class encapsulates everything needed to track and restart a composable function:

```kotlin
internal class RecomposeScopeImpl(internal var owner: RecomposeScopeOwner?) :
    ScopeUpdateScope, RecomposeScope, IdentifiableRecomposeScope
```

Compact flag-based state storage

Rather than using multiple boolean fields, RecomposeScopeImpl uses a single integer with bit masks for state:

```kotlin
private var flags: Int = 0

private const val UsedFlag = 0x001              // Scope was used during composition
private const val DefaultsInScopeFlag = 0x002   // Has default parameter calculations
private const val DefaultsInvalidFlag = 0x004   // Default calculations changed
private const val RequiresRecomposeFlag = 0x008 // Direct invalidation occurred
private const val SkippedFlag = 0x010           // Scope was skipped
private const val RereadingFlag = 0x020         // Re-reading tracked instances
private const val ForcedRecomposeFlag = 0x040   // Forced recomposition
private const val ForceReusing = 0x080          // Forced reusing state
private const val Paused = 0x100                // Paused for pausable compositions
private const val Resuming = 0x200              // Resuming from pause
private const val ResetReusing = 0x400          // Reset reusing state
```

This compact representation saves memory: 11 boolean flags fit in a single 32-bit integer instead of consuming 11 bytes or more with padding. The getters and setters use bitwise operations:

```kotlin
private inline fun getFlag(flag: Int) = flags and flag != 0

private inline fun setFlag(flag: Int, value: Boolean) {
    flags = if (value) {
        flags or flag
    } else {
        flags and flag.inv()
    }
}
```

This pattern appears throughout high-performance Compose code: prefer bit-packing over separate booleans for frequently allocated objects.
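The pattern is easy to lift out and run on its own. The holder class below is ours, but the constants and bit operations match the RecomposeScopeImpl snippet above:

```kotlin
// Two of the flag values from the snippet above.
const val UsedFlag = 0x001
const val SkippedFlag = 0x010

// A minimal holder demonstrating the bit-packed flag pattern: each flag is a
// distinct bit, set with `or`, cleared with `and inv()`, tested with `and`.
class FlagHolder {
    private var flags: Int = 0

    fun getFlag(flag: Int): Boolean = flags and flag != 0

    fun setFlag(flag: Int, value: Boolean) {
        flags = if (value) flags or flag else flags and flag.inv()
    }
}
```

Because each flag occupies its own bit, setting or clearing one flag never disturbs the others, which is what makes the single-integer representation safe.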

Compose, Kotlin
Tuesday, January 6, 2026
How Coil works under the hood: LRU caching, performance trade-off, bitmap sampling

Image loading is one of the most critical yet complex aspects of Android development. While libraries like Glide and Picasso have served developers for years, Coil emerged as a modern, Kotlin-first solution built from the ground up with coroutines. But the power of Coil goes far beyond its clean API; it's in the solid internal machinery that makes it both performant and memory-efficient. In this article, you'll dive deep into the internal mechanisms of Coil, exploring how image requests flow through an interceptor chain, how the two-tier memory cache achieves high hit rates while preventing memory leaks, how bitmap sampling uses bit manipulation for optimal memory usage, and the subtle optimizations that make it production-ready.

Understanding the core abstraction

At its heart, Coil is an image loading library that transforms data sources (URLs, files, resources) into decoded images displayed in views. What distinguishes Coil from other image loaders is its adherence to two fundamental principles: coroutine-native design and composable interceptor architecture. The coroutine-native design means everything in Coil is built around suspend functions. Image loading naturally fits the structured concurrency model: requests have lifecycles, can be cancelled, and should respect scopes. Traditional image loaders use callback chains, but Coil embraces coroutines:

```kotlin
// Traditional callback approach
imageLoader.load(url) { bitmap ->
    imageView.setImageBitmap(bitmap)
}

// Coil's coroutine approach
val result = imageLoader.execute(
    ImageRequest.Builder(context)
        .data(url)
        .target(imageView)
        .build()
)
```

The composable interceptor architecture means the entire request pipeline is a chain of interceptors, similar to OkHttp. Each interceptor can observe, transform, or short-circuit the request. This makes the library extensible without modifying core code.
These properties aren't just conveniences; they're architectural decisions that enable better resource management, cleaner cancellation semantics, and powerful customization. Let's explore how these principles manifest in the implementation.

The ImageLoader interface and RealImageLoader implementation

If you examine the ImageLoader interface, it defines two primary entry points:

```kotlin
interface ImageLoader {
    fun enqueue(request: ImageRequest): Disposable
    suspend fun execute(request: ImageRequest): ImageResult
}
```

Two methods for the same operation? This reflects Android's dual nature: some callers need fire-and-forget loading (enqueue, for views), while others need structured concurrency (execute, for repositories or composables). The RealImageLoader implementation handles both cases with a unified internal pipeline:

```kotlin
internal class RealImageLoader(
    val options: Options,
) : ImageLoader {
    private val scope = CoroutineScope(options.logger)
    private val systemCallbacks = SystemCallbacks(this)
    private val requestService = RequestService(this, systemCallbacks, options.logger)

    override fun enqueue(request: ImageRequest): Disposable {
        // Start executing the request on the main thread.
        val job = scope.async(options.mainCoroutineContextLazy.value) {
            execute(request, REQUEST_TYPE_ENQUEUE)
        }
        // Update the current request attached to the view and return a new disposable.
        return getDisposable(request, job)
    }

    override suspend fun execute(request: ImageRequest): ImageResult {
        if (!needsExecuteOnMainDispatcher(request)) {
            // Fast path: skip dispatching.
            return execute(request, REQUEST_TYPE_EXECUTE)
        } else {
            // Slow path: dispatch to the main thread.
            return coroutineScope {
                val job = async(options.mainCoroutineContextLazy.value) {
                    execute(request, REQUEST_TYPE_EXECUTE)
                }
                getDisposable(request, job).job.await()
            }
        }
    }
}
```

Notice the fast path optimization in execute: if the request doesn't need main thread dispatch (no target view), it executes immediately without the overhead of launching a coroutine.
This is important for background image loading in repositories where you're just fetching the bitmap. The scope is a SupervisorJob scope, meaning one failed request doesn't cancel other in-flight requests:

```kotlin
private fun CoroutineScope(logger: Logger?): CoroutineScope {
    val context = SupervisorJob() + CoroutineExceptionHandler { _, throwable ->
        logger?.log(TAG, throwable)
    }
    return CoroutineScope(context)
}
```

This isolation ensures that a network error loading one image doesn't affect other images currently loading. The CoroutineExceptionHandler logs uncaught exceptions rather than crashing, making the library resilient to unexpected errors.

The request execution pipeline: Interceptors all the way down

The core of Coil's architecture is the interceptor chain. When you execute a request, it flows through a series of interceptors before reaching the EngineInterceptor, which performs the actual fetch and decode:

```kotlin
private suspend fun execute(initialRequest: ImageRequest, type: Int): ImageResult {
    val requestDelegate = requestService.requestDelegate(
        request = initialRequest,
        job = coroutineContext.job,
        findLifecycle = type == REQUEST_TYPE_ENQUEUE,
    ).apply { assertActive() }

    val request = requestService.updateRequest(initialRequest)
    val eventListener = options.eventListenerFactory.create(request)
```
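Before leaving the pipeline, it's worth seeing the bit manipulation behind the bitmap sampling mentioned in the introduction. The sketch below is a simplified version of what image loaders compute, not Coil's exact code: it picks the largest power-of-two sample size that keeps both dimensions at or above the target, since Android's BitmapFactory rounds inSampleSize down to a power of two anyway:

```kotlin
// Integer.highestOneBit isolates the top set bit, i.e. rounds the ratio down
// to a power of two. A sample size of n means the decoder produces a bitmap
// 1/n the width and height of the source, using roughly 1/n^2 the memory.
fun calculateInSampleSize(srcWidth: Int, srcHeight: Int, dstWidth: Int, dstHeight: Int): Int {
    val widthRatio = srcWidth / dstWidth
    val heightRatio = srcHeight / dstHeight
    // Use the smaller ratio so neither dimension falls below the target size.
    val ratio = minOf(widthRatio, heightRatio)
    return Integer.highestOneBit(ratio).coerceAtLeast(1)
}
```

For a 1000x1000 source shown in a 250x250 view this yields 4, decoding a 250x250 bitmap with about one sixteenth of the full-size memory cost.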

Coroutines, Performance, Kotlin
Monday, November 24, 2025
How the Compose Compiler infers stability and decides types

Jetpack Compose uses a smart recomposition system to optimize UI updates. At the heart of this optimization is stability inference: the compiler's ability to determine whether a type's values can change over time. Understanding how the compiler reasons about stability is crucial for writing performant Compose code.

What is Stability?

In Compose, a type is considered stable if it meets these conditions:

1. The result of equals will always return the same result for the same two instances
2. If a public property of the type changes, Composition will be notified
3. All public properties are also stable types

Common examples:

- Stable: Primitives (Int, String, Boolean), @Immutable data classes, function types
- Unstable: Classes with var properties, mutable collections (MutableList, MutableMap)

The Stability Type System

The Compose compiler uses a sophisticated type system to track stability information during compilation. This is represented by the Stability sealed class:
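The excerpt cuts off before the class itself. As a sketch with simplified signatures (the real class lives in the Compose compiler plugin and carries IR references rather than strings, and the knownStable helper here is our simplified assumption), the variants are the ones the compiler distinguishes: Certain, Runtime, Unknown, Parameter, and Combined:

```kotlin
sealed class Stability {
    // Known at compile time: stable (e.g. Int) or unstable (e.g. MutableList).
    data class Certain(val stable: Boolean) : Stability()

    // Decided at runtime via a synthesized stability field on the class.
    data class Runtime(val declarationName: String) : Stability()

    // Cannot be reasoned about (e.g. a type from a module without metadata).
    data class Unknown(val declarationName: String) : Stability()

    // Depends on the stability of a generic type parameter.
    data class Parameter(val parameterName: String) : Stability()

    // Stable only if every component part is stable.
    data class Combined(val elements: List<Stability>) : Stability()
}

// Simplified "provably stable at compile time" check: only a tree made
// entirely of Certain(stable = true) nodes qualifies.
fun Stability.knownStable(): Boolean = when (this) {
    is Stability.Certain -> stable
    is Stability.Runtime, is Stability.Unknown, is Stability.Parameter -> false
    is Stability.Combined -> elements.all { it.knownStable() }
}
```

The Combined case is why a data class is only as stable as its least stable property: one Unknown or unstable element poisons the whole tree.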

Compose, Kotlin
Monday, November 24, 2025
What is Remote Compose, and how can you leverage it to build server-driven UI

Building dynamic user interfaces has long been a fundamental challenge in Android development. The traditional approach requires recompiling and redeploying the entire application whenever the UI needs to change—a process that creates significant friction for A/B testing, feature flags, and real-time content updates. Consider a scenario where your marketing team wants to test a new checkout button design: in the traditional model, this simple change requires developer time, code review, QA testing, app store submission, and weeks of waiting for user adoption. Compose Remote emerges as a powerful solution to this problem, enabling developers to create, transmit, and render Jetpack Compose UI layouts at runtime without any recompilation. In this article, you'll explore what Compose Remote is, understand its core architecture, and discover the benefits it brings to dynamic screen design with Jetpack Compose. This isn't a tutorial on using the library, it's an exploration of the paradigm shift it represents for Android UI development. Understanding the core abstraction: What makes Compose Remote special At its heart, Compose Remote is a framework that enables remote rendering of Compose UI components. What distinguishes it from traditional UI approaches is its adherence to two fundamental principles: declarative document serialization and platform-independent rendering. Declarative document serialization Declarative document serialization means you can capture any Jetpack Compose layout into a compact, serialized format. Think of it like taking a "screenshot" of your UI, except instead of pixels, you're capturing the actual drawing instructions. This captured document contains everything needed to recreate the UI: shapes, colors, text, images, animations, and even interactive touch regions. 
```kotlin
// On the server or creation side
val document = captureRemoteDocument(
    context = context,
    creationDisplayInfo = displayInfo,
    profile = profile,
) {
    // Standard Compose UI - looks exactly like regular Compose code
    Column(modifier = RemoteModifier.fillMaxSize()) {
        Text("Dynamic Content")
        Button(onClick = { /* action */ }) {
            Text("Click Me")
        }
    }
}
// Result: A ByteArray that can be sent over the network
```

The beauty of this approach is that the creation side writes standard Compose code. There's no new DSL to learn, no JSON schema to maintain, no template language to master. If you can write it in Compose, you can capture it with Compose Remote.

Platform-independent rendering

Platform-independent rendering means the captured document can be transmitted over the network and rendered on any Android device without needing the original Compose code. The client device doesn't need your composable functions, your view models, or your business logic; it just needs the document bytes and a player.

```kotlin
// On the client or player side
RemoteDocumentPlayer(
    document = remoteDocument.document,
    documentWidth = windowInfo.containerSize.width,
    documentHeight = windowInfo.containerSize.height,
    onAction = { actionId, value ->
        // Handle user interactions
    },
)
```

These properties aren't just conveniences; they're architectural constraints that enable true decoupling of UI definition from deployment. The document format captures not just static layouts but also state, animations, and interactions, making it a complete representation of the UI experience.

Comparing approaches: Why not JSON or WebViews?

Before diving deeper, it's worth understanding why Compose Remote takes this approach rather than alternatives: JSON-based server-driven UI (like Airbnb's Epoxy or Shopify's approach) requires defining a schema that maps to native components.
This works well for structured content but struggles with:

- Complex animations and transitions
- Custom drawing and graphics
- Rich text with inline styling
- Gradients, shadows, and visual effects

WebViews offer full flexibility but introduce:

- Performance overhead (a separate rendering process)
- Inconsistent look and feel (web styling vs. native)
- Memory pressure (each WebView is expensive)
- Touch handling complexity (gesture conflicts)

Compose Remote takes a third path: capturing the actual drawing operations that Compose would execute. This means any UI you can build in Compose, including custom Canvas drawing, complex animations, and Material Design components, can be captured and replayed remotely with native performance. The document-based architecture: Creation and playback Compose Remote's architecture is built around a clear separation between two phases: document creation and document playback. Understanding this separation is key to understanding the framework's power. Document creation: Capturing UI as data The creation phase transforms Compose UI code into a serialized document. This happens through a sophisticated capture mechanism that intercepts drawing operations at the Canvas level, the lowest level of Android's rendering pipeline.

@Composable Content
  ↓ RemoteComposeCreationState (tracks state and modifiers)
  ↓ CaptureComposeView (virtual display, no actual screen needed)
  ↓ RecordingCanvas (intercepts every draw call)
  ↓ Operations (93+ operation types covering all drawing primitives)
  ↓ RemoteComposeBuffer (efficient binary serialization)
  ↓ ByteArray (network-ready, typically 10-100KB for complex UIs)

The creation side provides a complete Compose integration layer. You write standard @Composable functions, and the framework captures everything: layout hierarchies, modifiers, text styles, images, animations, and even touch handlers.
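The interception idea in the pipeline above can be illustrated with a toy sketch. This is not the library's actual RecordingCanvas (the class names, operation types, and method signatures here are assumptions for illustration); it only shows the core trick: each "draw" call appends a serializable operation instead of painting pixels.

```kotlin
// Illustrative sketch only - not the real RecordingCanvas API.
// Draw calls are recorded as data, ready for serialization.
sealed interface DrawOp {
    data class DrawText(val text: String, val x: Float, val y: Float) : DrawOp
    data class DrawRect(val l: Float, val t: Float, val r: Float, val b: Float) : DrawOp
}

class RecordingCanvasSketch {
    private val ops = mutableListOf<DrawOp>()

    // Each draw call records an operation rather than rasterizing anything.
    fun drawText(text: String, x: Float, y: Float) {
        ops += DrawOp.DrawText(text, x, y)
    }

    fun drawRect(l: Float, t: Float, r: Float, b: Float) {
        ops += DrawOp.DrawRect(l, t, r, b)
    }

    // The recorded list is what would be serialized into the document bytes.
    fun capture(): List<DrawOp> = ops.toList()
}

fun main() {
    val canvas = RecordingCanvasSketch()
    canvas.drawRect(0f, 0f, 100f, 40f)
    canvas.drawText("Click Me", 10f, 25f)
    println(canvas.capture().size) // prints 2
}
```

Replaying the document is then just iterating the operation list against a real Canvas, which is why the client needs no knowledge of the original composables.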

ComposeKotlin
Monday, November 24, 2025
Compose Compiler Stability Inference System

A comprehensive study of how the Compose compiler determines type stability for recomposition optimization. Table of Contents

- Chapter 1: Foundations
  - 1.1 Introduction
  - 1.2 Core Concepts (Stability Definition, Recomposition Mechanics)
  - 1.3 The Role of Stability (Performance Impact)
- Chapter 2: Stability Type System
  - 2.1 Type Hierarchy
  - 2.2 Compile-Time Stability (Stability.Certain)
  - 2.3 Runtime Stability (Stability.Runtime)
  - 2.4 Uncertain Stability (Stability.Unknown)
  - 2.5 Parametric Stability (Stability.Parameter)
  - 2.6 Combined Stability (Stability.Combined)
  - 2.7 Stability Decision Tree (Complete Decision Tree, Decision Tree for Generic Types, Expression Stability Decision Tree, Key Decision Points Explained)
- Chapter 3: The Inference Algorithm
  - 3.1 Algorithm Overview
  - 3.2 Type-Level Analysis (Phase 1: Fast Path Type Checks, Phase 2: Type Parameter Handling, Phase 3: Nullable Type Unwrapping, Phase 4: Inline Class Handling)
  - 3.3 Class-Level Analysis (Phase 5: Cycle Detection, Phase 6: Annotation and Marker Checks, Phase 7: Known Constructs, Phase 8: External Configuration, Phase 9: External Module Handling, Phase 10: Java Type Handling, Phase 11: General Interface Handling, Phase 12: Field-by-Field Analysis)
  - 3.4 Expression-Level Analysis (Constant Expressions, Function Call Expressions, Variable Reference Expressions)
- Chapter 4: Implementation Mechanisms
  - 4.1 Bitmask Encoding (Encoding Scheme, Special Bit: Known Stable, Bitmask Application)
  - 4.2 Runtime Field Generation (JVM Platform, Non-JVM Platforms)
  - 4.3 Annotation Processing (@StabilityInferred Annotation, Annotation Generation)
  - 4.4 Normalization Process
- Chapter 5: Case Studies
  - 5.1 Primitive and Built-in Types (Integer Types, String Type, Function Types)
  - 5.2 User-Defined Classes (Simple Data Class, Class with Mutable Property, Class with Mixed Properties)
  - 5.3 Generic Types (Simple Generic Container, Multiple Type Parameters, Nested Generic Types)
  - 5.4 External Dependencies (External Class with Annotation, External Class Without Annotation)
  - 5.5 Interface and Abstract Types (Interface Parameter, Abstract Class, Interface with @Stable)
  - 5.6 Inheritance Hierarchies (Stable Inheritance, Unstable Inheritance)
- Chapter 6: Configuration and Tooling
  - 6.1 Stability Annotations (@Stable Annotation, @Immutable Annotation, Compiler-Level Differences: @Stable vs @Immutable, @StableMarker Meta-Annotation)
  - 6.2 Configuration Files (File Format, Pattern Syntax, Gradle Configuration)
  - 6.3 Compiler Reports (Enabling Reports, Generated Files)
  - 6.4 Common Issues and Solutions (Issue 1: Accidental var Usage, Issue 2: Mutable Collections, Issue 3: Interface Parameters, Issue 4: External Library Types, Issue 5: Inheritance from Unstable Base)
- Chapter 7: Advanced Topics
  - 7.1 Type Substitution (Substitution Map Construction, Substitution Application, Nested Substitution)
  - 7.2 Cycle Detection (Detection Mechanism, Example: Self-Referential Type, Limitation)
  - 7.3 Special Cases (Protobuf Types, Delegated Properties, Inline Classes with Markers)
- Chapter 8: Compiler Analysis System
  - 8.1 Analysis Infrastructure (WritableSlices: Data Flow Storage, BindingContext and BindingTrace)
  - 8.2 Composable Call Validation (Context Checking Algorithm, Inline Lambda Restrictions, Type Compatibility Checking)
  - 8.3 Declaration Validation (Composable Function Rules, Property Restrictions, Override Consistency)
  - 8.4 Applier Target System (Scheme Structure, Target Inference Algorithm, Cross-Target Validation)
  - 8.5 Type Resolution and Inference (Automatic Composable Inference, Lambda Type Adaptation)
  - 8.6 Analysis Pipeline (Compilation Phases, Data Flow Through Phases)
  - 8.7 Practical Examples (Example: Composable Context Validation, Example: Inline Lambda Analysis, Example: Stability and Skipping)
- Appendix: Source Code References (Primary Source Files)
- Conclusion

Chapter 1: Foundations 1.1 Introduction The Compose compiler implements a stability inference system to enable recomposition optimization. This system analyzes types at compile time to determine whether their values can be safely compared for equality during recomposition. Source File: compiler-hosted/src/main/java/androidx/compose/compiler/plugins/kotlin/analysis/Stability.kt The inference process involves analyzing type declarations, examining field properties, and tracking stability through generic type parameters.
The results inform the runtime whether to skip recomposition when parameter values remain unchanged. 1.2 Core Concepts Stability Definition A type is considered stable when it satisfies three conditions: 1. Immutability: The observable state of an instance does not change after construction. 2. Equality semantics: Two instances with equal observable state are equal via equals(). 3. Change notification: If the type contains observable mutable state, all state changes trigger composition invalidation. These properties allow the runtime to make optimization decisions based on value comparison. Recomposition Mechanics When a composable function receives parameters, the runtime determines whether to execute the function body:

```kotlin
@Composable
fun UserProfile(user: User) {
    // Function body
}
```

The decision process: 1. Compare the new user value with the previous value. 2. If they are equal and the type is stable, skip recomposition. 3. If they differ or the type is unstable, execute the function body. Without stability information, the runtime must conservatively recompose on every invocation, regardless of whether parameters changed. 1.3 The Role of Stability Performance Impact Stability inference affects recomposition in three ways: Smart Skipping: Composable functions with stable parameters can be skipped when parameter values remain unchanged. This reduces the number of function executions during recomposition. Comparison Propagation: The compiler passes stability information to child composable calls, enabling nested optimizations throughout the composition tree. Comparison Strategy: The runtime selects between structural equality (equals) for stable types and referential equality (===) for unstable types, affecting change detection behavior.
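The decision process above boils down to a single predicate. The sketch below is illustrative only (the real runtime encodes this in generated $changed bitmasks and comparison code, not a helper function like this), but it captures the logic:

```kotlin
// Stable by inference: all properties are immutable vals of stable types.
data class User(val name: String)

// Illustrative predicate: skip only when the type is stable AND the value
// compares equal to the previous one.
fun <T> canSkip(previous: T, current: T, isStable: Boolean): Boolean =
    isStable && previous == current

fun main() {
    println(canSkip(User("Jane"), User("Jane"), isStable = true))  // prints true  -> skip
    println(canSkip(User("Jane"), User("John"), isStable = true))  // prints false -> recompose
    println(canSkip(User("Jane"), User("Jane"), isStable = false)) // prints false -> unstable, always recompose
}
```

The third call is the interesting one: even identical-looking values cannot be skipped when the compiler could not prove stability, which is exactly the conservative fallback described above.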
Consider this example:

```kotlin
// Unstable parameter type - interface with unknown stability
@Composable
fun ExpensiveList(items: List<String>) {
    // List is an interface - has Unknown stability
    // Falls back to instance comparison
}

// Stable parameter type - using an immutable collection
@Composable
fun ExpensiveList(items: ImmutableList<String>) {
    // ImmutableList is in KnownStableConstructs
    // Can skip recomposition when unchanged
}

// Alternative: using a listOf() result
@Composable
fun ExpensiveList(items: List<String>) {
    // If items comes from listOf(), the expression is stable
    // But the List type itself is still an interface with Unknown stability
}
```

The key insight: List and MutableList are both interfaces with Unknown stability. To achieve stable parameters: 1. Use ImmutableList from kotlinx.collections.immutable (in KnownStableConstructs). 2. Add kotlin.collections.List to your stability configuration file. 3. Use the @Stable annotation on your data classes containing List. Chapter 2: Stability Type System 2.1 Type Hierarchy The compiler represents stability through a sealed class hierarchy defined in:
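Drawing on the variant names listed in the table of contents above (Certain, Runtime, Unknown, Parameter, Combined), a simplified sketch of that hierarchy might look like the following. The property shapes here are assumptions for illustration; the actual definitions in Stability.kt carry compiler IR declarations rather than plain strings:

```kotlin
// Simplified sketch of the Stability hierarchy - not the compiler's real code.
sealed class Stability {
    // Stability known definitively at compile time (e.g., Int, String).
    data class Certain(val stable: Boolean) : Stability()

    // Must be resolved at runtime via the class's @StabilityInferred bitmask.
    data class Runtime(val declarationName: String) : Stability()

    // Cannot be determined (e.g., an interface with no stability markers).
    data class Unknown(val declarationName: String) : Stability()

    // Depends on the stability of a generic type parameter.
    data class Parameter(val parameterName: String) : Stability()

    // The product of several parts, e.g., one entry per field of a class.
    data class Combined(val elements: List<Stability>) : Stability()
}
```

A Combined value is only as stable as its least stable element, which is why a single var or unstable field taints an otherwise stable class.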

ComposeAndroidKotlin
Friday, October 3, 2025
An Exploration of the Internal Mechanism of Crossfade Composable

In Jetpack Compose, Crossfade provides a simple and declarative way to animate the transition between two different UI states. When the targetState passed to it changes, it smoothly fades out the old content while simultaneously fading in the new content. While its public API is minimal, a study of its internal source code reveals a sophisticated state machine that manages the lifecycle of both the incoming and outgoing composables, orchestrates their animations, and ensures a seamless visual transition. The entire mechanism is built upon the foundational Transition API, which is the core engine for state-based animations in Compose. The Entry Point: Crossfade(targetState, ...) The most common Crossfade function that developers use is a simple wrapper. Its entire purpose is to create and manage a Transition object for you.
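A condensed sketch of that wrapper is shown below. It is simplified from the actual source (default parameters such as the label argument are trimmed), but the shape is the same: build a Transition from targetState, then delegate:

```kotlin
@Composable
fun <T> Crossfade(
    targetState: T,
    modifier: Modifier = Modifier,
    animationSpec: FiniteAnimationSpec<Float> = tween(),
    content: @Composable (T) -> Unit
) {
    // Create or update a Transition driven by targetState; all subsequent
    // fade orchestration is handled by the Transition API.
    val transition = updateTransition(targetState, label = "Crossfade")
    // Delegate to the Transition-scoped overload that does the real work.
    transition.Crossfade(modifier, animationSpec, content = content)
}
```

This is why the article's analysis focuses on the Transition-scoped overload: the public entry point contributes nothing beyond Transition bookkeeping.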

ComposeArchitectureKotlin
Sunday, September 28, 2025
derivedStateOf Internals: The Cost of Observation / Why Is derivedStateOf Expensive?

The derivedStateOf API in Jetpack Compose provides a convenient mechanism for creating memoized state that automatically updates when its underlying dependencies change. While essential for performance optimization in many scenarios, it is often described as "expensive." This study analyzes the internal implementation of DerivedSnapshotState to demystify this cost. We will show that the expense of derivedStateOf is not in the read operation, but in the complex machinery required to track dependencies, validate its cached value, and perform recalculations. By examining the isValid, currentRecord, and Snapshot.observe calls, this analysis will reveal the intricate dependency tracking, hashing, and transactional record-keeping that make derivedStateOf a precision tool to be used judiciously, not universally. 1. Introduction: The Promise and the Price The public API is deceptively simple:

```kotlin
public fun <T> derivedStateOf(calculation: () -> T): State<T> =
    DerivedSnapshotState(calculation, null)
```

It promises to run a calculation lambda, cache the result, and only re-run the calculation when one of the State objects read inside it changes. Let's see an example:
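To ground the discussion, here is a typical usage pattern (an illustrative sketch with hypothetical composable and parameter names, not code from the article):

```kotlin
@Composable
fun NameList(names: List<String>) {
    var query by remember { mutableStateOf("") }
    // The calculation re-runs only when `query` - a State read inside the
    // lambda - changes; other recompositions reuse the cached value held
    // by DerivedSnapshotState.
    val filtered by remember(names) {
        derivedStateOf { names.filter { it.contains(query, ignoreCase = true) } }
    }
    TextField(value = query, onValueChange = { query = it })
    filtered.forEach { name -> Text(name) }
}
```

Every read of filtered routes through the validation machinery analyzed below, which is where the "expense" actually lives.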

ComposeArchitectureKotlin
Sunday, September 28, 2025
A Proposed Evolution for Kotlin's Error Handling

The Kotlin language has long been praised for its pragmatic approach to solving common programming challenges, particularly with its robust null-safety system. However, the domain of recoverable, predictable errors has remained an area where developers rely on a patchwork of patterns rather than a first-class language feature. The "Rich Errors" proposal, also known as Error Union Types, is a significant design initiative aimed at addressing this gap. This study explores the motivation and rationale behind this proposal. We will analyze the shortcomings of existing error-handling patterns in Kotlin and examine how the proposed Rich Errors feature aims to unify them into a more expressive, type-safe, and ergonomic system. The State of Error Handling in Kotlin: A Spectrum of Patterns
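One point on that spectrum, the hand-rolled sealed-class result pattern, looks like this (an illustrative sketch, not code from the proposal):

```kotlin
// The status-quo pattern the Rich Errors proposal aims to subsume:
// a bespoke sealed hierarchy that every caller must unwrap explicitly.
sealed interface ParseResult {
    data class Success(val value: Int) : ParseResult
    data class Failure(val reason: String) : ParseResult
}

fun parsePort(raw: String): ParseResult {
    val n = raw.toIntOrNull() ?: return ParseResult.Failure("not a number: $raw")
    return if (n in 1..65535) ParseResult.Success(n)
    else ParseResult.Failure("out of range: $n")
}

fun main() {
    // Each call site pays the ceremony of an exhaustive when.
    when (val r = parsePort("8080")) {
        is ParseResult.Success -> println("port = ${r.value}") // prints port = 8080
        is ParseResult.Failure -> println("error: ${r.reason}")
    }
}
```

The pattern is type-safe but viral: every function needs its own result wrapper, and composition across wrappers requires manual plumbing, which is precisely the ergonomic gap error union types are designed to close.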

Kotlin
Sunday, September 28, 2025
Google has officially launched the compose-runtime-annotation library

Google has recently launched the official runtime-annotation library, which serves a similar purpose to the community-built compose-stable-marker (https://github.com/skydoves/compose-stable-marker) library.

ComposeKotlin
Sunday, August 31, 2025
A Study: Building a Simple Dependency Injection Container in Kotlin for Android

Dependency Injection (DI) is a core software design pattern that promotes loose coupling and enhances the testability and scalability of applications. While powerful libraries like Hilt and Koin are the standard for production Android apps, building a simple DI container from scratch is a valuable exercise. It demystifies the "magic" and solidifies the core concepts: providing dependencies to classes instead of having them create their own. In this study, we will design and implement a basic, lifecycle-aware DI container. Our goal is to create a tool that can: 1. Register dependencies like a UserRepository or AnalyticsService. 2. Provide instances of these dependencies on demand. 3. Manage the scope of these dependencies (e.g., as singletons). 4. Integrate cleanly with the Android ViewModel architecture. Step 1: Designing the Core DIContainer The heart of our tool will be a container class responsible for holding and creating our dependencies. A simple way to store registered dependencies is in a Map, where the key is the class type (KClass) and the value is a factory lambda that knows how to create an instance of that class.
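A minimal sketch of that Map-based design might look like the following. The class and function names (DIContainer, register, resolve) are assumptions for illustration, not necessarily the names used in the article:

```kotlin
import kotlin.reflect.KClass

// Minimal sketch: KClass keys map to factory lambdas; singletons are
// cached on first resolution.
class DIContainer {
    private val factories = mutableMapOf<KClass<*>, () -> Any>()
    private val singletons = mutableMapOf<KClass<*>, Any>()

    // Register a factory; singleton = true caches the first created instance.
    fun <T : Any> register(type: KClass<T>, singleton: Boolean = false, factory: () -> T) {
        factories[type] = if (singleton) {
            { singletons.getOrPut(type) { factory() } }
        } else {
            factory
        }
    }

    @Suppress("UNCHECKED_CAST")
    fun <T : Any> resolve(type: KClass<T>): T =
        factories[type]?.invoke() as? T
            ?: error("No dependency registered for ${type.simpleName}")
}

class AnalyticsService

fun main() {
    val container = DIContainer()
    container.register(AnalyticsService::class, singleton = true) { AnalyticsService() }
    // Singleton scope: both lookups return the same instance.
    println(container.resolve(AnalyticsService::class) === container.resolve(AnalyticsService::class)) // prints true
}
```

Storing a lambda rather than an eager instance is the key choice: it gives the container lazy construction for free and makes singleton scoping a one-line wrapper around the factory.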

AndroidKotlin
Sunday, August 31, 2025
A Study of API Guidelines for Building Better Jetpack Compose Components

The Jetpack Compose ecosystem has grown exponentially in recent years, and it is now widely adopted for building production-level UIs in Android applications. We can now say that Jetpack Compose is the future of Android UI development. One of the biggest advantages of Compose is its declarative approach. It allows developers to describe what the UI should display, while the framework handles how the UI should update when the underlying state changes. This model shifts the focus from imperative UI logic to a more intuitive and reactive way of thinking. However, building reusable and scalable UI components requires more than just a grasp of declarative principles. It demands a thoughtful approach to API design. To guide developers, the Android team has published a comprehensive set of API guidelines. These best practices are not strict rules but are strongly recommended for creating components that are consistent, scalable, and intuitive for other developers to use.

ComposeAndroidKotlin
Sunday, August 24, 2025
A Study: How Retrofit, written in Java, interpolates Kotlin's Coroutines to enable `suspend` functions

In the modern Android development ecosystem, the synergy between Kotlin and Java remains important, since many long-established projects are written in Java. A prime example of this great interoperability is Square's Retrofit library. Despite being written entirely in Java, Retrofit seamlessly supports Kotlin's suspend functions, allowing developers to write clean, idiomatic asynchronous code for network requests. This capability is not magic; it is a sophisticated illusion built upon a cooperative understanding between the Kotlin compiler and Retrofit's dynamic, reflection-based architecture. This study examines the internal mechanisms that make this "interpolation" possible, revealing how a Java library can interact with a language feature it has no native concept of. The Foundation: Continuation-Passing Style (CPS) Transformation
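The CPS transformation can be observed with nothing but the standard library. In this sketch (the fetchUser function is hypothetical, not Retrofit code), the compiler rewrites the suspend function into a JVM method that takes an extra Continuation parameter, and we drive it by hand with the same kind of Continuation object Retrofit supplies internally:

```kotlin
import kotlin.coroutines.Continuation
import kotlin.coroutines.EmptyCoroutineContext
import kotlin.coroutines.startCoroutine

// What the developer writes:
suspend fun fetchUser(id: String): String = "user-$id"

// After CPS transformation, the method Retrofit discovers via reflection
// looks roughly like this from Java's point of view:
//   Object fetchUser(String id, Continuation<? super String> continuation)

fun main() {
    // Launch the suspend function manually with a hand-rolled Continuation,
    // i.e., the hidden last argument the compiler normally threads through.
    val block: suspend () -> String = { fetchUser("42") }
    block.startCoroutine(object : Continuation<String> {
        override val context = EmptyCoroutineContext
        override fun resumeWith(result: Result<String>) {
            println(result.getOrNull()) // prints user-42
        }
    })
}
```

Retrofit's trick, explored in the article, is detecting that trailing Continuation parameter via reflection and resuming it when the network call completes.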

CoroutinesNetworkKotlin
Sunday, August 24, 2025

Like what you see?

Subscribe to Dove Letter to get weekly insights about Android and Kotlin development, plus access to exclusive content and discussions.