This is a collection of private, subscriber-first articles written for the Dove Letter by skydoves (Jaewoong). These articles may later be released elsewhere, such as Medium, but they will always be revealed to Dove Letter members first.
This book is designed for Kotlin developers who want to dive deep into Kotlin's fundamentals and internal mechanisms, and to leverage that knowledge in their daily work right away.
You write a `when` expression on a sealed class, cover every subclass, and the compiler lets you skip the `else` branch. Then a teammate adds a new subclass in another file, and every `when` in the project turns red with "'when' expression must be exhaustive." The compiler caught the missing case at compile time, before any test could fail. But sealed subclasses can live in different files across the module. How does the compiler know them all, and how does it verify that your `when` branches cover every one?

In this article, you'll trace through the compiler's sealed subclass collection phase, the exhaustiveness checking algorithm that compares your `when` branches against the full subclass set, the special handling for enums, booleans, and nullable types, and how the final bytecode represents a sealed `when` at runtime.

The fundamental problem: Subclasses are scattered across the module

With enums, exhaustiveness is simple. An enum class declares all its entries in one place, and the compiler can read them directly from the declaration. Sealed classes are different. Their subclasses can be declared in separate files, as long as they're in the same module:

```kotlin
// Shape.kt
sealed interface Shape

// Circle.kt
data class Circle(val radius: Double) : Shape

// Rectangle.kt
data class Rectangle(val width: Double, val height: Double) : Shape

// Triangle.kt
data class Triangle(val base: Double, val height: Double) : Shape
```

When you write `when (shape) { is Circle -> ... is Rectangle -> ... }`, the compiler needs to know that `Triangle` exists in a different file and that you missed it. This requires a global collection pass before any exhaustiveness check can happen.
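Putting the same declarations together in one file with an exhaustive `when`, here's a minimal, self-contained sketch you can run to see the guarantee in action. Because `when` is used as an expression, the compiler demands full coverage; adding a fourth subclass to `Shape` makes this file stop compiling until `area` handles it.

```kotlin
import kotlin.math.PI

sealed interface Shape
data class Circle(val radius: Double) : Shape
data class Rectangle(val width: Double, val height: Double) : Shape
data class Triangle(val base: Double, val height: Double) : Shape

// No `else` branch needed: the compiler verifies that every subclass
// of Shape is covered, and fails the build if one is missing.
fun area(shape: Shape): Double = when (shape) {
    is Circle -> PI * shape.radius * shape.radius
    is Rectangle -> shape.width * shape.height
    is Triangle -> 0.5 * shape.base * shape.height
}
```

The interesting part is what's absent: no `else`, and no runtime check. The exhaustiveness proof lives entirely in the compiler.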
If you use Jetpack Room, every @Dao interface turns into a full database implementation. If you use Hilt, every @Inject constructor gets wired into a dependency graph. If you use Moshi, every @JsonClass generates a JSON adapter. You add one annotation, hit Build, and new source files appear in your build/generated/ksp directory. The engine behind all of these is KSP, Kotlin Symbol Processing.

In this article, you'll start from a practical processor that you'd write yourself, then trace inward through the KSP pipeline: how Gradle discovers your processor, how the Resolver lets you query the entire codebase as a symbol tree, how the multi-round processing loop handles dependencies between generated files, and how KSP tracks which files need reprocessing on incremental builds.

The fundamental problem: Why KAPT was slow

Before KSP, the only way to do annotation processing in Kotlin was KAPT (Kotlin Annotation Processing Tool). KAPT works by generating Java stub files from your Kotlin source code, then feeding those stubs to the standard javac annotation processing pipeline. This means the Kotlin compiler has to generate a complete set of Java declarations for every Kotlin class, interface, and function in your project, even if only a handful of them carry annotations. For a project with hundreds of Kotlin files, this stub generation can add 20 to 30 seconds to each build. The stubs are thrown away after processing, so the work is purely overhead.

KSP takes a different approach. Instead of generating Java stubs and running through javac, KSP reads the Kotlin compiler's own symbol tree directly. Your processor receives KSClassDeclaration, KSFunctionDeclaration, and KSPropertyDeclaration objects that represent the actual Kotlin program structure, including Kotlin-specific features like nullable types, extension functions, sealed classes, and default parameter values that get lost in Java stubs.
The result is that KSP processors run roughly twice as fast as equivalent KAPT processors, and they see a more accurate representation of the source code.
You store a login token with `dataStore.updateData { it.copy(token = newToken) }`. The user signs in, the token starts writing to disk, and a millisecond later the Android system kills your process because of memory pressure. When the user opens the app again, is the token there? Is the file corrupted? Is the old data intact, or is it gone too?

In this article, you'll trace the exact path your data takes inside DataStore during `updateData`, from the coroutine mutex through the scratch file to the atomic rename, then walk through five crash timing scenarios to see what actually survives.

The fundamental problem: File writes are not atomic

Before looking at DataStore, consider what happens if you write to a file the naive way:

```kotlin
val file = File(context.filesDir, "prefs.json")
file.writeText(json.encodeToString(newPrefs))
```

`writeText` opens the file, truncates it to zero bytes, and starts writing. If the process dies halfway through, the file contains half of your JSON. On next launch, the file is there but unreadable. You have lost both the old data and the new data.

This is the core problem DataStore solves. Not through retry logic or error handling, but through a write strategy that makes partial writes impossible at the file level.

The updateData pipeline

When you call `updateData`, a chain of four nested operations runs. Each layer adds a guarantee. The outer method acquires a coroutine `Mutex` that serializes all writers:

```kotlin
override suspend fun updateData(transform: suspend (t: T) -> T): T {
    scope.coroutineContext.ensureActive()
    // ...
    return writerMutex.withLock {
        val updateMsg = Message.Update(transform, enqueueState, token)
        val result = handleUpdate(updateMsg)
        result
    }
}
```

Only one `updateData` call can execute at a time. If two fragments both call `updateData` concurrently, the second one waits until the first finishes. This means the second transform always sees the result of the first, so no writes are lost.
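The scratch-file-plus-rename strategy can be sketched in a few lines of plain Kotlin (a minimal illustration, not DataStore's actual implementation; `atomicWrite` is a hypothetical helper). The new content goes to a temporary file first, and only a rename, which POSIX guarantees to be all-or-nothing, touches the real file:

```kotlin
import java.io.File
import java.nio.file.Files
import java.nio.file.StandardCopyOption

// Write the new content to a scratch file, then atomically rename it over the
// target. A crash during writeText only corrupts the scratch file; the target
// always holds either the complete old content or the complete new content.
fun atomicWrite(target: File, content: String) {
    val scratch = File(target.parentFile, target.name + ".tmp")
    scratch.writeText(content) // partial writes land here, never in `target`
    Files.move(
        scratch.toPath(),
        target.toPath(),
        StandardCopyOption.ATOMIC_MOVE, // rename(2) on POSIX: replaces target atomically
    )
}
```

Compare this with the naive `writeText` above: the failure window where the real file is truncated but not yet rewritten simply does not exist here.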
Saturday, March 28, 2026

Landscapist (https://github.com/skydoves/landscapist) provides a composable image loading library for Jetpack Compose and Kotlin Multiplatform. Among its image composables, `LandscapistImage` stands out as the recommended choice: it uses Landscapist's own standalone loading engine built from scratch for Jetpack Compose and Kotlin Multiplatform, with no dependency on platform-specific loaders like Glide or Coil. It handles fetching, caching, decoding, and display internally, and it works identically across Android, iOS, Desktop, and Web. On top of that, `LandscapistImage` exposes a plugin system through the `ImagePlugin` sealed interface, giving you five distinct hook points into the image loading lifecycle where you can inject custom behavior without modifying the loader itself.

In this article, you'll explore the `ImagePlugin` architecture, examining each of the five plugin types and why they exist, how `ImagePluginComponent` collects and dispatches plugins through a DSL, and how built-in plugins like PlaceholderPlugin (https://skydoves.github.io/landscapist/placeholder/placeholderplugin), ShimmerPlugin (https://skydoves.github.io/landscapist/placeholder/shimmerplugin), CircularRevealPlugin (https://skydoves.github.io/landscapist/animation/circular-reveal-animation), PalettePlugin (https://skydoves.github.io/landscapist/palette/), and ZoomablePlugin (https://skydoves.github.io/landscapist/zoomable/zoomableplugin) implement these interfaces in practice.

Why LandscapistImage for plugins

Before diving into the plugin system, it is worth understanding why `LandscapistImage` is the best foundation for plugin-based image loading. `LandscapistImage` uses its own standalone engine (landscapist-core) rather than delegating to Glide, Coil, or Fresco. This means every stage of the image loading pipeline, from network fetching through memory caching to bitmap decoding, is controlled by a single Kotlin Multiplatform implementation.
The benefit for plugins is direct: when `LandscapistImage` transitions from loading to success, it knows the exact moment the bitmap becomes available. It passes that bitmap directly to `PainterPlugin` and `SuccessStatePlugin` without any adapter layer or platform-specific conversion. The plugin receives a real `ImageBitmap`, not a wrapped platform object.

This also means `LandscapistImage` works on every Compose Multiplatform target. A `ShimmerPlugin` you write for Android runs identically on iOS and Desktop. There is no "this plugin only works with Glide" problem, because there is no Glide in the pipeline.

If you look at the `LandscapistImage` composable signature, you can see where plugins fit in:

```kotlin
@Composable
public fun LandscapistImage(
  imageModel: () -> Any?,
  modifier: Modifier = Modifier,
  component: ImageComponent = rememberImageComponent {},
  imageOptions: ImageOptions = ImageOptions(),
  loading: (@Composable BoxScope.(LandscapistImageState.Loading) -> Unit)? = null,
  success: (@Composable BoxScope.(LandscapistImageState.Success, Painter) -> Unit)? = null,
  failure: (@Composable BoxScope.(LandscapistImageState.Failure) -> Unit)? = null,
  // ...
```

The `component` parameter is the entry point for the plugin system. When you pass a `rememberImageComponent { ... }` block, every plugin you add inside that block gets dispatched at the correct lifecycle stage automatically. You can still use `loading`, `success`, and `failure` lambdas for one-off customization, but plugins are the reusable, composable alternative.

The fundamental problem: Extending image loading without modifying it
Every Compose app draws images. Whether you call `Image(painterResource(R.drawable.photo))` to display a bitmap, render a Material icon with `Icon(Icons.Default.Search)`, or load a vector drawable, the same underlying abstraction handles the actual drawing: the Painter class. Painter is to Compose what Drawable is to the View system, a layer that knows how to draw content into a bounded area while handling alpha, color filters, and layout direction.

In this article, you'll explore the full drawing pipeline from the abstract Painter class through its concrete implementations (BitmapPainter for raster images and VectorPainter for vector graphics), the immutable ImageVector data structure and the mutable VectorComponent render tree that draws it, the DrawCache that caches rendered vectors as bitmaps for performance, painterResource which dispatches between bitmap and vector formats, and the Image and Icon composables that connect painters to the layout system.

The fundamental problem: One API for many image formats

Android has two fundamentally different kinds of image assets. Bitmaps (PNG, JPG, WEBP) are grids of pixels. Vector drawables are XML files containing mathematical path descriptions: lines, curves, and fills expressed as coordinate instructions. A bitmap stores the exact color value of every pixel, while a vector stores instructions for how to draw the image at any size.

Compose needs a single abstraction that layout composables like Image and Icon can use without knowing the underlying image format. A composable that displays a photo and one that displays a search icon should both work through the same interface. The Painter abstraction solves this problem. It defines two things every image format must provide: an intrinsicSize reporting the image's natural dimensions and an onDraw method that knows how to render it. Each format provides its own implementation.
Painter: The drawing abstraction

Think of Painter like a print shop that accepts any original, whether it is a photograph, an illustration, or a piece of vector art, and produces output at the requested size. The consumer hands over the original and specifies dimensions, and the shop handles the rest. The consumer does not need to know whether the source was a JPEG or an SVG file.

The Painter abstract class defines the contract that all image sources must implement. If you look at its core structure (simplified):

```kotlin
abstract class Painter {
    private var layerPaint: Paint? = null
    private var useLayer = false
    private var alpha: Float = DefaultAlpha
    private var colorFilter: ColorFilter? = null

    abstract val intrinsicSize: Size

    protected abstract fun DrawScope.onDraw()

    protected open fun applyAlpha(alpha: Float): Boolean = false
    protected open fun applyColorFilter(colorFilter: ColorFilter?): Boolean = false
    protected open fun applyLayoutDirection(layoutDirection: LayoutDirection): Boolean = false
}
```

Each Painter subclass reports its natural dimensions through intrinsicSize. A BitmapPainter returns the pixel dimensions of its image. A VectorPainter returns the dp-based default size. If a painter has no intrinsic size, like ColorPainter which fills any area with a solid color, it returns Size.Unspecified.

The optimization hooks applyAlpha and applyColorFilter are where the design gets interesting. These methods return a Boolean. If the subclass returns true, it means "I'll handle this effect directly." If it returns false, the base class falls back to rendering into an offscreen layer using withSaveLayer, which works universally but costs an extra buffer allocation. This opt-in pattern lets simple painters like BitmapPainter avoid the offscreen layer entirely.

The draw method ties everything together. It configures alpha and color filter, insets the drawing area to the requested size, then decides whether to use a layer or call onDraw directly:
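The opt-in hook pattern is worth isolating from the rest of Painter. Here's a distilled sketch of just that mechanism, with hypothetical `Renderer`/`FastRenderer` names, where a boolean flag stands in for the offscreen `withSaveLayer` fallback:

```kotlin
// The base class asks each subclass whether it can apply an effect itself.
// A `false` answer triggers the expensive universal fallback path.
abstract class Renderer {
    var fallbackLayerUsed = false
        private set

    // Opt-in hook: return true to claim "I handle alpha directly."
    protected open fun applyAlpha(alpha: Float): Boolean = false

    fun draw(alpha: Float) {
        if (!applyAlpha(alpha)) {
            fallbackLayerUsed = true // stands in for rendering into an offscreen layer
        }
        onDraw()
    }

    protected abstract fun onDraw()
}

// A subclass that applies alpha itself never pays for the fallback layer.
class FastRenderer : Renderer() {
    var lastAlpha = 1f
    override fun applyAlpha(alpha: Float): Boolean { lastAlpha = alpha; return true }
    override fun onDraw() { /* draw with lastAlpha baked in */ }
}
```

The design choice here is that the default is the slow-but-correct path: a subclass that does nothing still renders correctly, and only subclasses that know how to fold the effect into their own drawing opt out of the layer.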
Every @Composable function you write produces invisible scaffolding. The Compose compiler wraps each Kotlin construct in a "group" that tells the runtime what it can do during recomposition. A conditional branch gets one type of group. A function body gets another. A key call gets yet another. These decisions happen at compile time, and they determine whether the runtime can skip, replace, move, or recycle each piece of your UI.

In this article, you'll explore the seven group types in Compose's runtime, examining how replace groups handle conditional branches, how restart groups enable targeted recomposition, how movable groups preserve state across reordering, how node groups bridge the slot table and the UI tree, how reusable groups recycle composition structure, how defaults groups isolate default parameter calculations, and how all seven funnel into a single core function with just three GroupKind values.

The fundamental problem: Why multiple group types?

Consider a composable that mixes several Kotlin constructs together:

```kotlin
@Composable
fun UserCard(user: User, showBio: Boolean) {
    key(user.id) {
        Text(user.name)
        if (showBio) {
            val bio = remember { loadBio(user.id) }
            Text(bio)
        }
    }
}
```

Each construct here needs different treatment from the runtime. The `if` branch might disappear entirely when `showBio` becomes false, so the runtime needs to delete everything inside it. The `key(user.id)` block might move to a different position in a list, so the runtime needs to find it and relocate it instead of destroying it. The `UserCard` function itself needs to restart independently when its parameters change. The `Text` calls need to emit actual nodes into the UI tree.

One group type cannot handle all these cases efficiently. A group that searches for moved children would waste time on a simple if/else branch that will never move. A group that immediately deletes mismatches would destroy state that could have been preserved through a reorder.
So the compiler classifies each construct into the group type that gives the runtime exactly the capabilities it needs, and nothing more.

The funnel: Seven entry points, three GroupKind values

All group types funnel into a single core mechanism. The GroupKind value class defines just three distinct representations:

```kotlin
@JvmInline
internal value class GroupKind private constructor(val value: Int) {
    inline val isNode get() = value != Group.value
    inline val isReusable get() = value != Node.value

    companion object {
        val Group = GroupKind(0)
        val Node = GroupKind(1)
        val ReusableNode = GroupKind(2)
    }
}
```

Three values, but seven group types. Five of those seven use GroupKind.Group. The behavioral difference between them is not in how the slot table stores them, but in the logic each start method runs before or after calling the core start function. Here is the routing:
Every Android developer using Compose has written @Preview above a composable and watched it appear in the Studio design panel. But what actually happens between that annotation and the rendered pixels? The answer involves annotation metadata, XML layout inflation, fake Android lifecycle objects, reflection-based composable invocation, and a JVM-based rendering engine, all collaborating to make a composable believe it is running inside a real Activity.

In this article, you'll explore the full pipeline that transforms a @Preview annotation into a rendered image, tracing the journey from the annotation definition itself, through ComposeViewAdapter, the FrameLayout that orchestrates the render; ComposableInvoker, which calls your composable via reflection while respecting the Compose compiler's ABI; Inspectable, which enables inspection mode and records composition data; and the ViewInfo tree that maps rendered pixels back to source code lines.

The fundamental problem: Rendering the uncallable

A @Composable function is not a regular function. The Compose compiler transforms every @Composable function to accept a Composer parameter and synthetic $changed and $default integers. Beyond the function signature, composables expect to run inside an environment that provides lifecycle owners, a ViewModelStore, a SavedStateRegistry, and other Android framework objects. These dependencies come for free inside a running Activity, but Studio needs to render your composable without a running emulator or device.

The tooling must reconstruct enough of the Android runtime for the composable to believe it is inside a real Activity, call the composable through reflection while matching the compiler's transformed signature exactly, and then extract the rendered layout information so Studio can map pixels to source code. This is the challenge the ui-tooling library solves.

The @Preview annotation: Metadata, not behavior

The @Preview annotation itself does nothing at runtime.
It is purely metadata that Studio reads to configure the rendering environment. Looking at the annotation definition:

```kotlin
@MustBeDocumented
@Retention(AnnotationRetention.BINARY)
@Target(AnnotationTarget.ANNOTATION_CLASS, AnnotationTarget.FUNCTION)
@Repeatable
annotation class Preview(
    val name: String = "",
    val group: String = "",
    @IntRange(from = 1) val apiLevel: Int = -1,
    val widthDp: Int = -1,
    val heightDp: Int = -1,
    val locale: String = "",
    @FloatRange(from = 0.01) val fontScale: Float = 1f,
    val showSystemUi: Boolean = false,
    val showBackground: Boolean = false,
    val backgroundColor: Long = 0,
    @UiMode val uiMode: Int = 0,
    @Device val device: String = Devices.DEFAULT,
    @Wallpaper val wallpaper: Int = Wallpapers.NONE,
)
```

Three meta-annotations define how this annotation behaves:
Jetpack Compose is a UI toolkit on the surface, but its internals draw from decades of computer science research. The runtime uses a data structure borrowed from text editors to store composition state. The modifier system applies an algorithm from version control to diff node chains. The state management layer implements a concurrency model from database engines. These are not theoretical exercises. They are practical solutions to problems that Compose solves every frame.

In this article, you'll explore five algorithms and data structures embedded in Compose's internals: the gap buffer that powers the slot table, Myers' diff algorithm applied to modifier chains, snapshot isolation borrowed from database MVCC, bit packing used to compress flags and parameters, and positional memoization that makes remember work without explicit keys.

Gap Buffer: The text editor trick inside the slot table

When you type in a text editor, characters are inserted at the cursor position. The naive approach, shifting every subsequent character one position to the right, costs O(n) per keystroke. Text editors solved this decades ago with the gap buffer: an array that maintains a block of unused space (the "gap") at the cursor position. Inserting a character fills one slot of the gap at O(1) cost. Moving the cursor shifts the gap to a new location, but sequential edits at nearby positions are fast because the gap is already there.

Compose's slot table faces the same problem. During composition, the runtime inserts groups and their associated data slots as composable functions execute. The SlotWriter maintains two parallel gap buffers: one for groups (structural metadata) and one for slots (stored values like remembered state). Each buffer tracks a gapStart position and a gapLen count. When the writer needs to insert at a position away from the current gap, it moves the gap there first.
The operation shifts array elements to reposition the empty space (simplified):

```kotlin
private fun moveGroupGapTo(index: Int) {
    if (groupGapStart == index) return
```
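To make the mechanics concrete, here's a toy gap buffer over an `IntArray` (a deliberately simplified sketch with hypothetical names, not the SlotWriter's code, which interleaves two buffers and tracks anchors). It shows the two operations that matter: filling the gap at O(1) and sliding the gap with a copy whose cost is the distance moved:

```kotlin
class GapBuffer(capacity: Int) {
    private var data = IntArray(capacity)
    private var gapStart = 0
    private var gapLen = capacity

    val size get() = data.size - gapLen

    // Slide the gap so it begins at `index`, copying only the elements
    // between the old and new gap positions.
    private fun moveGapTo(index: Int) {
        if (gapStart == index) return
        if (index < gapStart) {
            // shift logical elements [index, gapStart) to the far side of the gap
            data.copyInto(data, index + gapLen, index, gapStart)
        } else {
            // shift logical elements [gapStart, index) to the near side of the gap
            data.copyInto(data, gapStart, gapStart + gapLen, index + gapLen)
        }
        gapStart = index
    }

    // Insert fills one slot of the gap: O(1) once the gap is in place.
    // (No growth logic: the sketch assumes the buffer never fills up.)
    fun insert(index: Int, value: Int) {
        moveGapTo(index)
        data[gapStart++] = value
        gapLen--
    }

    fun get(index: Int): Int =
        if (index < gapStart) data[index] else data[index + gapLen]
}
```

Sequential inserts at one position never trigger `moveGapTo`'s copy, which is exactly why initial composition, which writes the tree front to back, is the gap buffer's best case.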
Every Compose developer has written `remember { mutableStateOf(0) }`. The value survives recomposition without any explicit storage reference. No ViewModel, no map, no key. Compose knows where the value belongs based on where the remember call appears in the source code. This mechanism is called positional memoization: values are identified not by name, but by their position in the execution trace of the composition.

In this article, you'll dive deep into the positional memoization system that powers remember, exploring how the compiler transforms remember calls into Composer.cache invocations, how cache reads from and writes to the slot table using a sequential cursor, how the changed function advances through stored keys to detect invalidation, why `or` is used instead of `||` when combining key checks, how RememberObserver values receive lifecycle callbacks, and how the skipping property determines whether the runtime re-executes a group or reuses stored data.

The fundamental problem: State without storage references

Consider a simple counter composable:

```kotlin
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Count: $count")
    }
}
```

Where does count live? There's no field in a class, no entry in a map, no unique identifier passed to remember. If you call Counter from two different places, each call gets its own independent count. The runtime distinguishes them purely by position: the first Counter call occupies one position in the composition tree, the second call occupies another.

Think of the slot table as a filing cabinet with numbered drawers. Each time composition runs, the runtime opens drawers in the same order: drawer 0, drawer 1, drawer 2, and so on. As long as your composable functions execute in the same order, each remember call opens the same drawer it opened last time and finds the same value waiting inside.

The remember API: Surface and overloads

The simplest remember overload takes no keys.
It calls `currentComposer.cache(false, calculation)`, passing false to indicate the cached value is never invalidated by key changes:

```kotlin
@Composable
inline fun <T> remember(crossinline calculation: @DisallowComposableCalls () -> T): T =
    currentComposer.cache(false, calculation)
```

The single key variant passes the result of `currentComposer.changed(key1)` as the invalid flag. If the key changed since the last composition, the cached value is recalculated:

```kotlin
@Composable
inline fun <T> remember(
    key1: Any?,
    crossinline calculation: @DisallowComposableCalls () -> T,
): T {
    return currentComposer.cache(currentComposer.changed(key1), calculation)
}
```

The two key variant combines checks with or:
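Why `or` and not `||`? The difference can be demonstrated in plain Kotlin with a hypothetical `Tracker` that counts calls, standing in for a composer whose `changed` calls must each advance the slot cursor: `||` short-circuits and skips the second call, while `or` evaluates both operands unconditionally.

```kotlin
// Each changed() call stands in for "compare against the stored key AND
// advance the slot cursor"; skipping it would leave the cursor misaligned.
class Tracker {
    var callsMade = 0
    fun changed(new: Any?, old: Any?): Boolean {
        callsMade++
        return new != old
    }
}

// `||` stops after the first `true`: the second key slot is never visited.
fun shortCircuit(t: Tracker) = t.changed(1, 2) || t.changed(3, 3)

// `or` evaluates both sides: every key slot is read on every pass.
fun eager(t: Tracker) = t.changed(1, 2) or t.changed(3, 3)
```

If the real composer used `||` and the first key reported a change, the cursor would stop advancing mid-group and every subsequent slot read would be off by one.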
Jetpack Compose stores your entire composition tree in a data structure called the SlotTable. Every composable call, every remembered value, every key is recorded as groups and slots in this table. For years, the SlotTable used a gap buffer, the same data structure that powers text editors. It worked well for sequential operations, but as applications grew more dynamic with lists, animations, and conditional content, one limitation became painful: moving or reordering groups required copying large portions of memory. The Compose team rewrote the SlotTable as a linked list, and operations like list reordering now recompose over twice as fast.

In this article, you'll dive deep into this architectural shift, exploring how the gap buffer stores groups in contiguous arrays, how the link buffer replaces positions with pointer-based navigation, how the SlotTableEditor achieves O(1) moves and deletes, how the free list and slot buffering manage memory, and how GroupHandle enables lazy position resolution. This isn't a guide on using Compose's APIs. It's an exploration of the data structure rewrite that makes recomposition fundamentally faster.

The gap buffer: From text editors to composition trees

A gap buffer is a data structure originally designed for text editing. Consider the problem a text editor faces: a document is a sequence of characters, and the user can insert or delete at any position. Storing characters in a flat array means inserting in the middle requires shifting every character after the insertion point. For a 100,000 character document, inserting near the beginning means moving nearly 100,000 characters.

The gap buffer solves this by keeping an empty region (the "gap") right at the cursor position. Inserting at the cursor fills in the gap without shifting anything. Deleting at the cursor expands the gap. Most text editing is sequential: you type at one spot, and the gap stays right where you need it.
Editors like Emacs have used gap buffers for decades because of this property. The trade-off appears when you jump to a distant position. The gap must slide to the new cursor, copying every element between the old and new positions. As long as edits cluster near one location, the gap rarely moves and performance stays excellent. This is why gap buffers work well beyond text editors too: any workload where modifications happen sequentially in a large, ordered data set benefits from the same principle. The data stays in a flat, contiguous array which is cache friendly, and insertions at the working position are O(1).

How Compose applied the gap buffer

The original Compose SlotTable stores composition data in two flat arrays:

```kotlin
internal class SlotTable : SlotStorage, CompositionData {
    var groups = IntArray(0)
    var groupsSize = 0
    var slots = Array<Any?>(0) { null }
    var slotsSize = 0
}
```

The groups array stores group metadata as inline structs of 5 integers each (GroupFieldsSize = 5). Each group contains a key, group size, node count, parent anchor, and a data anchor pointing into the slots array. The slots array holds the actual remembered values, composable node references, and other data associated with each group.

The groups are ordered linearly: a parent's group fields are followed immediately by all its children's fields, forming a depth-first layout. This makes linear scanning fast because you can read the tree from start to finish without jumping around. But it also means that a group's identity is tied to its position in the array. If you want to move a group, you have to physically relocate its data.

The gap buffer maintains its "gap" in each array. Insertions at the gap are O(1). But insertions elsewhere require moving the gap first, which copies every element between the old gap position and the new one. For a table with 10,000 groups (50,000 integers), moving the gap from the end to the beginning copies all 50,000 integers. Deletions work similarly.
When you remove a group, its space becomes part of the gap. But if the gap isn't adjacent to the deleted group, it must be moved there first. This worked well for initial composition, which proceeds sequentially through the tree. But recomposition can touch any part of the tree in any order, and list reordering moves groups across large distances.

The fundamental problem: Array copies that scale with composition size

Imagine a Column rendering 1,000 items, and the data source reorders item 999 to position 0. The runtime must move that item's group and all its slots from the end of the SlotTable to the beginning. In the gap buffer, this requires a 9-step process:
Kotlin Coroutines have become the standard for asynchronous programming on the JVM, offering developers a way to write sequential, readable code that can pause and resume without blocking threads. Most developers interact with coroutines through familiar APIs like launch, async, and Flow, treating suspend as a language keyword that "just works." But coroutines are not simply a library feature layered on top of the language. They are a compiler-level solution, built through the Kotlin compiler's IR lowering pipeline and bytecode generation, that transforms your sequential code into resumable state machines. The suspend keyword triggers a series of compiler transformations that rewrite your function's structure, signature, and control flow before it ever reaches the JVM.

In this article, you'll dive deep into the Kotlin compiler's coroutine machinery, exploring the six-stage transformation pipeline that converts a suspend function into a state machine. You'll trace through how the compiler injects hidden continuation parameters through CPS transformation, how it generates continuation classes with the clever sign bit trick for distinguishing fresh calls from resumptions, how the bytecode-level transformer collects suspension points and inserts a TABLESWITCH dispatch, how local variables are "spilled" into continuation fields to survive across suspension, and how tail call optimization lets the compiler skip the entire state machine when it can prove every suspension point is a tail call.

The fundamental problem: How do you make a function resumable?

Consider this suspend function:

```kotlin
suspend fun fetchUserData(): UserData {
    val user = fetchUser()
    val profile = fetchProfile(user.id)
    return UserData(user, profile)
}
```

This looks like ordinary sequential code, but both fetchUser and fetchProfile might perform network requests that take hundreds of milliseconds.
The function must be able to pause at each call, release the thread entirely, and later resume execution at the exact point where it left off, with all local variables intact.

The JVM provides no native mechanism for this. A JVM method is a stack frame, and when a method returns, its stack frame is gone. There is no way to "freeze" a stack frame, release the thread, and later restore it. The function must return to release the thread, but returning destroys the local state.

The Kotlin compiler solves this by transforming each suspend function into a state machine. The function's body is split into segments between suspension points. Local variables are saved into fields of a continuation object before each suspension, and restored after resumption. A label field tracks which segment to execute next, and a TABLESWITCH at the function entry dispatches to the correct segment. The developer writes linear code; the compiler generates the machinery to break it apart and reassemble it on demand.

The six-stage pipeline: From suspend to state machine

The transformation happens across six distinct phases in the JVM backend. Understanding the full pipeline is essential to understanding why each phase exists and what it contributes.

1. SuspendLambdaLowering: Converts suspend lambda expressions into anonymous continuation classes
2. TailCallOptimizationLowering: Identifies suspend calls in tail position and marks them with IrReturn wrappers
3. AddContinuationLowering: The central IR lowering; generates continuation classes, injects $completion parameters, creates static suspend implementations
4. Code generation: Lowers IR to JVM bytecode, placing BeforeSuspendMarker/AfterSuspendMarker instructions around each suspension point
5. CoroutineTransformerMethodVisitor: The bytecode-level state machine engine; inserts the TABLESWITCH, spills variables, generates resume paths
6.
Tail call optimization check: If all suspension points are tail calls, the state machine is skipped entirely

Let's trace through each phase.

## CPS transformation: The invisible parameter

The foundation of coroutine compilation is Continuation Passing Style (CPS) transformation. Every suspend function, when compiled, receives a hidden additional parameter: the continuation. This continuation represents "what happens next" after the function completes or suspends. When you write:

```kotlin
suspend fun fetchUser(): User {
    // ...
}
```

The compiler transforms the signature to:

```kotlin
fun fetchUser($completion: Continuation<User>): Any?
```

Two changes happen. First, a `$completion` parameter of type `Continuation` is appended. Second, the return type becomes `Any?`, because the function can now return either the actual result or the special sentinel `COROUTINE_SUSPENDED`, indicating that the function has paused and will deliver its result later through the continuation.

Looking at how `AddContinuationLowering` performs this injection:

```kotlin
val continuationParameter = buildValueParameter(function) {
    kind = IrParameterKind.Regular
    name = Name.identifier(SUSPEND_FUNCTION_COMPLETION_PARAMETER_NAME) // "$completion"
    type = continuationType(context).substitute(substitutionMap) // Continuation<RetType>
    origin = JvmLoweredDeclarationOrigin.CONTINUATION_CLASS
}
```

The parameter is inserted before any default argument masks but after all regular parameters. This is invisible in source code but always present in the bytecode. Every call site of a suspend function is also rewritten to pass the current continuation as this extra argument.

## The continuation class: Where state lives

The central artifact of coroutine compilation is the continuation class. For each named suspend function, the compiler generates an inner class that extends `ContinuationImpl` and holds all the state needed to suspend and resume.
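The CPS rewrite can be sketched by hand in plain Kotlin. The snippet below is a simplified stand-in, not the real `kotlin.coroutines` types: `COROUTINE_SUSPENDED`, `Continuation`, and the `immediate` flag are local modeling devices used to show the two possible outcomes of a CPS-transformed call.

```kotlin
// Simplified stand-ins for the real kotlin.coroutines machinery.
val COROUTINE_SUSPENDED = Any()

fun interface Continuation<in T> {
    fun resumeWith(value: T)
}

// A parked continuation, resumed "later" by whoever has the result.
var pending: Continuation<String>? = null

// What `suspend fun fetchUser(): User` roughly becomes after CPS:
// it either returns the value directly or the COROUTINE_SUSPENDED sentinel.
fun fetchUser(completion: Continuation<String>, immediate: Boolean): Any? =
    if (immediate) {
        "user-42" // result available now: return it without suspending
    } else {
        pending = completion // park "what happens next"
        COROUTINE_SUSPENDED  // signal the caller that we paused
    }

fun main() {
    // Fast path: the value comes back synchronously.
    check(fetchUser({ }, immediate = true) == "user-42")

    // Suspension path: the sentinel comes back, the result arrives later.
    var resumed: String? = null
    check(fetchUser({ resumed = it }, immediate = false) === COROUTINE_SUSPENDED)
    pending?.resumeWith("user-42")
    check(resumed == "user-42")
}
```

This is exactly why the compiled return type is `Any?`: one slot has to carry both `User` results and the sentinel.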
Looking at `generateContinuationClassForNamedFunction` in `AddContinuationLowering.kt`:

```kotlin
context.irFactory.buildClass {
    name = Name.special("<Continuation>")
    origin = JvmLoweredDeclarationOrigin.CONTINUATION_CLASS
}.apply {
    superTypes += context.symbols.continuationImplClass.owner.defaultType
```
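To see where this pipeline ends up, here is a hand-written sketch of the state-machine shape the compiler produces for `fetchUserData`. This is a simplified model, not real generated code: the `label` field and the `when` stand in for the TABLESWITCH, and the spilled `user` field shows how a local survives across a suspension boundary.

```kotlin
// Simplified sketch of compiler output, not real generated bytecode.
class FetchUserDataContinuation {
    var label = 0            // which segment to run next
    var user: String? = null // "spilled" local, survives suspension
    var result: Any? = null  // value delivered by the previous segment
}

fun fetchUserData(cont: FetchUserDataContinuation): Any? {
    while (true) {
        // The `when` plays the role of the TABLESWITCH dispatch.
        when (cont.label) {
            0 -> {
                cont.label = 1
                // imagine: fetchUser(cont), which may instead return COROUTINE_SUSPENDED
                cont.result = "user-1"
            }
            1 -> {
                cont.user = cont.result as String // restore the spilled local
                cont.label = 2
                // imagine: fetchProfile(cont.user, cont)
                cont.result = "profile-of-${cont.user}"
            }
            2 -> return "UserData(${cont.user}, ${cont.result})"
        }
    }
}

fun main() {
    check(fetchUserData(FetchUserDataContinuation()) == "UserData(user-1, profile-of-user-1)")
}
```

On resumption, the real runtime re-enters the function with the saved continuation, and the label dispatch jumps straight to the segment after the suspension point.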
Jetpack Compose manages UI state through a system called Snapshots, a concept borrowed from database theory that enables isolated, concurrent access to shared mutable state. When you write `var count by mutableStateOf(0)`, the runtime doesn't just store a value in a field. It creates a snapshot-aware state object that participates in an isolation system where multiple threads can read and write state without interfering with each other. While most developers interact with snapshots implicitly through `mutableStateOf` and recomposition, the deeper question remains: what exactly is a snapshot, how does it provide isolation, and what happens when you "enter" one?

In this article, you'll dive deep into the Snapshot abstraction itself, exploring how the class hierarchy provides different levels of isolation, how the thread-local mechanism makes snapshots invisible to the developer, how the GlobalSnapshot serves as the always-present default, how advanceGlobalSnapshot makes changes visible, how nested snapshots enable hierarchical isolation, and how TransparentObserverSnapshot achieves zero-cost observation. This isn't a guide on using mutableStateOf or snapshotFlow. It's an exploration of the isolation architecture that makes Compose's reactive state management possible.

## The fundamental problem: Concurrent access to shared mutable state

Consider a typical Compose application:

```kotlin
@Composable
fun UserProfile() {
    var name by remember { mutableStateOf("") }
    var email by remember { mutableStateOf("") }

    Column {
        Text("Name: $name")
        Text("Email: $email")
        Button(onClick = {
            name = "Jaewoong"
            email = "jaewoong@example.com"
        }) {
            Text("Load User")
        }
    }
}
```

This looks simple, but several problems lurk beneath the surface:

1. Torn reads: If composition reads name after the button click updates it but before email is updated, the UI shows inconsistent state.
2. Concurrent composition: Compose may run composition on a background thread while the main thread is modifying state.
3.
Observation: The system must know that UserProfile read name and email so it can schedule recomposition when they change.
4. Batching: Multiple state changes from a single gesture should result in one recomposition, not one per change.

The naive solution of locking every read and write would kill performance. Compose solves all four problems with a single abstraction: snapshots. A snapshot is an isolated view of mutable state at a specific point in time. Reads within a snapshot always see a consistent view, writes are invisible to other snapshots until explicitly applied, and observers track exactly which state each composable depends on.

## The Snapshot abstraction: An isolated view of state

At its core, a Snapshot is a sealed class that encapsulates a unique ID and a set of snapshot IDs that should be considered invisible:

```kotlin
// simplified
public sealed class Snapshot(
    snapshotId: SnapshotId,
    internal open var invalid: SnapshotIdSet,
) {
    public open var snapshotId: SnapshotId = snapshotId
    public abstract val root: Snapshot
    public abstract val readOnly: Boolean
    internal abstract val readObserver: ((Any) -> Unit)?
    internal abstract val writeObserver: ((Any) -> Unit)?
}
```

Three properties define how a snapshot sees the world:
Jetpack Compose revolutionized Android UI development with its declarative approach, but what makes it truly powerful is the sophisticated machinery underneath. At the heart of Compose's reactivity lies the Snapshot System, a multi-version concurrency control (MVCC) implementation that enables isolated state changes, automatic recomposition, and conflict-free concurrent updates. When you write `var count by mutableStateOf(0)`, you're interacting with one of the most elegant concurrent systems in modern Android development.

In this article, you'll dive deep into the internal mechanisms of the Snapshot System, exploring how snapshots provide isolation through MVCC, how StateRecord chains track multiple versions of state, how the system decides which version to read, how writes create new StateRecords without blocking readers, how state observations trigger recomposition, and how the apply mechanism detects and resolves conflicts. This isn't a guide on using mutableStateOf; it's an exploration of the compiler and runtime machinery that makes reactive state management possible.

## The fundamental problem: How do you track state changes safely?

Consider this simple Compose code:

```kotlin
@Composable
fun Counter() {
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("Count: $count")
    }
}
```

This looks deceptively simple, but several complex problems need solving:

1. Isolation: When count changes, the new value must be visible to recomposition but not affect in-progress compositions.
2. Observation: The system must know that this composable read count so it can recompose when count changes.
3. Concurrency: Multiple threads might read and write state simultaneously.
4. Memory: Old state versions must eventually be garbage collected.

The naive approach would use locks everywhere, but that would kill performance. Compose solves this elegantly with snapshots: isolated views of mutable state that enable lock-free reads and conflict detection.
## Understanding the core abstraction: What makes Snapshot special

At its heart, a Snapshot is an isolated view of mutable state at a specific point in time. The Snapshot class is a sealed class that encapsulates a unique snapshot ID and tracks which concurrent snapshots should be considered invalid for isolation purposes:

```kotlin
public sealed class Snapshot(
    snapshotId: SnapshotId,
    internal open var invalid: SnapshotIdSet,
) {
    public open var snapshotId: SnapshotId = snapshotId
    public abstract val root: Snapshot
    public abstract val readOnly: Boolean
    internal abstract val readObserver: ((Any) -> Unit)?
    internal abstract val writeObserver: ((Any) -> Unit)?
}
```

Three critical properties define snapshot isolation:

### Snapshot IDs are monotonically increasing

Every snapshot gets a unique ID from nextSnapshotId, an atomically incremented counter. This creates a total ordering of snapshots. When you create a snapshot, it gets the next available ID:

```kotlin
val nextSnapshotId: SnapshotId
    get() = sync { currentGlobalSnapshot.get().snapshotId + 1 }
```

This monotonic ID is the foundation of version selection: newer snapshots can see changes from older snapshots, but not vice versa.

### Invalid sets track concurrent snapshots

Each snapshot maintains a SnapshotIdSet called invalid that contains the IDs of snapshots that were active but not yet applied when this snapshot was created. This is crucial for isolation:

```kotlin
// From the test suite demonstrating isolation
var state by mutableStateOf("0")
val snapshot = takeSnapshot()
state = "1"
assertEquals("1", state)                    // Global sees "1"
assertEquals("0", snapshot.enter { state }) // Snapshot still sees "0"
```

The snapshot can't see changes made by concurrent snapshots because their IDs are in its invalid set. This is how MVCC provides snapshot isolation.

### Observers enable reactive behavior

The readObserver is called whenever state is read, allowing the system to track dependencies. The writeObserver is called on writes, enabling batched notifications.
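The version-selection rule described above can be modeled in a few lines of plain Kotlin. This is a toy model, not the real Compose types: a record is visible to a reader when its writer's ID is not newer than the reader's ID and not in the reader's invalid set, and the newest visible record wins.

```kotlin
// Toy model of MVCC record selection (not the real StateRecord/SnapshotIdSet types).
data class StateRecord(val snapshotId: Int, val value: String)

// Pick the newest record whose writing snapshot is visible to the reader.
fun readable(records: List<StateRecord>, readerId: Int, invalid: Set<Int>): StateRecord? =
    records
        .filter { it.snapshotId <= readerId && it.snapshotId !in invalid }
        .maxByOrNull { it.snapshotId }

fun main() {
    // Record written at id 1 holds "0"; a later write at id 3 holds "1".
    val records = listOf(StateRecord(1, "0"), StateRecord(3, "1"))

    // A snapshot whose invalid set contains 3 still sees the old value.
    check(readable(records, readerId = 4, invalid = setOf(3))?.value == "0")

    // A reader with nothing marked invalid sees the newest value.
    check(readable(records, readerId = 4, invalid = emptySet())?.value == "1")
}
```

This is the whole trick behind the `takeSnapshot` example above: the snapshot's invalid set freezes its view of the record chain, no locks required.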
These observers are the bridge between snapshots and recomposition.

## Global vs Mutable: Two kinds of snapshots

Compose uses two snapshot types for different purposes.

### GlobalSnapshot: The current state of the world

There's a single GlobalSnapshot that represents the "current" global state:

```kotlin
internal class GlobalSnapshot(snapshotId: SnapshotId, invalid: SnapshotIdSet) :
    MutableSnapshot(
        snapshotId,
        invalid,
        null,
        { state -> sync { globalWriteObservers.fastForEach { it(state) } } },
    )
```

The global snapshot is special:

- It's the default snapshot when you're not inside any other snapshot
- Writes to the global snapshot are immediately visible
- It has a write observer that notifies all registered globalWriteObservers
- When mutable snapshots are applied, they merge into the global snapshot
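The "invisible until applied, then merged into global" behavior can be sketched with a toy buffer, which is an assumption-laden stand-in rather than the real MutableSnapshot implementation: writes accumulate privately and only reach the shared state on `apply()`.

```kotlin
// Toy model of mutable-snapshot apply semantics (not real Compose code).
class ToySnapshot(private val global: MutableMap<String, String>) {
    private val writes = mutableMapOf<String, String>()

    // Reads prefer this snapshot's own pending writes.
    fun read(key: String): String? = writes[key] ?: global[key]

    fun write(key: String, value: String) { writes[key] = value }

    // Merging into the global state is what makes changes visible to everyone.
    fun apply() { global.putAll(writes) }
}

fun main() {
    val global = mutableMapOf("count" to "0")
    val snap = ToySnapshot(global)

    snap.write("count", "1")
    check(global["count"] == "0")    // invisible to the global snapshot
    check(snap.read("count") == "1") // but visible inside the snapshot

    snap.apply()
    check(global["count"] == "1")    // now merged into the global state
}
```

The real system does this with record chains and ID sets instead of maps, and the apply step is also where conflict detection happens.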
Every Android developer has overridden onCreate, onResume, and onDestroy. You write your initialization logic, register listeners, and clean up resources, trusting that the framework will call these methods at the right time, in the right order. But what actually invokes these callbacks? The lifecycle does not run itself. Somewhere deep in the Android framework, a sophisticated transaction system serializes commands on the system server side, sends them across a Binder IPC boundary, and then a state machine in your app's process figures out the exact sequence of intermediate transitions needed to reach the target state. The simplicity of `Activity.onResume()` belies an entire internal architecture devoted to making that call happen reliably.

In this article, you'll dive deep into the internal machinery that drives every Activity lifecycle callback. You'll trace the path from the system server's ClientTransaction through the TransactionExecutor's state machine, into ActivityThread's perform methods, through Instrumentation's dispatch layer, and all the way to the window management code that makes your Activity visible. Along the way, you'll see how the framework calculates intermediate lifecycle states, how it protects against invalid transitions, and why this layered architecture exists in the first place. This isn't a guide on using Activity lifecycle callbacks. It's an exploration of the internal transaction and state machine architecture that makes them possible.

## The fundamental problem: Coordinating lifecycle across process boundaries

When you think about lifecycle callbacks, you might imagine something simple. The system server decides an Activity should resume, and it calls onResume. If only it were that straightforward. Consider the naive mental model:

```java
// conceptual - what you might imagine happens
activityInstance.onResume();
```

The reality is far more complex.
The system server, running in its own process, cannot directly invoke methods on your Activity running in your app's process. The call must cross a Binder IPC boundary. But that is just the start of the problem.

What if the Activity is currently in the ON_STOP state and needs to reach ON_RESUME? The framework cannot jump directly. It must first transition through ON_RESTART, then ON_START, and only then ON_RESUME. Each of these intermediate callbacks must fire in order, because your code might depend on onStart having run before onResume.

Furthermore, the system server may need to batch multiple commands (deliver a result, then resume) into a single transaction. It must handle edge cases like an Activity being destroyed while a resume command is in flight. And once the Activity is resumed, the framework must add its DecorView to the WindowManager so it actually becomes visible.

This is the fundamental problem: lifecycle callbacks are not simple method calls. They are the output of a distributed state machine that spans two processes, handles arbitrary state jumps, manages window visibility, and must always produce a deterministic callback order.

## ActivityClientRecord: Tracking lifecycle state on the client side

The framework needs a way to track each Activity's current lifecycle state within the app process. This is the job of ActivityClientRecord, a static inner class of ActivityThread that serves as the client-side bookkeeping record for each Activity instance. If you examine the ActivityClientRecord:

```java
// android.app.ActivityThread.ActivityClientRecord
public static final class ActivityClientRecord {
    public IBinder token;
    Activity activity;
    Window window;

    @LifecycleState
    private int mLifecycleState = PRE_ON_CREATE;

    boolean paused;
    boolean stopped;
    // ...
}
```

Notice the structure:

1. token is the Binder token that uniquely identifies this Activity across the system server and app process boundary. Every lifecycle command references an Activity by this token.
2.
mLifecycleState tracks the current lifecycle state as an integer constant. It starts at PRE_ON_CREATE, meaning the Activity has not yet been created.
3. paused and stopped are legacy boolean flags maintained for backward compatibility with older APIs, but mLifecycleState is the authoritative state tracker.

The setState method keeps everything in sync:

```java
// android.app.ActivityThread.ActivityClientRecord
public void setState(@LifecycleState int newLifecycleState) {
    mLifecycleState = newLifecycleState;
    switch (mLifecycleState) {
        case ON_CREATE:
            paused = true;
            stopped = true;
            break;
        case ON_RESUME:
            paused = false;
            stopped = false;
            break;
        case ON_PAUSE:
            paused = true;
            stopped = false;
            break;
        case ON_STOP:
            paused = true;
            stopped = true;
            break;
    }
}
```

This is important: every time the Activity advances through a lifecycle state, setState is called immediately after the callback completes. The TransactionExecutor (which you'll see next) reads getLifecycleState to determine where the Activity currently is before calculating the path to the next target state. If this bookkeeping were ever out of sync, the state machine would produce incorrect transition sequences.

## ClientTransaction: Bundling lifecycle commands for IPC

The system server cannot call methods on your Activity directly. Instead, it constructs a ClientTransaction, a Parcelable container that bundles one or more lifecycle commands for delivery to the app process.
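The path calculation that the executor performs between the recorded state and the target state can be sketched in plain Kotlin. This is a deliberately simplified model, not the real TransactionExecutor or its state constants; it only shows the idea that some transitions must pass through mandatory intermediate states.

```kotlin
// Simplified stand-ins for the framework's lifecycle state constants.
const val ON_CREATE = 1
const val ON_START = 2
const val ON_RESUME = 3
const val ON_PAUSE = 4
const val ON_STOP = 5
const val ON_RESTART = 6

// Toy path computation: the real executor handles every state pair;
// here we model only the transitions discussed in the text.
fun computePath(current: Int, target: Int): List<Int> = when {
    current == target -> emptyList()
    current == ON_STOP && target == ON_RESUME ->
        listOf(ON_RESTART, ON_START, ON_RESUME) // no direct jump allowed
    current == ON_RESUME && target == ON_STOP ->
        listOf(ON_PAUSE, ON_STOP)               // must pause before stopping
    else -> listOf(target)
}

fun main() {
    // A stopped Activity cannot jump straight to resumed.
    check(computePath(ON_STOP, ON_RESUME) == listOf(ON_RESTART, ON_START, ON_RESUME))
    check(computePath(ON_RESUME, ON_STOP) == listOf(ON_PAUSE, ON_STOP))
}
```

Each state in the returned path corresponds to one callback the executor must dispatch, in order, before the Activity reaches its target.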
Google Maps popularized a bottom sheet pattern that most Android developers recognize immediately: a small panel peeking from the bottom of the screen, expandable to a mid height for quick details, and draggable to full screen for comprehensive information. The user interacts with the map behind the sheet at all times. This pattern looks simple on the surface, but implementing it correctly requires solving several problems: multi-state anchoring, non-modal interaction, dynamic content adaptation, and nested scroll coordination. Jetpack Compose's standard ModalBottomSheet only supports two states (expanded and hidden) and blocks background interaction with a scrim, making it unsuitable for this use case.

In this article, you'll explore how to build a Google Maps style bottom sheet using [FlexibleBottomSheet](https://github.com/skydoves/flexiblebottomsheet), covering how to configure three expansion states with custom height ratios, how to enable non-modal mode so users can interact with the content behind the sheet, how to adapt your UI dynamically based on the sheet's current state, how to control state transitions programmatically, how to handle nested scrolling inside the sheet, and how to wrap content dynamically for variable height sheets.

## Why ModalBottomSheet falls short

Consider the standard Material 3 bottom sheet:

```kotlin
@Composable
fun StandardBottomSheet() {
    ModalBottomSheet(
        onDismissRequest = { /* dismiss */ },
        sheetState = rememberModalBottomSheetState(),
    ) {
        Text("Content here")
    }
}
```

This gives you two states: expanded and hidden. The sheet covers the background with a scrim, blocking all interaction behind it. For a confirmation dialog or action menu, this is fine. But for a Google Maps style experience, you need:

1. Three visible states: A peek height showing a summary, a mid height for details, and a full height for comprehensive content.
2. No scrim: The map behind the sheet must remain fully interactive.
3.
Dynamic content: The content should adapt based on the current expansion state.
4. Nested scrolling: Scrollable content inside the fully expanded sheet should scroll naturally, and dragging down from the top of the scroll should collapse the sheet.

[FlexibleBottomSheet](https://github.com/skydoves/flexiblebottomsheet) addresses all of these.

## Setting up a three-state bottom sheet
Jetpack Compose's stability system determines whether a composable function can be skipped during recomposition. When all parameters are stable, Compose can compare them and skip the function entirely if nothing changed. When even one parameter is unstable, the composable must re-execute every time its parent recomposes. Understanding which composables are stable and which are not is the first step toward optimizing Compose performance, but it's not the whole picture. [Compose Stability Analyzer](https://github.com/skydoves/compose-stability-analyzer) has been providing real-time stability analysis directly in Android Studio through gutter icons, hover tooltips, inline hints, and code inspections. These features answer the question "is this composable stable?" at a glance.
Android's WorkManager has become the recommended solution for persistent, deferrable background work. Unlike transient background operations that live and die with your app process, WorkManager guarantees that enqueued work eventually executes, even if the user force-stops the app, the device reboots, or constraints aren't met yet. While the API appears simple on the surface, the internal machinery reveals sophisticated design decisions around work persistence, dual-scheduler coordination, constraint tracking, process resilience, and state management that span a Room database, multiple scheduler backends, and a carefully orchestrated execution pipeline.

In this article, you'll dive deep into how Jetpack WorkManager works internally, exploring how the singleton is initialized and bootstrapped through AndroidX Startup, how WorkSpec entities persist work metadata in a Room database, how the dual-scheduler system coordinates between GreedyScheduler and SystemJobScheduler, how Processor and WorkerWrapper orchestrate the actual execution of work, how ConstraintTracker monitors system state for constraint satisfaction, how ForceStopRunnable detects app force stops and reschedules work, and how work chaining creates dependency graphs through the Dependency table.

## The fundamental problem: Reliable background execution

Background execution on Android is fundamentally unreliable. The system aggressively kills processes to reclaim memory, Doze mode restricts background activity, and app standby buckets throttle work for rarely-used apps. A naive approach to background work:

```kotlin
class SyncActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        Thread {
            // Sync data with server
            api.syncAllData()
        }.start()
    }
}
```

This fails in multiple ways. The thread dies when the process is killed. There's no retry mechanism if the network fails. The work doesn't survive device reboots.
There's no way to specify constraints like "only on Wi-Fi" or "only when charging." You might try using a Service:

```kotlin
class SyncService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        Thread { api.syncAllData() }.start()
        return START_REDELIVER_INTENT
    }
}
```

This is better: START_REDELIVER_INTENT ensures the Intent is redelivered if the process is killed. But you still have no constraint support, no work chaining, no persistence across reboots, and no observability of work status. You'd need to build all of that yourself. WorkManager solves this by providing a complete infrastructure for persistent, constraint-aware, observable, chainable background work with guaranteed execution.

## Initialization: The bootstrap sequence

WorkManager initializes itself automatically before your Application.onCreate runs. The entry point is WorkManagerInitializer, which implements AndroidX Startup's Initializer interface:

```java
public final class WorkManagerInitializer implements Initializer<WorkManager> {
    @Override
    public WorkManager create(Context context) {
        Logger.get().debug(TAG, "Initializing WorkManager with default configuration.");
        WorkManager.initialize(context, new Configuration.Builder().build());
        return WorkManager.getInstance(context);
    }

    @Override
    public List<Class<? extends Initializer<?>>> dependencies() {
        return Collections.emptyList();
    }
}
```

AndroidX Startup uses a ContentProvider to trigger initialization before Application.onCreate. This is critical because it ensures WorkManager is ready before any application code runs. The dependencies method returns an empty list, meaning WorkManager has no initialization dependencies on other Startup initializers.
## The singleton with dual-lock pattern

WorkManager.initialize delegates to WorkManagerImpl.initialize, which uses a synchronized dual-instance pattern:

```java
public static void initialize(Context context, Configuration configuration) {
    synchronized (sLock) {
        if (sDelegatedInstance != null && sDefaultInstance != null) {
            throw new IllegalStateException("WorkManager is already initialized.");
        }
        if (sDelegatedInstance == null) {
            context = context.getApplicationContext();
            if (sDefaultInstance == null) {
                sDefaultInstance = createWorkManager(context, configuration);
            }
            sDelegatedInstance = sDefaultInstance;
        }
    }
}
```

Two static fields serve different purposes. sDefaultInstance holds the real singleton. sDelegatedInstance enables testing by allowing test code to inject a mock via setDelegate. The sLock object provides thread-safe access. The explicit check for double initialization throws an IllegalStateException with a helpful message guiding developers to disable WorkManagerInitializer in the manifest if they want custom initialization.

## On-demand initialization via Configuration.Provider

When getInstance(Context) is called and no instance exists, WorkManager falls back to on-demand initialization:

```java
public static WorkManagerImpl getInstance(Context context) {
    synchronized (sLock) {
        WorkManagerImpl instance = getInstance();
        if (instance == null) {
            Context appContext = context.getApplicationContext();
            if (appContext instanceof Configuration.Provider) {
                initialize(appContext,
                        ((Configuration.Provider) appContext).getWorkManagerConfiguration());
                instance = getInstance(appContext);
            } else {
                throw new IllegalStateException("WorkManager is not initialized properly.");
            }
        }
        return instance;
    }
}
```

If your Application class implements Configuration.Provider, WorkManager lazily initializes with that configuration. This pattern allows developers to disable automatic initialization and provide custom configuration without calling initialize explicitly in Application.onCreate.
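The dual-instance idea is easier to see stripped of the Android specifics. Below is a minimal Kotlin sketch under stated assumptions: `WorkManagerHolder` is a hypothetical stand-in (with `String` instances replacing `WorkManagerImpl`) showing only the default-plus-delegate shape, not the real WorkManager code.

```kotlin
// Hypothetical stand-in for WorkManagerImpl's dual-instance singleton.
class WorkManagerHolder {
    private val lock = Any()
    private var defaultInstance: String? = null   // the real singleton
    private var delegatedInstance: String? = null // slot tests can override

    fun initialize(create: () -> String) {
        synchronized(lock) {
            check(!(delegatedInstance != null && defaultInstance != null)) {
                "Already initialized."
            }
            if (delegatedInstance == null) {
                if (defaultInstance == null) defaultInstance = create()
                delegatedInstance = defaultInstance
            }
        }
    }

    // What a test framework's setDelegate would do: swap in a fake.
    fun setDelegate(delegate: String?) = synchronized(lock) { delegatedInstance = delegate }

    fun getInstance(): String = synchronized(lock) {
        delegatedInstance ?: defaultInstance ?: error("Not initialized.")
    }
}

fun main() {
    val holder = WorkManagerHolder()
    holder.initialize { "real" }
    check(holder.getInstance() == "real")

    holder.setDelegate("fake") // a test injects its mock
    check(holder.getInstance() == "fake")
}
```

Production code only ever touches the delegate slot indirectly, which is why the real implementation resolves `sDelegatedInstance` first on every lookup.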
## The createWorkManager factory

The actual WorkManagerImpl construction wires together all the internal components:

```kotlin
fun WorkManagerImpl(
    context: Context,
    configuration: Configuration,
    workTaskExecutor: TaskExecutor = WorkManagerTaskExecutor(configuration.taskExecutor),
    workDatabase: WorkDatabase = WorkDatabase.create(
        context.applicationContext,
        workTaskExecutor.serialTaskExecutor,
        configuration.clock,
        context.resources.getBoolean(R.bool.workmanager_test_configuration),
    ),
    trackers: Trackers = Trackers(context.applicationContext, workTaskExecutor),
    processor: Processor = Processor(
        context.applicationContext,
        configuration,
        workTaskExecutor,
        workDatabase,
    ),
    schedulersCreator: SchedulersCreator = ::createSchedulers,
): WorkManagerImpl {
    val schedulers = schedulersCreator(
        context, configuration, workTaskExecutor, workDatabase, trackers, processor,
    )
    return WorkManagerImpl(
        context.applicationContext,
        configuration,
        workTaskExecutor,
        workDatabase,
        schedulers,
        processor,
        trackers,
    )
}
```
Compose's derivedStateOf provides a way to create computed state that only triggers recomposition when the computed result actually changes. When you write `val fullName by remember { derivedStateOf { "${firstName.value} ${lastName.value}" } }`, Compose tracks which state objects were read during calculation and intelligently determines when recalculation is necessary. While most developers know that derivedStateOf helps avoid unnecessary recompositions from intermediate state changes, the deeper question remains: how does Compose know when to recalculate without explicitly tracking dependencies, and what makes this different from a simple `remember { computedValue }`?

In this article, you'll dive deep into the internal mechanisms of derivedStateOf, exploring how the Snapshot.observe mechanism captures dependencies during calculation, how the nesting level system distinguishes direct from indirect reads, how hash-based validation determines invalidation without value comparison, how the ResultRecord structure caches results across snapshots, and how equivalence policies enable allocation-free updates when values haven't changed. This isn't a guide on using derivedStateOf. It's an exploration of the runtime machinery that makes intelligent state derivation possible.

## The fundamental problem: Computed values that recompose too often

Imagine a search screen with a filter:

```kotlin
@Composable
fun SearchScreen() {
    var searchQuery by remember { mutableStateOf("") }
    var selectedCategory by remember { mutableStateOf<Category?>(null) }
    var items by remember { mutableStateOf(listOf<Item>()) }

    val filteredItems = items.filter { item ->
        item.name.contains(searchQuery, ignoreCase = true) &&
            (selectedCategory == null || item.category == selectedCategory)
    }

    LazyColumn {
        items(filteredItems) { item -> ItemCard(item) }
    }
}
```

Every time any state changes, filteredItems is recalculated.
Worse, even if the filter produces the same result, the recomposition still happens because Compose sees a new list object. With 10,000 items, this becomes a performance problem. The naive solution is memoization with remember:

```kotlin
val filteredItems = remember(searchQuery, selectedCategory, items) {
    items.filter { ... }
}
```

This helps, but you must manually specify all dependencies. Miss one, and you get stale results. Add an unnecessary one, and you get extra recalculations. derivedStateOf solves both problems:

```kotlin
val filteredItems by remember {
    derivedStateOf {
        items.filter { item ->
            item.name.contains(searchQuery, ignoreCase = true) &&
                (selectedCategory == null || item.category == selectedCategory)
        }
    }
}
```
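The "record what you read, recompute only when a recorded dependency changed" idea can be modeled in plain Kotlin. This is a toy model, not the Compose runtime: the calculation explicitly records its dependencies, and simple version counters stand in for the snapshot-ID hashes that the real `ResultRecord` validates against.

```kotlin
// Toy tracked state: a value plus a version bumped on every write.
class TrackedState<T>(var value: T) {
    var version = 0
    fun write(new: T) { value = new; version++ }
}

// Toy derived value: the calc lambda records (state, version-at-read) pairs.
class Derived<T>(private val calc: (MutableList<Pair<TrackedState<*>, Int>>) -> T) {
    private var deps: List<Pair<TrackedState<*>, Int>> = emptyList()
    private var cached: T? = null
    var computations = 0
        private set

    fun read(): T {
        // Cache is valid only if every dependency still has its recorded version.
        val valid = cached != null && deps.all { (state, v) -> state.version == v }
        if (!valid) {
            val record = mutableListOf<Pair<TrackedState<*>, Int>>()
            cached = calc(record)
            deps = record
            computations++
        }
        @Suppress("UNCHECKED_CAST")
        return cached as T
    }
}

fun main() {
    val first = TrackedState("Jaewoong")
    val last = TrackedState("Eum")
    val fullName = Derived { record ->
        record += first to first.version
        record += last to last.version
        "${first.value} ${last.value}"
    }

    check(fullName.read() == "Jaewoong Eum")
    fullName.read() // cached: no dependency changed
    check(fullName.computations == 1)

    first.write("Dove")
    check(fullName.read() == "Dove Eum") // dependency changed: recompute
    check(fullName.computations == 2)
}
```

The real implementation improves on this in two ways: `Snapshot.observe` records the reads automatically instead of requiring the lambda to cooperate, and validity is checked via a combined hash of the dependencies' records rather than per-dependency comparison.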
Modern Android applications commonly adopt multi-layered architectures such as MVVM or MVI, where data flows through distinct layers: a data source, a repository, and a ViewModel or presentation layer. Each layer has a specific responsibility, and network responses must propagate through all of them before reaching the UI. While this separation produces clean, testable code, it introduces a real challenge: how do you handle API responses, including errors and exceptions, as they cross each layer boundary?

Most developers solve this by wrapping API calls in try-catch blocks and returning fallback values. This works for small projects, but as the number of API calls grows, the approach creates ambiguous results, scattered boilerplate, and lost context that downstream layers need. You end up with ViewModels that cannot tell whether an empty list means "no data" or "network failure," repositories that swallow important error details, and data sources that repeat the same error handling pattern dozens of times.

In this article, you'll explore the problems that emerge when handling Retrofit API calls across layered architectures, why conventional approaches break down at scale, and how [Sandwich](https://github.com/skydoves/sandwich) provides a type-safe, composable solution that simplifies response handling from the network layer all the way to the UI. You'll also walk through the full set of Sandwich APIs, from basic response handling to advanced patterns like sequential composition, response merging, global error mapping, and Flow integration, each with real-world use cases that show when and why you would reach for them.

## Retrofit API calls with coroutines

Most Android projects use [Retrofit](https://github.com/square/retrofit) with [Kotlin coroutines](https://github.com/Kotlin/kotlinx.coroutines) for network communication.
A typical service interface looks like this:

```kotlin
interface PosterService {
    @GET("DisneyPosters.json")
    suspend fun fetchPosterList(): List<Poster>
}
```

The service returns a `List<Poster>` directly. Retrofit deserializes the JSON response body and gives you the data. This works perfectly when the request succeeds, but it gives you no structured way to handle failures. Retrofit throws an HttpException for non-2xx status codes and various IO exceptions for network problems. The responsibility of catching these falls entirely on the caller. When you consume this service in a data source, the conventional approach looks like this:

```kotlin
class PosterRemoteDataSource(
    private val posterService: PosterService,
) {
    suspend fun fetchPosterList(): List<Poster> {
        return try {
            posterService.fetchPosterList()
        } catch (e: HttpException) {
            emptyList()
        } catch (e: Throwable) {
            emptyList()
        }
    }
}
```

The data source catches every possible exception and returns emptyList() as a fallback. From the caller's perspective, this function always succeeds: it always returns a `List<Poster>`. If we create a flow from the code above, it looks like this:

![](https://velog.velcdn.com/images/skydoves/post/cc3deaea-7244-4091-88d3-744d297112cc/image.png)

But that apparent simplicity hides a serious problem. This compiles and runs. But once you trace the data flow through a full architecture, where the data source feeds a repository that feeds a ViewModel that drives the UI, the problems become clear.

## The problems with conventional response handling

The code above has three major issues that compound as your project grows and the number of API endpoints increases.

### Ambiguous results

The data source returns emptyList() for both HTTP errors and network exceptions. Downstream layers (the repository, the ViewModel) receive a `List<Poster>` with no way to distinguish between three completely different scenarios:

1. The request succeeded and the server returned an empty list.
2. The request failed with a 401 Unauthorized error.
3.
The device had no network connectivity.

All three produce the same result: an empty list. The repository cannot decide whether to show an error message, redirect to a login screen, or display "no data" content. The ViewModel might show an empty state when it should be showing a "please log in" dialog. The response has lost its context, and once that context is gone, no amount of downstream logic can recover it.

You might try to work around this by returning null for failures instead of emptyList(). But that introduces its own ambiguity: does null mean "error" or "no data"? You end up needing a wrapper type anyway, which leads to the next problem. That's just one more implicit convention to carry around in your head.

### Boilerplate error handling

Every API call requires its own try-catch block. If you have 20 service methods, you write 20 nearly identical try-catch blocks. Each one catches HttpException, catches Throwable, and returns some fallback value. This repetition creates maintenance overhead and increases the surface area for mistakes, like forgetting to handle a specific exception type in one of the 20 call sites. Consider a data source with multiple methods:

```kotlin
class UserRemoteDataSource(private val userService: UserService) {

    suspend fun fetchUser(id: String): User? {
        return try {
            userService.fetchUser(id)
        } catch (e: HttpException) {
            null
        } catch (e: Throwable) {
            null
        }
    }

    suspend fun fetchFollowers(id: String): List<User> {
        return try {
            userService.fetchFollowers(id)
        } catch (e: HttpException) {
            emptyList()
        } catch (e: Throwable) {
            emptyList()
        }
    }

    suspend fun updateProfile(profile: Profile): Boolean {
        return try {
            userService.updateProfile(profile)
            true
        } catch (e: HttpException) {
            false
        } catch (e: Throwable) {
            false
        }
    }
}
```

The pattern is identical every time: try the call, catch HttpException, catch Throwable, return a fallback. The only thing that changes is the fallback value (null, emptyList(), false). This is textbook boilerplate that should not exist in every data source class.
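The fix for both problems is a typed wrapper that preserves the outcome instead of erasing it. Below is a minimal hand-rolled sketch of that idea; note this is an assumption-laden stand-in to illustrate the shape of the solution, not Sandwich's actual `ApiResponse` API.

```kotlin
// Hand-rolled sealed result type (illustrative, not Sandwich's API).
sealed interface ApiResult<out T> {
    data class Success<T>(val data: T) : ApiResult<T>
    data class Failure(val code: Int, val message: String) : ApiResult<Nothing>
    data class Error(val throwable: Throwable) : ApiResult<Nothing>
}

// One wrapper replaces every per-method try-catch block.
fun <T> apiCall(block: () -> T): ApiResult<T> = try {
    ApiResult.Success(block())
} catch (e: Throwable) {
    ApiResult.Error(e)
}

fun main() {
    val empty: ApiResult<List<String>> = apiCall { emptyList() }
    val failed: ApiResult<List<String>> = ApiResult.Failure(401, "Unauthorized")

    // Downstream layers can now distinguish "no data" from "not authorized".
    check(empty is ApiResult.Success && empty.data.isEmpty())
    check(failed is ApiResult.Failure && failed.code == 401)
}
```

With a type like this, the empty-list case and the 401 case flow through the repository and ViewModel as distinct values, so each layer can react appropriately instead of guessing.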
### One-dimensional response processing
Jetpack Compose manages UI state through a sophisticated identity system that determines when composables should be reused versus recreated. When you wrap content in `key(userId) { UserCard(user) }`, you're providing Compose with identity information that survives recomposition, reordering, and structural changes. While most developers understand that `key` helps preserve state when list items move, the deeper question remains: how does Compose actually track identity, and what happens at the compiler and runtime level when you use `key`? In this article, you'll dive deep into Compose's identity mechanisms, exploring how the compiler transforms `key` calls into movable group instructions, how the runtime distinguishes between replaceable, movable, and restart groups, how the two-level identity system combines source location keys with object keys, how `JoinedKey` combines multiple keys with special enum handling, and how the slot table stores and retrieves identity information during recomposition. This isn't a guide on using `key`. It's an exploration of the compiler and runtime machinery that makes stable identity possible.

## The fundamental problem: Positional identity breaks with structural changes

Consider a simple list that can be reordered:

```kotlin
@Composable
fun UserList(users: List<User>) {
  Column {
    for (user in users) {
      UserCard(user)
    }
  }
}

@Composable
fun UserCard(user: User) {
  var expanded by remember { mutableStateOf(false) }
  // ...
}
```

When `users` is `[Alice, Bob, Charlie]` and Alice's card is expanded, Compose remembers the expanded state. But what happens when the list becomes `[Bob, Alice, Charlie]`? Without explicit identity, Compose uses positional memoization: the first `UserCard` call maps to position 0, the second to position 1, and so on. When the list reorders, position 0 now contains Bob, but the `expanded = true` state from position 0 is still there. Bob's card incorrectly appears expanded.
The naive solution is recreating all state on every structural change, but this destroys the user experience. Scroll positions reset, animations restart, and text field contents vanish. Compose needs a way to track identity that survives positional changes. The `key` composable solves this by providing explicit identity:

```kotlin
for (user in users) {
  key(user.id) {
    UserCard(user)
  }
}
```
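The difference between positional and keyed lookup can be simulated outside Compose entirely. In this plain-Kotlin sketch, the maps stand in for slot-table storage and are purely illustrative:

```kotlin
fun main() {
    // Positional identity: state stored by index. Alice (position 0) is expanded.
    val expandedByPosition = mutableMapOf(0 to true, 1 to false, 2 to false)
    val reordered = listOf("Bob", "Alice", "Charlie")
    // After reordering, position 0 holds Bob, but position 0's state says "expanded".
    check(expandedByPosition[0] == true) // Bob wrongly appears expanded

    // Keyed identity: state stored by a stable key survives the reorder.
    val expandedByKey = mutableMapOf("Alice" to true, "Bob" to false, "Charlie" to false)
    check(expandedByKey[reordered[1]] == true) // Alice is still the expanded one
}
```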
Every Android release build passes through R8, the whole-program optimizing compiler that shrinks, obfuscates, and optimizes your code before it ships to users. At the center of R8's decision-making are keep rules, the declarative specifications that tell the compiler which classes and members must survive the optimization pipeline. When you write `-keep class com.example.MyClass`, you're interacting with a sophisticated rule resolution engine that matches patterns against the entire class graph, feeds matched items into a worklist-based reachability analyzer, and ultimately determines what lives and what dies in your final APK. In this article, you'll dive deep into how R8 resolves keep rules under the hood, exploring how the six keep options form a semantic matrix controlling shrinking, obfuscation, and optimization independently, how the `RootSetBuilder` matches rule patterns against every class in the application, how the `Enqueuer` performs worklist-based reachability analysis from matched roots, how the `Minifier` renames surviving classes and members, how conditional `-if`/`-keep` pairs enable annotation-driven rule activation, how full mode differs from compatibility mode at the behavioral level, how libraries bundle consumer rules through META-INF directories, and how DI frameworks like Dagger and Hilt interact with R8's tree shaking. This isn't a guide on writing keep rules, it's an exploration of the compiler machinery that resolves, matches, and enforces them.

## The fundamental problem: Static analysis meets dynamic access

Consider a simple Android application using reflection:

```java
public class PluginLoader {
  public Plugin loadPlugin(String className) throws Exception {
    Class<?> clazz = Class.forName(className);
    return (Plugin) clazz.getDeclaredConstructor().newInstance();
  }
}
```

R8 performs static analysis. It reads every instruction in every method and builds a graph of what references what. But `Class.forName(className)` takes a string argument known only at runtime.
R8 cannot see this usage statically, so `className`'s target class appears unreferenced. Without intervention, R8 removes it. The naive approach would be to keep everything:

```proguard
-keep class * { *; }
```

This defeats the purpose of R8 entirely: no shrinking, no obfuscation, no size reduction. The challenge is surgical precision: keep exactly what dynamic code needs, nothing more. Keep rules solve this by providing a declarative specification language that tells R8: "These classes and members are entry points. Trace from here, and remove everything else." The keep rule system is the interface between your knowledge of dynamic access patterns and R8's static analysis engine.

## The R8 pipeline: Where keep rules fit

Before examining keep rules in detail, understanding R8's position in the build pipeline provides essential context. Unlike ProGuard, which operated as a separate step producing optimized Java bytecode that was then dexed, R8 is a unified step that reads Java bytecode and directly outputs optimized DEX:

```
ProGuard (legacy): .java → javac → .class → ProGuard → optimized .class → dx → .dex
R8 (current):      .java → javac/kotlinc → .class → R8 → optimized .dex
```

Internally, R8 maintains three code representations. `CfCode` represents JVM classfile bytecode from input `.class` files. `DexCode` represents Dalvik DEX bytecode for the output. `IRCode` is R8's high-level intermediate representation, a register-based SSA (Static Single Assignment) form that both `CfCode` and `DexCode` are lifted into for optimization. The master pipeline in `R8.java` orchestrates the following phases:

1. Input Reading (`ApplicationReader`)
2. Type Hierarchy (`AppInfoWithSubtyping`)
3. Root Set Building (`RootSetBuilder`) ← Keep rules matched here
4. Enqueuing/Tracing (`Enqueuer`) ← Reachability analysis
5. Tree Pruning (`TreePruner`) ← Dead code removed
6. Annotation Removal (`AnnotationRemover`)
7. Member Rebinding (`MemberRebindingAnalysis`)
8. Class Merging (`SimpleClassMerger`)
9. IR Conversion (`IRConverter.optimize`) ← SSA-based optimizations
10. Second Enqueuer Pass (`Enqueuer`) ← Re-traces after optimization
11. Minification (`Minifier.run`) ← Name obfuscation
12. Output Writing (`ApplicationWriter.write`)

Keep rules enter the pipeline at phase 3, where the `RootSetBuilder` matches them against the full class graph. The matched items become "roots" that seed the `Enqueuer`'s reachability analysis in phase 4.

## The keep rule taxonomy: Six options, three axes
Kotlin's `internal` visibility modifier provides a useful mechanism for hiding implementation details within a module while exposing a clean public API. But as codebases grow and libraries modularize, a tension emerges: the logical boundaries of your API don't always align with the compilation boundaries of your modules. Test modules need access to production internals. Library families like kotlinx.coroutines want to share implementation details across artifacts without exposing them to consumers. The current workaround, "friend modules," is an undocumented compiler feature that lacks language-level design. KEEP-0451 proposes a solution: the `shared internal` visibility modifier. This new visibility level sits between `internal` and `public`, allowing modules to explicitly declare which internals they share and with whom. In this article, you'll explore the motivation behind this proposal, the design decisions that shaped it, how transitive sharing simplifies complex dependency graphs, and the technical challenges of implementing cross-module visibility on the JVM.

## The fundamental problem: Module boundaries vs. logical boundaries

Consider a typical library structure:

```
kotlinx-coroutines/
├── kotlinx-coroutines-core/
├── kotlinx-coroutines-test/
├── kotlinx-coroutines-reactive/
└── kotlinx-coroutines-android/
```

These artifacts form a cohesive library family. Internally, they share implementation details: dispatcher internals, continuation machinery, and testing utilities. But from Kotlin's perspective, each artifact is a separate module. An `internal` declaration in kotlinx-coroutines-core is invisible to kotlinx-coroutines-test, even though both are maintained by the same team and shipped together. The current workarounds are unsatisfying:

Option 1: Make everything public. This works, but pollutes the API surface. Consumers see implementation details they shouldn't use, and maintainers lose the ability to change internals without breaking compatibility.
Option 2: Use the undocumented friend modules feature. The Kotlin compiler supports a `-Xfriend-paths` flag that grants one module access to another's internals. But this is a compiler implementation detail, not a language feature. It has no syntax, no IDE support, and no guarantees of stability.

Option 3: Merge modules. You could combine related modules into a single compilation unit, then split them for distribution. But this complicates build configurations and doesn't scale to complex dependency graphs.

KEEP-0451 addresses this gap by elevating friend modules to a first-class language feature with explicit syntax and clear semantics.

## The shared internal modifier

The proposal introduces a new visibility modifier: `shared internal`. Declarations marked with this modifier are visible to designated dependent modules, but invisible to the general public.
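As a rough sketch of what this could look like in the kotlinx.coroutines layout above (the declaration name is hypothetical, and the exact syntax and sharing-declaration mechanism are proposal-stage details of KEEP-0451, so treat this as illustrative pseudocode rather than compilable Kotlin):

```kotlin
// In kotlinx-coroutines-core: visible to modules the build declares as
// sharing partners (e.g. kotlinx-coroutines-test), but not to consumers.
shared internal class DispatcherInternals {
    shared internal fun scheduleResumeAfterDelay(timeMillis: Long) { /* ... */ }
}
```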
Android's `ViewModel` is one of the most widely used architecture components, yet its core survival mechanism remains a mystery to most developers. You annotate a class, call `viewModels()` in your Activity, and your state magically survives screen rotation. But what actually happens behind the scenes? The answer involves a retained in-memory object that is never serialized, a simple HashMap keyed by strings, a carefully ordered resource cleanup sequence, and a factory system that separates creation from retrieval. In this article, you'll dive deep into the internal machinery that makes `ViewModel` survive configuration changes, exploring how `ComponentActivity` retains the `ViewModelStore` through Android's `NonConfigurationInstances` mechanism, how `ViewModelProvider` coordinates thread-safe retrieval and creation through `ViewModelProviderImpl`, how `ViewModelImpl` manages resource lifecycle with a deliberate clearing order, how `CreationExtras` enables stateless factory injection, and how fragments piggyback on this entire system through `FragmentManagerViewModel`. This isn't a guide on using `ViewModel`. It's an exploration of the retention, creation, and destruction machinery that makes configuration change survival possible.

## The fundamental problem: State that outlives Activity instances

Consider this common scenario:

```kotlin
class CounterActivity : ComponentActivity() {
  private var count = 0

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContent {
      Button(onClick = { count++ }) {
        Text("Count: $count")
      }
    }
  }
}
```

Rotate the device, and `count` resets to zero. The Android framework destroys and recreates the Activity on configuration changes. Every field, every local variable, every reference is gone.
The Bundle approach works for small, serializable data:

```kotlin
override fun onSaveInstanceState(outState: Bundle) {
  super.onSaveInstanceState(outState)
  outState.putInt("count", count)
}
```

But Bundles have a strict 1MB transaction limit, can only hold primitive and parcelable types, and require manual serialization and deserialization. What about a list of 10,000 items fetched from a network request? A database cursor? A WebSocket connection? These cannot be serialized into a Bundle. `ViewModel` solves this by retaining the object in memory across configuration changes. Not serialized. Not parceled. The exact same object instance, held in memory while the old Activity is destroyed and the new one is created.

## ViewModelStore: The retention container

At the foundation of the system is `ViewModelStore`, a wrapper around a `MutableMap`:

```kotlin
public open class ViewModelStore {

  private val map = mutableMapOf<String, ViewModel>()

  @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
  public fun put(key: String, viewModel: ViewModel) {
    val oldViewModel = map.put(key, viewModel)
    oldViewModel?.clear()
  }

  @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
  public operator fun get(key: String): ViewModel? = map[key]

  public fun clear() {
    for (vm in map.values) {
      vm.clear()
    }
    map.clear()
  }
}
```
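The retrieval path layered on top of this store reduces to a get-or-create lookup keyed by string. Here is a condensed plain-Kotlin sketch of that idea (class and function names are simplified stand-ins; the real path goes through `ViewModelProvider` and its factory system):

```kotlin
// Minimal sketch of get-or-create retrieval over a key -> ViewModel map.
open class ViewModel {
    open fun clear() {}
}

class Store {
    private val map = mutableMapOf<String, ViewModel>()

    // Returns the existing instance for the key, or creates and caches one.
    fun getOrCreate(key: String, factory: () -> ViewModel): ViewModel =
        map.getOrPut(key, factory)
}

fun main() {
    val store = Store()
    val first = store.getOrCreate("counter") { ViewModel() }
    val second = store.getOrCreate("counter") { ViewModel() } // factory not invoked again
    check(first === second) // the exact same instance survives repeated lookups
}
```

This is why rotating the device hands the new Activity the same `ViewModel` instance: the store outlives the Activity, and the second lookup hits the map instead of the factory.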
Jetpack Compose introduced a declarative paradigm for Android UI, but declarative doesn't mean stateless. User interactions create state like scroll positions, text field contents, and expanded sections that must survive configuration changes and process death. While `remember` preserves state across recompositions, it's helpless against activity recreation. This is where the runtime saveable module enters: a sophisticated state persistence system that bridges Compose's reactive world with Android's saved instance state mechanism. In this article, you'll dive deep into the internal mechanisms of Compose's saveable APIs, exploring how `rememberSaveable` tracks and restores state through composition position keys, how the `Saver` interface enables type-safe serialization of arbitrary objects, how `SaveableStateRegistry` manages multiple providers and preserves registration order, how `SaveableStateHolder` enables navigation patterns by scoping state to screen keys, and how all these components coordinate to seamlessly preserve UI state. This isn't a guide on using `rememberSaveable`. It's an exploration of the runtime machinery that makes state persistence invisible to developers.

## The fundamental problem: State that survives process death

Consider this simple Compose code:

```kotlin
@Composable
fun Counter() {
  var count by remember { mutableStateOf(0) }
  Button(onClick = { count++ }) {
    Text("Count: $count")
  }
}
```

This works perfectly for recomposition. Click the button, `count` increments, the UI updates. But rotate the device, and `count` resets to zero. The activity was destroyed and recreated, and `remember` only survives within a single composition lifecycle. The traditional Android solution is `onSaveInstanceState`:

```kotlin
class CounterActivity : ComponentActivity() {
  private var count = 0

  override fun onSaveInstanceState(outState: Bundle) {
    super.onSaveInstanceState(outState)
    outState.putInt("count", count)
  }

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    count = savedInstanceState?.getInt("count") ?: 0
  }
}
```

But this approach doesn't compose well with Compose. The state lives in the Activity, not the composable. You need to manually thread state through your composition hierarchy. And if you have dozens of stateful composables, the boilerplate becomes unmanageable. Compose's saveable APIs solve this elegantly by integrating saved instance state directly into the composition model. Each `rememberSaveable` call automatically participates in the save/restore cycle, keyed by its position in the composition tree.

## The Saver interface: Type-safe state serialization

At the heart of the saveable system is the `Saver` interface, which defines how to convert between your domain types and Bundle-compatible representations.

### The core abstraction

The `Saver` interface is elegantly minimal:

```kotlin
public interface Saver<Original, Saveable : Any> {
  public fun SaverScope.save(value: Original): Saveable?
  public fun restore(value: Saveable): Original?
}
```

Two methods handle the round trip:

1. `save`: Converts your type to something Bundle-compatible. Returning `null` means "don't save this value."
2. `restore`: Converts back to your original type. Returning `null` means "use the init lambda instead."

The `SaverScope` receiver on `save` provides access to `canBeSaved(value: Any): Boolean`, allowing savers to validate nested values before attempting serialization.

### The factory function

For convenience, a factory function creates `Saver` implementations from lambdas:

```kotlin
public fun <Original, Saveable : Any> Saver(
  save: SaverScope.(value: Original) -> Saveable?,
  restore: (value: Saveable) -> Original?,
): Saver<Original, Saveable> {
  return object : Saver<Original, Saveable> {
    override fun SaverScope.save(value: Original) = save.invoke(this, value)
    override fun restore(value: Saveable) = restore.invoke(value)
  }
}
```

This enables concise saver definitions:
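For example, here's the round trip exercised for a hypothetical `City` type. To keep the sketch self-contained and runnable, it re-declares a minimal mimic of the contract above; in real code, `Saver`, `SaverScope`, and the factory come from `androidx.compose.runtime.saveable`:

```kotlin
// Self-contained mimic of the Saver machinery, for illustration only.
fun interface SaverScope { fun canBeSaved(value: Any): Boolean }

interface Saver<Original, Saveable : Any> {
    fun SaverScope.save(value: Original): Saveable?
    fun restore(value: Saveable): Original?
}

fun <Original, Saveable : Any> Saver(
    save: SaverScope.(value: Original) -> Saveable?,
    restore: (value: Saveable) -> Original?,
): Saver<Original, Saveable> = object : Saver<Original, Saveable> {
    override fun SaverScope.save(value: Original) = save.invoke(this, value)
    override fun restore(value: Saveable) = restore.invoke(value)
}

// A hypothetical domain type saved as a list of Bundle-compatible strings.
data class City(val name: String, val country: String)

val CitySaver = Saver<City, List<String>>(
    save = { value -> listOf(value.name, value.country) },
    restore = { value -> City(value[0], value[1]) },
)

fun main() {
    val scope = SaverScope { true }
    val saved = with(CitySaver) { scope.save(City("Seoul", "Korea")) }
    check(saved == listOf("Seoul", "Korea"))
    check(saved?.let(CitySaver::restore) == City("Seoul", "Korea"))
}
```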
Jetpack Compose's `Modifier` system has been the primary way to apply visual properties to composables. You chain modifiers like `background`, `padding`, and `border` to build up the appearance and behavior of UI elements. While powerful, this approach has limitations when dealing with interactive states. When you want a button to change color when pressed, you need to manually track state, create animated values, and conditionally apply different modifiers. The new experimental [Styles API](https://android-review.googlesource.com/c/platform/frameworks/support/+/3756487) aims to solve this by providing a declarative way to define state-dependent styling with automatic animations. In this article, you'll explore how the Styles API works, examining how `Style` objects encapsulate visual properties as composable lambdas, how `StyleScope` provides access to layout, drawing, and text properties, how `StyleState` exposes interaction states like pressed, hovered, and focused, how the system automatically animates between style states without manual `Animatable` management, and how the two-node modifier architecture efficiently applies styles while minimizing invalidation. This isn't a guide on basic Compose styling; it's an exploration of a new paradigm for defining interactive, stateful UI appearances.

## The problem with stateful styling

Consider implementing a button that changes color when hovered and pressed.
With the current `Modifier` approach, you need to manage this manually:

```kotlin
@Composable
fun InteractiveButton(onClick: () -> Unit) {
  val interactionSource = remember { MutableInteractionSource() }
  val isPressed by interactionSource.collectIsPressedAsState()
  val isHovered by interactionSource.collectIsHoveredAsState()
  val backgroundColor by animateColorAsState(
    targetValue = when {
      isPressed -> Color.Red
      isHovered -> Color.Yellow
      else -> Color.Green
    }
  )
  Box(
    modifier = Modifier
      .clickable(interactionSource = interactionSource, indication = null) { onClick() }
      .background(backgroundColor)
      .size(150.dp)
  )
}
```

This pattern requires several pieces: an `InteractionSource` to track interactions, state derivations for each interaction type, animated values for smooth transitions, and conditional logic to determine the current appearance. The code is verbose and the concerns are scattered across multiple declarations. The Styles API consolidates this into a single declarative definition:

```kotlin
@Composable
fun InteractiveButton(onClick: () -> Unit) {
  ClickableStyleableBox(
    onClick = onClick,
    style = {
      backgroundColor(Color.Green)
      size(150.dp)
      hovered { animate { backgroundColor(Color.Yellow) } }
      pressed { animate { backgroundColor(Color.Red) } }
    }
  )
}
```
Kotlin Coroutines introduced structured concurrency as a fundamental principle, ensuring that coroutines are properly scoped and cancelled when their parent scope completes. At the heart of this mechanism lies `CancellationException`, a special exception that signals cancellation and must be handled with care. While most developers know they shouldn't catch this exception, the deeper question remains: why is `CancellationException` special, and what happens when you accidentally swallow it? In this article, you'll dive deep into the internal mechanisms of `CancellationException`, exploring why it must be re-thrown, how `runCatching` can break structured concurrency, the proposals for safer alternatives, and the design decisions that make cancellation propagation both correct and performant.

## The fundamental problem: Catching cancellation breaks structured concurrency

Consider this seemingly innocent code:

```kotlin
suspend fun processData(): Result<Data> = runCatching {
  val user = fetchUser()
  val profile = fetchProfile(user.id)
  Data(user, profile)
}
```

This looks reasonable. You're wrapping a suspend operation in `runCatching` to convert exceptions into `Result` values for safer error handling. But there's a subtle bug: if the coroutine is cancelled during `fetchUser()` or `fetchProfile()`, the `CancellationException` is caught by `runCatching` and wrapped in `Result.failure`. The cancellation signal never propagates to the parent scope, breaking structured concurrency. The core issue is that `runCatching` is implemented like this:

```kotlin
public inline fun <R> runCatching(block: () -> R): Result<R> {
  return try {
    Result.success(block())
  } catch (e: Throwable) {
    Result.failure(e)
  }
}
```

Notice the `catch (e: Throwable)` clause. This catches everything, including `CancellationException`. When cancellation occurs, instead of propagating up the coroutine hierarchy, it's captured in a `Result` object, and the coroutine continues executing as if nothing happened.
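A cancellation-safe variant only needs to re-throw the cancellation signal before the blanket catch. Below is a minimal sketch (the name `runSuspendCatching` is hypothetical; on the JVM, `kotlin.coroutines.cancellation.CancellationException` is a typealias for `java.util.concurrent.CancellationException`, which is used directly here to keep the example dependency-free):

```kotlin
import java.util.concurrent.CancellationException

// Sketch: like runCatching, but lets the cancellation signal keep propagating.
inline fun <R> runSuspendCatching(block: () -> R): Result<R> =
    try {
        Result.success(block())
    } catch (e: CancellationException) {
        throw e // re-throw: cancellation must reach the parent
    } catch (e: Throwable) {
        Result.failure(e)
    }

fun main() {
    check(runSuspendCatching { 42 }.getOrNull() == 42)
    check(runSuspendCatching { error("boom") }.isFailure)
    // Cancellation escapes instead of being wrapped in Result.failure.
    val propagated = try {
        runSuspendCatching { throw CancellationException() }
        false
    } catch (e: CancellationException) {
        true
    }
    check(propagated)
}
```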
## Understanding CancellationException: Not just another exception

`CancellationException` is fundamentally different from other exceptions in Kotlin Coroutines. Let's examine its definition:

```kotlin
public actual open class CancellationException(
  message: String?,
  cause: Throwable?,
) : IllegalStateException(message, cause)
```

It extends `IllegalStateException`, but its purpose is not to signal an error, it's to signal intentional cancellation. This distinction is crucial for understanding why it must be handled specially.

### The cancellation contract

When a coroutine is cancelled, the cancellation mechanism works through these steps:

1. Cancellation signal: The parent scope or job calls `cancel()` on the coroutine's `Job`.
2. CancellationException thrown: At the next suspension point, the coroutine throws a `CancellationException`.
3. Propagation: The exception propagates up the coroutine hierarchy.
4. Cleanup: Each coroutine in the chain can run cleanup logic in `finally` blocks.
5. Parent notification: The parent scope is notified that the child completed due to cancellation.

If you catch `CancellationException` and don't re-throw it, steps 3-5 never happen. The parent scope thinks the child is still running, resource cleanup might not occur, and the entire structured concurrency guarantee breaks down.

### The invisibility principle
[Landscapist](https://github.com/skydoves/landscapist) Core is a standalone image loading engine built from scratch for Kotlin Multiplatform. Unlike Landscapist's wrappers around Coil, Glide, and Fresco, [Landscapist Core](https://skydoves.github.io/landscapist/landscapist/landscapist-core/) handles fetching, caching, decoding, and transformations internally. This eliminates platform dependencies and provides fine-grained control over every aspect of image loading. In this article, you'll explore the internal architecture of Landscapist Core, examining how the `Landscapist` class orchestrates the loading pipeline, how `TwoTierMemoryCache` provides a second chance for evicted items through weak references, how `DecodeScheduler` prioritizes visible images over background loads, how progressive decoding improves perceived performance, and how memory pressure handling keeps the app responsive under constrained conditions.

## The Landscapist orchestrator

The `Landscapist` class is the main entry point for image loading. It coordinates fetching, caching, decoding, and transformation into a unified pipeline:

```kotlin
public class Landscapist private constructor(
  public val config: LandscapistConfig,
  private val memoryCache: MemoryCache,
  private val diskCache: DiskCache?,
  private val fetcher: ImageFetcher,
  private val decoder: ImageDecoder,
  private val dispatcher: CoroutineDispatcher,
  public val requestManager: RequestManager = RequestManager(),
  public val memoryPressureManager: MemoryPressureManager = MemoryPressureManager(),
)
```

Each component has a single responsibility. The `memoryCache` stores decoded images in memory. The `diskCache` persists raw image data to storage. The `fetcher` retrieves images from network or local sources. The `decoder` converts raw bytes into displayable images. The `requestManager` tracks active requests for cancellation. The `memoryPressureManager` responds to system memory warnings.
## The loading pipeline

The `load` function implements a three-stage lookup with progressive enhancement:

```kotlin
public fun load(request: ImageRequest): Flow<ImageResult> = flow {
  emit(ImageResult.Loading)

  val cacheKey = CacheKey.create(
    model = request.model,
    transformationKeys = request.transformations.map { it.key },
    width = request.targetWidth,
    height = request.targetHeight,
  )

  // 1. Check memory cache (instant)
  if (request.memoryCachePolicy.readEnabled) {
    memoryCache[cacheKey]?.let { cached ->
      emit(ImageResult.Success(data = cached.data, dataSource = DataSource.MEMORY))
      return@flow
    }
  }

  // 2. Check disk cache
  if (request.diskCachePolicy.readEnabled && diskCache != null) {
    diskCache.get(cacheKey)?.use { snapshot ->
      val bytes = snapshot.data.buffer.readByteArray()
      // Decode and emit...
    }
  }

  // 3. Fetch from network
  val fetchResult = fetcher.fetch(request)
  // Process result...
}.flowOn(dispatcher)
```

The pipeline follows a predictable order: memory cache first (instant), disk cache second (fast I/O), network last (slow). Each stage can be enabled or disabled through `CachePolicy`, allowing fine-grained control for special cases like forcing a refresh or skipping caching entirely.

### Cache key generation

The `CacheKey` uniquely identifies a cached image based on all factors that affect its appearance:

```kotlin
val cacheKey = CacheKey.create(
  model = request.model,
  transformationKeys = request.transformations.map { it.key },
  width = request.targetWidth,
  height = request.targetHeight,
)
```
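The key's job is easy to state: any input that changes the decoded output must participate in equality. A plain-Kotlin sketch of that idea (a simplified stand-in for illustration, not the actual `CacheKey.create` implementation):

```kotlin
// Sketch: a data class gives value-based equals/hashCode over every input
// that affects the decoded image, so lookups hit only true matches.
data class CacheKey(
    val model: String,
    val transformationKeys: List<String>,
    val width: Int,
    val height: Int,
)

fun main() {
    val a = CacheKey("https://example.com/poster.png", listOf("blur(8)"), 300, 300)
    val b = CacheKey("https://example.com/poster.png", listOf("blur(8)"), 300, 300)
    val c = a.copy(transformationKeys = listOf("circleCrop"))
    check(a == b) // identical requests share one cache entry
    check(a != c) // a different transformation must not collide
}
```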
Jetpack Compose transforms declarative UI code into pixels on screen through a pipeline of three distinct phases: Composition, Layout, and Drawing. When you change a state variable, Compose doesn't redraw everything; it determines which phases need to run and executes only the necessary work. A change that only affects drawing can skip composition and layout entirely, while a structural change might require all three phases. Understanding which phase your code triggers helps you write more efficient Compose applications. In this article, you'll explore how the three phases work internally, examining how the Composition phase builds and updates the UI tree through the `SlotTable` and `Composer`, how the Layout phase measures and positions nodes through `LayoutNode` and `Constraints` propagation, how the Drawing phase renders content through `DrawScope` and `GraphicsLayer`, and how invalidation propagates through the system. This isn't a guide on using Compose, it's an exploration of the execution pipeline that transforms your composable functions into rendered UI.

## The execution pipeline: From state to pixels

When Compose needs to display UI, it executes three phases in strict order. Composition builds the UI tree by running your composable functions and recording what needs to be displayed. Layout takes that tree and determines the size and position of every element. Drawing takes the positioned elements and renders them to the screen. Each phase depends on the previous phase completing, but not every state change requires all three phases. Consider what happens when you animate an element's opacity. In a naive implementation, changing opacity would trigger composition (rebuild the tree), layout (remeasure and reposition), and drawing (render). But opacity doesn't affect tree structure or element positions, it's purely a visual property. Compose optimizes this by allowing opacity changes in `GraphicsLayer` to trigger only the drawing phase, skipping composition and layout entirely.
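In practice, this is why the lambda-based `Modifier.graphicsLayer {}` overload matters: state read inside the block is observed during the drawing phase, not during composition. A small sketch (assuming `animatedAlpha` is some `State<Float>` driven by an animation):

```kotlin
// Reading animatedAlpha.value inside the graphicsLayer block means alpha
// changes invalidate only drawing, skipping composition and layout.
@Composable
fun FadingBox(animatedAlpha: State<Float>) {
  Box(
    Modifier
      .size(100.dp)
      .graphicsLayer { alpha = animatedAlpha.value }
  )
}
```

Had the code read `animatedAlpha.value` as a composable argument instead, every alpha tick would invalidate the composition scope as well.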
This optimization is only possible because of the phase separation. The phase model also explains why certain patterns are problematic. Reading layout coordinates during composition forces the system to complete layout before finishing composition, breaking the normal phase ordering. Understanding the phases helps you write code that works with the system rather than against it.

## Composition phase: Building the UI tree

The Composition phase is where your composable functions execute. The `Composer` walks through your code, tracks what you've called, compares it to the previous composition, and records changes. This phase doesn't produce pixels, it produces a tree of nodes that the subsequent phases will process.

### The Composer's role

The `Composer` is the runtime engine that executes composable functions. Every composable function receives an implicit `$composer` parameter injected by the compiler:

```kotlin
// What you write
@Composable
fun Greeting(name: String) {
  Text("Hello, $name")
}

// What the compiler generates (simplified)
fun Greeting(name: String, $composer: Composer, $changed: Int) {
  $composer.startRestartGroup(1234)
  if ($composer.changed(name) || !$composer.skipping) {
    Text("Hello, $name", $composer, 0)
  } else {
    $composer.skipToGroupEnd()
  }
  $composer.endRestartGroup()?.updateScope { $composer ->
    Greeting(name, $composer, $changed or 1)
  }
}
```

The `Composer` serves three functions. First, it records positional information, tracking the results of `remember` lambdas, composable function parameters, and the structure of calls. Second, it detects changes by comparing current values against previous composition state. Third, it incrementally evaluates composition by only recomposing functions whose inputs have changed.

### The SlotTable: Persistent memory

Composition state lives in the `SlotTable`, a data structure that stores the UI tree in a flattened format optimized for incremental updates. The `SlotTable` uses two arrays: one for group metadata and one for slot values. Each group in the table contains:
Building complex user interfaces in Jetpack Compose often requires going beyond the standard `Box`, `Row`, and `Column` layouts. While these composables handle most common scenarios beautifully, there are times when you need complete control over how children are measured and positioned. This is where the `Layout` composable becomes essential: it is the fundamental building block that powers every layout in Compose, including the standard ones you use daily. In this article, you'll dive deep into the `Layout` composable, exploring how measurement and placement work under the hood. You'll examine real implementations from the Compose UI library, understand the constraint system, and learn patterns for building sophisticated custom layouts. This isn't a basic tutorial; it's an exploration of the layout system's internals and the design decisions that make it powerful.

## Understanding the core abstraction: What makes Layout special

At its heart, the `Layout` composable is a function that takes content and a measurement policy, then produces a UI element with specific dimensions and child positions. What distinguishes it from higher-level layouts is its adherence to two fundamental principles: single-pass measurement and constraint-based sizing.

### Single-pass measurement

Single-pass measurement means each child is measured exactly once per layout pass. This constraint exists for performance: measuring the same child multiple times would create exponential complexity as layout hierarchies deepen. The implication is significant: you must make all measurement decisions with the information available in a single pass.
```kotlin
Layout(content) { measurables, constraints ->
  // Each measurable can only be measured ONCE
  val placeables = measurables.map { it.measure(constraints) }

  // After measurement, you work with Placeables, not Measurables
  layout(width, height) {
    placeables.forEach { it.place(x, y) }
  }
}
```

This differs fundamentally from traditional Android Views, where `onMeasure` could be called multiple times with different `MeasureSpec` configurations. Compose's single-pass model is faster but requires more upfront planning.

### Constraint-based sizing

Constraint-based sizing means parents communicate size expectations to children through `Constraints` objects, and children respond with their chosen size through `Placeable` objects. This bidirectional communication enables flexible layouts that adapt to available space.

```
Parent
  │
  ├─── Constraints(minWidth, maxWidth, minHeight, maxHeight) ───→ Child
  │
  └─── Placeable(width, height) ←───────────────────────────────── Child
```

The `Constraints` class encapsulates four values: `minWidth`, `maxWidth`, `minHeight`, and `maxHeight`. A child must choose dimensions within these bounds. This is more expressive than Android's `MeasureSpec`, which could only communicate one dimension's constraints at a time. These properties aren't just implementation details; they're architectural constraints that enable predictable performance and composable layout logic.

## The Layout function signature: Anatomy of a custom layout

Let's examine the `Layout` function signature to understand its components:

```kotlin
@Composable
inline fun Layout(
  content: @Composable () -> Unit,
  modifier: Modifier = Modifier,
  measurePolicy: MeasurePolicy,
)
```

The three parameters serve distinct roles:

1. `content` - A composable lambda that defines the children. These become `Measurable` objects during measurement.
2. `modifier` - Applied to the layout itself, affecting its measurement and drawing. Modifiers can intercept and transform constraints before they reach your measure policy.
3. `measurePolicy` - The brain of the layout.
It receives Measurable children and parent Constraints, then returns a MeasureResult containing the layout's size and placement logic. The MeasurePolicy interface is where the real work happens: kotlin interface MeasurePolicy { fun MeasureScope.measure measurables: List<Measurable, constraints: Constraints : MeasureResult } The MeasureScope receiver provides density information and the layout function for creating results. The measurables list contains one entry per child composable. The constraints represent what the parent allows. Real-world case study: Box implementation Let's examine how Box is implemented in the Compose UI library. The source is located at foundation/foundation-layout/src/commonMain/kotlin/androidx/compose/foundation/layout/Box.kt: kotlin @Composable inline fun Box modifier: Modifier = Modifier, contentAlignment: Alignment = Alignment.TopStart, propagateMinConstraints: Boolean = false, content: @Composable BoxScope. - Unit, { val measurePolicy = maybeCachedBoxMeasurePolicycontentAlignment, propagateMinConstraints Layout content = { BoxScopeInstance.content }, measurePolicy = measurePolicy, modifier = modifier, }
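The measure/constrain handshake above can be sketched outside Compose. The following plain-Java toy models how a parent's bounds clamp a child's chosen size on each axis; it is not the real androidx.compose.ui.unit.Constraints (which packs all four values into a single Long), just the clamping rule:

```java
// Toy model of Compose's constraint clamping (illustrative, not the real API).
final class Constraints {
    final int minWidth, maxWidth, minHeight, maxHeight;

    Constraints(int minWidth, int maxWidth, int minHeight, int maxHeight) {
        this.minWidth = minWidth;
        this.maxWidth = maxWidth;
        this.minHeight = minHeight;
        this.maxHeight = maxHeight;
    }

    // A child must end up inside [min, max] on each axis,
    // in the spirit of Constraints.constrain(size).
    int constrainWidth(int width) {
        return Math.max(minWidth, Math.min(width, maxWidth));
    }

    int constrainHeight(int height) {
        return Math.max(minHeight, Math.min(height, maxHeight));
    }
}
```

A child that asks for 500px under a 100..300 width constraint is forced to 300; one that asks for 50px is forced up to the 100px minimum.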
Jetpack Compose's declarative UI paradigm promises simplicity: you describe your UI as a function of state, and the framework handles updates automatically. But behind this elegant abstraction lies a sophisticated selective recomposition system that makes Compose remarkably efficient. When a single state variable changes, Compose doesn't re-execute your entire UI tree; it surgically recomposes only the specific composable functions that read that state. This precision is enabled by Recompose Scopes, the runtime tracking mechanism that connects state reads to composable functions and orchestrates minimal UI updates. In this article, you'll dive deep into how "Recompose Scopes" work, exploring how RecomposeScopeImpl tracks which composables read which state, how invalidation propagates through the composition hierarchy, how the compiler-generated restart lambda enables precise recomposition, how the system determines when to skip recomposition entirely, and how bit-packed flags and token-based tracking optimize memory and performance. This isn't a guide on writing efficient composables; it's an exploration of the runtime machinery that makes selective recomposition possible.

The fundamental problem: How do you know what to recompose?

Consider this simple Compose code:

```kotlin
@Composable
fun UserProfile(userId: String) {
    val user by viewModel.userState.collectAsState()
    val settings by viewModel.settingsState.collectAsState()

    Column {
        UserHeader(user.name)
        UserAvatar(user.avatarUrl)
        SettingsPanel(settings)
    }
}
```

When user changes, only UserHeader and UserAvatar should recompose; SettingsPanel shouldn't, because it didn't read user. But how does Compose know this? The naive approach would be to re-execute everything and compare the results, but that would be expensive. Compose needs to track, at runtime, which composables read which state, so when state changes, only the affected composables are re-executed. This requires solving several complex problems:

1. Dependency tracking: Which composable functions read which state objects?
2. Invalidation: When state changes, which scopes should be marked for recomposition?
3. Precise restart: How do you re-execute just one composable function with the same parameters?
4. Skipping: How do you avoid re-executing functions when nothing they depend on changed?
5. Memory: How do you track dependencies without excessive memory overhead?

Recompose Scopes solve these problems through a combination of compiler cooperation and runtime tracking.

RecomposeScopeImpl: The tracking mechanism

Every composable function that might need to recompose gets an associated RecomposeScopeImpl instance. This class, defined in the Compose runtime, is the central bookkeeping structure for selective recomposition. The RecomposeScopeImpl class encapsulates everything needed to track and restart a composable function:

```kotlin
internal class RecomposeScopeImpl(
    internal var owner: RecomposeScopeOwner?
) : ScopeUpdateScope, RecomposeScope, IdentifiableRecomposeScope
```

Compact flag-based state storage

Rather than using multiple boolean fields, RecomposeScopeImpl uses a single integer with bit masks for state:

```kotlin
private var flags: Int = 0

private const val UsedFlag = 0x001              // Scope was used during composition
private const val DefaultsInScopeFlag = 0x002   // Has default parameter calculations
private const val DefaultsInvalidFlag = 0x004   // Default calculations changed
private const val RequiresRecomposeFlag = 0x008 // Direct invalidation occurred
private const val SkippedFlag = 0x010           // Scope was skipped
private const val RereadingFlag = 0x020         // Re-reading tracked instances
private const val ForcedRecomposeFlag = 0x040   // Forced recomposition
private const val ForceReusing = 0x080          // Forced reusing state
private const val Paused = 0x100                // Paused for pausable compositions
private const val Resuming = 0x200              // Resuming from pause
private const val ResetReusing = 0x400          // Reset reusing state
```

This compact representation saves memory—11 boolean flags fit in a single 32-bit integer instead of consuming 11 bytes or more with padding. The getters and setters use bitwise operations:

```kotlin
private inline fun getFlag(flag: Int) = flags and flag != 0

private inline fun setFlag(flag: Int, value: Boolean) {
    flags = if (value) {
        flags or flag
    } else {
        flags and flag.inv()
    }
}
```

This pattern appears throughout high-performance Compose code—prefer bit-packing over separate booleans for frequently allocated objects.
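The bit-mask pattern is easy to verify in isolation. Here is a transliteration to plain Java with a few of the flags (simplified names, not the runtime's actual fields), showing that setting and clearing one flag never disturbs the others:

```java
// Plain-Java sketch of RecomposeScopeImpl's bit-packed flag storage.
// Flag names mirror the article; the class itself is illustrative.
final class ScopeFlags {
    static final int USED = 0x001;
    static final int DEFAULTS_IN_SCOPE = 0x002;
    static final int REQUIRES_RECOMPOSE = 0x008;

    private int flags = 0;

    boolean getFlag(int flag) {
        return (flags & flag) != 0;        // test a single bit
    }

    void setFlag(int flag, boolean value) {
        if (value) {
            flags |= flag;                 // set the bit
        } else {
            flags &= ~flag;                // clear the bit, leave others intact
        }
    }
}
```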
Android's ViewModel has become an essential component of modern Android development, providing a lifecycle-aware container for UI-related data that survives configuration changes. While the API appears simple on the surface, the internal machinery reveals sophisticated design decisions around lifecycle management, multiplatform abstraction, resource cleanup, and thread-safe caching. Understanding how ViewModel works under the hood helps you make better architectural decisions and avoid subtle bugs. In this article, you'll dive deep into how Jetpack ViewModel works internally, exploring how the ViewModelStore retains instances across configuration changes, how ViewModelProvider orchestrates creation and caching, how the factory pattern enables flexible instantiation, how CreationExtras enables stateless factories, how resource cleanup is managed through the Closeable pattern, and how viewModelScope integrates coroutines with ViewModel lifecycle. This isn't a guide on using ViewModel; it's an exploration of the internal machinery that makes lifecycle-aware state management possible.

The fundamental problem: Surviving configuration changes

Configuration changes present a fundamental challenge for Android development. When a user rotates their device, changes language settings, or triggers any configuration change, the system destroys and recreates the Activity. Any data stored in the Activity is lost:

```kotlin
class MyActivity : ComponentActivity() {
    private var userData: User? = null // Lost on rotation!

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Must reload data after every rotation
        loadUserData()
    }
}
```

The naive approach is to use onSaveInstanceState:

```kotlin
override fun onSaveInstanceState(outState: Bundle) {
    super.onSaveInstanceState(outState)
    outState.putParcelable("user", userData)
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    userData = savedInstanceState?.getParcelable("user")
}
```

This works for small, serializable data. But what about large datasets, network connections, or objects that can't be serialized? What about ongoing operations like network requests? The Bundle approach fails for these cases, both because of size limitations and because serialization/deserialization is expensive. ViewModel solves this by providing a lifecycle-aware container that survives configuration changes through a retained object pattern, not serialization.

The ViewModelStore: The retention mechanism

At the heart of ViewModel's configuration-change survival is ViewModelStore, a simple key-value store that holds ViewModel instances:

```kotlin
public open class ViewModelStore {
    private val map = mutableMapOf<String, ViewModel>()

    @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
    public fun put(key: String, viewModel: ViewModel) {
        val oldViewModel = map.put(key, viewModel)
        oldViewModel?.clear()
    }

    @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
    public operator fun get(key: String): ViewModel? {
        return map[key]
    }

    @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
    public fun keys(): Set<String> {
        return HashSet(map.keys)
    }

    public fun clear() {
        for (vm in map.values) {
            vm.clear()
        }
        map.clear()
    }
}
```

The implementation is remarkably straightforward: just a MutableMap<String, ViewModel>. The magic isn't in the store itself; it's in how the store is retained.

Key replacement behavior

Notice the put method's behavior:

```kotlin
public fun put(key: String, viewModel: ViewModel) {
    val oldViewModel = map.put(key, viewModel)
    oldViewModel?.clear()
}
```

If a ViewModel already exists with the same key, the old ViewModel is immediately cleared. This ensures proper cleanup when a ViewModel is replaced.
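To make the clear-on-replace contract concrete, here is a runnable plain-Java miniature of the store. The class names are illustrative stand-ins, not the androidx implementations:

```java
import java.util.HashMap;
import java.util.Map;

// Miniature of the ViewModelStore retention pattern: a key-value store
// that clears the old entry whenever a key is replaced.
class MiniViewModel {
    boolean cleared = false;
    void clear() { cleared = true; }
}

class MiniViewModelStore {
    private final Map<String, MiniViewModel> map = new HashMap<>();

    void put(String key, MiniViewModel viewModel) {
        MiniViewModel old = map.put(key, viewModel);
        if (old != null) old.clear(); // replaced ViewModels are cleaned up immediately
    }

    MiniViewModel get(String key) { return map.get(key); }

    void clear() {
        for (MiniViewModel vm : map.values()) vm.clear();
        map.clear();
    }
}
```

Putting a second ViewModel under the same key clears the first, exactly the behavior the test suite below validates.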
You might wonder when this happens: it occurs when you request a ViewModel with the same key but a different type:

```kotlin
// First request creates TestViewModel1 with key "mykey"
val vm1: TestViewModel1 = viewModelProvider.get("mykey", TestViewModel1::class)

// Second request with same key but different type
val vm2: TestViewModel2 = viewModelProvider.get("mykey", TestViewModel2::class)

// vm1.onCleared() has been called; vm1 is no longer valid
```

This behavior is validated in the test suite:

```kotlin
@Test
fun twoViewModelsWithSameKey() {
    val key = "thekey"
    val vm1 = viewModelProvider.get(key, TestViewModel1::class)
    assertThat(vm1.cleared).isFalse()

    val vw2 = viewModelProvider.get(key, TestViewModel2::class)
    assertThat(vw2).isNotNull()
    assertThat(vm1.cleared).isTrue()
}
```

The ViewModelStoreOwner contract

The ViewModelStoreOwner interface defines who owns the store:

```kotlin
public interface ViewModelStoreOwner {
    public val viewModelStore: ViewModelStore
}
```

This simple interface is implemented by ComponentActivity, Fragment, and NavBackStackEntry. The owner's responsibility is twofold:
Dependency injection excels at wiring together static dependency graphs, but what happens when you need runtime parameters? This is the challenge assisted injection solves, bridging the gap between compile-time dependency management and runtime parameter passing. While the concept appears simple on the surface, the internal machinery that makes it work reveals sophisticated compile-time code generation, careful separation of concerns, and elegant integration with Hilt's component system. In this article, you'll dive deep into how assisted injection works under the hood, exploring how the annotation processor distinguishes assisted parameters from injected dependencies, how factories are generated and wired together, how Hilt integrates assisted injection with ViewModels through multibinding maps, and the runtime mechanisms that tie everything together. This isn't a guide on using @AssistedInject; it's an exploration of the compiler machinery that makes it possible.

The fundamental problem: Runtime parameters in a compile-time framework

At its core, dependency injection frameworks operate at compile time. When you write:

```java
class MyService {
  @Inject
  MyService(Database db, NetworkClient client) {
    // ...
  }
}
```

The framework generates code that wires Database and NetworkClient from the dependency graph. But what if you need to pass a runtime parameter: a user ID, a configuration object, or data from user input? You can't put these in the dependency graph because they don't exist until runtime.
The naive approach is to inject a factory and manually construct the object:

```java
class MyService {
  private final Database db;
  private final NetworkClient client;
  private final String userId;

  MyService(Database db, NetworkClient client, String userId) {
    this.db = db;
    this.client = client;
    this.userId = userId;
  }
}

// Manually created factory
interface MyServiceFactory {
  MyService create(String userId);
}

// Manual implementation
class MyServiceFactoryImpl implements MyServiceFactory {
  private final Database db;
  private final NetworkClient client;

  @Inject
  MyServiceFactoryImpl(Database db, NetworkClient client) {
    this.db = db;
    this.client = client;
  }

  @Override
  public MyService create(String userId) {
    return new MyService(db, client, userId);
  }
}
```

This works, but it's tedious and error-prone. Every time you add or remove a dependency, you must update both the constructor and the factory implementation. Assisted injection automates this entire pattern.

The annotation taxonomy: Distinguishing assisted from injected

Assisted injection introduces three annotations that work together to automate factory generation:

@AssistedInject: Marking the constructor

The @AssistedInject annotation marks a constructor that mixes injected dependencies with assisted parameters:

```java
@Retention(RUNTIME)
@Target(CONSTRUCTOR)
public @interface AssistedInject {}
```

The RUNTIME retention is important: unlike Hilt's component-scoped annotations, which use CLASS retention, assisted injection needs runtime information for validation and debugging. However, the actual injection logic is entirely compile-time generated. A critical constraint: types with @AssistedInject constructors cannot be scoped. This makes sense: if you're passing runtime parameters, each call to the factory creates a new instance. Scoping would require caching based on assisted parameters, which would be complex and rarely useful.
@Assisted: Marking runtime parameters

The @Assisted annotation distinguishes runtime parameters from injected dependencies:

```java
@Retention(RUNTIME)
@Target(PARAMETER)
public @interface Assisted {
  String value() default "";
}
```

The value parameter serves as a discriminator when you have multiple assisted parameters of the same type. Consider:

```java
class DataService {
  @AssistedInject
  DataService(
      Database db,
      @Assisted String name,
      @Assisted("id") String id,
      @Assisted("repo") String repo) {
    // ...
  }
}
```

Here, three String parameters are assisted. Without identifiers, the framework couldn't distinguish them. The identifier creates a unique key: (type, identifier). So you have:

- (String, "") for name
- (String, "id") for id
- (String, "repo") for repo

During processing, these parameters are wrapped in an AssistedParameter class that implements equality based on both type and identifier. This ensures uniqueness validation at compile time.

@AssistedFactory: Defining the user-facing API

The @AssistedFactory annotation marks the interface that users will interact with:

```java
@Retention(RUNTIME)
@Target(TYPE)
public @interface AssistedFactory {}
```

The annotated type must obey strict constraints:

- Must be abstract (interface or abstract class)
- Must contain exactly one abstract, non-default method
- That method must return the @AssistedInject-annotated type
- That method's parameters must exactly match the assisted parameters (type + identifier), in the same order
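Conceptually, the processor writes the factory implementation for you. The sketch below is a hand-written, plain-Java approximation of that generated wiring: Supplier stands in for javax.inject.Provider, and all class names are illustrative rather than Dagger's actual generated output:

```java
import java.util.function.Supplier;

// Stand-in dependency types.
class Database {}
class NetworkClient {}

class MyService {
    final Database db;
    final NetworkClient client;
    final String userId;

    MyService(Database db, NetworkClient client, String userId) {
        this.db = db;
        this.client = client;
        this.userId = userId;
    }
}

interface MyServiceFactory {
    MyService create(String userId);
}

// Roughly what assisted injection generates: injected dependencies are
// captured as providers; the assisted parameter flows in at call time.
class MyServiceFactoryImpl implements MyServiceFactory {
    private final Supplier<Database> dbProvider;
    private final Supplier<NetworkClient> clientProvider;

    MyServiceFactoryImpl(Supplier<Database> dbProvider, Supplier<NetworkClient> clientProvider) {
        this.dbProvider = dbProvider;
        this.clientProvider = clientProvider;
    }

    @Override
    public MyService create(String userId) {
        // Graph-provided deps + caller-provided assisted param, every call.
        return new MyService(dbProvider.get(), clientProvider.get(), userId);
    }
}
```

Note how the no-scoping constraint falls out naturally: each create() call constructs a fresh instance around the caller's runtime parameter.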
Monday, November 24, 2025

Image loading is one of the most critical yet complex aspects of Android development. While libraries like Glide and Picasso have served developers for years, Coil emerged as a modern, Kotlin-first solution built from the ground up with coroutines. But the power of Coil goes far beyond its clean API; it's in the solid internal machinery that makes it both performant and memory-efficient. In this article, you'll dive deep into the internal mechanisms of Coil, exploring how image requests flow through an interceptor chain, how the two-tier memory cache achieves high hit rates while preventing memory leaks, how bitmap sampling uses bit manipulation for optimal memory usage, and the subtle optimizations that make it production-ready.

Understanding the core abstraction

At its heart, Coil is an image loading library that transforms data sources (URLs, files, resources) into decoded images displayed in views. What distinguishes Coil from other image loaders is its adherence to two fundamental principles: coroutine-native design and composable interceptor architecture. The coroutine-native design means everything in Coil is built around suspend functions. Image loading naturally fits the structured concurrency model: requests have lifecycles, can be cancelled, and should respect scopes. Traditional image loaders use callback chains, but Coil embraces coroutines:

```kotlin
// Traditional callback approach
imageLoader.load(url) { bitmap ->
    imageView.setImageBitmap(bitmap)
}

// Coil's coroutine approach
val result = imageLoader.execute(
    ImageRequest.Builder(context)
        .data(url)
        .target(imageView)
        .build()
)
```

The composable interceptor architecture means the entire request pipeline is a chain of interceptors, similar to OkHttp. Each interceptor can observe, transform, or short-circuit the request. This makes the library extensible without modifying core code.
These properties aren't just conveniences; they're architectural decisions that enable better resource management, cleaner cancellation semantics, and powerful customization. Let's explore how these principles manifest in the implementation.

The ImageLoader interface and RealImageLoader implementation

If you examine the ImageLoader interface, it defines two primary entry points:

```kotlin
interface ImageLoader {
    fun enqueue(request: ImageRequest): Disposable
    suspend fun execute(request: ImageRequest): ImageResult
}
```

Two methods for the same operation? This reflects Android's dual nature: some callers need fire-and-forget loading (enqueue, for views), while others need structured concurrency (execute, for repositories or composables). The RealImageLoader implementation handles both cases with a unified internal pipeline:

```kotlin
internal class RealImageLoader(
    val options: Options,
) : ImageLoader {
    private val scope = CoroutineScope(options.logger)
    private val systemCallbacks = SystemCallbacks(this)
    private val requestService = RequestService(this, systemCallbacks, options.logger)

    override fun enqueue(request: ImageRequest): Disposable {
        // Start executing the request on the main thread.
        val job = scope.async(options.mainCoroutineContextLazy.value) {
            execute(request, REQUEST_TYPE_ENQUEUE)
        }
        // Update the current request attached to the view and return a new disposable.
        return getDisposable(request, job)
    }

    override suspend fun execute(request: ImageRequest): ImageResult {
        if (!needsExecuteOnMainDispatcher(request)) {
            // Fast path: skip dispatching.
            return execute(request, REQUEST_TYPE_EXECUTE)
        } else {
            // Slow path: dispatch to the main thread.
            return coroutineScope {
                val job = async(options.mainCoroutineContextLazy.value) {
                    execute(request, REQUEST_TYPE_EXECUTE)
                }
                getDisposable(request, job).job.await()
            }
        }
    }
}
```

Notice the fast path optimization in execute: if the request doesn't need main thread dispatch (no target view), it executes immediately without the overhead of launching a coroutine.
This is important for background image loading in repositories where you're just fetching the bitmap. The scope is a SupervisorJob scope, meaning one failed request doesn't cancel other in-flight requests:

```kotlin
private fun CoroutineScope(logger: Logger?): CoroutineScope {
    val context = SupervisorJob() + CoroutineExceptionHandler { _, throwable ->
        logger?.log(TAG, throwable)
    }
    return CoroutineScope(context)
}
```

This isolation ensures that a network error loading one image doesn't affect other images currently loading. The CoroutineExceptionHandler logs uncaught exceptions rather than crashing, making the library resilient to unexpected errors.

The request execution pipeline: Interceptors all the way down

The core of Coil's architecture is the interceptor chain. When you execute a request, it flows through a series of interceptors before reaching the EngineInterceptor, which performs the actual fetch and decode:

```kotlin
private suspend fun execute(initialRequest: ImageRequest, type: Int): ImageResult {
    val requestDelegate = requestService.requestDelegate(
        request = initialRequest,
        job = coroutineContext.job,
        findLifecycle = type == REQUEST_TYPE_ENQUEUE,
    ).apply { assertActive() }
    val request = requestService.updateRequest(initialRequest)
    val eventListener = options.eventListenerFactory.create(request)
```
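The chain mechanics themselves are easy to model outside Coil. The following plain-Java sketch (illustrative names, strings standing in for ImageRequest/ImageResult) shows the core trick: each interceptor either short-circuits with its own result or hands off to the next one via proceed():

```java
import java.util.List;

// Minimal OkHttp/Coil-style interceptor chain (illustrative, not Coil's API).
interface Interceptor {
    String intercept(Chain chain);
}

final class Chain {
    private final List<Interceptor> interceptors;
    private final int index;
    final String request;

    Chain(List<Interceptor> interceptors, int index, String request) {
        this.interceptors = interceptors;
        this.index = index;
        this.request = request;
    }

    String proceed(String request) {
        // Advance to the next interceptor with the (possibly modified) request.
        return interceptors.get(index).intercept(new Chain(interceptors, index + 1, request));
    }
}
```

A cache interceptor can answer from memory and never reach the engine, while everything else falls through to the last interceptor in the list, which plays the role of EngineInterceptor.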
Jetpack Compose uses a smart recomposition system to optimize UI updates. At the heart of this optimization is stability inference: the compiler's ability to determine whether a type's values can change over time. Understanding how the compiler reasons about stability is crucial for writing performant Compose code.

What is Stability?

In Compose, a type is considered stable if it meets these conditions:

1. The result of equals will always return the same result for the same two instances
2. If a public property of the type changes, Composition will be notified
3. All public properties are also stable types

Common examples:

- Stable: primitives (Int, String, Boolean), @Immutable data classes, function types
- Unstable: classes with var properties, mutable collections (MutableList, MutableMap)

The Stability Type System

The Compose compiler uses a sophisticated type system to track stability information during compilation. This is represented by the Stability sealed class:
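The combination rule implied by condition 3 (a type is stable only if all of its parts are stable) can be modeled with a small sketch. This is a plain-Java toy of the idea, not the compiler's actual Stability sealed class:

```java
// Toy model of stability combination: one unstable part poisons the whole.
enum Stability {
    STABLE, UNSTABLE;

    static Stability combine(Stability... parts) {
        for (Stability s : parts) {
            if (s == UNSTABLE) return UNSTABLE;
        }
        return STABLE; // vacuously stable when there are no parts
    }
}
```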
Dependency injection frameworks excel at wiring individual dependencies, but what happens when you need to collect multiple implementations of the same type into a set or map? This is where multibinding comes in: a mechanism that allows distributed contributions across modules to be aggregated into a single collection. While the API appears simple, the internal machinery reveals sophisticated compile-time aggregation, careful separation between declarations and contributions, and runtime factories optimized for both memory efficiency and performance. In this article, you'll dive deep into how @IntoSet and @IntoMap work under the hood, exploring how the annotation processor distinguishes contributions from declarations, how the binding graph aggregates distributed bindings, how runtime factories materialize collections on demand, how map keys are processed and validated, and how Hilt leverages multibinding internally for its ViewModel infrastructure. This isn't a guide on using multibindings; it's an exploration of the compiler and runtime machinery that makes distributed collection building possible.

The fundamental problem: Distributed collection building

Consider a plugin architecture where multiple modules contribute plugins:

```java
// Module A
@Module
class ModuleA {
  @Provides
  @IntoSet
  static Plugin providePluginA() {
    return new PluginA();
  }
}

// Module B
@Module
class ModuleB {
  @Provides
  @IntoSet
  static Plugin providePluginB() {
    return new PluginB();
  }
}

// Application
@Component(modules = {ModuleA.class, ModuleB.class})
interface AppComponent {
  Set<Plugin> plugins(); // Returns {PluginA, PluginB}
}
```

The challenge: how does Dagger know to collect these separate @Provides methods into a single Set<Plugin> binding? The modules are independent; ModuleA doesn't know about ModuleB. Yet the component must somehow aggregate all contributions.
The naive approach would require manual collection:

```java
@Module
class PluginCollectionModule {
  @Provides
  static Set<Plugin> providePlugins(PluginA a, PluginB b, PluginC c, ...) {
    Set<Plugin> plugins = new HashSet<>();
    plugins.add(a);
    plugins.add(b);
    plugins.add(c);
    return plugins;
  }
}
```

This is brittle: every time you add a plugin, you must modify this central module. Multibinding solves this by allowing modules to independently contribute to collections without knowing about each other.

The annotation taxonomy: Contributions vs declarations

Multibinding introduces four annotations with distinct roles:

@IntoSet: Contributing individual elements

The @IntoSet annotation marks a method that contributes a single element to a set:

```java
@Documented
@Target(METHOD)
@Retention(RUNTIME)
public @interface IntoSet {}
```

The method's return type becomes the element type. For a method returning Plugin, it contributes to Set<Plugin>:

```java
@Provides
@IntoSet
static Plugin providePlugin() {
  return new PluginImpl();
}
```

The RUNTIME retention is important: while Dagger processes these annotations at compile time, the retention allows runtime inspection for debugging and tooling. However, the actual multibinding logic is entirely compile-time generated.

@IntoMap: Contributing key-value pairs

The @IntoMap annotation marks a method that contributes an entry to a map:

```java
@Documented
@Target(METHOD)
@Retention(RUNTIME)
public @interface IntoMap {}
```

Unlike @IntoSet, @IntoMap requires a companion @MapKey annotation to specify the key:

```java
@Provides
@IntoMap
@StringKey("key1")
static Plugin providePlugin() {
  return new PluginImpl();
}
```

This contributes to Map<String, Provider<Plugin>>. Notice the map value is Provider<Plugin>, not Plugin: maps use lazy evaluation by default.
@ElementsIntoSet: Contributing collections

The @ElementsIntoSet annotation contributes multiple elements at once:

```java
@Provides
@ElementsIntoSet
static Set<Plugin> provideDefaultPlugins() {
  return ImmutableSet.of(new PluginA(), new PluginB());
}
```

This is useful for providing default contributions or bulk additions. The method returns a Set<T>, and all elements are added to the multibinding.

@Multibinds: Declaring empty multibindings

The @Multibinds annotation declares that a multibinding exists, even if empty:

```java
@Module
abstract class MyModule {
  @Multibinds
  abstract Set<Plugin> plugins();

  @Multibinds
  abstract Map<String, Plugin> pluginMap();
}
```

This is necessary only for potentially empty multibindings. If at least one contribution exists, the declaration is implicit. However, if a component requests Set<Plugin> and there are zero contributions without a @Multibinds declaration, compilation fails. The critical insight: Dagger never implements or calls @Multibinds methods. They're purely metadata, a compile-time signal that the multibinding should exist.

The ContributionType enum: Classification at compile time

During annotation processing, Dagger classifies each binding by contribution type:

```java
public enum ContributionType {
  UNIQUE,     // Regular non-multibinding
  SET,        // @IntoSet contribution
  SET_VALUES, // @ElementsIntoSet contribution
  MAP,        // @IntoMap contribution
}
```
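What the generated component effectively does can be simulated in plain Java: gather independent provider methods into one Set, and keep map values lazy behind providers. This is an illustrative sketch (Supplier standing in for Dagger's Provider), not the generated SetFactory/MapProviderFactory code:

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Supplier;

// Simulation of multibinding aggregation (illustrative, not Dagger's output).
final class MultibindingSketch {
    // @IntoSet: resolve each contribution and collect the elements.
    static Set<String> aggregateSet(List<Supplier<String>> contributions) {
        Set<String> set = new LinkedHashSet<>();
        for (Supplier<String> c : contributions) set.add(c.get());
        return set;
    }

    // @IntoMap: keys are materialized, but values stay as Suppliers
    // (Dagger's Provider) so each entry is resolved lazily on demand.
    static Map<String, Supplier<String>> aggregateMap(Map<String, Supplier<String>> contributions) {
        return new LinkedHashMap<>(contributions);
    }
}
```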
Monday, November 24, 2025

Building dynamic user interfaces has long been a fundamental challenge in Android development. The traditional approach requires recompiling and redeploying the entire application whenever the UI needs to change—a process that creates significant friction for A/B testing, feature flags, and real-time content updates. Consider a scenario where your marketing team wants to test a new checkout button design: in the traditional model, this simple change requires developer time, code review, QA testing, app store submission, and weeks of waiting for user adoption. Compose Remote emerges as a powerful solution to this problem, enabling developers to create, transmit, and render Jetpack Compose UI layouts at runtime without any recompilation. In this article, you'll explore what Compose Remote is, understand its core architecture, and discover the benefits it brings to dynamic screen design with Jetpack Compose. This isn't a tutorial on using the library; it's an exploration of the paradigm shift it represents for Android UI development.

Understanding the core abstraction: What makes Compose Remote special

At its heart, Compose Remote is a framework that enables remote rendering of Compose UI components. What distinguishes it from traditional UI approaches is its adherence to two fundamental principles: declarative document serialization and platform-independent rendering.

Declarative document serialization

Declarative document serialization means you can capture any Jetpack Compose layout into a compact, serialized format. Think of it like taking a "screenshot" of your UI, except instead of pixels, you're capturing the actual drawing instructions. This captured document contains everything needed to recreate the UI: shapes, colors, text, images, animations, and even interactive touch regions.
```kotlin
// On the server or creation side
val document = captureRemoteDocument(
    context = context,
    creationDisplayInfo = displayInfo,
    profile = profile
) {
    // Standard Compose UI - looks exactly like regular Compose code
    Column(modifier = RemoteModifier.fillMaxSize()) {
        Text("Dynamic Content")
        Button(onClick = { /* action */ }) {
            Text("Click Me")
        }
    }
}
// Result: A ByteArray that can be sent over the network
```

The beauty of this approach is that the creation side writes standard Compose code. There's no new DSL to learn, no JSON schema to maintain, no template language to master. If you can write it in Compose, you can capture it with Compose Remote.

Platform-independent rendering

Platform-independent rendering means the captured document can be transmitted over the network and rendered on any Android device without needing the original Compose code. The client device doesn't need your composable functions, your view models, or your business logic; it just needs the document bytes and a player.

```kotlin
// On the client or player side
RemoteDocumentPlayer(
    document = remoteDocument.document,
    documentWidth = windowInfo.containerSize.width,
    documentHeight = windowInfo.containerSize.height,
    onAction = { actionId, value ->
        // Handle user interactions
    }
)
```

These properties aren't just conveniences; they're architectural constraints that enable true decoupling of UI definition from deployment. The document format captures not just static layouts but also state, animations, and interactions, making it a complete representation of the UI experience.

Comparing approaches: Why not JSON or WebViews?

Before diving deeper, it's worth understanding why Compose Remote takes this approach rather than alternatives: JSON-based server-driven UI (like Airbnb's Epoxy or Shopify's approach) requires defining a schema that maps to native components.
This works well for structured content but struggles with:

- Complex animations and transitions
- Custom drawing and graphics
- Rich text with inline styling
- Gradients, shadows, and visual effects

WebViews offer full flexibility but introduce:

- Performance overhead (separate rendering process)
- Inconsistent look and feel (web styling vs native)
- Memory pressure (each WebView is expensive)
- Touch handling complexity (gesture conflicts)

Compose Remote takes a third path: capturing the actual drawing operations that Compose would execute. This means any UI you can build in Compose, including custom Canvas drawing, complex animations, and Material Design components, can be captured and replayed remotely with native performance.

The document-based architecture: Creation and playback

Compose Remote's architecture is built around a clear separation between two phases: document creation and document playback. Understanding this separation is key to understanding the framework's power.

Document creation: Capturing UI as data

The creation phase transforms Compose UI code into a serialized document. This happens through a sophisticated capture mechanism that intercepts drawing operations at the Canvas level, the lowest level of Android's rendering pipeline.

```
@Composable Content
        ↓
RemoteComposeCreationState (tracks state and modifiers)
        ↓
CaptureComposeView (virtual display - no actual screen needed)
        ↓
RecordingCanvas (intercepts every draw call)
        ↓
Operations (93+ operation types covering all drawing primitives)
        ↓
RemoteComposeBuffer (efficient binary serialization)
        ↓
ByteArray (network-ready, typically 10-100KB for complex UIs)
```

The creation side provides a complete Compose integration layer. You write standard @Composable functions, and the framework captures everything: layout hierarchies, modifiers, text styles, images, animations, and even touch handlers.
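The record-then-replay pipeline can be illustrated with a toy recorder in plain Java. The opcodes and wire format below are invented for illustration and are far simpler than the real RemoteComposeBuffer; the point is only the shape of the idea, serializing each intercepted draw call into a compact binary buffer:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Toy "recording canvas": draw calls become opcodes in a byte buffer.
final class RecordingCanvas {
    static final byte OP_RECT = 1;
    static final byte OP_TEXT = 2;

    private final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    private final DataOutputStream out = new DataOutputStream(bytes);

    void drawRect(float left, float top, float right, float bottom) {
        try {
            out.writeByte(OP_RECT); // 1 opcode byte + four 4-byte floats
            out.writeFloat(left);
            out.writeFloat(top);
            out.writeFloat(right);
            out.writeFloat(bottom);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    void drawText(String text) {
        try {
            out.writeByte(OP_TEXT); // 1 opcode byte + length-prefixed UTF text
            out.writeUTF(text);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    byte[] toDocument() {
        return bytes.toByteArray(); // the network-ready payload
    }
}
```

A player would walk the buffer, switch on each opcode, and replay the corresponding draw call on a real canvas.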
Making REST API calls has been a fundamental requirement in Android development, yet the complexity of managing HTTP requests, serialization, error handling, and thread management has long been a persistent challenge. Retrofit emerged as Square's solution to this problem, transforming a verbose, error-prone process into an elegant, annotation-driven API. But the real power of Retrofit isn't just its simplified interface; it's the sophisticated machinery working behind the scenes to turn interface methods into HTTP calls. In this article, you'll dive deep into the internal mechanisms of Retrofit, exploring how Java's dynamic proxies create implementation classes at runtime, how annotations are parsed and cached using sophisticated locking strategies, how the framework transforms method calls into OkHttp requests through a layered architecture, and the subtle optimizations that make it production-ready. This isn't a beginner's guide to using Retrofit; it's a deep dive into how Retrofit actually works under the hood.

Understanding the core abstraction: What makes Retrofit special

At its heart, Retrofit is a type-safe HTTP client that uses dynamic proxies and annotation processing to convert interface method declarations into HTTP requests. What distinguishes Retrofit from manual HTTP clients is its adherence to two fundamental principles: declarative API definition and pluggable architecture. The declarative API definition means you don't manually construct HTTP requests for every endpoint. Instead, Retrofit provides annotations that describe the request:

```kotlin
interface GitHubApi {
    @GET("users/{user}/repos")
    fun listRepos(@Path("user") user: String): Call<List<Repo>>
}

// Implementation generated automatically:
val api = retrofit.create<GitHubApi>()
val call = api.listRepos("octocat")
```

The pluggable architecture means Retrofit separates concerns through factory patterns.
Every aspect of request/response handling is customizable:

- CallAdapter: Transforms `Call<T>` into other types (RxJava `Observable`, Kotlin suspend functions, Java 8 `CompletableFuture`)
- Converter: Serializes/deserializes request/response bodies (Gson, Jackson, Moshi, Protobuf)
- Call.Factory: Creates HTTP calls (typically OkHttp, but swappable)

These properties aren't just conveniences; they're architectural constraints that enable compile-time type safety and runtime flexibility. The dynamic proxy mechanism allows Retrofit to parse annotations once per method and cache the parsing logic, making subsequent calls extremely fast. The factory chains allow you to add Gson JSON parsing or RxJava integration without modifying any core Retrofit code.

The dynamic proxy pattern: How Retrofit creates implementations

When you call retrofit.create(MyApi.class), you're not getting a manually written implementation. You're getting a JDK dynamic proxy that intercepts every method call at runtime. This is the foundation of Retrofit's "magic."

Proxy creation in Retrofit.create

Let's examine the actual proxy creation code in the Retrofit class:

```java
@SuppressWarnings("unchecked")
public <T> T create(final Class<T> service) {
  validateServiceInterface(service);
  return (T)
      Proxy.newProxyInstance(
          service.getClassLoader(),
          new Class<?>[] {service},
          new InvocationHandler() {
            private final Object[] emptyArgs = new Object[0];

            @Override
            public @Nullable Object invoke(Object proxy, Method method, @Nullable Object[] args)
                throws Throwable {
              // If the method is a method from Object then defer to normal invocation.
              if (method.getDeclaringClass() == Object.class) {
                return method.invoke(this, args);
              }
              args = args != null ? args : emptyArgs;
              Reflection reflection = Platform.reflection;
              return reflection.isDefaultMethod(method)
                  ? reflection.invokeDefaultMethod(method, service, proxy, args)
                  : loadServiceMethod(service, method).invoke(proxy, args);
            }
          });
}
```

This code uses Java's Proxy.newProxyInstance to generate a class at runtime that implements your interface. Every method call goes through the InvocationHandler.invoke method, which has three dispatch paths:

1. Object methods: Methods like equals, hashCode, and toString are delegated to the handler itself:

```java
if (method.getDeclaringClass() == Object.class) {
  return method.invoke(this, args);
}
```

This ensures that basic Java object operations work correctly on the proxy instance.

2. Default methods (Java 8+): Interface default methods are invoked using platform-specific reflection:

```java
return reflection.invokeDefaultMethod(method, service, proxy, args);
```

On Java 8+, Retrofit uses MethodHandle to invoke default methods. This allows you to add helper methods to your API interfaces without Retrofit trying to parse them as HTTP endpoints.

3. Retrofit methods: Everything else is treated as an HTTP endpoint:

```java
return loadServiceMethod(service, method).invoke(proxy, args);
```

This is where the real work happens. The loadServiceMethod call parses annotations and caches the result, then invoke executes the HTTP request.

Interface validation

Before creating the proxy, Retrofit validates the interface with some strict rules in the Retrofit class:

```java
private void validateServiceInterface(Class<?> service) {
  if (!service.isInterface()) {
    throw new IllegalArgumentException("API declarations must be interfaces.");
  }

  Deque<Class<?>> check = new ArrayDeque<>(1);
  check.add(service);
  while (!check.isEmpty()) {
    Class<?> candidate = check.removeFirst();
    if (candidate.getTypeParameters().length != 0) {
      StringBuilder message =
          new StringBuilder("Type parameters are unsupported on ").append(candidate.getName());
      if (candidate != service) {
        message.append(" which is an interface of ").append(service.getName());
      }
      throw new IllegalArgumentException(message.toString());
    }
    Collections.addAll(check, candidate.getInterfaces());
  }
  // ...
}
```

This validation enforces two critical constraints:

1. Must be an interface: Classes can't be proxied by JDK proxies (they'd need CGLIB or ByteBuddy)
2. No generic type parameters: `interface Api<T>` is forbidden because generics are erased at runtime

The breadth-first search through the interface hierarchy ensures that even inherited interfaces don't violate these rules.

The performance benefit of proxies

Why use dynamic proxies instead of annotation processing to generate implementation classes at compile time? The answer is flexibility. Proxies allow Retrofit to:

- Parse annotations lazily, only when methods are first called
- Support different return types through the CallAdapter mechanism
- Avoid compile-time code generation complexity

The trade-off is a slight runtime overhead for the first method call (annotation parsing), but this is amortized through aggressive caching.

The service method cache: lazy initialization
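The parse-once-and-cache amortization described above can be sketched in isolation. This is a minimal illustration, not Retrofit's actual ServiceMethod cache; the `parse` function here is a hypothetical stand-in for annotation parsing:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch (not Retrofit's code): the expensive per-method "parsing"
// step runs at most once per Method, then every later call hits the cache.
public class CachingProxyDemo {
    interface Api { String ping(); }

    static final Map<Method, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger parseCount = new AtomicInteger();

    // Stands in for ServiceMethod parsing: slow work done once per method.
    static String parse(Method m) {
        parseCount.incrementAndGet();
        return "parsed:" + m.getName();
    }

    static Api create() {
        return (Api) Proxy.newProxyInstance(
            Api.class.getClassLoader(),
            new Class<?>[] {Api.class},
            // Every call routes here; the cache makes repeat calls cheap.
            (proxy, method, args) -> cache.computeIfAbsent(method, CachingProxyDemo::parse));
    }

    public static void main(String[] args) {
        Api api = create();
        api.ping();
        api.ping();
        api.ping();
        System.out.println(parseCount.get()); // prints 1: parsing happened only once
    }
}
```

Retrofit's real cache adds a three-state protocol on top of this idea so concurrent first calls don't parse the same method twice, but the amortization principle is the same.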
REST APIs form the backbone of modern Android applications, yet the question of how Retrofit creates concrete implementations from plain interface definitions has puzzled many developers. When you call retrofit.create(GitHubApi.class) and receive a working API client, something remarkable happens under the hood: an entire implementation class is generated at runtime, complete with HTTP logic, parameter handling, and response parsing. This seemingly magical transformation is powered by Java's dynamic proxy mechanism combined with Retrofit's sophisticated annotation parsing and caching strategies. In this article, you'll dive deep into the internal mechanisms of Retrofit's proxy system, exploring how Java's Proxy.newProxyInstance creates implementation classes at runtime, how the InvocationHandler intercepts method calls and routes them to the correct execution path, how Retrofit's three-state cache ensures thread-safe lazy initialization with lock-free fast paths, and how annotation metadata is transformed into executable HTTP requests. This isn't a guide on using Retrofit; it's a deep dive into how the instance creation magic actually works.

Understanding the fundamental challenge: From interfaces to instances

At its core, Retrofit faces a seemingly impossible task: creating instances from interfaces. In standard Java, you cannot instantiate an interface:

```kotlin
interface GitHubApi {
  @GET("users/{user}/repos")
  fun listRepos(@Path("user") user: String): Call<List<Repo>>
}

// This won't compile
val api = GitHubApi() // ERROR: Cannot create an instance of an interface
```

Interfaces define contracts, not implementations. Yet Retrofit allows you to write:

```kotlin
val retrofit = Retrofit.Builder()
    .baseUrl("https://api.github.com/")
    .build()

val api = retrofit.create(GitHubApi::class.java)
val repos = api.listRepos("octocat").execute()
```

The api object behaves as if someone manually wrote an implementation class that handles HTTP requests, parameter encoding, and response parsing.
How does Retrofit generate this implementation without any explicit code? The answer lies in Java's dynamic proxy pattern, a runtime code generation mechanism that allows creating interface implementations on the fly. Understanding this pattern is key to understanding Retrofit's architecture.

The dynamic proxy pattern: Runtime class generation

Java's java.lang.reflect.Proxy class provides a mechanism to create proxy instances that implement specified interfaces at runtime. Let's examine how this works before diving into Retrofit's specific implementation.

The basic proxy mechanism

The Proxy.newProxyInstance method takes three parameters and returns an object that implements your interface:

```java
public static Object newProxyInstance(
    ClassLoader loader,       // Which ClassLoader to define the proxy class in
    Class<?>[] interfaces,    // The list of interfaces to implement
    InvocationHandler h       // The handler that processes method calls
)
```

When you call any method on the proxy instance, Java automatically routes the call to the InvocationHandler.invoke method, which receives:

- The proxy instance itself
- The Method object representing the called method
- The array of arguments passed to the method

This is powerful because you can implement one handler that processes all methods on the interface. The handler can inspect the method's annotations, parameters, and return type to determine what to do. Here's a minimal example:

```java
interface HelloService {
  String sayHello(String name);
  String sayGoodbye(String name);
}

HelloService service = (HelloService) Proxy.newProxyInstance(
    HelloService.class.getClassLoader(),
    new Class<?>[] { HelloService.class },
    new InvocationHandler() {
      @Override
      public Object invoke(Object proxy, Method method, Object[] args) {
        String methodName = method.getName();
        String name = (String) args[0];
        return methodName + " called with: " + name;
      }
    }
);

service.sayHello("Alice");   // Returns: "sayHello called with: Alice"
service.sayGoodbye("Bob");   // Returns: "sayGoodbye called with: Bob"
```

The same invoke method handles both sayHello and sayGoodbye. It inspects the method name and arguments to determine the behavior. This is exactly how Retrofit works: it creates a proxy instance where every method call is routed to an InvocationHandler that parses annotations and constructs HTTP requests.

Retrofit's proxy creation: The create method

Let's examine Retrofit's actual implementation of create in the Retrofit class:

```java
@SuppressWarnings("unchecked")
public <T> T create(final Class<T> service) {
  validateServiceInterface(service);
  return (T)
      Proxy.newProxyInstance(
          service.getClassLoader(),
          new Class<?>[] {service},
          new InvocationHandler() {
            private final Object[] emptyArgs = new Object[0];
```
Table of Contents

1. [Introduction](#introduction)
2. [Architecture Overview](#architecture-overview)
3. [Core Modules](#core-modules)
   - [hot-reload-agent](#hot-reload-agent-the-java-instrumentation-agent)
   - [hot-reload-orchestration](#hot-reload-orchestration-communication-backbone)
   - [hot-reload-runtime-jvm](#hot-reload-runtime-jvm-runtime-integration)
   - [hot-reload-gradle-plugin](#hot-reload-gradle-plugin-build-integration)
   - [hot-reload-core](#hot-reload-core-shared-utilities)
   - [hot-reload-analysis](#hot-reload-analysis-bytecode-analysis)
4. [The Hot Reload Flow](#the-hot-reload-flow)
5. [Communication Protocol](#communication-protocol)
6. [State Management](#state-management)
7. [Static Field Re-initialization](#static-field-re-initialization)
8. [Compose Integration](#compose-integration)
9. [Window Management](#window-management)
10. [Advanced Topics](#advanced-topics)

Introduction

When developing user interfaces, the traditional cycle of making a change, recompiling, restarting the application, and navigating back to the state you were testing can be frustrating. Each iteration can take tens of seconds, breaking your flow and making it harder to experiment with different designs. Compose Hot Reload addresses this problem by enabling real-time UI updates in Compose Multiplatform applications without requiring a full restart. The system works by combining the JVM's HotSwap capabilities with Compose's recomposition model. When you save a file, the changes propagate to your running application within a second or two, and you can see the updated UI immediately while preserving the application's current state. This means you don't lose your place in a multi-step workflow or need to manually recreate the conditions you were testing. The implementation involves multiple layers working together. A Java agent instruments the application's bytecode and handles class redefinition at runtime. An orchestration protocol coordinates communication between the build system and the running application.
The runtime integration provides Compose-aware UI updates that only recompose the parts of your interface that actually changed. Finally, a Gradle plugin integrates everything into the build system so you can use hot reload without complicated configuration. What Hot Reload Can Do Hot reload supports instant UI updates when you modify composable functions, change layout logic, or update visual properties. The system preserves your application's state across reloads, so if you're testing a form with several fields filled in, those values remain after the code updates. When classes change, the system performs selective invalidation of Compose groups, recomposing only the affected parts of the UI rather than rebuilding everything. It can also re-initialize static fields when their definitions change, which is important for singleton objects and global configuration. The window state persists across reloads and even across restarts, so your window doesn't jump to a different position every time you test a change. All of this works across multiple processes, with the build system, IDE, and application coordinating through a shared orchestration layer. Architecture Overview The hot reload system is built from several interconnected modules, each handling a specific part of the process. Understanding how these modules work together helps explain both what hot reload can do and what its limitations are. 
```
┌──────────────────────────────────────────────────────────┐
│                    Developer's IDE                       │
│                                                          │
│   Source Code ── Kotlin Compiler ── .class files         │
└────────────────────────┬─────────────────────────────────┘
                         │
                         │ File System Watch
                         ▼
┌──────────────────────────────────────────────────────────┐
│                     Gradle Plugin                        │
│                                                          │
│   • ComposeHotSnapshotTask (detect changes)              │
│   • ComposeHotReloadTask (send reload request)           │
│   • ComposeHotRun (launch with agent)                    │
└────────────────────────┬─────────────────────────────────┘
                         │
                         │ TCP Socket (Binary Protocol)
                         ▼
┌──────────────────────────────────────────────────────────┐
│                  Orchestration Server                    │
│                                                          │
│   • Message Broadcasting                                 │
│   • State Management                                     │
│   • Client Coordination                                  │
└────────────────────────┬─────────────────────────────────┘
                         │
                         │ ReloadClassesRequest
                         ▼
┌──────────────────────────────────────────────────────────┐
│                      Java Agent                          │
│                                                          │
│   • Bytecode Transformation                              │
│   • Class Redefinition                                   │
│   • Static Re-initialization                             │
│   • Compose Group Invalidation                           │
└────────────────────────┬─────────────────────────────────┘
                         │
                         │ Instrumentation API
                         ▼
┌──────────────────────────────────────────────────────────┐
│                      Runtime JVM                         │
│                                                          │
│   • DevelopmentEntryPoint                                │
│   • HotReloadState Management                            │
│   • Composition Reset                                    │
│   • UI Re-rendering                                      │
└──────────────────────────────────────────────────────────┘
```

The flow starts in your IDE, where the Kotlin compiler transforms your source code into bytecode. The Gradle plugin watches for these changes and creates snapshots of what actually changed between compilations. When changes are detected, the plugin sends a reload request through the orchestration server, which acts as a message broker between different components. The Java agent receives this request and performs the actual class redefinition using the JVM's instrumentation API. Finally, the runtime integration updates the Compose UI by triggering recomposition of the affected parts. This architecture keeps the concerns separated.
The build system handles compilation and change detection. The orchestration layer handles communication without either side needing to know the implementation details of the other. The agent handles the low-level bytecode manipulation. The runtime handles the high-level UI updates. This separation means you can understand and modify each piece independently.

Core Modules

hot-reload-agent: The Java Instrumentation Agent

The agent runs inside your application's JVM process and handles the actual class redefinition. It's loaded at startup through the Java agent mechanism, which gives it special privileges to intercept and modify class loading. The agent is the most complex part of the system because it has to handle bytecode transformation, track class loaders, coordinate with the Compose runtime, and manage all the edge cases that come with redefining classes at runtime.

Entry Point

When your application starts with hot reload enabled, the JVM calls the agent's premain function before your application's main method runs. This function initializes all the subsystems the agent needs:

```kotlin
// hot-reload-agent/src/main/kotlin/org/jetbrains/compose/reload/agent/agent.kt
@file:JvmName("ComposeHotReloadAgent")

fun premain(@Suppress("unused") args: String?, instrumentation: Instrumentation) {
    startDevTools()
    startOrchestration()
    createPidfile()
    startWritingLogs()
    launchWindowInstrumentation(instrumentation)
    launchComposeInstrumentation(instrumentation)
    launchRuntimeTracking(instrumentation)
    launchReloadRequestHandler(instrumentation)
    launchJdwpTracker(instrumentation)
}
```

The agent first starts any development tools that might be available, then establishes a connection to the orchestration server. It creates a pidfile that contains information about how to connect to this application instance, which the Gradle plugin uses later to send reload requests. Log writing starts so you can debug issues when they occur.
Then the agent registers several class file transformers with the JVM's instrumentation API. Each transformer intercepts class loading for a specific purpose. Window instrumentation injects code to wrap your UI in the development entry point. Compose instrumentation enables hot reload mode in the Compose runtime. Runtime tracking builds a global view of all loaded classes and their relationships. The reload request handler listens for incoming reload requests and processes them. JDWP tracking monitors whether a debugger is attached, which can affect how hot reload behaves.

Key Components

RuntimeTrackingTransformer

Every time the JVM loads a class, the runtime tracking transformer gets a chance to inspect it. The transformer builds a comprehensive map of what classes exist in your application and what Compose groups they contain:

```kotlin
// hot-reload-agent/src/main/kotlin/org/jetbrains/compose/reload/agent/runtimeTracking.kt
internal fun launchRuntimeTracking(instrumentation: Instrumentation) {
    val transformer = RuntimeTrackingTransformer()
    instrumentation.addTransformer(transformer, false)
}

private class RuntimeTrackingTransformer : ClassFileTransformer {
    override fun transform(
        loader: ClassLoader?,
        className: String?,
        classBeingRedefined: Class<*>?,
        protectionDomain: ProtectionDomain?,
        classfileBuffer: ByteArray
    ): ByteArray? {
        // Track class loading
        val classId = ClassId.fromSlashDelimitedFqn(className)
        classLoaders[classId] = WeakReference(loader)

        // Perform bytecode analysis
        val analysis = performComposeAnalysis(classfileBuffer)
        applicationInfo.update { current ->
            current.withClass(classId, analysis)
        }

        return null // No transformation during initial load
    }
}
```

The transform method receives the raw bytecode of every class as it loads. The method first extracts the class identifier from the internal JVM format, which uses slashes instead of dots to separate package names. It stores a weak reference to the class loader that loaded this class.
Weak references are important here because class loaders can be garbage collected, and we don't want to prevent that by holding strong references to them. Then the transformer performs bytecode analysis to understand what Compose groups exist in this class. This analysis involves parsing the bytecode to find calls to methods like startRestartGroup and startReplaceableGroup, which the Compose compiler inserts around composable functions. The results go into a global applicationInfo structure that tracks everything the agent knows about the application.

The transformer returns null to indicate it's not modifying the bytecode during initial loading. The modification happens later when classes are redefined during hot reload. This two-phase approach avoids potential class circularity errors that can occur if you try to transform classes too early in the JVM's initialization sequence.

ComposeTransformer

The Compose transformer watches for the Compose runtime itself to load and configures it for hot reload:

```kotlin
// hot-reload-agent/src/main/kotlin/org/jetbrains/compose/reload/agent/compose.kt
private class ComposeTransformer : ClassFileTransformer {
    override fun transform(
        loader: ClassLoader?,
        className: String?,
        classBeingRedefined: Class<*>?,
        protectionDomain: ProtectionDomain?,
        classfileBuffer: ByteArray
    ): ByteArray? {
        if (className == "androidx/compose/runtime/Recomposer\$Companion") {
            // Enable hot reload mode
            Recomposer.Companion::class.java
                .getMethod("setHotReloadEnabled", Boolean::class.java)
                .invoke(null, true)

            // Set up group invalidation
            Recomposer.Companion::class.java
                .getMethod("setGroupInvalidator", Function1::class.java)
                .invoke(null, groupInvalidator)
        }
        return null
    }
}
```

When the Recomposer companion object loads, the transformer calls setHotReloadEnabled(true) on it. The Recomposer is the core of Compose's runtime, responsible for scheduling recomposition and managing the composition tree.
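The weak-reference bookkeeping described above can be sketched in isolation. The names here (`track`, `classLoaders`) are hypothetical; the point is that holding `WeakReference<ClassLoader>` values lets a loader be garbage collected when nothing else keeps it alive, instead of the tracker pinning it forever:

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Sketch of weak-reference class-loader tracking (names hypothetical):
// the map never keeps a loader alive on its own, so unloaded plugins or
// redefined modules can still be collected by the GC.
public class LoaderTrackingDemo {
    static final Map<String, WeakReference<ClassLoader>> classLoaders = new HashMap<>();

    static void track(String classId, ClassLoader loader) {
        classLoaders.put(classId, new WeakReference<>(loader));
    }

    public static void main(String[] args) {
        ClassLoader loader = LoaderTrackingDemo.class.getClassLoader();
        track("com.example.Main", loader);
        // While the loader is strongly reachable elsewhere, the weak ref still resolves:
        System.out.println(classLoaders.get("com.example.Main").get() == loader); // true
    }
}
```

Once the last strong reference to a loader disappears, `get()` starts returning null, which is exactly the signal the tracker needs to drop stale entries.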
Enabling hot reload mode changes how it handles recomposition, allowing the agent to trigger targeted recomposition of specific groups rather than rebuilding the entire UI. The transformer also installs a group invalidator callback. After class redefinition, the agent will call this callback with information about which Compose groups need to be recomposed. The Recomposer uses this information to invalidate only the affected groups, avoiding unnecessary recomposition of UI elements that didn't change.

WindowInstrumentation

The window instrumentation transformer wraps your UI in a development entry point without requiring you to modify your application code:

```kotlin
// hot-reload-agent/src/main/kotlin/org/jetbrains/compose/reload/agent/window.kt
private class WindowInstrumentation : ClassFileTransformer {
    override fun transform(
        loader: ClassLoader?,
        className: String?,
        classBeingRedefined: Class<*>?,
        protectionDomain: ProtectionDomain?,
        classfileBuffer: ByteArray
    ): ByteArray {
        if (className == "androidx/compose/ui/awt/ComposeWindow") {
            return transformComposeWindow(classfileBuffer)
        }
        return classfileBuffer
    }

    private fun transformComposeWindow(bytecode: ByteArray): ByteArray {
        val clazz = ClassPool.getDefault().makeClass(bytecode.inputStream())

        // Redirect setContent to DevelopmentEntryPoint
        clazz.getDeclaredMethod("setContent").apply {
            insertBefore("""
                org.jetbrains.compose.reload.jvm.JvmDevelopmentEntryPoint
                    .setContent(this, $$1, $$2, $$3);
                return;
            """)
        }

        return clazz.toBytecode()
    }
}
```

Every Compose Desktop application calls setContent on a ComposeWindow to define its UI. The window instrumentation intercepts this call and redirects it through the DevelopmentEntryPoint, which adds hot reload support. The instrumentation uses Javassist to insert bytecode at the beginning of the setContent method that calls the development entry point instead of the original implementation.
The inserted code receives all the same parameters as the original method using Javassist's $$ syntax, which expands to the list of method parameters. After calling the development entry point, the instrumented method returns immediately, preventing the original implementation from running. This approach works transparently without requiring developers to change their application code.

Reload Request Handler

When the build system detects changes and sends a reload request, the reload request handler processes it:

```kotlin
// hot-reload-agent/src/main/kotlin/org/jetbrains/compose/reload/agent/reloadRequestHandler.kt
internal fun launchReloadRequestHandler(instrumentation: Instrumentation) {
    orchestration.asFlow()
        .filterIsInstance<ReloadClassesRequest>()
        .onEach { request ->
            // Ensure reload happens on UI thread
            SwingUtilities.invokeAndWait {
                val result = performReload(instrumentation, request)

                // Send result back
                ReloadClassesResult(
                    reloadRequestId = request.messageId,
                    isSuccess = result.isSuccess,
                    errorMessage = result.errorMessage,
                    errorStacktrace = result.errorStacktrace
                ).sendBlocking()
            }
        }
        .launchIn(reloadScope)
}
```

The handler listens to the orchestration message flow and filters for reload class requests. When one arrives, it schedules the reload to happen on the Swing Event Dispatch Thread. Running on the EDT is crucial because the JVM's class redefinition mechanism can cause problems if you try to redefine classes while they're actively being used on other threads. By doing all the work on the EDT, we ensure that any Compose UI code is idle during the redefinition process. The invokeAndWait call blocks the handler's coroutine until the reload completes on the EDT. This blocking behavior is intentional because the handler needs to send back a result message indicating success or failure, and it can't do that until the reload actually finishes.
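The invokeAndWait pattern the handler relies on can be shown in isolation. This is a minimal illustration, not the agent's code: a background thread marshals work onto the EDT and blocks until it completes, so a result is available before replying.

```java
import javax.swing.SwingUtilities;
import java.util.concurrent.atomic.AtomicReference;

// Minimal illustration (not the agent's code) of the invokeAndWait pattern:
// a worker thread hands work to the Swing EDT and blocks until it finishes,
// so the result is ready before any reply is sent.
public class EdtDemo {
    public static void main(String[] args) throws Exception {
        AtomicReference<String> result = new AtomicReference<>();
        Thread worker = new Thread(() -> {
            try {
                // Blocks the worker until the Runnable has run on the EDT,
                // where UI code is guaranteed to be idle.
                SwingUtilities.invokeAndWait(() ->
                    result.set("reloaded on " + Thread.currentThread().getName()));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        worker.start();
        worker.join();
        System.out.println(result.get()); // e.g. "reloaded on AWT-EventQueue-0"
    }
}
```

The blocking behavior is the same trade the handler makes: the caller cannot report success or failure until the EDT work actually finishes.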
After the reload completes, the handler sends a ReloadClassesResult message back through the orchestration, which the build system receives and displays to the developer.

Class Reloading Logic

The actual reload process involves several steps, all coordinated by the reload function:

```kotlin
// hot-reload-agent/src/main/kotlin/org/jetbrains/compose/reload/agent/reload.kt
internal fun Context.reload(
    instrumentation: Instrumentation,
    reloadRequestId: OrchestrationMessageId,
    pendingChanges: Map<File, ReloadClassesRequest.ChangeType>
): Try<Reload> = Try {
    val definitions = pendingChanges.mapNotNull { (file, change) ->
        if (change == ReloadClassesRequest.ChangeType.Removed) {
            return@mapNotNull null
        }

        // Read the new bytecode
        val code = file.readBytes()
        val classId = ClassId(code) ?: return@mapNotNull null

        // Find the appropriate ClassLoader
        val loader = findClassLoader(classId)?.get() ?: return@mapNotNull null

        // Load original class
        val originalClass = loader.loadClass(classId.toFqn())

        // Transform bytecode with Javassist
        val clazz = getClassPool(loader).makeClass(code.inputStream())

        // Add static re-initialization support
        clazz.transformForStaticsInitialization(originalClass)
```
A comprehensive study of how the Compose compiler determines type stability for recomposition optimization.

Table of Contents

- [Compose Compiler Stability Inference System](#compose-compiler-stability-inference-system)
- [Table of Contents](#table-of-contents)
- [Chapter 1: Foundations](#chapter-1-foundations)
  - [1.1 Introduction](#11-introduction)
  - [1.2 Core Concepts](#12-core-concepts)
    - [Stability Definition](#stability-definition)
    - [Recomposition Mechanics](#recomposition-mechanics)
  - [1.3 The Role of Stability](#13-the-role-of-stability)
    - [Performance Impact](#performance-impact)
- [Chapter 2: Stability Type System](#chapter-2-stability-type-system)
  - [2.1 Type Hierarchy](#21-type-hierarchy)
  - [2.2 Compile-Time Stability](#22-compile-time-stability)
    - [Stability.Certain](#stabilitycertain)
  - [2.3 Runtime Stability](#23-runtime-stability)
    - [Stability.Runtime](#stabilityruntime)
  - [2.4 Uncertain Stability](#24-uncertain-stability)
    - [Stability.Unknown](#stabilityunknown)
  - [2.5 Parametric Stability](#25-parametric-stability)
    - [Stability.Parameter](#stabilityparameter)
  - [2.6 Combined Stability](#26-combined-stability)
    - [Stability.Combined](#stabilitycombined)
  - [2.7 Stability Decision Tree](#27-stability-decision-tree)
    - [Complete Decision Tree](#complete-decision-tree)
    - [Decision Tree for Generic Types](#decision-tree-for-generic-types)
    - [Expression Stability Decision Tree](#expression-stability-decision-tree)
    - [Key Decision Points Explained](#key-decision-points-explained)
- [Chapter 3: The Inference Algorithm](#chapter-3-the-inference-algorithm)
  - [3.1 Algorithm Overview](#31-algorithm-overview)
  - [3.2 Type-Level Analysis](#32-type-level-analysis)
    - [Phase 1: Fast Path Type Checks](#phase-1-fast-path-type-checks)
    - [Phase 2: Type Parameter Handling](#phase-2-type-parameter-handling)
    - [Phase 3: Nullable Type Unwrapping](#phase-3-nullable-type-unwrapping)
    - [Phase 4: Inline Class Handling](#phase-4-inline-class-handling)
  - [3.3 Class-Level Analysis](#33-class-level-analysis)
    - [Phase 5: Cycle Detection](#phase-5-cycle-detection)
    - [Phase 6: Annotation and Marker Checks](#phase-6-annotation-and-marker-checks)
    - [Phase 7: Known Constructs](#phase-7-known-constructs)
    - [Phase 8: External Configuration](#phase-8-external-configuration)
    - [Phase 9: External Module Handling](#phase-9-external-module-handling)
    - [Phase 10: Java Type Handling](#phase-10-java-type-handling)
    - [Phase 11: General Interface Handling](#phase-11-general-interface-handling)
    - [Phase 12: Field-by-Field Analysis](#phase-12-field-by-field-analysis)
  - [3.4 Expression-Level Analysis](#34-expression-level-analysis)
    - [Constant Expressions](#constant-expressions)
    - [Function Call Expressions](#function-call-expressions)
    - [Variable Reference Expressions](#variable-reference-expressions)
- [Chapter 4: Implementation Mechanisms](#chapter-4-implementation-mechanisms)
  - [4.1 Bitmask Encoding](#41-bitmask-encoding)
    - [Encoding Scheme](#encoding-scheme)
    - [Special Bit: Known Stable](#special-bit-known-stable)
    - [Bitmask Application](#bitmask-application)
  - [4.2 Runtime Field Generation](#42-runtime-field-generation)
    - [JVM Platform](#jvm-platform)
    - [Non-JVM Platforms](#non-jvm-platforms)
  - [4.3 Annotation Processing](#43-annotation-processing)
    - [@StabilityInferred Annotation](#stabilityinferred-annotation)
    - [Annotation Generation](#annotation-generation)
  - [4.4 Normalization Process](#44-normalization-process)
- [Chapter 5: Case Studies](#chapter-5-case-studies)
  - [5.1 Primitive and Built-in Types](#51-primitive-and-built-in-types)
    - [Integer Types](#integer-types)
    - [String Type](#string-type)
    - [Function Types](#function-types)
  - [5.2 User-Defined Classes](#52-user-defined-classes)
    - [Simple Data Class](#simple-data-class)
    - [Class with Mutable Property](#class-with-mutable-property)
    - [Class with Mixed Properties](#class-with-mixed-properties)
  - [5.3 Generic Types](#53-generic-types)
    - [Simple Generic Container](#simple-generic-container)
    - [Multiple Type Parameters](#multiple-type-parameters)
    - [Nested Generic Types](#nested-generic-types)
  - [5.4 External Dependencies](#54-external-dependencies)
    - [External Class with Annotation](#external-class-with-annotation)
    - [External Class Without Annotation](#external-class-without-annotation)
  - [5.5 Interface and Abstract Types](#55-interface-and-abstract-types)
    - [Interface Parameter](#interface-parameter)
    - [Abstract Class](#abstract-class)
    - [Interface with @Stable](#interface-with-stable)
  - [5.6 Inheritance Hierarchies](#56-inheritance-hierarchies)
    - [Stable Inheritance](#stable-inheritance)
    - [Unstable Inheritance](#unstable-inheritance)
- [Chapter 6: Configuration and Tooling](#chapter-6-configuration-and-tooling)
  - [6.1 Stability Annotations](#61-stability-annotations)
    - [@Stable Annotation](#stable-annotation)
    - [@Immutable Annotation](#immutable-annotation)
    - [Compiler-Level Differences: @Stable vs @Immutable](#compiler-level-differences-stable-vs-immutable)
    - [@StableMarker Meta-Annotation](#stablemarker-meta-annotation)
  - [6.2 Configuration Files](#62-configuration-files)
    - [File Format](#file-format)
    - [Pattern Syntax](#pattern-syntax)
    - [Gradle Configuration](#gradle-configuration)
  - [6.3 Compiler Reports](#63-compiler-reports)
    - [Enabling Reports](#enabling-reports)
    - [Generated Files](#generated-files)
  - [6.4 Common Issues and Solutions](#64-common-issues-and-solutions)
    - [Issue 1: Accidental var Usage](#issue-1-accidental-var-usage)
    - [Issue 2: Mutable Collections](#issue-2-mutable-collections)
    - [Issue 3: Interface Parameters](#issue-3-interface-parameters)
    - [Issue 4: External Library Types](#issue-4-external-library-types)
    - [Issue 5: Inheritance from Unstable Base](#issue-5-inheritance-from-unstable-base)
- [Chapter 7: Advanced Topics](#chapter-7-advanced-topics)
  - [7.1 Type Substitution](#71-type-substitution)
    - [Substitution Map Construction](#substitution-map-construction)
    - [Substitution Application](#substitution-application)
    - [Nested Substitution](#nested-substitution)
  - [7.2 Cycle Detection](#72-cycle-detection)
    - [Detection Mechanism](#detection-mechanism)
    - [Example: Self-Referential Type](#example-self-referential-type)
    - [Limitation](#limitation)
  - [7.3 Special Cases](#73-special-cases)
    - [Protobuf Types](#protobuf-types)
    - [Delegated Properties](#delegated-properties)
    - [Inline Classes with Markers](#inline-classes-with-markers)
- [Chapter 8: Compiler Analysis System](#chapter-8-compiler-analysis-system)
  - [8.1 Analysis Infrastructure](#81-analysis-infrastructure)
    - [WritableSlices: Data Flow Storage](#writableslices-data-flow-storage)
    - [BindingContext and BindingTrace](#bindingcontext-and-bindingtrace)
  - [8.2 Composable Call Validation](#82-composable-call-validation)
    - [Context Checking Algorithm](#context-checking-algorithm)
    - [Inline Lambda Restrictions](#inline-lambda-restrictions)
    - [Type Compatibility Checking](#type-compatibility-checking)
  - [8.3 Declaration Validation](#83-declaration-validation)
    - [Composable Function Rules](#composable-function-rules)
    - [Property Restrictions](#property-restrictions)
    - [Override Consistency](#override-consistency)
  - [8.4 Applier Target System](#84-applier-target-system)
    - [Scheme Structure](#scheme-structure)
    - [Target Inference Algorithm](#target-inference-algorithm)
    - [Cross-Target Validation](#cross-target-validation)
  - [8.5 Type Resolution and Inference](#85-type-resolution-and-inference)
    - [Automatic Composable Inference](#automatic-composable-inference)
    - [Lambda Type Adaptation](#lambda-type-adaptation)
  - [8.6 Analysis Pipeline](#86-analysis-pipeline)
    - [Compilation Phases](#compilation-phases)
    - [Data Flow Through Phases](#data-flow-through-phases)
  - [8.7 Practical Examples](#87-practical-examples)
    - [Example: Composable Context Validation](#example-composable-context-validation)
    - [Example: Inline Lambda Analysis](#example-inline-lambda-analysis)
    - [Example: Stability and Skipping](#example-stability-and-skipping)
- [Appendix: Source Code References](#appendix-source-code-references)
  - [Primary Source Files](#primary-source-files)
- [Conclusion](#conclusion)

Chapter 1: Foundations

1.1 Introduction

The Compose compiler implements a stability inference system to enable recomposition optimization. This system analyzes types at compile time to determine whether their values can be safely compared for equality during recomposition.

Source File: compiler-hosted/src/main/java/androidx/compose/compiler/plugins/kotlin/analysis/Stability.kt

The inference process involves analyzing type declarations, examining field properties, and tracking stability through generic type parameters.
The results inform the runtime whether to skip recomposition when parameter values remain unchanged.

### 1.2 Core Concepts

#### Stability Definition

A type is considered stable when it satisfies three conditions:

1. **Immutability**: The observable state of an instance does not change after construction
2. **Equality semantics**: Two instances with equal observable state are equal via `equals`
3. **Change notification**: If the type contains observable mutable state, all state changes trigger composition invalidation

These properties allow the runtime to make optimization decisions based on value comparison.

#### Recomposition Mechanics

When a composable function receives parameters, the runtime determines whether to execute the function body:

```kotlin
@Composable
fun UserProfile(user: User) {
    // Function body
}
```

The decision process:

1. Compare the new `user` value with the previous value
2. If equal and the type is stable, skip recomposition
3. If different or unstable, execute the function body

Without stability information, the runtime must conservatively recompose on every invocation, regardless of whether parameters changed.

### 1.3 The Role of Stability

#### Performance Impact

Stability inference affects recomposition in three ways:

**Smart Skipping**: Composable functions with stable parameters can be skipped when parameter values remain unchanged. This reduces the number of function executions during recomposition.

**Comparison Propagation**: The compiler passes stability information to child composable calls, enabling nested optimizations throughout the composition tree.

**Comparison Strategy**: The runtime selects between structural equality (`equals`) for stable types and referential equality (`===`) for unstable types, affecting change detection behavior.
Consider this example:

```kotlin
// Unstable parameter type - interface with unknown stability
@Composable
fun ExpensiveList(items: List<String>) {
    // List is an interface - has Unknown stability
    // Falls back to instance comparison
}

// Stable parameter type - using an immutable collection
@Composable
fun ExpensiveList(items: ImmutableList<String>) {
    // ImmutableList is in KnownStableConstructs
    // Can skip recomposition when unchanged
}

// Alternative: using a listOf result
@Composable
fun ExpensiveList(items: List<String>) {
    // If items comes from listOf, the expression is stable
    // But the List type itself is still an interface with Unknown stability
}
```

The key insight: `List` and `MutableList` are both interfaces with Unknown stability. To achieve stable parameters:

1. Use `ImmutableList` from `kotlinx.collections.immutable` (included in `KnownStableConstructs`)
2. Add `kotlin.collections.List` to your stability configuration file
3. Apply the `@Stable` annotation to your data classes containing `List`

## Chapter 2: Stability Type System

### 2.1 Type Hierarchy

The compiler represents stability through a sealed class hierarchy defined in `Stability.kt`:
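Paraphrased from `Stability.kt`, the hierarchy looks roughly like the sketch below; the case names match the real compiler source, but field names and doc comments here are approximations and may differ across compiler versions:

```kotlin
// Paraphrased sketch of the compiler's Stability hierarchy; not a verbatim
// copy of Stability.kt - field shapes are approximations.
sealed class Stability {
    // Stability is known at compile time: definitively stable or unstable
    class Certain(val stable: Boolean) : Stability()
    // Stability must be resolved at runtime (e.g., via a synthesized $stable field)
    class Runtime(val declaration: IrClass) : Stability()
    // Stability cannot be determined (e.g., an external interface type)
    class Unknown(val declaration: IrClass) : Stability()
    // Stability depends on a generic type parameter of the declaration
    class Parameter(val parameter: IrTypeParameter) : Stability()
    // Stability is the combination of several component stabilities
    class Combined(val elements: List<Stability>) : Stability()
}
```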
In Jetpack Compose, `Crossfade` provides a simple and declarative way to animate the transition between two different UI states. When the `targetState` passed to it changes, it smoothly fades out the old content while simultaneously fading in the new content. While its public API is minimal, a study of its internal source code reveals a sophisticated state machine that manages the lifecycle of both the incoming and outgoing composables, orchestrates their animations, and ensures a seamless visual transition. The entire mechanism is built upon the foundational `Transition` API, which is the core engine for state-based animations in Compose.

## The Entry Point: `Crossfade(targetState, ...)`

The most common `Crossfade` function that developers use is a simple wrapper. Its entire purpose is to create and manage a `Transition` object for you.
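From memory of the androidx source, the wrapper looks approximately like the following (simplified sketch, not a verbatim copy; parameter defaults and overload details may differ by version):

```kotlin
// Approximate shape of the Crossfade entry point: it only creates a
// Transition for the target state and delegates to the Transition overload.
@Composable
fun <T> Crossfade(
    targetState: T,
    modifier: Modifier = Modifier,
    animationSpec: FiniteAnimationSpec<Float> = tween(),
    label: String = "Crossfade",
    content: @Composable (T) -> Unit
) {
    val transition = updateTransition(targetState, label)
    transition.Crossfade(modifier, animationSpec, content = content)
}
```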
The `derivedStateOf` API in Jetpack Compose provides a convenient mechanism for creating memoized state that automatically updates when its underlying dependencies change. While essential for performance optimization in many scenarios, it is often described as "expensive." This study analyzes the internal implementation of `DerivedSnapshotState` to demystify this cost. We will show that the expense of `derivedStateOf` is not in the read operation, but in the complex machinery required to track dependencies, validate its cached value, and perform recalculations. By examining the `isValid`, `currentRecord`, and `Snapshot.observe` calls, this analysis will reveal the intricate dependency tracking, hashing, and transactional record-keeping that make `derivedStateOf` a precision tool to be used judiciously, not universally.

## 1. Introduction: The Promise and the Price

The public API is deceptively simple:

```kotlin
public fun <T> derivedStateOf(
    calculation: () -> T
): State<T> = DerivedSnapshotState(calculation, null)
```

It promises to run a `calculation` lambda, cache the result, and only re-run the calculation when one of the `State` objects read inside it changes. Let's see an example:
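A typical use looks like this (hypothetical names: `todos`, `Priority`, and `TodoItem` are illustrative, not from the Compose API):

```kotlin
// Hypothetical example. The filter runs only when `todos` changes, and
// downstream readers recompose only when the *result* of the filter differs
// from the cached value.
val todos = remember { mutableStateListOf<TodoItem>() }

val highPriority by remember {
    derivedStateOf { todos.filter { it.priority == Priority.HIGH } }
}
```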
The `SlotTable` is the in-memory data structure that represents the UI tree of a Jetpack Compose application. Instead of a traditional tree of objects, it's a highly optimized, flat structure designed for extremely fast UI updates. Let's explore its internals by examining its source code.

## 1. The Core Data Model: `groups` and `slots`

At the heart of the `SlotTable` are two parallel, flat arrays. This is the first and most critical concept to grasp.

```kotlin
internal class SlotTable : CompositionData, Iterable<CompositionGroup> {
    /**
     * An array to store group information... an array of an inline struct.
     */
    var groups = IntArray(0)
        private set

    /**
     * An array that stores the slots for a group.
     */
    var slots = Array<Any?>(0) { null }
        private set

    // ...
}
```

`groups: IntArray`: This is the blueprint of your UI. It stores the structure and metadata of your composables in a compact, primitive array. Think of it as a highly efficient, inlined list of instructions that describes the hierarchy, keys, and properties of each composable call. Because it's a flat `IntArray`, the CPU can scan it very rapidly without expensive memory jumps (pointer chasing).
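To make the flat-array idea concrete, here is a toy encoding of a tree in a single `IntArray`. This is deliberately simplified: each group stores just two ints (a key and a subtree size), whereas the real `SlotTable` packs several fields per group. The point is that children are found by pure index arithmetic, with no pointer traversal:

```kotlin
// Toy flat-array tree: each group is (key, sizeIncludingItself),
// children laid out immediately after their parent.
val groups = intArrayOf(
    /* root    */ 100, 3,  // spans itself plus its 2 child groups
    /* child 1 */ 200, 1,
    /* child 2 */ 300, 1,
)

// Collect the keys of the direct children of the group at `index`,
// skipping over each child's subtree using its stored size.
fun childKeysOf(index: Int): List<Int> {
    val keys = mutableListOf<Int>()
    var i = index + 2                       // first child starts right after parent
    val end = index + groups[index + 1] * 2 // end of the parent's subtree
    while (i < end) {
        keys += groups[i]                   // child key
        i += groups[i + 1] * 2              // jump past the child's subtree
    }
    return keys
}
```

Scanning a subtree is a single linear walk over contiguous ints, which is exactly the cache-friendly access pattern the text describes.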
The Kotlin language has long been praised for its pragmatic approach to solving common programming challenges, particularly with its robust null-safety system. However, the domain of recoverable, predictable errors has remained an area where developers rely on a patchwork of patterns rather than a first-class language feature. The "Rich Errors" proposal, also known as Error Union Types, is a significant design initiative aimed at addressing this gap. This study explores the motivation and rationale behind this proposal. We will analyze the shortcomings of existing error-handling patterns in Kotlin and examine how the proposed Rich Errors feature aims to unify them into a more expressive, type-safe, and ergonomic system.

## The State of Error Handling in Kotlin: A Spectrum of Patterns
Kotlin provides a very useful delegate: `lazy`. The `lazy` function creates a property whose value is computed only on its first access and then cached for all subsequent calls. While the public API is super simple, a deep dive into its internals reveals a well-architected system built on the `Lazy` interface, with multiple specialized implementations designed to handle different thread-safety requirements. The entire `lazy` mechanism is built around a simple but elegant interface. This interface defines the public contract for any object that represents a lazily initialized value.

```kotlin
public interface Lazy<out T> {
    /**
     * Gets the lazily initialized value of the current Lazy instance.
     * Once the value was initialized it must not change during the rest of
     * the lifetime of this Lazy instance.
     */
    public val value: T

    /**
     * Returns `true` if a value for this Lazy instance has been initialized,
     * and `false` otherwise.
     */
    public fun isInitialized(): Boolean
}
```
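To see how little machinery the contract actually requires, here is a simplified, non-thread-safe implementation in the spirit of the stdlib's `LazyThreadSafetyMode.NONE` variant. The class and sentinel names below are ours, not the stdlib's:

```kotlin
// Sentinel marking "no value computed yet" (the stdlib uses a similar object).
private object Uninitialized

// Simplified sketch of a non-thread-safe Lazy: compute once, cache forever.
class UnsafeLazy<out T>(initializer: () -> T) : Lazy<T> {
    private var initializer: (() -> T)? = initializer
    private var _value: Any? = Uninitialized

    override val value: T
        get() {
            if (_value === Uninitialized) {
                _value = initializer!!()  // compute on first access
                initializer = null        // release the lambda for GC
            }
            @Suppress("UNCHECKED_CAST")
            return _value as T
        }

    override fun isInitialized(): Boolean = _value !== Uninitialized
}
```

Note that after initialization the lambda reference is nulled out, so any objects it captured become eligible for garbage collection - a detail the real implementations share.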
In the Jetpack Compose ecosystem, state is typically consumed synchronously. A composable function reads a `State<T>` object during recomposition to get its current value. However, many modern Android architectures are built on asynchronous streams, using Kotlin's `Flow` to represent a sequence of values over time. The `snapshotFlow` function is the useful and highly efficient bridge that connects these two worlds, allowing developers to convert Compose's pull-based `State` into a push-based `Flow`. An analysis of its internal mechanism reveals a sophisticated, three-part system: it observes global state changes, tracks which specific state objects were read by the user's code, and uses a coroutine `Channel` to trigger re-evaluation, all while ensuring correctness and efficiency.

## The Core Components of the snapshotFlow Mechanism
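Before diving into the internals, a typical usage looks like this (hypothetical example: `analytics` and its `logFirstVisibleItem` method are illustrative names, not Compose APIs):

```kotlin
// Convert snapshot-state reads into a Flow: emits whenever the first
// visible index changes, without recomposing anything.
val listState = rememberLazyListState()
LaunchedEffect(listState) {
    snapshotFlow { listState.firstVisibleItemIndex }
        .distinctUntilChanged()
        .collect { index -> analytics.logFirstVisibleItem(index) }
}
```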
Google has recently launched the official `runtime-annotation` library, which serves a similar purpose to the community-built [compose-stable-marker](https://github.com/skydoves/compose-stable-marker) library.
Dependency Injection (DI) is a core software design pattern that promotes loose coupling and enhances the testability and scalability of applications. While powerful libraries like Hilt and Koin are the standard for production Android apps, building a simple DI container from scratch is a valuable exercise. It demystifies the "magic" and solidifies the core concept: providing dependencies to classes instead of having them create their own. In this study, we will design and implement a basic, lifecycle-aware DI container. Our goal is to create a tool that can:

1. Register dependencies like a `UserRepository` or `AnalyticsService`.
2. Provide instances of these dependencies on demand.
3. Manage the scope of these dependencies (e.g., as singletons).
4. Integrate cleanly with the Android `ViewModel` architecture.

## Step 1: Designing the Core DIContainer

The heart of our tool will be a container class responsible for holding and creating our dependencies. A simple way to store registered dependencies is in a `Map`, where the key is the class type (`KClass`) and the value is a factory lambda that knows how to create an instance of that class.
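The map-of-factories design above can be sketched as follows. The API names (`register`, `resolve`) are our own choices for this exercise, not from any existing library:

```kotlin
import kotlin.reflect.KClass

// Minimal DI container: KClass keys map to factory lambdas.
class DIContainer {
    private val factories = mutableMapOf<KClass<*>, () -> Any>()
    private val singletons = mutableMapOf<KClass<*>, Any>()

    // Register a factory; `singleton = true` caches the first created instance.
    fun <T : Any> register(type: KClass<T>, singleton: Boolean = false, factory: () -> T) {
        factories[type] =
            if (singleton) ({ singletons.getOrPut(type) { factory() } })
            else factory
    }

    // Look up and invoke the factory for the requested type.
    @Suppress("UNCHECKED_CAST")
    fun <T : Any> resolve(type: KClass<T>): T =
        factories[type]?.invoke() as? T
            ?: error("No dependency registered for ${type.simpleName}")
}

// Convenience overload so call sites can write container.resolve<UserRepository>().
inline fun <reified T : Any> DIContainer.resolve(): T = resolve(T::class)
```

The singleton path wraps the original factory in a memoizing lambda, so scoping is decided once at registration time rather than checked on every lookup.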
R8 is the default code shrinker, optimizer, and obfuscator for Android applications. It plays a crucial role in reducing APK size and improving runtime performance. While R8 is designed to be a drop-in replacement for ProGuard, its more advanced optimizations can introduce subtle behavioral changes. To manage this, R8 operates in two distinct modes: compatibility mode and full mode. This study will explore the differences between these two modes, with a particular focus on the aggressive optimizations of "full mode" and the necessary configuration adjustments developers must make, especially when working with reflection-heavy libraries like Gson and Retrofit.

## R8 Compatibility Mode: The Safe Default
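As a preview of the kind of adjustment full mode demands, here is an illustrative set of keep rules for Gson model classes; `com.example.model` is a placeholder package for your own models, and the exact rules your project needs depend on how it uses reflection:

```
# Keep generic signatures and annotations that Gson reads via reflection
# (full mode strips these more aggressively than compatibility mode).
-keepattributes Signature, *Annotation*

# Keep the fields and no-arg constructor of model classes that Gson
# instantiates and populates reflectively.
-keepclassmembers class com.example.model.** {
    <init>();
    <fields>;
}
```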
The Jetpack Compose ecosystem has grown exponentially in recent years, and it is now widely adopted for building production-level UIs in Android applications. We can now say that Jetpack Compose is the future of Android UI development. One of the biggest advantages of Compose is its declarative approach. It allows developers to describe what the UI should display, while the framework handles how the UI should update when the underlying state changes. This model shifts the focus from imperative UI logic to a more intuitive and reactive way of thinking. However, building reusable and scalable UI components requires more than just a grasp of declarative principles. It demands a thoughtful approach to API design. To guide developers, the Android team has published a comprehensive set of API guidelines. These best practices are not strict rules but are strongly recommended for creating components that are consistent, scalable, and intuitive for other developers to use.
The core idea behind all these [compatibilities](https://android.googlesource.com/platform/tools/metalava/+/HEAD/COMPATIBILITY.md) is answering the question: "If I update a library, what might break for people who are already using it?" Let's imagine you are the author of a popular library, AwesomeLibrary, and a developer, "Alex," is using it in their app.

## 1. Binary Compatibility: The "Plug-and-Play" Contract

What it is: This is the strictest and most important type of compatibility. It means that an app (a binary file, like an `.apk` or `.jar`) that was compiled with an OLD version of your library will still run correctly with a NEW version of your library, without needing to be recompiled. The user can just drop in the new library file, and the app won't crash on startup.
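A classic Kotlin example of a change that is source-compatible but binary-incompatible is adding a parameter with a default value (the function name below is illustrative):

```kotlin
// AwesomeLibrary v1 shipped:
//     fun greet(name: String): String = "Hello, $name"
// v2 "harmlessly" adds an optional parameter:
fun greet(name: String, punctuation: String = "!") = "Hello, $name$punctuation"

// Alex's app, compiled against v1, invokes the JVM method greet(String).
// In v2 that method no longer exists - the compiled signature is now
// greet(String, String) plus a synthetic greet$default bridge - so the old
// binary fails with NoSuchMethodError, even though recompiling Alex's
// source against v2 would succeed without any code change.
```

This is why library authors often keep the old overload around (or annotate with `@JvmOverloads` from the start) when evolving a public function.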
Sunday, August 24, 2025

In the modern Android development ecosystem, the synergy between Kotlin and Java is still quite important, since many long-standing projects are written in Java. A prime example of this great interoperability is Square's Retrofit library. Despite being written entirely in Java, Retrofit seamlessly supports Kotlin's `suspend` functions, allowing developers to write clean, idiomatic asynchronous code for network requests. This capability is not magic; it is a sophisticated illusion built upon a cooperative understanding between the Kotlin compiler and Retrofit's dynamic, reflection-based architecture. This study examines the internal mechanisms that make this interoperation possible, revealing how a Java library can interact with a language feature it has no native concept of.

## The Foundation: Continuation-Passing Style (CPS) Transformation
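The CPS transformation can be made visible from plain Kotlin: a `suspend` function can be started by hand with an explicitly supplied `Continuation`, which is essentially the move Retrofit makes from the Java side when it receives the hidden last argument. The function names below are illustrative:

```kotlin
import kotlin.coroutines.*

// An ordinary suspend function; after the CPS transformation it compiles to
// roughly: fetchGreeting(name: String, completion: Continuation<String>): Any?
suspend fun fetchGreeting(name: String): String = "Hello, $name"

// Invoke the suspend function by supplying a Continuation manually.
fun callWithManualContinuation(name: String): String? {
    var result: String? = null
    // A hand-built continuation, analogous to the one Retrofit passes in.
    val completion = object : Continuation<String> {
        override val context: CoroutineContext = EmptyCoroutineContext
        override fun resumeWith(res: Result<String>) {
            result = res.getOrThrow()
        }
    }
    val block: suspend () -> String = { fetchGreeting(name) }
    // startCoroutine performs the CPS-style call with our continuation.
    block.startCoroutine(completion)
    // fetchGreeting never actually suspends, so the continuation has
    // already been resumed by the time we get here.
    return result
}
```

When the function does suspend, the CPS call instead returns the `COROUTINE_SUSPENDED` marker and the continuation is resumed later - which is exactly the case Retrofit handles for a real network request.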
Like what you see?
Subscribe to Dove Letter to get weekly insights about Android and Kotlin development, plus access to exclusive content and discussions.