Android & Kotlin Technical Articles
Detailed articles on Android development, Jetpack Compose internals, Kotlin coroutines, and open source library design by skydoves, Google Developer Expert and maintainer of Android libraries with 40M+ annual downloads. Read practical guides on Retrofit, Compose Preview, BottomSheet UI, coroutine compilation, and more.
This is a collection of private, subscriber-first articles written by skydoves (Jaewoong) for the Dove Letter. These articles may be published elsewhere, such as Medium, in the future, but they are always revealed to Dove Letter members first.
If you use Jetpack Room, every @Dao interface turns into a full database implementation. If you use Hilt, every @Inject constructor gets wired into a dependency graph. If you use Moshi, every @JsonClass generates a JSON adapter. You add one annotation, hit Build, and new source files appear in your build/generated/ksp directory. The engine behind all of these is KSP (Kotlin Symbol Processing).

In this article, you'll start from a practical processor that you'd write yourself, then trace inward through the KSP pipeline: how Gradle discovers your processor, how the Resolver lets you query the entire codebase as a symbol tree, how the multi-round processing loop handles dependencies between generated files, and how KSP tracks which files need reprocessing on incremental builds.

## The fundamental problem: Why KAPT was slow

Before KSP, the only way to do annotation processing in Kotlin was KAPT (Kotlin Annotation Processing Tool). KAPT works by generating Java stub files from your Kotlin source code, then feeding those stubs to the standard javac annotation processing pipeline. This means the Kotlin compiler has to generate a complete set of Java declarations for every Kotlin class, interface, and function in your project, even if only a handful of them carry annotations. For a project with hundreds of Kotlin files, this stub generation can add 20 to 30 seconds to each build. The stubs are thrown away after processing, so the work is purely overhead.

KSP takes a different approach. Instead of generating Java stubs and running through javac, KSP reads the Kotlin compiler's own symbol tree directly. Your processor receives KSClassDeclaration, KSFunctionDeclaration, and KSPropertyDeclaration objects that represent the actual Kotlin program structure, including Kotlin-specific features like nullable types, extension functions, sealed classes, and default parameter values that get lost in Java stubs.
The result is that KSP processors run roughly twice as fast as equivalent KAPT processors, and they see a more accurate representation of the source code.
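The multi-round loop mentioned above can be modeled in a few lines. The sketch below is not KSP's actual implementation; it is a simplified illustration (all names are invented) of the core idea: a symbol whose dependency has not been generated yet is deferred to the next round, and the loop repeats until no progress can be made.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A simplified model of KSP's multi-round processing loop. A symbol encoded
// as "B->A" means "B needs a generated file for A before it can be processed."
class RoundLoop {
    /** Processes symbols round by round; returns how many rounds were needed. */
    static int process(List<String> symbols, Set<String> generated) {
        List<String> pending = new ArrayList<>(symbols);
        int rounds = 0;
        while (!pending.isEmpty()) {
            rounds++;
            List<String> deferred = new ArrayList<>();
            for (String symbol : pending) {
                String[] parts = symbol.split("->");
                String dependency = parts.length > 1 ? parts[1] : null;
                if (dependency == null || generated.contains(dependency)) {
                    generated.add(parts[0]); // "generate" a file for this symbol
                } else {
                    deferred.add(symbol); // dependency missing: retry next round
                }
            }
            if (deferred.size() == pending.size()) break; // no progress: stop
            pending = deferred;
        }
        return rounds;
    }
}
```

In this toy model, a symbol that depends on another symbol's generated output is picked up one round later, which is exactly why processors that generate files consumed by other annotated code need more than one round.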
Every Android developer has overridden onCreate, onResume, and onDestroy. You write your initialization logic, register listeners, and clean up resources, trusting that the framework will call these methods at the right time, in the right order. But what actually invokes these callbacks? The lifecycle does not run itself. Somewhere deep in the Android framework, a sophisticated transaction system serializes commands on the system server side, sends them across a Binder IPC boundary, and then a state machine in your app's process figures out the exact sequence of intermediate transitions needed to reach the target state. The simplicity of Activity.onResume belies an entire internal architecture devoted to making that call happen reliably.

In this article, you'll dive deep into the internal machinery that drives every Activity lifecycle callback. You'll trace the path from the system server's ClientTransaction through the TransactionExecutor's state machine, into ActivityThread's perform methods, through Instrumentation's dispatch layer, and all the way to the window management code that makes your Activity visible. Along the way, you'll see how the framework calculates intermediate lifecycle states, how it protects against invalid transitions, and why this layered architecture exists in the first place. This isn't a guide on using Activity lifecycle callbacks. It's an exploration of the internal transaction and state machine architecture that makes them possible.

## The fundamental problem: Coordinating lifecycle across process boundaries

When you think about lifecycle callbacks, you might imagine something simple. The system server decides an Activity should resume, and it calls onResume. If only it were that straightforward. Consider the naive mental model:

```java
// conceptual - what you might imagine happens
activityInstance.onResume();
```

The reality is far more complex.
The system server running in its own process cannot directly invoke methods on your Activity running in your app's process. The call must cross a Binder IPC boundary. But that is just the start of the problem. What if the Activity is currently in the ON_STOP state and needs to reach ON_RESUME? The framework cannot jump directly. It must first transition through ON_RESTART, then ON_START, and only then ON_RESUME. Each of these intermediate callbacks must fire in order, because your code might depend on onStart having run before onResume. Furthermore, the system server may need to batch multiple commands (deliver a result, then resume) into a single transaction. It must handle edge cases like an Activity being destroyed while a resume command is in flight. And once the Activity is resumed, the framework must add its DecorView to the WindowManager so it actually becomes visible.

This is the fundamental problem: lifecycle callbacks are not simple method calls. They are the output of a distributed state machine that spans two processes, handles arbitrary state jumps, manages window visibility, and must always produce a deterministic callback order.

## ActivityClientRecord: Tracking lifecycle state on the client side

The framework needs a way to track each Activity's current lifecycle state within the app process. This is the job of ActivityClientRecord, a static inner class of ActivityThread that serves as the client-side bookkeeping record for each Activity instance. If you examine the ActivityClientRecord:

```java
// android.app.ActivityThread.ActivityClientRecord
public static final class ActivityClientRecord {
    public IBinder token;
    Activity activity;
    Window window;

    @LifecycleState
    private int mLifecycleState = PRE_ON_CREATE;

    boolean paused;
    boolean stopped;
    // ...
}
```

Notice the structure:

1. token is the Binder token that uniquely identifies this Activity across the system server and app process boundary. Every lifecycle command references an Activity by this token.
2. mLifecycleState tracks the current lifecycle state as an integer constant. It starts at PRE_ON_CREATE, meaning the Activity has not yet been created.
3. paused and stopped are legacy boolean flags maintained for backward compatibility with older APIs, but mLifecycleState is the authoritative state tracker.

The setState method keeps everything in sync:

```java
// android.app.ActivityThread.ActivityClientRecord
public void setState(@LifecycleState int newLifecycleState) {
    mLifecycleState = newLifecycleState;
    switch (mLifecycleState) {
        case ON_CREATE:
            paused = true;
            stopped = true;
            break;
        case ON_RESUME:
            paused = false;
            stopped = false;
            break;
        case ON_PAUSE:
            paused = true;
            stopped = false;
            break;
        case ON_STOP:
            paused = true;
            stopped = true;
            break;
    }
}
```

This is important: every time the Activity advances through a lifecycle state, setState is called immediately after the callback completes. The TransactionExecutor (which you'll see next) reads getLifecycleState to determine where the Activity currently is before calculating the path to the next target state. If this bookkeeping were ever out of sync, the state machine would produce incorrect transition sequences.

## ClientTransaction: Bundling lifecycle commands for IPC

The system server cannot call methods on your Activity directly. Instead, it constructs a ClientTransaction, a Parcelable container that bundles one or more lifecycle commands for delivery to the app process.
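The path calculation described above, from the current state to a target state through the required intermediate callbacks, can be sketched as a small function. This is an illustrative model, not the framework's real TransactionExecutor code; the constant values mirror the ordering of the framework's lifecycle constants, and only the transitions the article describes are handled.

```java
import java.util.ArrayList;
import java.util.List;

// A simplified model of how the executor computes the intermediate lifecycle
// states between the current state and the target state.
class LifecyclePath {
    static final int ON_CREATE = 1, ON_START = 2, ON_RESUME = 3,
                     ON_PAUSE = 4, ON_STOP = 5, ON_DESTROY = 6, ON_RESTART = 7;

    /** Returns every state to enter, in order, excluding the current one. */
    static List<Integer> compute(int current, int target) {
        List<Integer> path = new ArrayList<>();
        if (current == target) return path;
        if (current == ON_PAUSE && target == ON_RESUME) {
            path.add(ON_RESUME); // paused -> resumed directly
        } else if (current == ON_STOP && target == ON_RESUME) {
            // The jump described above: stopped -> restart -> start -> resume
            path.add(ON_RESTART);
            path.add(ON_START);
            path.add(ON_RESUME);
        } else if (target > current) {
            // Moving forward through adjacent states in order
            for (int state = current + 1; state <= target; state++) path.add(state);
        }
        return path;
    }
}
```

Calling compute(ON_STOP, ON_RESUME) yields the restart, start, resume sequence, which is exactly why your onStart always runs before onResume even when the system server requests only the final state.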
Android's WorkManager has become the recommended solution for persistent, deferrable background work. Unlike transient background operations that live and die with your app process, WorkManager guarantees that enqueued work eventually executes, even if the user force-stops the app, the device reboots, or constraints aren't met yet. While the API appears simple on the surface, the internal machinery reveals sophisticated design decisions around work persistence, dual-scheduler coordination, constraint tracking, process resilience, and state management that span a Room database, multiple scheduler backends, and a carefully orchestrated execution pipeline.

In this article, you'll dive deep into how Jetpack WorkManager works internally, exploring how the singleton is initialized and bootstrapped through AndroidX Startup, how WorkSpec entities persist work metadata in a Room database, how the dual-scheduler system coordinates between GreedyScheduler and SystemJobScheduler, how Processor and WorkerWrapper orchestrate the actual execution of work, how ConstraintTracker monitors system state for constraint satisfaction, how ForceStopRunnable detects app force stops and reschedules work, and how work chaining creates dependency graphs through the Dependency table.

## The fundamental problem: Reliable background execution

Background execution on Android is fundamentally unreliable. The system aggressively kills processes to reclaim memory, Doze mode restricts background activity, and app standby buckets throttle work for rarely-used apps. A naive approach to background work:

```kotlin
class SyncActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        Thread {
            // Sync data with server
            api.syncAllData()
        }.start()
    }
}
```

This fails in multiple ways. The thread dies when the process is killed. There's no retry mechanism if the network fails. The work doesn't survive device reboots.
There's no way to specify constraints like "only on Wi-Fi" or "only when charging." You might try using a Service:

```kotlin
class SyncService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        Thread { api.syncAllData() }.start()
        return START_REDELIVER_INTENT
    }
}
```

This is better: START_REDELIVER_INTENT ensures the Intent is redelivered if the process is killed. But you still have no constraint support, no work chaining, no persistence across reboots, and no observability of work status. You'd need to build all of that yourself. WorkManager solves this by providing a complete infrastructure for persistent, constraint-aware, observable, chainable background work with guaranteed execution.

## Initialization: The bootstrap sequence

WorkManager initializes itself automatically before your Application.onCreate runs. The entry point is WorkManagerInitializer, which implements AndroidX Startup's Initializer interface:

```java
public final class WorkManagerInitializer implements Initializer<WorkManager> {
    @Override
    public WorkManager create(Context context) {
        Logger.get().debug(TAG, "Initializing WorkManager with default configuration.");
        WorkManager.initialize(context, new Configuration.Builder().build());
        return WorkManager.getInstance(context);
    }

    @Override
    public List<Class<? extends Initializer<?>>> dependencies() {
        return Collections.emptyList();
    }
}
```

AndroidX Startup uses a ContentProvider to trigger initialization before Application.onCreate. This is critical because it ensures WorkManager is ready before any application code runs. The dependencies method returns an empty list, meaning WorkManager has no initialization dependencies on other Startup initializers.
## The singleton with dual-lock pattern

WorkManager.initialize delegates to WorkManagerImpl.initialize, which uses a synchronized dual-instance pattern:

```java
public static void initialize(Context context, Configuration configuration) {
    synchronized (sLock) {
        if (sDelegatedInstance != null && sDefaultInstance != null) {
            throw new IllegalStateException("WorkManager is already initialized.");
        }
        if (sDelegatedInstance == null) {
            context = context.getApplicationContext();
            if (sDefaultInstance == null) {
                sDefaultInstance = createWorkManager(context, configuration);
            }
            sDelegatedInstance = sDefaultInstance;
        }
    }
}
```

Two static fields serve different purposes. sDefaultInstance holds the real singleton. sDelegatedInstance enables testing by allowing test code to inject a mock via setDelegate. The sLock object provides thread-safe access. The explicit check for double initialization throws an IllegalStateException with a helpful message guiding developers to disable WorkManagerInitializer in the manifest if they want custom initialization.

## On-demand initialization via Configuration.Provider

When getInstance(Context) is called and no instance exists, WorkManager falls back to on-demand initialization:

```java
public static WorkManagerImpl getInstance(Context context) {
    synchronized (sLock) {
        WorkManagerImpl instance = getInstance();
        if (instance == null) {
            Context appContext = context.getApplicationContext();
            if (appContext instanceof Configuration.Provider) {
                initialize(appContext,
                        ((Configuration.Provider) appContext).getWorkManagerConfiguration());
                instance = getInstance(appContext);
            } else {
                throw new IllegalStateException("WorkManager is not initialized properly.");
            }
        }
        return instance;
    }
}
```

If your Application class implements Configuration.Provider, WorkManager lazily initializes with that configuration. This pattern allows developers to disable automatic initialization and provide custom configuration without calling initialize explicitly in Application.onCreate.
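The dual-instance pattern is easy to demonstrate in isolation. The sketch below is a stripped-down model of the idea, not WorkManager's real code (class and field names are illustrative): one slot holds the default singleton, a second slot is what callers actually read, and tests can overwrite only the second slot.

```java
// A minimal model of the dual-instance singleton: a default instance created
// on first initialize(), plus a delegated slot that tests can override.
class ManagerSingleton {
    private static final Object sLock = new Object();
    private static ManagerSingleton sDefaultInstance;
    private static ManagerSingleton sDelegatedInstance;

    final String config;

    private ManagerSingleton(String config) {
        this.config = config;
    }

    static void initialize(String config) {
        synchronized (sLock) {
            if (sDelegatedInstance != null && sDefaultInstance != null) {
                throw new IllegalStateException("Already initialized.");
            }
            if (sDelegatedInstance == null) {
                if (sDefaultInstance == null) {
                    sDefaultInstance = new ManagerSingleton(config);
                }
                sDelegatedInstance = sDefaultInstance;
            }
        }
    }

    /** Tests swap in a replacement here without touching the default slot. */
    static void setDelegate(ManagerSingleton delegate) {
        synchronized (sLock) {
            sDelegatedInstance = delegate;
        }
    }

    static ManagerSingleton getInstance() {
        synchronized (sLock) {
            if (sDelegatedInstance == null) {
                throw new IllegalStateException("Not initialized.");
            }
            return sDelegatedInstance;
        }
    }
}
```

The payoff of the two slots is that production code always goes through getInstance(), so a test double injected via setDelegate is picked up everywhere without any production code change.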
## The createWorkManager factory

The actual WorkManagerImpl construction wires together all the internal components:

```kotlin
fun WorkManagerImpl(
    context: Context,
    configuration: Configuration,
    workTaskExecutor: TaskExecutor = WorkManagerTaskExecutor(configuration.taskExecutor),
    workDatabase: WorkDatabase = WorkDatabase.create(
        context.applicationContext,
        workTaskExecutor.serialTaskExecutor,
        configuration.clock,
        context.resources.getBoolean(R.bool.workmanager_test_configuration),
    ),
    trackers: Trackers = Trackers(context.applicationContext, workTaskExecutor),
    processor: Processor = Processor(
        context.applicationContext, configuration, workTaskExecutor, workDatabase,
    ),
    schedulersCreator: SchedulersCreator = ::createSchedulers,
): WorkManagerImpl {
    val schedulers = schedulersCreator(
        context, configuration, workTaskExecutor, workDatabase, trackers, processor,
    )
    return WorkManagerImpl(
        context.applicationContext,
        configuration,
        workTaskExecutor,
        workDatabase,
        schedulers,
        processor,
        trackers,
    )
}
```
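The article's intro mentions that work chaining creates dependency graphs through the Dependency table. The sketch below models that idea in plain Java; the class, table, and method names are illustrative, not WorkManager's actual Room schema. Each row maps a work id to a prerequisite id, and a work item becomes schedulable only once every prerequisite has succeeded.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A simplified model of a work-chaining dependency table.
class DependencyTable {
    // workId -> ids of the work items it depends on
    private final Map<String, Set<String>> prerequisites = new HashMap<>();
    private final Set<String> succeeded = new HashSet<>();

    /** Records one dependency row: workId depends on prerequisiteId. */
    void insert(String workId, String prerequisiteId) {
        prerequisites.computeIfAbsent(workId, k -> new HashSet<>()).add(prerequisiteId);
    }

    /** Called when a work item finishes successfully. */
    void markSucceeded(String workId) {
        succeeded.add(workId);
    }

    /** True when the work item has no unsatisfied prerequisites. */
    boolean hasCompletedPrerequisites(String workId) {
        return succeeded.containsAll(prerequisites.getOrDefault(workId, Set.of()));
    }
}
```

A chain like download, then compress, then upload becomes two rows in this table, and each stage unlocks only when the previous one is marked as succeeded.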
Modern Android applications commonly adopt multi-layered architectures such as MVVM or MVI, where data flows through distinct layers: a data source, a repository, and a ViewModel or presentation layer. Each layer has a specific responsibility, and network responses must propagate through all of them before reaching the UI. While this separation produces clean, testable code, it introduces a real challenge: how do you handle API responses, including errors and exceptions, as they cross each layer boundary?

Most developers solve this by wrapping API calls in try-catch blocks and returning fallback values. This works for small projects, but as the number of API calls grows, the approach creates ambiguous results, scattered boilerplate, and lost context that downstream layers need. You end up with ViewModels that cannot tell whether an empty list means "no data" or "network failure," repositories that swallow important error details, and data sources that repeat the same error handling pattern dozens of times.

In this article, you'll explore the problems that emerge when handling Retrofit API calls across layered architectures, why conventional approaches break down at scale, and how [Sandwich](https://github.com/skydoves/sandwich) provides a type-safe, composable solution that simplifies response handling from the network layer all the way to the UI. You'll also walk through the full set of Sandwich APIs, from basic response handling to advanced patterns like sequential composition, response merging, global error mapping, and Flow integration, each with real-world use cases that show when and why you would reach for them.

## Retrofit API calls with coroutines

Most Android projects use [Retrofit](https://github.com/square/retrofit) with [Kotlin coroutines](https://github.com/Kotlin/kotlinx.coroutines) for network communication.
A typical service interface looks like this:

```kotlin
interface PosterService {
  @GET("DisneyPosters.json")
  suspend fun fetchPosterList(): List<Poster>
}
```

The service returns a List<Poster> directly. Retrofit deserializes the JSON response body and gives you the data. This works perfectly when the request succeeds, but it gives you no structured way to handle failures. Retrofit throws an HttpException for non-2xx status codes and various IO exceptions for network problems. The responsibility of catching these falls entirely on the caller. When you consume this service in a data source, the conventional approach looks like this:

```kotlin
class PosterRemoteDataSource(
  private val posterService: PosterService,
) {
  suspend fun fetchPosterList(): List<Poster> {
    return try {
      posterService.fetchPosterList()
    } catch (e: HttpException) {
      emptyList()
    } catch (e: Throwable) {
      emptyList()
    }
  }
}
```

The data source catches every possible exception and returns emptyList() as a fallback. From the caller's perspective, this function always succeeds: it always returns a List<Poster>. If we create a flow from the code above, it will look like so:

![](https://velog.velcdn.com/images/skydoves/post/cc3deaea-7244-4091-88d3-744d297112cc/image.png)

But that apparent simplicity hides a serious problem. This compiles and runs. But once you trace the data flow through a full architecture, where the data source feeds a repository that feeds a ViewModel that drives the UI, the problems become clear.

## The problems with conventional response handling

The code above has three major issues that compound as your project grows and the number of API endpoints increases.

### Ambiguous results

The data source returns emptyList() for both HTTP errors and network exceptions. Downstream layers (the repository, the ViewModel) receive a List<Poster> with no way to distinguish between three completely different scenarios:

1. The request succeeded and the server returned an empty list.
2. The request failed with a 401 Unauthorized error.
3. The device had no network connectivity.

All three produce the same result: an empty list. The repository cannot decide whether to show an error message, redirect to a login screen, or display "no data" content. The ViewModel might show an empty state when it should be showing a "please log in" dialog. The response has lost its context, and once that context is gone, no amount of downstream logic can recover it.

You might try to work around this by returning null for failures instead of emptyList(). But that introduces its own ambiguity: does null mean "error" or "no data"? You end up needing a wrapper type anyway, which leads to the next problem. Returning null just adds one more implicit convention you have to keep in your head.

### Boilerplate error handling

Every API call requires its own try-catch block. If you have 20 service methods, you write 20 nearly identical try-catch blocks. Each one catches HttpException, catches Throwable, and returns some fallback value. This repetition creates maintenance overhead and increases the surface area for mistakes, like forgetting to handle a specific exception type in one of the 20 call sites. Consider a data source with multiple methods:

```kotlin
class UserRemoteDataSource(private val userService: UserService) {
  suspend fun fetchUser(id: String): User? {
    return try {
      userService.fetchUser(id)
    } catch (e: HttpException) {
      null
    } catch (e: Throwable) {
      null
    }
  }

  suspend fun fetchFollowers(id: String): List<User> {
    return try {
      userService.fetchFollowers(id)
    } catch (e: HttpException) {
      emptyList()
    } catch (e: Throwable) {
      emptyList()
    }
  }

  suspend fun updateProfile(profile: Profile): Boolean {
    return try {
      userService.updateProfile(profile)
      true
    } catch (e: HttpException) {
      false
    } catch (e: Throwable) {
      false
    }
  }
}
```

The pattern is identical every time: try the call, catch HttpException, catch Throwable, return a fallback. The only thing that changes is the fallback value (null, emptyList(), false). This is textbook boilerplate that should not exist in every data source class.
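Both problems point at the same fix: return a result type that keeps the failure context instead of a fallback value. The sketch below illustrates that idea in plain Java; the names (ApiResult, HttpException) are invented for the demo and are not Sandwich's actual ApiResponse API. One shared wrapper function replaces the per-call try-catch in every data source, and success, HTTP failure, and exception stay distinguishable downstream.

```java
// A hypothetical HTTP error carrying its status code, standing in for
// Retrofit's HttpException in this self-contained demo.
class HttpException extends RuntimeException {
    final int code;
    HttpException(int code) { super("HTTP " + code); this.code = code; }
}

// A context-preserving result type: three cases instead of one fallback value.
abstract class ApiResult<T> {
    static final class Success<T> extends ApiResult<T> {
        final T data;
        Success(T data) { this.data = data; }
    }
    static final class Failure<T> extends ApiResult<T> {
        final int statusCode;
        Failure(int statusCode) { this.statusCode = statusCode; }
    }
    static final class Error<T> extends ApiResult<T> {
        final Throwable cause;
        Error(Throwable cause) { this.cause = cause; }
    }

    interface Call<T> { T execute() throws Exception; }

    // One shared wrapper replaces every per-call try-catch block.
    static <T> ApiResult<T> of(Call<T> call) {
        try {
            return new Success<>(call.execute());
        } catch (HttpException e) {
            return new Failure<>(e.code);
        } catch (Exception e) {
            return new Error<>(e);
        }
    }
}
```

With this shape, an empty successful list is a Success holding an empty list, a 401 is a Failure carrying 401, and a connectivity problem is an Error carrying the exception, so the repository and ViewModel can react to each case differently.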
### One-dimensional response processing
Kotlin's internal visibility modifier provides a useful mechanism for hiding implementation details within a module while exposing a clean public API. But as codebases grow and libraries modularize, a tension emerges: the logical boundaries of your API don't always align with the compilation boundaries of your modules. Test modules need access to production internals. Library families like kotlinx.coroutines want to share implementation details across artifacts without exposing them to consumers. The current workaround, "friend modules," is an undocumented compiler feature that lacks language-level design.

KEEP-0451 proposes a solution: the shared internal visibility modifier. This new visibility level sits between internal and public, allowing modules to explicitly declare which internals they share and with whom. In this article, you'll explore the motivation behind this proposal, the design decisions that shaped it, how transitive sharing simplifies complex dependency graphs, and the technical challenges of implementing cross-module visibility on the JVM.

## The fundamental problem: Module boundaries vs. logical boundaries

Consider a typical library structure:

```
kotlinx-coroutines/
├── kotlinx-coroutines-core/
├── kotlinx-coroutines-test/
├── kotlinx-coroutines-reactive/
└── kotlinx-coroutines-android/
```

These artifacts form a cohesive library family. Internally, they share implementation details: dispatcher internals, continuation machinery, and testing utilities. But from Kotlin's perspective, each artifact is a separate module. The internal modifier in kotlinx-coroutines-core is invisible to kotlinx-coroutines-test, even though both are maintained by the same team and shipped together. The current workarounds are unsatisfying:

Option 1: Make everything public. This works, but pollutes the API surface. Consumers see implementation details they shouldn't use, and maintainers lose the ability to change internals without breaking compatibility.
Option 2: Use the undocumented friend modules feature. The Kotlin compiler supports a -Xfriend-paths flag that grants one module access to another's internals. But this is a compiler implementation detail, not a language feature. It has no syntax, no IDE support, and no guarantees of stability.

Option 3: Merge modules. You could combine related modules into a single compilation unit, then split them for distribution. But this complicates build configurations and doesn't scale to complex dependency graphs.

KEEP-0451 addresses this gap by elevating friend modules to a first-class language feature with explicit syntax and clear semantics.

## The shared internal modifier

The proposal introduces a new visibility modifier: shared internal. Declarations marked with this modifier are visible to designated dependent modules, but invisible to the general public.
[Landscapist](https://github.com/skydoves/landscapist) Core is a standalone image loading engine built from scratch for Kotlin Multiplatform. Unlike Landscapist's wrappers around Coil, Glide, and Fresco, [Landscapist Core](https://skydoves.github.io/landscapist/landscapist/landscapist-core/) handles fetching, caching, decoding, and transformations internally. This eliminates platform dependencies and provides fine-grained control over every aspect of image loading.

In this article, you'll explore the internal architecture of Landscapist Core, examining how the Landscapist class orchestrates the loading pipeline, how TwoTierMemoryCache provides a second chance for evicted items through weak references, how DecodeScheduler prioritizes visible images over background loads, how progressive decoding improves perceived performance, and how memory pressure handling keeps the app responsive under constrained conditions.

## The Landscapist orchestrator

The Landscapist class is the main entry point for image loading. It coordinates fetching, caching, decoding, and transformation into a unified pipeline:

```kotlin
public class Landscapist private constructor(
  public val config: LandscapistConfig,
  private val memoryCache: MemoryCache,
  private val diskCache: DiskCache?,
  private val fetcher: ImageFetcher,
  private val decoder: ImageDecoder,
  private val dispatcher: CoroutineDispatcher,
  public val requestManager: RequestManager = RequestManager(),
  public val memoryPressureManager: MemoryPressureManager = MemoryPressureManager(),
)
```

Each component has a single responsibility. The memoryCache stores decoded images in memory. The diskCache persists raw image data to storage. The fetcher retrieves images from network or local sources. The decoder converts raw bytes into displayable images. The requestManager tracks active requests for cancellation. The memoryPressureManager responds to system memory warnings.
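The two-tier memory cache mentioned above can be sketched in isolation. This is an illustrative model in plain Java, not Landscapist Core's real TwoTierMemoryCache: a bounded strong LRU tier is backed by a weak-reference tier, so an evicted entry gets a second chance until the garbage collector actually reclaims it.

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// A sketch of a two-tier cache: strong LRU entries are demoted to weak
// references on eviction instead of being discarded outright.
class TwoTierCache<K, V> {
    private final int maxStrongEntries;
    private final Map<K, WeakReference<V>> weak = new HashMap<>();
    private final LinkedHashMap<K, V> strong;

    TwoTierCache(int maxStrongEntries) {
        this.maxStrongEntries = maxStrongEntries;
        // Access-ordered map so the eldest entry is the least recently used.
        this.strong = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > TwoTierCache.this.maxStrongEntries) {
                    // Demote instead of discarding: the value stays reachable
                    // through a WeakReference until the GC collects it.
                    weak.put(eldest.getKey(), new WeakReference<>(eldest.getValue()));
                    return true;
                }
                return false;
            }
        };
    }

    void put(K key, V value) {
        strong.put(key, value);
    }

    V get(K key) {
        V hit = strong.get(key);
        if (hit != null) return hit;
        // Second chance: revive from the weak tier if the GC hasn't run yet.
        WeakReference<V> ref = weak.remove(key);
        V revived = ref != null ? ref.get() : null;
        if (revived != null) strong.put(key, revived); // promote back
        return revived;
    }
}
```

The design choice here is that eviction from the strong tier is cheap to undo: as long as memory pressure stays low, a "miss" on the strong tier can still be served without re-decoding the image.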
## The loading pipeline

The load function implements a three-stage lookup with progressive enhancement:

```kotlin
public fun load(request: ImageRequest): Flow<ImageResult> = flow {
  emit(ImageResult.Loading)

  val cacheKey = CacheKey.create(
    model = request.model,
    transformationKeys = request.transformations.map { it.key },
    width = request.targetWidth,
    height = request.targetHeight,
  )

  // 1. Check memory cache (instant)
  if (request.memoryCachePolicy.readEnabled) {
    memoryCache[cacheKey]?.let { cached ->
      emit(ImageResult.Success(data = cached.data, dataSource = DataSource.MEMORY))
      return@flow
    }
  }

  // 2. Check disk cache
  if (request.diskCachePolicy.readEnabled && diskCache != null) {
    diskCache.get(cacheKey)?.use { snapshot ->
      val bytes = snapshot.data.buffer().readByteArray()
      // Decode and emit...
    }
  }

  // 3. Fetch from network
  val fetchResult = fetcher.fetch(request)
  // Process result...
}.flowOn(dispatcher)
```

The pipeline follows a predictable order: memory cache first (instant), disk cache second (fast I/O), network last (slow). Each stage can be enabled or disabled through CachePolicy, allowing fine-grained control for special cases like forcing a refresh or skipping caching entirely.

## Cache key generation

The CacheKey uniquely identifies a cached image based on all factors that affect its appearance:

```kotlin
val cacheKey = CacheKey.create(
  model = request.model,
  transformationKeys = request.transformations.map { it.key },
  width = request.targetWidth,
  height = request.targetHeight,
)
```
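Why every one of those inputs must participate in the key is easy to show with a minimal sketch. The class below is illustrative, not Landscapist Core's actual CacheKey: it folds the model, the ordered transformation keys, and the target dimensions into one stable value, so any change to the visible output produces a different key.

```java
import java.util.List;
import java.util.Objects;

// A sketch of cache-key construction: every factor that changes the decoded
// output (source model, transformations, target size) is part of the key,
// otherwise two visually different results would collide in the cache.
final class CacheKey {
    final String value;

    private CacheKey(String value) {
        this.value = value;
    }

    static CacheKey create(String model, List<String> transformationKeys,
                           int width, int height) {
        // A stable, order-sensitive string: same inputs always yield the same key.
        String key = model
                + "|" + String.join(",", transformationKeys)
                + "|" + width + "x" + height;
        return new CacheKey(key);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof CacheKey && ((CacheKey) o).value.equals(value);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(value);
    }
}
```

The same URL requested at a different target size, or with a different transformation chain, deliberately misses the cache, because the decoded bitmap would not be interchangeable.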
Android's ViewModel has become an essential component of modern Android development, providing a lifecycle-aware container for UI-related data that survives configuration changes. While the API appears simple on the surface, the internal machinery reveals sophisticated design decisions around lifecycle management, multiplatform abstraction, resource cleanup, and thread-safe caching. Understanding how ViewModel works under the hood helps you make better architectural decisions and avoid subtle bugs.

In this article, you'll dive deep into how Jetpack ViewModel works internally, exploring how the ViewModelStore retains instances across configuration changes, how ViewModelProvider orchestrates creation and caching, how the factory pattern enables flexible instantiation, how CreationExtras enables stateless factories, how resource cleanup is managed through the Closeable pattern, and how viewModelScope integrates coroutines with ViewModel lifecycle. This isn't a guide on using ViewModel; it's an exploration of the internal machinery that makes lifecycle-aware state management possible.

## The fundamental problem: Surviving configuration changes

Configuration changes present a fundamental challenge for Android development. When a user rotates their device, changes language settings, or triggers any configuration change, the system destroys and recreates the Activity. Any data stored in the Activity is lost:

```kotlin
class MyActivity : ComponentActivity() {
    private var userData: User? = null // Lost on rotation!

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Must reload data after every rotation
        loadUserData()
    }
}
```

The naive approach is to use onSaveInstanceState:

```kotlin
override fun onSaveInstanceState(outState: Bundle) {
    super.onSaveInstanceState(outState)
    outState.putParcelable("user", userData)
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    userData = savedInstanceState?.getParcelable("user")
}
```

This works for small, serializable data. But what about large datasets, network connections, or objects that can't be serialized? What about ongoing operations like network requests? The Bundle approach fails for these cases, both because of size limitations and because serialization/deserialization is expensive. ViewModel solves this by providing a lifecycle-aware container that survives configuration changes through a retained object pattern, not serialization.

## The ViewModelStore: The retention mechanism

At the heart of ViewModel's configuration-change survival is ViewModelStore, a simple key-value store that holds ViewModel instances:

```kotlin
public open class ViewModelStore {
    private val map = mutableMapOf<String, ViewModel>()

    @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
    public fun put(key: String, viewModel: ViewModel) {
        val oldViewModel = map.put(key, viewModel)
        oldViewModel?.clear()
    }

    @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
    public operator fun get(key: String): ViewModel? {
        return map[key]
    }

    @RestrictTo(RestrictTo.Scope.LIBRARY_GROUP)
    public fun keys(): Set<String> {
        return HashSet(map.keys)
    }

    public fun clear() {
        for (vm in map.values) {
            vm.clear()
        }
        map.clear()
    }
}
```

The implementation is remarkably straightforward, just a MutableMap<String, ViewModel>. The magic isn't in the store itself; it's in how the store is retained.

### Key replacement behavior

Notice the put method's behavior:

```kotlin
public fun put(key: String, viewModel: ViewModel) {
    val oldViewModel = map.put(key, viewModel)
    oldViewModel?.clear()
}
```

If a ViewModel already exists with the same key, the old ViewModel is immediately cleared. This ensures proper cleanup when a ViewModel is replaced.
You might wonder when this happens: it occurs when you request a ViewModel with the same key but a different type:

```kotlin
// First request creates TestViewModel1 with key "mykey"
val vm1: TestViewModel1 = viewModelProvider("mykey", TestViewModel1::class)

// Second request with same key but different type
val vm2: TestViewModel2 = viewModelProvider("mykey", TestViewModel2::class)

// vm1.onCleared() has been called; vm1 is no longer valid
```

This behavior is validated in the test suite:

```kotlin
@Test
fun twoViewModelsWithSameKey() {
    val key = "the_key"
    val vm1 = viewModelProvider(key, TestViewModel1::class)
    assertThat(vm1.cleared).isFalse()

    val vw2 = viewModelProvider(key, TestViewModel2::class)
    assertThat(vw2).isNotNull()
    assertThat(vm1.cleared).isTrue()
}
```

## The ViewModelStoreOwner contract

The ViewModelStoreOwner interface defines who owns the store:

```kotlin
public interface ViewModelStoreOwner {
    public val viewModelStore: ViewModelStore
}
```

This simple interface is implemented by ComponentActivity, Fragment, and NavBackStackEntry. The owner's responsibility is twofold:
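The clear-on-replace behavior is small enough to run end to end. The sketch below mirrors ViewModelStore's logic in plain Java for illustration; the StoreDemo names are invented and this is not the AndroidX class itself.

```java
import java.util.HashMap;
import java.util.Map;

// A runnable model of the store: putting a new instance under an existing
// key clears the old instance immediately.
class StoreDemo {
    static class ViewModel {
        boolean cleared = false;
        void clear() { cleared = true; }
    }

    static class Store {
        private final Map<String, ViewModel> map = new HashMap<>();

        void put(String key, ViewModel viewModel) {
            ViewModel old = map.put(key, viewModel);
            if (old != null) old.clear(); // the replaced instance is torn down
        }

        ViewModel get(String key) {
            return map.get(key);
        }

        void clear() {
            for (ViewModel vm : map.values()) vm.clear();
            map.clear();
        }
    }
}
```

Running the replacement scenario against this model shows the same ordering the test suite above asserts: the first instance is cleared the moment the second one is stored under the same key.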
Making REST API calls has been a fundamental requirement in Android development, yet the complexity of managing HTTP requests, serialization, error handling, and thread management has long been a persistent challenge. Retrofit emerged as Square's solution to this problem, transforming a verbose, error-prone process into an elegant, annotation-driven API. But the real power of Retrofit isn't just its simplified interface; it's the sophisticated machinery working behind the scenes to turn interface methods into HTTP calls.

In this article, you'll dive deep into the internal mechanisms of Retrofit, exploring how Java's dynamic proxies create implementation classes at runtime, how annotations are parsed and cached using sophisticated locking strategies, how the framework transforms method calls into OkHttp requests through a layered architecture, and the subtle optimizations that make it production-ready. This isn't a beginner's guide to using Retrofit; it's a deep dive into how Retrofit actually works under the hood.

## Understanding the core abstraction: What makes Retrofit special

At its heart, Retrofit is a type-safe HTTP client that uses dynamic proxies and annotation processing to convert interface method declarations into HTTP requests. What distinguishes Retrofit from manual HTTP clients is its adherence to two fundamental principles: declarative API definition and pluggable architecture. The declarative API definition means you don't manually construct HTTP requests for every endpoint. Instead, Retrofit provides annotations that describe the request:

```kotlin
interface GitHubApi {
  @GET("users/{user}/repos")
  fun listRepos(@Path("user") user: String): Call<List<Repo>>
}

// Implementation generated automatically:
val api = retrofit.create<GitHubApi>()
val call = api.listRepos("octocat")
```

The pluggable architecture means Retrofit separates concerns through factory patterns.
Every aspect of request/response handling is customizable:

- CallAdapter: transforms `Call<T>` into other types (RxJava `Observable`, Kotlin suspend functions, Java 8 `CompletableFuture`)
- Converter: serializes/deserializes request/response bodies (Gson, Jackson, Moshi, Protobuf)
- Call.Factory: creates HTTP calls (typically OkHttp, but swappable)

These properties aren't just conveniences; they're architectural constraints that enable compile-time type safety and runtime flexibility. The dynamic proxy mechanism allows Retrofit to parse annotations once per method and cache the parsed result, making subsequent calls extremely fast. The factory chains allow you to add Gson JSON parsing or RxJava integration without modifying any core Retrofit code.

## The dynamic proxy pattern: How Retrofit creates implementations

When you call retrofit.create(MyApi.class), you're not getting a manually written implementation. You're getting a JDK dynamic proxy that intercepts every method call at runtime. This is the foundation of Retrofit's "magic."

### Proxy creation in Retrofit.create

Let's examine the actual proxy creation code in the Retrofit class:

```java
@SuppressWarnings("unchecked")
public <T> T create(final Class<T> service) {
  validateServiceInterface(service);
  return (T)
      Proxy.newProxyInstance(
          service.getClassLoader(),
          new Class<?>[] {service},
          new InvocationHandler() {
            private final Object[] emptyArgs = new Object[0];

            @Override
            public @Nullable Object invoke(Object proxy, Method method, @Nullable Object[] args)
                throws Throwable {
              // If the method is a method from Object then defer to normal invocation.
              if (method.getDeclaringClass() == Object.class) {
                return method.invoke(this, args);
              }
              args = args != null ? args : emptyArgs;
              Reflection reflection = Platform.reflection;
              return reflection.isDefaultMethod(method)
                  ? reflection.invokeDefaultMethod(method, service, proxy, args)
                  : loadServiceMethod(service, method).invoke(proxy, args);
            }
          });
}
```

This code uses Java's Proxy.newProxyInstance to generate a class at runtime that implements your interface. Every method call goes through the InvocationHandler.invoke method, which has three dispatch paths:

1. Object methods: methods like equals, hashCode, and toString are delegated to the handler itself:

```java
if (method.getDeclaringClass() == Object.class) {
  return method.invoke(this, args);
}
```

This ensures that basic Java object operations work correctly on the proxy instance.

2. Default methods (Java 8+): interface default methods are invoked using platform-specific reflection:

```java
return reflection.invokeDefaultMethod(method, service, proxy, args);
```

On Java 8+, Retrofit uses MethodHandle to invoke default methods. This allows you to add helper methods to your API interfaces without Retrofit trying to parse them as HTTP endpoints.

3. Retrofit methods: everything else is treated as an HTTP endpoint:

```java
return loadServiceMethod(service, method).invoke(proxy, args);
```

This is where the real work happens. The loadServiceMethod call parses annotations and caches the result, then invoke executes the HTTP request.

### Interface validation

Before creating the proxy, Retrofit validates the interface with some strict rules in the Retrofit class:

```java
private void validateServiceInterface(Class<?> service) {
  if (!service.isInterface()) {
    throw new IllegalArgumentException("API declarations must be interfaces.");
  }

  Deque<Class<?>> check = new ArrayDeque<>(1);
  check.add(service);
  while (!check.isEmpty()) {
    Class<?> candidate = check.removeFirst();
    if (candidate.getTypeParameters().length != 0) {
      StringBuilder message =
          new StringBuilder("Type parameters are unsupported on ").append(candidate.getName());
      if (candidate != service) {
        message.append(" which is an interface of ").append(service.getName());
      }
      throw new IllegalArgumentException(message.toString());
    }
    Collections.addAll(check, candidate.getInterfaces());
  }
  // ...
}
```

This validation enforces two critical constraints:

1. Must be an interface: classes can't be proxied by JDK proxies (they'd need CGLIB or ByteBuddy)
2. No generic type parameters: `interface Api<T>` is forbidden because generics are erased at runtime

The breadth-first search through the interface hierarchy ensures that even inherited interfaces don't violate these rules.

### The performance benefit of proxies

Why use dynamic proxies instead of annotation processing to generate implementation classes at compile time? The answer is flexibility. Proxies allow Retrofit to:

- Parse annotations lazily, only when methods are first called
- Support different return types through the CallAdapter mechanism
- Avoid compile-time code generation complexity

The trade-off is a slight runtime overhead for the first method call (annotation parsing), but this cost is amortized through aggressive caching.

## The service method cache: lazy initialization
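The proxy-plus-cache idea above can be demonstrated end to end in a few lines of plain Kotlin. This is a toy sketch, not Retrofit code: `@Get`, `MiniApi`, `templateCache`, and `create` are hypothetical names, and "executing" a request is reduced to substituting an argument into the path template.

```kotlin
import java.lang.reflect.Method
import java.lang.reflect.Proxy
import java.util.concurrent.ConcurrentHashMap

// Hypothetical annotation standing in for Retrofit's @GET.
@Target(AnnotationTarget.FUNCTION)
annotation class Get(val path: String)

interface MiniApi {
    @Get("users/{user}/repos")
    fun listRepos(user: String): String
}

// Parse-once, call-many cache keyed by Method, like Retrofit's serviceMethodCache.
val templateCache = ConcurrentHashMap<Method, String>()

@Suppress("UNCHECKED_CAST")
fun <T> create(service: Class<T>): T {
    require(service.isInterface) { "API declarations must be interfaces." }
    return Proxy.newProxyInstance(
        service.classLoader,
        arrayOf(service),
    ) { _, method, args ->
        // Annotation parsing happens only on the first call per method;
        // every later call for the same Method hits the cache.
        val template = templateCache.computeIfAbsent(method) {
            it.getAnnotation(Get::class.java)?.path
                ?: error("No @Get annotation on ${it.name}")
        }
        // "Execute" the request by filling the path template.
        template.replace("{user}", args!![0] as String)
    } as T
}
```

Calling `create(MiniApi::class.java).listRepos("octocat")` routes through the InvocationHandler, parses `@Get` once, and returns `"users/octocat/repos"`; a second call for the same method reuses the cached template, which is the same lazy-initialization shape the next section describes.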
In Jetpack Compose, Crossfade provides a simple and declarative way to animate the transition between two different UI states. When the targetState passed to it changes, it smoothly fades out the old content while simultaneously fading in the new content. While its public API is minimal, a study of its internal source code reveals a sophisticated state machine that manages the lifecycle of both the incoming and outgoing composables, orchestrates their animations, and ensures a seamless visual transition. The entire mechanism is built upon the foundational Transition API, which is the core engine for state-based animations in Compose.

## The Entry Point: Crossfade(targetState, ...)

The most common Crossfade function that developers use is a simple wrapper. Its entire purpose is to create and manage a Transition object for you.
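The core idea, one animated fraction driving the outgoing content from opaque to transparent while the incoming content does the reverse, can be modeled without Compose at all. This is a simplified sketch in plain Kotlin; `CrossfadeFrame`, `crossfadeAt`, and `canDisposeOutgoing` are illustrative names, not Compose APIs.

```kotlin
// Simplified model of a crossfade: one fraction in [0, 1] drives both alphas.
data class CrossfadeFrame(val outgoingAlpha: Float, val incomingAlpha: Float)

fun crossfadeAt(fraction: Float): CrossfadeFrame {
    val f = fraction.coerceIn(0f, 1f)
    // Outgoing content fades 1 -> 0 while incoming content fades 0 -> 1.
    return CrossfadeFrame(outgoingAlpha = 1f - f, incomingAlpha = f)
}

// Once fully transparent, the outgoing content can be removed, mirroring how
// Crossfade disposes old content when its fade-out transition finishes.
fun canDisposeOutgoing(frame: CrossfadeFrame): Boolean = frame.outgoingAlpha == 0f
```

In the real implementation the fraction comes from a Transition-driven animation and the "dispose" step is the point where the old composable leaves the composition.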
The derivedStateOf API in Jetpack Compose provides a convenient mechanism for creating memoized state that automatically updates when its underlying dependencies change. While essential for performance optimization in many scenarios, it is often described as "expensive." This study analyzes the internal implementation of DerivedSnapshotState to demystify this cost. We will show that the expense of derivedStateOf is not in the read operation, but in the complex machinery required to track dependencies, validate its cached value, and perform recalculations. By examining the isValid, currentRecord, and Snapshot.observe calls, this analysis will reveal the intricate dependency tracking, hashing, and transactional record-keeping that make derivedStateOf a precision tool to be used judiciously, not universally.

## 1. Introduction: The Promise and the Price

The public API is deceptively simple:

```kotlin
public fun <T> derivedStateOf(calculation: () -> T): State<T> = DerivedSnapshotState(calculation, null)
```

It promises to run a calculation lambda, cache the result, and only re-run the calculation when one of the State objects read inside it changes. Let's see an example:
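The cache-and-revalidate contract can be illustrated with a stripped-down model. This is an assumption-laden sketch, not Compose code: it replaces the snapshot system's record validation with explicit version numbers on each dependency, and `TrackedState`/`DerivedValue` are hypothetical names.

```kotlin
// A state cell that bumps a version on every write (stand-in for snapshot records).
class TrackedState<T>(initial: T) {
    var version = 0
        private set
    var value: T = initial
        set(v) { field = v; version++ }
}

// A memoized derived value: recalculates only when a dependency's version moved,
// mirroring DerivedSnapshotState's isValid check against its dependency records.
class DerivedValue<T>(
    private val dependencies: List<TrackedState<*>>,
    private val calculation: () -> T,
) {
    private var cachedVersions: List<Int>? = null
    private var cached: T? = null
    var recalculations = 0
        private set

    val value: T
        get() {
            val versions = dependencies.map { it.version }
            if (versions != cachedVersions) { // cache invalid: a dependency changed
                cached = calculation()
                cachedVersions = versions
                recalculations++
            }
            @Suppress("UNCHECKED_CAST")
            return cached as T
        }
}
```

Reads are cheap once the cache is valid; the cost lives in the validation bookkeeping, which is exactly the trade-off the analysis above examines. (The real implementation additionally tracks *which* states were read inside the lambda automatically, rather than taking an explicit dependency list.)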
The SlotTable is the in-memory data structure that represents the UI tree of a Jetpack Compose application. Instead of a traditional tree of objects, it's a highly optimized, flat structure designed for extremely fast UI updates. Let's explore its internals by examining its source code.

## 1. The Core Data Model: groups and slots

At the heart of the SlotTable are two parallel, flat arrays. This is the first and most critical concept to grasp.

```kotlin
internal class SlotTable : CompositionData, Iterable<CompositionGroup> {
    /** An array to store group information... an array of an inline struct. */
    var groups = IntArray(0)
        private set

    /** An array that stores the slots for a group. */
    var slots = Array<Any?>(0) { null }
        private set

    // ...
}
```

groups: IntArray: this is the blueprint of your UI. It stores the structure and metadata of your composables in a compact, primitive array. Think of it as a highly efficient, inlined list of instructions that describes the hierarchy, keys, and properties of each composable call. Because it's a flat IntArray, the CPU can scan it very rapidly without expensive memory jumps (pointer chasing).
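To make the parallel-array layout concrete, here is a toy version. It is a deliberate simplification: each group is two ints wide (a key and a slot count), whereas the real SlotTable packs several fields per group and uses fixed-size arrays with gap buffers; `MiniSlotTable` and its members are hypothetical names.

```kotlin
// Toy model of the SlotTable's two parallel, flat arrays.
class MiniSlotTable {
    val groups = mutableListOf<Int>() // [key, slotCount, key, slotCount, ...]
    val slots = mutableListOf<Any?>() // data for all groups, laid out in order

    fun addGroup(key: Int, vararg data: Any?) {
        groups += key
        groups += data.size
        slots += data
    }

    // Finding a group's data is pure arithmetic over flat arrays:
    // sum the slot counts of the preceding groups, then slice.
    fun slotsForGroup(index: Int): List<Any?> {
        var slotStart = 0
        for (g in 0 until index) slotStart += groups[g * 2 + 1]
        val count = groups[index * 2 + 1]
        return slots.subList(slotStart, slotStart + count)
    }
}
```

Notice that traversal never follows an object pointer: both lookups are sequential scans over contiguous memory, which is the "no pointer chasing" property described above.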
Kotlin provides a very useful delegate: lazy. The lazy function creates a property whose value is computed only on its first access and then cached for all subsequent calls. While the public API is super simple, a deep dive into its internals reveals a well-architected system built on the Lazy interface, with multiple specialized implementations designed to handle different thread-safety requirements.

The entire lazy mechanism is built around a simple but creative interface. This interface defines the public contract for any object that represents a lazily initialized value:

```kotlin
public interface Lazy<out T> {
    /**
     * Gets the lazily initialized value of the current Lazy instance.
     * Once the value was initialized it must not change during
     * the rest of lifetime of this Lazy instance.
     */
    public val value: T
    // ...
}
```
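The thread-safe variant can be sketched against that contract. This is a simplified model of the shape the stdlib's synchronized implementation takes (an uninitialized sentinel, a volatile field, double-checked locking, and dropping the initializer after use); `SimpleSynchronizedLazy` and `Uninitialized` are illustrative names, not stdlib declarations.

```kotlin
// Sentinel marking "no value yet"; null can't be used since T may be nullable.
object Uninitialized

@Suppress("UNCHECKED_CAST")
class SimpleSynchronizedLazy<out T>(initializer: () -> T) : Lazy<T> {
    private var initializer: (() -> T)? = initializer
    @Volatile private var _value: Any? = Uninitialized

    override val value: T
        get() {
            val v1 = _value
            if (v1 !== Uninitialized) return v1 as T // fast path: no lock taken
            return synchronized(this) {
                val v2 = _value // re-check: another thread may have initialized
                if (v2 !== Uninitialized) v2 as T
                else {
                    val computed = initializer!!()
                    _value = computed
                    initializer = null // let the lambda be garbage collected
                    computed
                }
            }
        }

    override fun isInitialized(): Boolean = _value !== Uninitialized
}
```

The double-checked pattern means the lock is only contended during the one-time initialization; every later read is a single volatile load, and the initializer runs exactly once even under concurrent access.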
In the Jetpack Compose ecosystem, state is typically consumed synchronously: a composable function reads a State<T> object during recomposition to get its current value. However, many modern Android architectures are built on asynchronous streams, using Kotlin's Flow to represent a sequence of values over time. The snapshotFlow function is the highly efficient bridge that connects these two worlds, allowing developers to convert Compose's pull-based State into a push-based Flow. An analysis of its internal mechanism reveals a sophisticated, three-part system: it observes global state changes, tracks which specific state objects were read by the user's code, and uses a coroutine Channel to trigger re-evaluation, all while ensuring correctness and efficiency.

## The Core Components of the snapshotFlow Mechanism
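Before examining the real components, the pull-to-push idea itself can be sketched with plain callbacks. This is a simplified model under heavy assumptions: it replaces the snapshot system with an explicitly observed state cell and replaces Flow emission with a callback, but it keeps snapshotFlow's two observable behaviors, an eager initial emission and distinct-until-changed re-emission. `ObservableState` and `stateToStream` are hypothetical names.

```kotlin
// A state cell that notifies observers on every write
// (stand-in for the global snapshot-apply notifications).
class ObservableState<T>(initial: T) {
    private val observers = mutableListOf<() -> Unit>()
    var value: T = initial
        set(v) { field = v; observers.toList().forEach { it() } }
    fun observe(onChange: () -> Unit) { observers += onChange }
}

// Bridge a pull-based read into a push-based stream of emissions.
fun <T> stateToStream(
    state: ObservableState<*>, // the dependency written to (read tracking is manual here)
    emit: (T) -> Unit,
    read: () -> T,
) {
    var last = read()
    emit(last) // like snapshotFlow, the initial value is emitted eagerly
    state.observe {
        val next = read()
        if (next != last) { // distinct-until-changed semantics
            last = next
            emit(next)
        }
    }
}
```

The real snapshotFlow discovers the `state` dependency automatically by recording which snapshot objects the `read` lambda touches, and uses a conflated Channel plus a coroutine instead of a direct callback, but the emission pattern is the same.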
Like what you see?
Subscribe to Dove Letter to get weekly insights about Android and Kotlin development, plus access to exclusive content and discussions.