Wave Field

Multiple sine wave layers stacked for parallax, each driven by its own phase, frequency, and amplitude ramps.

Compose APIs: withFrameNanos, Canvas, Path, sin

Try tweaking: BASE_AMPLITUDE_DP, BASE_PHASE_SPEED, FREQ_RAMP, AMPL_FALLOFF, LAYER_COUNT

A single sine wave drawn across a canvas is a calm, almost static thing. It rolls, but it does not feel like a place. The moment you stack several sines on top of each other, each with a slightly different frequency, amplitude, and phase, the canvas turns into a moving landscape. Front waves race past, back waves drift, and the eye reads the difference as depth. That is the entire trick behind the Wave Field example, and it is built from nothing more than sin, a Path, and a per frame time accumulator.

In this article, you'll explore how the Wave Field animation samples a sine into a Path, how per layer ramps for amplitude, frequency, and phase speed produce parallax, how withFrameNanos drives time, and how each tweakable constant changes the visual feel. The implementation lives in a single composable that uses Canvas, Path, and a small waveY helper.

How the example is structured

The animation lives inside a BoxWithConstraints so the wave geometry can read the actual pixel width and height of the drawing area. Inside, a Canvas first paints a vertical gradient background, then loops from the back layer to the front, drawing one Path per layer. Each layer is a sampled sine across the canvas width, closed off at the bottom corners so it fills like water rather than drawing a line.

There are four moving parts to keep in mind:

  • A time accumulator that grows in seconds, driven by withFrameNanos.
  • A waveY helper that returns the y position of the wave at a given x.
  • A per layer parameter block that derives frequency, amplitude, phase speed, and color from the layer index.
  • A Path builder that walks across the canvas in fixed pixel steps and feeds each x into waveY.

The example draws back to front using for (layerIndex in (LAYER_COUNT - 1) downTo 0). Painting deeper layers first lets the front layers cover them, which is what creates the stacked silhouette the eye reads as depth.
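That iteration order is easy to verify in isolation. Here is a minimal, standalone sketch of the back-to-front index walk, with LAYER_COUNT fixed at the example's default of 4:

```kotlin
const val LAYER_COUNT = 4

// The same index progression the draw loop uses: deepest layer first, front layer last.
fun drawOrder(): List<Int> = ((LAYER_COUNT - 1) downTo 0).toList()

fun main() {
    // Layer 3 (the most distant) is painted first, layer 0 (the front) last.
    println(drawOrder()) // [3, 2, 1, 0]
}
```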

Sampling a sine into a Path

A wave that looks smooth at any screen size is not actually a continuous curve. It is a polyline with enough points to fool the eye. The sampling loop walks across the canvas in fixed pixel steps and emits a lineTo for each sample.

val path = Path()
var x = 0f
val firstY = waveY(x, baseline, frequency, amplitude, phase, SECONDARY_RATIO, SECONDARY_AMP)
path.moveTo(0f, firstY)

var i = 1
while (i < sampleCount) {
  x = (i * safeStep).coerceAtMost(widthPx)
  val y = waveY(x, baseline, frequency, amplitude, phase, SECONDARY_RATIO, SECONDARY_AMP)
  path.lineTo(x, y)
  if (x >= widthPx) break
  i++
}

The step size comes from SAMPLE_STEP_PX = 5f, and sampleCount is computed once as ((widthPx / safeStep).toInt()).coerceAtLeast(2) + 1. Sampling every 5 pixels is the trade-off that matters here. A per pixel sample would call sin thousands of times per layer per frame and hand the GPU a polyline with thousands of segments. A 5 pixel step cuts that work by a factor of five with no visible loss, because the sine values between two close samples are already nearly collinear at typical amplitudes.
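As a self-contained sketch of that computation (the clamp inside safeStep is an assumption here; the article only names the constant and its 5f value):

```kotlin
const val SAMPLE_STEP_PX = 5f

// Number of samples needed to cover widthPx at a fixed step, floored at 2 so even
// a tiny canvas still produces a drawable polyline.
fun sampleCount(widthPx: Float, step: Float = SAMPLE_STEP_PX): Int {
    val safeStep = step.coerceAtLeast(1f) // assumed guard against a zero or negative step
    return ((widthPx / safeStep).toInt()).coerceAtLeast(2) + 1
}

fun main() {
    println(sampleCount(1080f)) // 217 samples for a 1080 px wide canvas
}
```

At 4 layers that is under 900 sin calls per frame, which is why the fixed step scales to many layers comfortably.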

After the curve samples, the path is closed into a filled shape:

path.lineTo(widthPx, heightPx)
path.lineTo(0f, heightPx)
path.close()

Those two extra lineTo calls drop the path to the bottom right corner and then to the bottom left corner before closing back to the start. Without them, drawPath with a solid color would draw a line, not a body of water.

Per layer parameters: amplitude, frequency, phase

Each layer is the same wave function with different inputs. The example computes those inputs from the layer index so adding or removing layers stays consistent.

val layerT = layerIndex.toFloat() / denom // normalized layer index in 0..1, used elsewhere (e.g. to derive the layer color)
val frequency = BASE_FREQUENCY * (1f + layerIndex * FREQ_RAMP)
val amplitude = baseAmplitudePx * (1f - layerIndex * AMPL_FALLOFF).coerceAtLeast(0.05f)
val phaseSpeed = BASE_PHASE_SPEED * (1f + layerIndex * PHASE_RAMP)
val phaseOffset = layerIndex * (twoPi / LAYER_COUNT.coerceAtLeast(1)) * 0.5f
val phase = time * phaseSpeed + phaseOffset

FREQ_RAMP = 0.18f means each layer beyond the front gets 18 percent more frequency than the previous one. Higher frequency means shorter wavelengths, so back layers look like tighter, choppier waves while the front layer rolls in long swells. AMPL_FALLOFF = 0.08f shrinks amplitude by 8 percent per layer, with a coerceAtLeast(0.05f) floor so the back layers do not collapse into a flat line. BASE_PHASE_SPEED = 2.2f is the time multiplier for the front layer in radians per second, and PHASE_RAMP = 0.35f makes back layers advance their phase faster.

The phaseOffset is what prevents every layer from peaking at the same x at startup. By offsetting each layer by a fraction of a full cycle, the waves visually interleave even on the very first frame.
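Extracted into a standalone function, the offset computation from the snippet above looks like this:

```kotlin
import kotlin.math.PI

const val LAYER_COUNT = 4

// Shifts each layer's starting phase by half a cycle fraction so crests
// interleave on the very first frame instead of all peaking at the same x.
fun phaseOffset(layerIndex: Int): Float {
    val twoPi = (2.0 * PI).toFloat()
    return layerIndex * (twoPi / LAYER_COUNT.coerceAtLeast(1)) * 0.5f
}

fun main() {
    println(phaseOffset(0)) // 0.0 — the front layer starts unshifted
    println(phaseOffset(1)) // ≈ 0.785 (π/4) — the next layer starts an eighth of a cycle later
}
```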

The waveY helper itself blends two sines, a primary and a faster, smaller secondary harmonic, so the wave does not look like a textbook sine:

val primary = sin(x * frequency + phase) * amplitude
val secondary = sin(x * frequency * secondaryRatio + phase * 1.7f) * amplitude * secondaryAmp
return baseline + primary + secondary

SECONDARY_RATIO = 2.7f makes the harmonic a non-integer multiple of the primary frequency, which keeps the combined shape from settling into a short repeating pattern that the eye would notice.
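Assembled into a self-contained function, the helper might look like the following sketch. The body is taken verbatim from the article; the default value for secondaryAmp is an assumption, since the article names SECONDARY_AMP but not its value:

```kotlin
import kotlin.math.sin

const val SECONDARY_RATIO = 2.7f
const val SECONDARY_AMP = 0.25f // assumed value; the source constant is not shown in the article

// Returns the wave's y position at x: a primary sine plus a faster, smaller harmonic.
fun waveY(
    x: Float,
    baseline: Float,
    frequency: Float,
    amplitude: Float,
    phase: Float,
    secondaryRatio: Float = SECONDARY_RATIO,
    secondaryAmp: Float = SECONDARY_AMP,
): Float {
    val primary = sin(x * frequency + phase) * amplitude
    val secondary = sin(x * frequency * secondaryRatio + phase * 1.7f) * amplitude * secondaryAmp
    return baseline + primary + secondary
}

fun main() {
    // At x = 0 with zero phase, both sine terms vanish and the wave sits on the baseline.
    println(waveY(0f, 50f, 0.02f, 20f, 0f)) // 50.0
}
```

The phase * 1.7f factor makes the harmonic drift at a different rate from the primary, so the two never lock into a fixed combined silhouette.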

Animating phase with withFrameNanos

The driver is short and sits in a LaunchedEffect(Unit) that runs for the lifetime of the composable.

var time by remember { mutableStateOf(0f) }
LaunchedEffect(Unit) {
  var lastNanos = 0L
  while (true) {
    withFrameNanos { nowNanos ->
      if (lastNanos != 0L) {
        val dtSec = (nowNanos - lastNanos) / 1_000_000_000f
        time += dtSec
      }
      lastNanos = nowNanos
    }
  }
}

withFrameNanos suspends until the next Choreographer frame and hands back the frame timestamp in nanoseconds. The first frame skips accumulation because there is no previous timestamp to subtract from. After that, every frame computes dtSec and adds it to time. Driving from real elapsed time, rather than incrementing a counter by a fixed amount per frame, keeps the animation looking the same on a 60 Hz device and a 120 Hz device.

time is a Compose state, so writing to it invalidates the surrounding composable and triggers a redraw. The Canvas then reads time indirectly through phase = time * phaseSpeed + phaseOffset and rebuilds every layer from scratch. The per layer multiplier on phaseSpeed is what creates parallax: the front layer advances at 2.2 radians per second, while the back layer at LAYER_COUNT = 4 advances at 2.2 * (1 + 3 * 0.35) = 2.2 * 2.05, roughly twice as fast. Because the back layer also has higher frequency and lower amplitude, that faster phase reads as small, busy ripples in the distance, while the slow front swells dominate the foreground.
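The speed arithmetic above can be checked with a small standalone sketch of the ramp from the per layer parameter block:

```kotlin
const val BASE_PHASE_SPEED = 2.2f
const val PHASE_RAMP = 0.35f

// Radians per second at which a given layer's phase advances.
fun phaseSpeed(layerIndex: Int): Float = BASE_PHASE_SPEED * (1f + layerIndex * PHASE_RAMP)

fun main() {
    println(phaseSpeed(0)) // 2.2 — the front layer
    println(phaseSpeed(3)) // ≈ 4.51 — the back layer, roughly twice as fast
}
```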

Tweaking amplitude, phase speed, frequency, layer count

BASE_AMPLITUDE_DP = 18f controls how tall the front layer's swells get. The source comments mark 8 as calm and 80 as a storm. Because amplitude is multiplied by (1f - layerIndex * AMPL_FALLOFF), raising the base value scales every layer proportionally rather than just inflating the front.

BASE_PHASE_SPEED = 2.2f is the radians per second for the front layer. Drop it toward 0.2 and the field looks like a slow drift you would see at dawn. Push it past 4.0 and the waves start to feel rushed, almost vibrating. Phase speed does not change wave shape, only how fast the existing shape slides across the screen.

FREQ_RAMP = 0.18f is the spacing between layer wavelengths. At zero, every layer would have the same wavelength and the parallax would collapse into a vague shimmer. At larger values, back layers turn into very short ripples, which reads as a horizon of small chop behind larger foreground swells.

AMPL_FALLOFF = 0.08f is the depth cue. Smaller values keep back waves nearly as tall as front waves, which flattens the perceived depth. Larger values bury the back layers into a thin band along the baseline.
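The floor is worth seeing in numbers. This standalone sketch of the amplitude scale from the parameter block shows how deep layers bottom out rather than inverting:

```kotlin
const val AMPL_FALLOFF = 0.08f

// Per-layer amplitude multiplier, floored at 0.05 so distant layers
// shrink into a thin band instead of flattening or going negative.
fun amplitudeScale(layerIndex: Int): Float =
    (1f - layerIndex * AMPL_FALLOFF).coerceAtLeast(0.05f)

fun main() {
    println(amplitudeScale(0))  // 1.0 — the front layer keeps the full base amplitude
    println(amplitudeScale(13)) // 0.05 — at high layer counts the deepest layers sit on the floor
}
```

Without the coerceAtLeast, layer 13 would have a negative scale of -0.04, flipping the wave upside down instead of fading it out.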

LAYER_COUNT = 4 is the most visually obvious knob. With 1, you get a single moving line. With 4, you get the layered ocean shown in the example. Pushing it to 14, as the source comment suggests, builds a deep field with so much overlap that individual layers stop being distinguishable and the result reads as a single textured surface.

Conclusion

In this article, you've explored how the Wave Field animation produces depth from a stack of plain sine waves. You saw how the canvas is sampled at fixed pixel steps into a Path, how each layer derives its own frequency, amplitude, and phase speed from a single layer index, and how withFrameNanos accumulates real time so the animation runs the same on any refresh rate.

Understanding these internals helps you decide where to spend cycles. The fixed sample step is the reason this animation can carry many layers without dropping frames, and the per layer ramps are why a handful of constants generate something that looks hand tuned. When the composition feels off, the fix is almost always one of the ramps, not the wave function itself.

Whether you are building a hero background for a landing screen, a loading state that needs to feel alive, or a custom data visualization that wants gentle motion, the same pattern applies: sample a function into a Path, animate one or two parameters with withFrameNanos, and let layered variation do the visual work.

As always, happy coding!

Jaewoong (skydoves)