Building a fluid gradient with CoreAnimation & SwiftUI: Part 1
Gradients have been an incredibly handy tool of software design for years: used correctly, they can add depth, contrast, and vividness to the user interface. Different gradient styles – linear, angular, and radial – have been mastered by designers and software developers alike to create beautiful, engaging, and visually pleasing effects. But – beware – they can easily turn into clutter if used in excess.
In this series of articles, you're going to learn how to build a completely new type of gradient that can be added to practically any design element to increase prominence, add vividness, or simply serve as decoration. We'll build it with CoreAnimation and integrate it into SwiftUI, in a performant way that can be animated and implemented natively in any app based on either UIKit (iOS/iPadOS, watchOS, and tvOS) or AppKit (macOS).
This article will focus solely on getting the gradient to work statically, while the next one will focus on animation and the final one will get this view working fully with SwiftUI and show it within a simple demo app mocking a real-world scenario.
The code and logic here can be a bit tricky, so take your time and work through it patiently. Let's get started!
The final code for this project was made available as a Swift Package in this GitHub repository.
Layering
There are a few ways of creating these types of gradients. One of them is through a 3D mesh – often called a gradient mesh or mesh gradient – which is a lot more complex to implement and a lot more expensive for the CPU and GPU, but also provides the desired effect of fluid, smooth colors and curves.
Our implementation will instead use CoreAnimation layers representing blobs, plus a blur effect – both very basic and cheap for the system. The same visual could be created more easily with a SwiftUI ZStack hierarchy, a blur effect, and spring animations, but by writing the code in CoreAnimation we can reduce our CPU usage from an average of 5% to less than 1% on an iPhone 13, with the exact same number of blobs and frame sizes.
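For comparison, that simpler SwiftUI-only approach could look something like the rough sketch below. It's purely for illustration – the NaiveGradient name, sizes, and colors are made up – and it's not part of our implementation:
import SwiftUI

// Rough sketch of the pure-SwiftUI alternative: randomly offset circles
// in a ZStack, blurred together. Simpler to write, but heavier on the CPU.
struct NaiveGradient: View {
    let colors: [Color] = [.red, .green, .blue]

    var body: some View {
        ZStack {
            ForEach(colors.indices, id: \.self) { index in
                Circle()
                    .fill(colors[index])
                    .frame(width: 200, height: 200)
                    .offset(x: .random(in: -100...100),
                            y: .random(in: -100...100))
            }
        }
        .blur(radius: 60)
    }
}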
In our code, we'll be creating two CALayers and stacking them up within a root layer to which we will apply a blur effect:
- The first layer will contain our base blobs, with the colors we want to be the most prominent in our gradient visual.
- The second layer will be optional and contain highlight blobs, displayed with an overlay blend mode. This technique helps create brighter spots within the gradient.
- The blur effect will be used to smooth out the blobs and create the final fluid look. The trick here is setting the blur radius very high – and making it dependent on the view size – so that the eye cannot make out the individual blobs.
Ideally, we would use a third CALayer with a Gaussian blur background CIFilter for the blur layer. However, background filters are not supported on iOS, so we will blur our view with SwiftUI instead.
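For the curious, here's roughly what that ideal macOS-only setup could look like – just a sketch under the assumption that such a layer sits in front of the content to blur; since it has no iOS equivalent, we won't be using it:
import QuartzCore
import CoreImage

#if os(OSX)
// Hypothetical macOS-only approach: blur whatever is rendered behind this layer.
let blurLayer = CALayer()
if let blur = CIFilter(name: "CIGaussianBlur",
                       parameters: [kCIInputRadiusKey: 60]) {
    blurLayer.backgroundFilters = [blur]
}
#endif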
Let's begin!
Understanding CoreAnimation
CoreAnimation is a framework that provides high-performance, hardware-accelerated graphics for creating and managing animations. That means it mostly runs directly on the GPU, without burdening the CPU. It is used by both UIKit and AppKit to animate their views and layers, and it is also used by SwiftUI to animate its views.
Its central building block is the CALayer, which serves as the base class for all other CoreAnimation elements, such as CAShapeLayer and CAGradientLayer. In this tutorial, we'll be using CAGradientLayer to display the gradient blobs.
It's important to note that the use of CALayer in AppKit and UIKit differs a bit:
- In AppKit, an NSView's layer is an optional property, and is nil by default. You can set it to an instance of CALayer to enable CoreAnimation for that view.
- In UIKit, a UIView's layer is a non-optional property. It is created automatically when the view is created, and cannot be set to another value afterwards – as sketched below.
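This sketch uses plain NSView/UIView instances purely for illustration; it isn't part of our project code:
#if os(OSX)
import AppKit

// AppKit: the layer is nil by default, so we supply our own and opt in.
let view = NSView()
view.layer = CALayer()
view.wantsLayer = true
#else
import UIKit

// UIKit: the layer already exists and can't be replaced,
// so we only ever add sublayers to it.
let view = UIView()
view.layer.addSublayer(CALayer())
#endif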
Because of that, our code will be a bit different for each platform. Let's start our project by creating a subclass of CALayer:
/// An implementation of ``CALayer`` that resizes its sublayers
public class ResizableLayer: CALayer {
override init() {
super.init()
#if os(OSX)
autoresizingMask = [.layerWidthSizable, .layerHeightSizable]
#endif
sublayers = []
}
// Required by the framework
public override init(layer: Any) {
super.init(layer: layer)
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
public override func layoutSublayers() {
super.layoutSublayers()
sublayers?.forEach { layer in
layer.frame = self.frame
}
}
}
This class is a subclass of CALayer that resizes its sublayers' frames to match its own. This is great because we'll want our root layers to make their children – the blob layers – fill the container. We'll elaborate on that in a second.
Also, mind how autoresizingMask = [.layerWidthSizable, .layerHeightSizable] is only set on macOS. This is because this CALayer property is not available on iOS. That's why we also implement layoutSublayers(), to make sure the sublayers are resized on iOS.
Making our view
Now, let's begin creating the view that manages and displays our gradient. We'll start by creating a new file called FluidGradientView.swift and adding a new view class called FluidGradientView:
import SwiftUI
#if os(OSX)
import AppKit
public typealias SystemColor = NSColor
public typealias SystemView = NSView
#else
import UIKit
public typealias SystemColor = UIColor
public typealias SystemView = UIView
#endif
/// A system view that presents an animated gradient with ``CoreAnimation``
public class FluidGradientView: SystemView {
// Code will go here
}
Note how we're using custom aliases for both SystemColor and SystemView. This is because we want to be able to use the same code for both frameworks – they're very similar apart from naming – and we want to avoid writing #if checks everywhere.
Next, let's define some of the properties we'll need along the way:
let baseLayer = ResizableLayer()
let highlightLayer = ResizableLayer()
and then write the initializer that will set up our layers:
init(blobs: [Color] = [],
highlights: [Color] = []) {
super.init(frame: .zero)
highlightLayer.backgroundFilters = ["overlayBlendMode"]
#if os(OSX)
layer = ResizableLayer()
wantsLayer = true
postsFrameChangedNotifications = true
layer?.delegate = self
baseLayer.delegate = self
highlightLayer.delegate = self
self.layer?.addSublayer(baseLayer)
self.layer?.addSublayer(highlightLayer)
#else
self.layer.addSublayer(baseLayer)
self.layer.addSublayer(highlightLayer)
#endif
}
// Required by the class
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
The code above does a few things, so let's break it down:
- First, we add a background filter to set the highlight layer to the overlay blend mode.
- Then, we set up our layers. On macOS, remember, we need to initialize the layer ourselves. We set it to a ResizableLayer, and then set our delegates. We also add our layers as sublayers of the root layer in the desired order.
For macOS, make sure to also add the following bit of code to conform the view to CALayerDelegate and NSViewLayerContentScaleDelegate. Not implementing it will cause the build to fail.
#if os(OSX)
extension FluidGradientView: CALayerDelegate, NSViewLayerContentScaleDelegate {
public func layer(_ layer: CALayer,
shouldInheritContentsScale newScale: CGFloat,
from window: NSWindow) -> Bool {
return true
}
}
#endif
Interfacing with SwiftUI
Fantastic! Our view works and should compile now.
Before progressing any further, let's make sure we can use it in SwiftUI. We'll create a new file called FluidGradient.swift and add the following code:
import SwiftUI
public struct FluidGradient: View {
private var blobs: [Color]
private var highlights: [Color]
private var blur: CGFloat
@State private var blurValue: CGFloat = 0.0
public init(blobs: [Color],
highlights: [Color] = [],
blur: CGFloat = 0.75) {
self.blobs = blobs
self.highlights = highlights
self.blur = blur
}
public var body: some View {
Representable(blobs: blobs,
highlights: highlights,
blurValue: $blurValue)
.blur(radius: pow(blurValue, blur))
.accessibility(hidden: true)
.clipped()
}
}
Here, blur is the exponent used to calculate the blur radius as a power of blurValue. The blurValue is the blur coefficient that will be updated, according to the view's frame size, by the view representable – we'll deal with that later.
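To get a feel for the numbers: blurValue will end up roughly equal to the view's smaller dimension (we'll compute it later in this article), so for a hypothetical view that is 300 points on its shorter side, the default exponent of 0.75 gives:
// Hypothetical example: blurValue == 300, blur == 0.75
let radius = pow(300.0, 0.75) // ≈ 72 points of blur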
Now, we need to write our Representable and Coordinator. Let's do it within an extension of our view.
#if os(OSX)
typealias SystemRepresentable = NSViewRepresentable
#else
typealias SystemRepresentable = UIViewRepresentable
#endif
// MARK: - Representable
extension FluidGradient {
struct Representable: SystemRepresentable {
var blobs: [Color]
var highlights: [Color]
var blurValue: Binding<CGFloat>
func makeView(context: Context) -> FluidGradientView {
context.coordinator.view
}
func updateView(_ view: FluidGradientView, context: Context) {
context.coordinator.create(blobs: blobs, highlights: highlights)
}
#if os(OSX)
func makeNSView(context: Context) -> FluidGradientView {
makeView(context: context)
}
func updateNSView(_ view: FluidGradientView, context: Context) {
updateView(view, context: context)
}
#else
func makeUIView(context: Context) -> FluidGradientView {
makeView(context: context)
}
func updateUIView(_ view: FluidGradientView, context: Context) {
updateView(view, context: context)
}
#endif
func makeCoordinator() -> Coordinator {
Coordinator(blobs: blobs,
highlights: highlights,
blurValue: blurValue)
}
}
class Coordinator {
var blobs: [Color]
var highlights: [Color]
var blurValue: Binding<CGFloat>
var view: FluidGradientView
init(blobs: [Color],
highlights: [Color],
blurValue: Binding<CGFloat>) {
self.blobs = blobs
self.highlights = highlights
self.blurValue = blurValue
self.view = FluidGradientView(blobs: blobs,
highlights: highlights)
}
/// Create blobs and highlights
func create(blobs: [Color], highlights: [Color]) {
// Create blobs and highlights on view
}
}
}
In the code above, we're using SwiftUI's Coordinator pattern to manage our view. There's a Representable that conforms to NSViewRepresentable or UIViewRepresentable, depending on the platform. On our coordinator, we also create a create() method that will create the blobs and highlights; later in the series we'll add an update() method to handle the speed and the blur coefficient. Don't worry about their implementations for now.
Now in our app, we can use our FluidGradient like this in our ContentView:
import SwiftUI
struct ContentView: View {
var body: some View {
FluidGradient(blobs: [.red, .green, .blue],
highlights: [.yellow, .orange, .purple])
.background(.quaternary)
.cornerRadius(16)
.padding(16)
}
}
...yet it doesn't do anything. Well – of course! We don't have the blobs yet. Let's do that now.
The BlobLayer
Since our blob layers require a bunch of specific setup code, let's make them a custom subclass of CAGradientLayer that we can instantiate later.
public class BlobLayer: CAGradientLayer {
init(color: Color) {
super.init()
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
// Required by the framework
public override init(layer: Any) {
super.init(layer: layer)
}
}
Great! Now we have a custom initializer that takes a Color and... does nothing with it. Let's fix that.
init(color: Color) {
super.init()
self.type = .radial
#if os(OSX)
autoresizingMask = [.layerWidthSizable, .layerHeightSizable]
#endif
// Center point
let position = CGPoint(x: CGFloat.random(in: 0.0...1.0),
y: CGFloat.random(in: 0.0...1.0))
self.startPoint = position
// Radius
let size = CGFloat.random(in: 0.5...2)
let ratio = CGFloat.random(in: 0.5...1)
let radius = CGPoint(x: size,
y: size*ratio)
self.endPoint = position.displace(by: radius)
}
Now, we're setting the type of our gradient (a radial gradient), setting our autoresizing mask, and setting our start and end points. CAGradientLayer works by having the start point be the center of the radial gradient and the end point be its edge, so the end point is calculated using the position as a basis. The coordinates in a CAGradientLayer are specified in a unit coordinate space, so we just need to work within the 0–1 range and there's no need to resize anything when our layer resizes.
To calculate the end point coordinate, we're doing three things:
- First, we get a random x radius for the circle.
- Then, we get a random ratio for the radii, and use it to calculate the y radius based off the x radius.
- Finally, we displace the start point by the radius, and that's our end point.
The displace() method should look something like this:
extension CGPoint {
/// Build a point from an origin and a displacement
func displace(by point: CGPoint = .init(x: 0.0, y: 0.0)) -> CGPoint {
return CGPoint(x: self.x+point.x,
y: self.y+point.y)
}
}
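For instance, with values picked purely for illustration:
// A blob whose startPoint is (0.3, 0.4), displaced by a radius of (0.5, 0.2),
// ends up with an endPoint of (0.8, 0.6) – all in the unit coordinate space.
let start = CGPoint(x: 0.3, y: 0.4)
let end = start.displace(by: CGPoint(x: 0.5, y: 0.2))
// end == CGPoint(x: 0.8, y: 0.6)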
Setting the color
Now, let's write code for setting the colors of the radial gradient that will make our blob. We'll need to do this every time we update the color of the blob, so it's best to have it as a function.
/// Set the color of the blob
func set(color: Color) {
// Converted to the system color so that cgColor isn't nil
self.colors = [SystemColor(color).cgColor,
SystemColor(color).cgColor,
SystemColor(color.opacity(0.0)).cgColor]
self.locations = [0.0, 0.9, 1.0]
}
We want the gradient to be mostly solid and a little feathered around the edges. That's why we're also setting its colors' locations.
Now, all that's left to do is call this method in the initializer.
// Set color
set(color: color)
Creating the blobs
Finally, we can create and add the blobs to our layers. Back in FluidGradientView, we can add the following code:
/// Create blobs and add to specified layer
public func create(_ colors: [Color], layer: CALayer) {
// Remove blobs at the end if colors are removed
let count = layer.sublayers?.count ?? 0
let removeCount = count - colors.count
if removeCount > 0 {
layer.sublayers?.removeLast(removeCount)
}
for (index, color) in colors.enumerated() {
if index < count {
if let existing = layer.sublayers?[index] as? BlobLayer {
existing.set(color: color)
}
} else {
layer.addSublayer(BlobLayer(color: color))
}
}
}
Since we're assuming the blobs will be ordered consistently throughout, we're removing the blobs at the end of our layer if the number of colors is less than the number of blobs. Then, we're iterating through the colors and adding a blob for each color. If the blob at that index already exists, we're simply updating its color, so that the view stays consistent during color changes.
Now, let's just call it in the initializer:
// Create blobs and highlights
create(blobs, layer: baseLayer)
create(highlights, layer: highlightLayer)
And also call it when our colors change in SwiftUI:
/// Create blobs and highlights
func create(blobs: [Color], highlights: [Color]) {
guard blobs != self.blobs || highlights != self.highlights else { return }
self.blobs = blobs
self.highlights = highlights
view.create(blobs, layer: view.baseLayer)
view.create(highlights, layer: view.highlightLayer)
}
Also, let's take this moment to add some more required methods:
#if os(OSX)
public override func viewDidMoveToWindow() {
super.viewDidMoveToWindow()
let scale = window?.backingScaleFactor ?? 2
layer?.contentsScale = scale
baseLayer.contentsScale = scale
highlightLayer.contentsScale = scale
// updateBlur()
}
public override func resize(withOldSuperviewSize oldSize: NSSize) {
// updateBlur()
}
#else
public override func layoutSubviews() {
super.layoutSubviews()
layer.frame = self.frame
layer.layoutSublayers()
// updateBlur()
}
#endif
Here, we're doing different things depending on the platform:
- On macOS, we're setting the contents scale of the layers to the window's backing scale factor.
- On iOS, we're setting the frame of the layer to the view's frame and manually laying out the sublayers (which is done automatically on macOS).
We'll also use these methods to update our blur values – which will be done soon – but first let's run our app and see what we've got!
Hooray! The gradient blobs are blending and being displayed correctly.
Let's fix the blur. Since its default value is 0, the current blur radius is also 0. We want that value to be modifiable by our UIKit/AppKit view, since this way we can avoid using a GeometryReader and adding unnecessary overhead. We can do that by creating a delegate.
protocol FluidGradientDelegate: AnyObject {
func updateBlur(_ value: CGFloat)
}
Then, let's add a delegate property to our view and create updateBlur():
weak var delegate: FluidGradientDelegate?
/// Compute and update new blur value
private func updateBlur() {
delegate?.updateBlur(min(frame.width, frame.height))
}
We're using the minimum of the width and height of the view to calculate the blur radius base value. Now you can uncomment the updateBlur() calls in the layoutSubviews(), resize() and viewDidMoveToWindow() methods.
Next, we can conform our Coordinator to the delegate.
class Coordinator: FluidGradientDelegate {
...and, in the coordinator's initializer, set the view's delegate to self:
self.view.delegate = self
Now let's just have the coordinator update the blurValue accordingly.
func updateBlur(_ value: CGFloat) {
blurValue.wrappedValue = value
}
Now, this is the final result:
Much better!
We're not done yet
The possibilities with what we've built so far are endless! The GIF below was produced entirely with our code, using color sets selected at random from a fixed pool.
As you can see in the GIF, the gradients produced by our code come out very vivid and colorful, even with a random choice of colors. Now imagine what you could use them for in your app! You could create a gradient with your brand colors, or match it to the prominent colors of an image.
Also, as effective as CoreAnimation is, we could go even further and try to rewrite this as a Metal shader.
How you want to use it is all up to you. In the next tutorial, we'll dig into how to animate this gradient easily – at a very low performance cost.