You're probably getting tired of hearing me say this, but Core Image is yet another super-fast and super-powerful framework from Apple. It does just one thing: it applies filters to images, manipulating them in various ways.
One downside to Core Image is that it's not very guessable, so you need to know what you're doing or you'll waste a lot of time. It also can't take advantage of much of Swift's type safety, so you need to be careful when using it because the compiler won't help you as much as you're used to.
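To give you a taste of what I mean, here's a minimal sketch you could try in a playground after importing CoreImage. The misspelled filter name is a hypothetical typo, and the compiler is perfectly happy with it:
let typo = CIFilter(name: "CISepiaTonee") // hypothetical typo compiles fine...
print(typo == nil) // ...but prints "true" at runtime: no such filter exists
let sepia = CIFilter(name: "CISepiaTone")
sepia?.setValue(0.5, forKey: kCIInputIntensityKey) // setValue() accepts Any, so no type checking either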
To get started, import CoreImage by adding this line near the top of ViewController.swift:
import CoreImage
We need to add two more properties to our class, so put these underneath the currentImage property:
var context: CIContext!
var currentFilter: CIFilter!
The first is a Core Image context, which is the Core Image component that handles rendering. We create it here and use it throughout our app, because creating a context is computationally expensive so we don't want to keep doing it.
The second is a Core Image filter, and will store whatever filter the user has activated. This filter will be given various input settings before we ask it to output a result for us to show in the image view.
We want to create both of these in viewDidLoad(), so put this just before the end of the method:
context = CIContext()
currentFilter = CIFilter(name: "CISepiaTone")
That creates a default Core Image context, then creates an example filter that will apply a sepia tone effect to images. It's just for now; we'll let users change it soon enough.
To begin with, we're going to let users drag the slider up and down to add varying amounts of sepia effect to the image they select.
To do that, we need to set our currentImage property as the input image for the currentFilter Core Image filter. We're then going to call a method (as yet unwritten) called applyProcessing(), which will do the actual Core Image manipulation.
So, add this to the end of the didFinishPickingMediaWithInfo method:
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
applyProcessing()
You'll get an error for applyProcessing() because we haven't written it yet, but we'll get there soon.
The CIImage data type is, for the sake of this project, just the Core Image equivalent of UIImage. Behind the scenes it's a bit more complicated than that, but really it doesn't matter.
As you can see, we can create a CIImage from a UIImage, and we send the result into the current Core Image filter using the kCIInputImageKey. There are lots of Core Image key constants like this; at least this one is somewhat self-explanatory!
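As a quick taste of another constant (this isn't part of our project), here's a hedged sketch that configures a Gaussian blur's radius rather than an intensity:
let blur = CIFilter(name: "CIGaussianBlur")
blur?.setValue(8.0, forKey: kCIInputRadiusKey) // how far the blur should spread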
We also need to call the (still unwritten!) applyProcessing() method when the slider is dragged around, so modify the intensityChanged() method to this:
@IBAction func intensityChanged(_ sender: Any) {
    applyProcessing()
}
With these changes, applyProcessing() is called as soon as the image is first imported, then whenever the slider is moved. Now it's time to write the initial version of the applyProcessing() method, so put this just before the end of your class:
func applyProcessing() {
    currentFilter.setValue(intensity.value, forKey: kCIInputIntensityKey)

    guard let image = currentFilter.outputImage else { return }

    if let cgimg = context.createCGImage(image, from: image.extent) {
        let processedImage = UIImage(cgImage: cgimg)
        imageView.image = processedImage
    }
}
That's only five lines, none of which are terribly taxing.
The first line uses the value of our intensity slider to set the kCIInputIntensityKey value of our current Core Image filter. For sepia toning a value of 0 means "no effect" and 1 means "fully sepia."
The second line safely reads the output image from our current filter. This should always exist, but there's no harm in being safe. Note that we ask for the output image only after setting the intensity, so the result reflects the new value.
The third line is where the hard work happens: it creates a new data type called CGImage from the output image of the current filter. We need to specify which part of the image we want to render, but using image.extent means "all of it." Until this method is called, no actual processing is done, so this is the one that does the real work. This returns an optional CGImage, so we need to check and unwrap with if let.
The fourth line creates a new UIImage from the CGImage, and line five assigns that UIImage to our image view. Yes, I know that UIImage, CGImage, and CIImage all sound the same, but they are different under the hood and we have no choice but to use them here.
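If it helps to keep the three straight, here's a minimal sketch of the whole round trip in one place, assuming original is a UIImage and context is a CIContext like ours; the force unwraps are just for brevity:
let ciImage = CIImage(image: original)! // UIImage -> CIImage: wrap the image for Core Image to work on
let cgImage = context.createCGImage(ciImage, from: ciImage.extent)! // CIImage -> CGImage: render the pixels
let uiImage = UIImage(cgImage: cgImage) // CGImage -> UIImage: wrap the result for UIKit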
You can now press Cmd+R to run the project as-is, then import a picture and make it sepia toned. It might be a little slow in the simulator, but I can promise you it runs brilliantly on devices - Core Image is extraordinarily fast.
Adding a sepia effect isn't very interesting, and I want to help you explore some of the other options presented by Core Image. So, we're going to make the "Change Filter" button work: it will show a UIAlertController with a selection of filters, and when the user selects one it will update the image.
First, here's the new changeFilter() method:
@IBAction func changeFilter(_ sender: Any) {
    let ac = UIAlertController(title: "Choose filter", message: nil, preferredStyle: .actionSheet)
    ac.addAction(UIAlertAction(title: "CIBumpDistortion", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "CIGaussianBlur", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "CIPixellate", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "CISepiaTone", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "CITwirlDistortion", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "CIUnsharpMask", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "CIVignette", style: .default, handler: setFilter))
    ac.addAction(UIAlertAction(title: "Cancel", style: .cancel))
    present(ac, animated: true)
}
That's seven different Core Image filters plus one cancel button, but no new code. When tapped, each of the filter buttons will call the setFilter() method, which we need to make. This method should update our currentFilter property with the filter that was chosen, set the kCIInputImageKey key again (because we just changed the filter), then call applyProcessing().
Each UIAlertAction has its title set to a different Core Image filter, and because our setFilter() method must accept as its only parameter the action that was tapped, we can use the action's title to create our new Core Image filter. Here's the setFilter() method:
func setFilter(action: UIAlertAction) {
    // make sure we have a valid image before continuing!
    guard currentImage != nil else { return }

    // safely read the alert action's title
    guard let actionTitle = action.title else { return }

    currentFilter = CIFilter(name: actionTitle)

    let beginImage = CIImage(image: currentImage)
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)

    applyProcessing()
}
But don't run the project yet! Our current code has a problem, and it's this line:
currentFilter.setValue(intensity.value, forKey: kCIInputIntensityKey)
That sets the intensity of the current filter. But the problem is that not all filters have an intensity setting. If you try this using the CIBumpDistortion filter, the app will crash because it doesn't know what to do with a setting for the key kCIInputIntensityKey.
All the filters and the keys they use are described fully in Apple's documentation, but for this project we're going to take a shortcut. There are four input keys we're going to manipulate across seven different filters. Sometimes the keys mean different things, and sometimes the keys don't exist, so we're going to apply only the keys that do exist with some cunning code.
Each filter has an inputKeys property that returns an array of all the keys it can support. We're going to use this array in conjunction with the contains() method to see whether each of our input keys exists, and, if it does, use it. Not all of them expect a value between 0 and 1, so I sometimes multiply the slider's value to make the effect more pronounced.
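If you're curious what a given filter supports, you can print its inputKeys in a playground; this is just an illustrative check, not part of the project:
let blur = CIFilter(name: "CIGaussianBlur")!
print(blur.inputKeys) // ["inputImage", "inputRadius"]; no intensity key here
print(blur.inputKeys.contains(kCIInputIntensityKey)) // false, so we'd skip that key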
Change your applyProcessing() method to be this:
func applyProcessing() {
    let inputKeys = currentFilter.inputKeys

    if inputKeys.contains(kCIInputIntensityKey) { currentFilter.setValue(intensity.value, forKey: kCIInputIntensityKey) }
    if inputKeys.contains(kCIInputRadiusKey) { currentFilter.setValue(intensity.value * 200, forKey: kCIInputRadiusKey) }
    if inputKeys.contains(kCIInputScaleKey) { currentFilter.setValue(intensity.value * 10, forKey: kCIInputScaleKey) }
    if inputKeys.contains(kCIInputCenterKey) { currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey) }

    guard let outputImage = currentFilter.outputImage else { return }

    if let cgimg = context.createCGImage(outputImage, from: outputImage.extent) {
        let processedImage = UIImage(cgImage: cgimg)
        imageView.image = processedImage
    }
}
Using this method, we check each of our four keys to see whether the current filter supports it, and, if so, we set the value. The first three all use the value from our intensity slider in some way, which will produce some interesting results. If you wanted to improve this app later, you could perhaps add three sliders, as sketched below.
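As a hedged sketch of that improvement, you might add hypothetical radius and scale outlets alongside intensity, then read each slider separately inside applyProcessing():
@IBOutlet var radius: UISlider! // hypothetical new outlet
@IBOutlet var scale: UISlider! // hypothetical new outlet

// then, inside applyProcessing():
if inputKeys.contains(kCIInputRadiusKey) { currentFilter.setValue(radius.value * 200, forKey: kCIInputRadiusKey) }
if inputKeys.contains(kCIInputScaleKey) { currentFilter.setValue(scale.value * 10, forKey: kCIInputScaleKey) }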
If you run your app now, you should be able to choose from various filters then watch them distort your image in weird and wonderful ways. Note that some of them – such as the Gaussian blur – will run very slowly in the simulator, but quickly on devices. If you wanted to do more complex processing (not least chaining filters together!), you could add configuration options to the CIContext to make it run even faster; another time, perhaps.
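As a taster of that, here's a minimal sketch using the CIContext(mtlDevice:) initializer to back the context with Metal, assuming the device supports it; our project doesn't need this, but it's worth knowing about:
import Metal

// use the GPU via Metal when available, falling back to a default context
if let device = MTLCreateSystemDefaultDevice() {
    context = CIContext(mtlDevice: device)
} else {
    context = CIContext()
}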