Swift version: 5.10
The Vision framework has built-in support for detecting text in images, although realistically it’s limited to printed text in clear fonts – don’t expect to be able to throw raw handwriting at it and get useful results.
To get started, import the Vision framework, then set up an instance of VNRecognizeTextRequest so that it processes any text that is found. Your request will be handed an array of observations that you need to safely typecast as VNRecognizedTextObservation, then you can loop over each observation to pull out candidates for each one – the various possible pieces of text that Vision thinks it might have found.
If we wanted to just pull out the best candidate of each observation then print it out, we’d make a request like this:
let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else {
        fatalError("Received invalid observations")
    }

    for observation in observations {
        guard let bestCandidate = observation.topCandidates(1).first else {
            print("No candidate")
            continue
        }

        print("Found this candidate: \(bestCandidate.string)")
    }
}
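If you want more than the single best match, topCandidates() can return several ranked candidates, and each VNRecognizedText also carries a confidence score you can inspect. As a rough sketch, the inner loop above could become something like this (the 0.5 threshold is an arbitrary value for illustration):

    for observation in observations {
        // Ask Vision for up to three candidates per observation.
        for candidate in observation.topCandidates(3) {
            // Skip anything Vision isn't reasonably sure about.
            if candidate.confidence > 0.5 {
                print("\(candidate.string) – confidence \(candidate.confidence)")
            }
        }
    }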
Next, put that request into an array, and set Vision off on a background queue to scan your image. For example, this uses a global background queue with the .userInitiated quality of service, then loads and scans an image from the app bundle called testImage:
let requests = [request]

DispatchQueue.global(qos: .userInitiated).async {
    guard let img = UIImage(named: "testImage")?.cgImage else {
        fatalError("Missing image to scan")
    }

    let handler = VNImageRequestHandler(cgImage: img, options: [:])
    try? handler.perform(requests)
}
Make sure you have an image called “testImage” in your asset catalog, and that code should work out of the box.
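Note that try? quietly discards any error thrown by perform(). If you'd rather know when recognition fails, for example while debugging, you could wrap the call in do/catch instead; here's one possible sketch:

    DispatchQueue.global(qos: .userInitiated).async {
        guard let img = UIImage(named: "testImage")?.cgImage else {
            fatalError("Missing image to scan")
        }

        let handler = VNImageRequestHandler(cgImage: img, options: [:])

        do {
            try handler.perform(requests)
        } catch {
            // perform() throws if Vision can't process the image or the request fails.
            print("Text recognition failed: \(error.localizedDescription)")
        }
    }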
There are two further parameters you might want to tweak to make your text recognition more useful. First, by default the recognitionLevel property of your VNRecognizeTextRequest is set to .accurate, which means Vision does its best to figure out the most likely letters in the text. If you wanted to prioritize speed over accuracy – perhaps if you were scanning lots of images, or a live feed – you should change recognitionLevel to .fast, like this:
request.recognitionLevel = .fast
Second, you can set the customWords property of your request to be an array of unusual strings that your app is likely to come across – words that Vision might decide aren’t likely because it doesn’t recognize them:
request.customWords = ["Pikachu", "Snorlax", "Charizard"]
These custom words automatically take priority over the built-in dictionary, so use this wisely.
Rather than scanning images in your app bundle, you could load an image that was scanned using VNDocumentCameraViewController – see my article How to detect documents using VNDocumentCameraViewController for more information.
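As a rough illustration of that approach – and assuming you've presented the document camera (it lives in VisionKit), adopted its delegate protocol, and that the requests array from earlier is available to the delegate – the scanned pages arrive as a VNDocumentCameraScan that you can feed through the same kind of image request handler:

    func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
        controller.dismiss(animated: true)

        // Sketch only: run the same text recognition over each scanned page.
        DispatchQueue.global(qos: .userInitiated).async {
            for pageIndex in 0..<scan.pageCount {
                guard let cgImage = scan.imageOfPage(at: pageIndex).cgImage else { continue }

                let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
                try? handler.perform(requests)
            }
        }
    }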
Available from iOS 13.0