
How to use VNRecognizeTextRequest’s optical character recognition to detect text in an image

Swift version: 5.1

Paul Hudson    @twostraws   

The Vision framework has built-in support for detecting text in images, although realistically it’s limited to printed text in clear fonts – don’t expect to be able to throw raw handwriting at it and get useful results.

To get started, import the Vision framework, then set up an instance of VNRecognizeTextRequest so that it processes any text that is found. Your request will be handed an array of observations that you need to safely typecast as VNRecognizedTextObservation, then you can loop over each observation to pull out candidates for each one – the various possible pieces of text that Vision thinks it might have found.

If we wanted to just pull out the best candidate of each observation then print it out, we’d make a request like this:

let request = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else {
        fatalError("Received invalid observations")
    }

    for observation in observations {
        guard let bestCandidate = observation.topCandidates(1).first else {
            print("No candidate")
            continue
        }

        print("Found this candidate: \(bestCandidate.string)")
    }
}

Next, put that request into an array, and set Vision off on a background queue to scan your image. For example, this uses the global .userInitiated background queue, then loads and scans an image from the app bundle called testImage:

let requests = [request]

DispatchQueue.global(qos: .userInitiated).async {
    guard let img = UIImage(named: "testImage")?.cgImage else {
        fatalError("Missing image to scan")
    }

    let handler = VNImageRequestHandler(cgImage: img, options: [:])
    try? handler.perform(requests)
}

Make sure you have an image called “testImage” in your asset catalog, and that code should work out of the box.
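Each candidate carries more than just its string, which is useful if you need to know how sure Vision is or where the text sits in the image. As a rough sketch of what you could add inside the request's completion handler – not part of the code above – VNRecognizedText exposes a confidence value, and every observation has a normalized boundingBox:

for observation in observations {
    guard let bestCandidate = observation.topCandidates(1).first else { continue }

    // Confidence runs from 0 (no confidence) to 1 (full confidence)
    print("\(bestCandidate.string) – confidence \(bestCandidate.confidence)")

    // boundingBox is normalized (0...1), with the origin in the bottom-left corner,
    // so you'll need to convert it before drawing over a UIKit view
    print("Found at \(observation.boundingBox)")
}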

There are two further parameters you might want to tweak to make your text recognition more useful. First, by default the recognitionLevel property of your VNRecognizeTextRequest is set to .accurate, which means Vision does its best to figure out the most likely letters in the text. If you wanted to prioritize speed over accuracy – perhaps if you were scanning lots of images, or a live feed – you should change recognitionLevel to .fast, like this:

request.recognitionLevel = .fast

Second, you can set the customWords property of your request to be an array of unusual strings that your app is likely to come across – words that Vision might decide aren’t likely because it doesn’t recognize them:

request.customWords = ["Pikachu", "Snorlax", "Charizard"]

These custom words automatically take priority over the built-in dictionary, so use this wisely.
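One caveat: Apple's documentation says customWords is consulted only while language correction is active, so if you turn off usesLanguageCorrection your custom vocabulary is ignored. In practice that means leaving the property at its default:

request.usesLanguageCorrection = true // the default; customWords is ignored when this is false
request.customWords = ["Pikachu", "Snorlax", "Charizard"]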

Rather than scanning images in your app bundle, you could load an image that was scanned using VNDocumentCameraViewController – see my article How to detect documents using VNDocumentCameraViewController for more information.


Available from iOS 13.0

About the Swift Knowledge Base

This is part of the Swift Knowledge Base, a free, searchable collection of solutions for common iOS questions.
