
Help with TensorFlowLite Model for image classification?


I've been trying to add a plant recognition classifier to my app using a Firebase cloud-hosted ML model, and I've gotten close. The problem is, I'm pretty sure I'm messing up the image input somewhere along the way: on iOS the classifier churns out nonsense probabilities, while the same model gives me accurate results when I test it through a Python script.

The model requires a 224x224 image with 3 channels, with values scaled to [0, 1]. I've done all of that, but I can't seem to get the CGImage conversion right coming from the Camera/ImagePicker. Here is the part of the code that processes the image input:

if let pickedImage = info[.originalImage] as? UIImage {
    DispatchQueue.main.async {
        // Resize to the 224x224 input the model expects.
        guard let resizedImage = pickedImage.scaledImage(with: CGSize(width: 224, height: 224)),
              let ciImage = CIImage(image: resizedImage) else { return }

        let ciContext = CIContext(options: nil)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }

        // Redraw into a bitmap with a known layout: 4 bytes per pixel,
        // first byte unused (no alpha), then R, G, B.
        guard let context = CGContext(
            data: nil,
            width: cgImage.width, height: cgImage.height,
            bitsPerComponent: 8, bytesPerRow: cgImage.width * 4,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
        ) else {
            return
        }

        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
        guard let pixelData = context.data else { return }

        print("Image data showing as: \(pixelData)")
        var inputData = Data()
        for row in 0 ..< 224 {
            for col in 0 ..< 224 {
                let offset = 4 * (row * context.width + col)
                // Skip offset 0, the unused alpha channel.
                let red = pixelData.load(fromByteOffset: offset + 1, as: UInt8.self)
                let green = pixelData.load(fromByteOffset: offset + 2, as: UInt8.self)
                let blue = pixelData.load(fromByteOffset: offset + 3, as: UInt8.self)

                // Normalize channel values to [0.0, 1.0].
                var normalizedRed = Float32(red) / 255.0
                var normalizedGreen = Float32(green) / 255.0
                var normalizedBlue = Float32(blue) / 255.0

                // Append the raw Float32 bytes to the Data object in RGB order.
                let elementSize = MemoryLayout.size(ofValue: normalizedRed)
                var bytes = [UInt8](repeating: 0, count: elementSize)
                memcpy(&bytes, &normalizedRed, elementSize)
                inputData.append(&bytes, count: elementSize)
                memcpy(&bytes, &normalizedGreen, elementSize)
                inputData.append(&bytes, count: elementSize)
                memcpy(&bytes, &normalizedBlue, elementSize)
                inputData.append(&bytes, count: elementSize)
            }
        }
        print("Successfully added inputData")
        self.parent.invokeInterpreter(inputData: inputData)
    }
}
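For comparison, this is roughly what the preprocessing in my working Python test script boils down to (a simplified sketch using NumPy with a dummy image in place of the real photo; the actual loading/resizing code depends on your setup):

```python
import numpy as np

# Sketch of what the model expects: a 224x224 RGB image, values scaled
# to [0, 1] as float32, flattened row-major in interleaved RGB order.
# A random dummy image stands in for the real resized photo here.
image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

normalized = image.astype(np.float32) / 255.0  # scale to [0.0, 1.0]
input_bytes = normalized.tobytes()             # raw float32 bytes, native order

# The TFLite input buffer should be 224 * 224 * 3 floats = 602,112 bytes,
# which is what the Swift loop above should also produce.
print(len(input_bytes))  # 602112
```

If the Swift `inputData` ends up a different length, or the channel order/normalization differs from this, that would explain the nonsense probabilities.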

I feel like I've exhausted the few iOS image classification examples out there, so any help goes a long way!

   
