
How to add the first tests to an Xcode project


What advice do you have for folks who want to get started writing tests where there are none?

Ellen Shapiro: My policy on testing is to start with the stuff that people are going to come after you with torches and pitchforks for if it doesn't work. You have to look at the most mission-critical pieces of the application. If you're building an e-commerce application and everything works except that nothing happens when you hit buy, that's a huge problem. That is something that will tank your business: if everything else works fine but you can't actually purchase anything, you don't get any money, and then your business goes bankrupt.

“My policy on testing is start with the stuff that people are going to come after you with torches and pitchforks for”

And so when I say people will come after you with torches and pitchforks, I don't necessarily just mean your users – I also mean your managers. I had a side project called Hum for several years, and during one of our earliest releases I messed up a Core Data migration. The application was just absolutely crashing on launch during that migration, and people got very angry about it, because they were putting their notes for their songs and their recordings in there. Once that happened, I wrote a test for migrating every single version of the database to the new version, just to make sure that this supposedly automatic migration actually was automatic. And that gave me much more confidence that, at least for that one particular thing, I was not going to screw it up again.
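
As a rough sketch, a migration test like that might look something like this – the model name, version names, and bundle are all placeholder assumptions you would swap for your own project:

import XCTest
import CoreData

final class MigrationTests: XCTestCase {
    // Hypothetical model and version names – adjust to your own .xcdatamodeld.
    let modelName = "Hum"
    let oldVersions = ["Hum", "Hum 2", "Hum 3"]

    func testEveryOldVersionMigratesToCurrentModel() throws {
        // In a test target you may need Bundle(for:) rather than Bundle.main.
        let bundle = Bundle.main

        for version in oldVersions {
            // Load one old model version from inside the compiled .momd directory.
            let modelURL = try XCTUnwrap(bundle.url(forResource: version, withExtension: "mom", subdirectory: "\(modelName).momd"))
            let oldModel = try XCTUnwrap(NSManagedObjectModel(contentsOf: modelURL))

            // Create a throwaway on-disk store using the old model...
            let storeURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("\(version).sqlite")
            try? FileManager.default.removeItem(at: storeURL)
            let oldCoordinator = NSPersistentStoreCoordinator(managedObjectModel: oldModel)
            let oldStore = try oldCoordinator.addPersistentStore(ofType: NSSQLiteStoreType, configurationName: nil, at: storeURL)
            try oldCoordinator.remove(oldStore)

            // ...then reopen it with the current model and automatic migration enabled.
            let currentModel = try XCTUnwrap(NSManagedObjectModel.mergedModel(from: [bundle]))
            let newCoordinator = NSPersistentStoreCoordinator(managedObjectModel: currentModel)
            let options = [NSMigratePersistentStoresAutomaticallyOption: true,
                           NSInferMappingModelAutomaticallyOption: true]
            XCTAssertNoThrow(try newCoordinator.addPersistentStore(ofType: NSSQLiteStoreType, configurationName: nil, at: storeURL, options: options), "Automatic migration from \(version) failed")
        }
    }
}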

And it's definitely something where, if it doesn't work, your users or your managers are going to come after you. Things you broke before are another good place to start, and so is stuff that you look at and feel is super unstable – where you think, “I don't really know how this works.” That's another place where you can start from the outside and work inwards: maybe you write an integration test for something where you don't necessarily understand the whole thing, but you know what you're supposed to put in and what you're supposed to get out.

And so you can at least write an integration test for that. Then you can get into that gooey middle part and go, “okay, what's going on here?” and refactor with confidence, because you can say: now I have all these unit tests that cover each little piece of this whole puzzle – are all of those passing, and is that integration test I wrote earlier still passing?
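
As a sketch of that outside-in approach: the Checkout and Item types below are hypothetical stand-ins for code whose middle you don't understand yet, and the expected total is made up – the point is that the test only touches the edges:

import XCTest

final class CheckoutIntegrationTests: XCTestCase {
    func testKnownInputProducesKnownOutput() throws {
        // The "gooey middle" of Checkout stays opaque to this test.
        let checkout = Checkout()
        checkout.add(Item(name: "Album", price: 9.99), quantity: 2)

        let total = try checkout.total(shippingTo: "NL")

        // We know what we put in and what should come out – nothing more.
        XCTAssertEqual(total, 24.17, accuracy: 0.01)
    }
}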

Paul Hudson: You mentioned quite a lot in there, and one of the things you picked up on is just getting somewhere – getting some confidence that this is actually working correctly, as you intended. It went wrong in the past, a bug happened, and hopefully you've fixed the bug. That's a good starting point for a test, presumably, because then you can say: it broke before, but now I have the confidence that it's fixed and it's better. There might be a thousand other bugs, but you still know that one bug is fixed and tested, and you've slowly chipped away at testing the codebase.
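
A minimal sketch of that kind of regression test – SongParser and the original crash are hypothetical, but the shape is what matters:

import XCTest

final class RegressionTests: XCTestCase {
    // Hypothetical bug: an empty server response used to crash the parser.
    // With the fix in place, this test pins the correct behavior down for good.
    func testEmptyResponseProducesEmptySongListInsteadOfCrashing() throws {
        let songs = try SongParser().parse(Data())
        XCTAssertTrue(songs.isEmpty)
    }
}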

Ellen Shapiro: I think that's a really great place to start. One thing a lot of people get stuck on is that once they start, they want to have a hundred percent test coverage everywhere. There's the old phrase: never let the perfect be the enemy of the good. If you had 0% test coverage and now you have 10% test coverage, that's way better – you've made a huge start. The more you can improve the better, but don't spend every waking second trying to improve your test coverage number. And personally, I find that percentages are not really a very useful metric.

Paul Hudson: No, particularly that one – test coverage is almost meaningless, but it's nice to have. Better than nothing, but you could totally scam it, you know?

Ellen Shapiro: Oh, absolutely. My favorite example of this was a codebase I inherited where there was a test that literally had a comment saying, “this branch does nothing – it's just here to improve the code coverage.” I deleted that test immediately. This was a codebase where the README said code coverage must stay over 95%, so it was like, “okay, well then I guess we have to hit this branch that only gets hit during debug, and validate that it only gets hit during debug.” I get very frustrated with that stuff – people who say, “well, I have to test that every single guard statement returns when something's not correct.” No, you don't. That's why the guard statements are there.

“If it crashes loudly and immediately, you are much more likely to find the problem than if you just guard and bail out and then stuff doesn't work.”

Paul Hudson: You've got a check there that says try to dequeue this cell from a table view, or load this thing from a storyboard. And that should never fail – if it can't find that cell, you've got really big problems, right? You can't fix that at runtime; it's just fundamentally broken, and that's what guard is doing. That's what your return is doing – getting out of there.

Ellen Shapiro: That's somewhere I really prefer to pull stuff like that out into an extension and throw a fatal error in the extension if it doesn't work. Then, if it didn't work, that's going to be an immediate crash – because if it crashes loudly and immediately, you are much more likely to find the problem than if you just guard and bail out and then stuff doesn't work.
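
One possible shape for that kind of extension – the method name is an assumption, and it presumes the reuse identifier matches the class name:

import UIKit

extension UITableView {
    // Crash loudly at the call site instead of guarding and bailing out
    // silently at every dequeue.
    func dequeue<T: UITableViewCell>(_ type: T.Type, for indexPath: IndexPath) -> T {
        guard let cell = dequeueReusableCell(withIdentifier: String(describing: type), for: indexPath) as? T else {
            fatalError("Could not dequeue \(type) – check the reuse identifier in your storyboard")
        }
        return cell
    }
}

// Usage, with a hypothetical SongCell subclass:
// let cell = tableView.dequeue(SongCell.self, for: indexPath)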

Paul Hudson: Speaking of fatal error, Phillip Lashoff asked a question: can you test a fatal error?

Ellen Shapiro: That's a good question. No, you can't. If you want to have something that you can test, you have to make it throw. That's why I think that in general, particularly in SDKs, you don't want to use a fatal error – usually somebody else calling into your code wants to do something other than crash. And to have something you can test, you need to be able to check not only that it throws an error, but that it's the error you expect it to be. So using something that throws is much better, and that's what you can test. I reserve fatalError() for cases where there is absolutely no reason this should fail unless I have made a typo, or something really, really weird has happened.
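
For example – with a made-up LoadError and fixture loader – a throwing API lets a test assert not just that something failed, but that it failed with the error you expected:

import XCTest

enum LoadError: Error, Equatable {
    case fileMissing(String)
}

func loadFixture(named name: String, in bundle: Bundle = .main) throws -> Data {
    guard let url = bundle.url(forResource: name, withExtension: "json") else {
        throw LoadError.fileMissing(name)
    }
    return try Data(contentsOf: url)
}

final class LoadErrorTests: XCTestCase {
    func testMissingFileThrowsTheErrorWeExpect() {
        XCTAssertThrowsError(try loadFixture(named: "does-not-exist")) { error in
            // Not just "did it throw?" but "did it throw the right thing?"
            XCTAssertEqual(error as? LoadError, .fileMissing("does-not-exist"))
        }
    }
}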

Stuff like loading a screen from a storyboard, or trying to load an image from the asset catalog. If I'm trying to load an image that doesn't exist in the asset catalog, I would really like to know about that at the time I try to load it, so that I'm like, “oh wait, I wasn't supposed to do this.” That to me is where fatalError() comes in handy: places where there is absolutely no way this should be happening.
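
A minimal sketch of that idea for asset loading – the helper name is an assumption:

import UIKit

extension UIImage {
    // There's no sensible way to recover from a mistyped asset name,
    // so crash immediately, at the point where the mistake was made.
    static func named(_ name: String) -> UIImage {
        guard let image = UIImage(named: name) else {
            fatalError("No image named '\(name)' in the asset catalog")
        }
        return image
    }
}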

Paul Hudson: Well, to be fair, Swift is trying to be safe, isn't it? Swift is saying, listen, making an NSRegularExpression can throw – but if you've hand-typed that regular expression, it's either right or it's wrong. And if it's wrong, you don't want to find out at runtime on the user's device; you want to find out now. Crash, crash loudly, scream at me.
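
This is why, for a hand-typed pattern, many people reach for try! – if the pattern is wrong, it crashes immediately during development rather than misbehaving later on a user's device:

import Foundation

// The pattern is a fixed string typed by hand: it's either right or wrong
// at authoring time, so failing to compile it is a bug in the code,
// not a recoverable runtime condition.
let emailPattern = try! NSRegularExpression(pattern: "[A-Z0-9a-z._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}")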

Ellen Shapiro: Yeah. And there are some people who are perfectly fine with not finding that out until runtime. But those are the kinds of things – places where if I make a typo, or the name of a class changes and things don't change appropriately – that I want to find out about as early as possible in the development process. When you hit a fatal error while you're testing, it crashes the entire test suite; it doesn't just keep going. It's the same if you have a force unwrap: if you hit a force unwrap in your test suite and the value isn't there, nope, your whole test suite crashes. It doesn't just fail the test. That's why XCTUnwrap() is now a thing – it shortened up a bunch of the workarounds for that. But yes, if the absence of something is such a huge problem that you need to know about it immediately, then you can probably wrap it in a fatal error.
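
A quick sketch of that difference:

import XCTest

final class UnwrapTests: XCTestCase {
    func testFirstSongExists() throws {
        let songs = ["Intro", "Verse"]

        // songs.first! would crash the whole suite if the array were empty;
        // try XCTUnwrap(songs.first) just fails this one test with a clear message.
        let first = try XCTUnwrap(songs.first)
        XCTAssertEqual(first, "Intro")
    }
}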

If it's something where there's a reasonable possibility that the thing might not be there, then throws is probably your better choice, because then you can check that when whatever it is isn't doing what you need it to do, it handles the failure appropriately and gives you the appropriate error.

This transcript was recorded as part of Swiftly Speaking. You can watch the full original episode on YouTube, or subscribe to the audio version on Apple Podcasts.
