TDD Should be Fun

May 07, 2016


Sometimes Test Driven Development (TDD) can seem like a drag. Are you writing mocks that are several times more complicated than the code you will test? Does your test suite take minutes (or hours) to run? Does refactoring your code fill you with dread because of all the tests to rewrite? If any of this sounds familiar then it may be time to try a new strategy.

When it is working at its best, practicing TDD feels like playing a computer game. Not an FPS like Halo or Call of Duty, but a strategy game like StarCraft 2 or Total Annihilation. One that takes some thought and planning to win.

And I approach TDD like I'm playing a game. In this game you lose if you stop practicing TDD. You 'win' when you finish something while still practicing TDD and feeling good about the code. That 'something' you finish might be anything from a module to a library to an entire application. It doesn't matter what it is particularly, so long as you finish it.

Why do people give up on TDD? Sometimes it's because tests become too complicated and writing them feels like a chore. Sometimes it's because the tests take too long to run, and it feels like they're slowing you down. In both these cases though, what sucks the fun out of TDD is that the feedback loop increases. The time between starting a new test and the red or green bar gets too long. You don't feel like you're winning any more.

Below are some strategies and tactics I use to keep TDD fun. I wrote them with JavaScript in mind. The underlying principles apply to any programming language though.


Strategies

Strategies are about the 'big picture'. They affect how you approach the whole project, as opposed to an individual test.

Design with tests in mind

TDD is a tool for writing code. It is not a substitute for software design. Neither is TDD the same thing as testing. I think of TDD as a programming technique that just so happens to produce a suite of automated tests as a byproduct. It is not a magic wand that designs a system without me having to think about it.

Test-driven development is a way of managing fear during programming.[1]

So, to practice TDD well, I need to design the system with tests in mind. This doesn't mean that I need to have a 300-page design document before I write a single line of code. But, it does mean that I have to understand what I am trying to build and have some idea of how the pieces will fit together. Designing with tests in mind usually means writing (and testing) smaller pieces of code. It also means thinking carefully about side effects (more on that later).

Understand the different types of test

Most of the time in TDD we write unit tests: tests that verify small units of code in isolation. These are not the only type of test though. Integration tests and functional tests are also valuable, but you have to know when to use them. If you're hazy on the differences, then it's worth learning. I recommend starting with Eric Elliott's helpful introduction.

Functional tests test end-to-end functionality, usually by simulating clicking and typing in a browser. I often see beginners writing functional tests in their first attempts at TDD. Unfortunately this sets them up for a world of hurt. Functional tests are usually slow to run, and complicated to create. People spend a lot of time setting up headless browsers and test harnesses. And the feedback loop slows to a crawl. TDD becomes a confusing chore.

Integration tests check that separate bits of a codebase work together. We use them more often than functional tests, but they can be tricky. Integration tests work best when testing separate parts of your own codebase. They are also useful for testing that your code works with third-party libraries. But this is usually where side effects sneak in.

To be clear, I am not saying that you should never use functional tests or integration tests. They are both important. But do know where and when to use them. Sometimes that means writing tests outside of your TDD practice.

Know when not to use TDD

Sometimes TDD is not the best tool for the job. For most projects, it's awesome, but there are cases where it's not. It may need changes or some lateral thinking to make it work... or it may not be worth doing TDD for that project. For example, imagine you are creating a module that is a thin wrapper around a REST API (or something similar). In that case, pretty much all your tests will be integration tests, and will be slow. You can still practice TDD, but keeping it fun might involve breaking the rules. You might only run one test at a time or only test certain subsets of the project. Or, you might skip TDD entirely and write tests as a separate development task.

Balance the tradeoff of test creation versus test runtime

Generally, we want tests to run fast so we have a quick feedback loop. We don't want to wait around for a bunch of slow tests to finish. Sometimes writing fast tests is complicated though. You have to think carefully about what bits to mock or stub, and even just writing out test data can be tedious. So there is a tradeoff between the time it takes to run a test and the time it takes to create it. Both should be as short as possible, but sometimes you have to trade one off against the other. If it's taking hours to figure out how to configure a test so that it can run offline, maybe it's not worth the effort. Maybe for this test it's worth just making sure that you have network access when it runs.


Tactics

Tactics are lower-level than strategy. They help get things done, and support the big-picture strategy. But, if the strategy is off, tactics alone won't be enough to save you.

Don't waste time searching for the perfect test framework

It is tempting to noodle around trying out all the various test runners to see which one suits you best. The truth is, all the popular ones are popular for a reason: they work. Each one is different, yes, but they're all more than capable of getting the job done. Mr Elliott and Mr Bevacqua argue that Tape is the best, and I agree with them. But, I still use Mocha because of that switch that makes my test report a Nyan cat, which makes TDD more fun. And you know what? Mocha works just fine.

Write and test pure functions

Adopting a functional programming style that emphasises pure functions makes testing much easier. To write pure functions, you have to know where the side effects in your code are. You also need to know how to factor them out if necessary. Side effects just so happen to be most of the things that will make your tests slow. This includes network access, file IO, database queries, and so on. If you can factor these out (with stubs or mocks or whatever), then your tests will run faster, and be more fun.

Prefer 'equals' assertions

Most unit tests that I write follow a predictable pattern. It looks something like this:

describe('#functionIWantToTest()', function() {
  it('should return foo when passed bar', function() {
    var input = 'bar',
        expected = 'foo',
        actual = functionIWantToTest(input);
    expect(actual).to.equal(expected);
  });
});

That last line rarely changes except to swap equal with deep.equal. This keeps the test simple to read, and simple to reason about. Defining actual and expected makes it easier to discover what went wrong when a test fails. Keeping things simple keeps things fun.

If 'equal' and 'deepEqual' were the only assertions available anywhere, the testing world would probably be better off for it.[2]

Prefer stubs over mocks

Stubs and mocks are not the same thing. "Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test."[3] Mocks, on the other hand, are "objects pre-programmed with expectations which form a specification of the calls they are expected to receive."[4] In other words, mocks are fake objects with tests inside them to make sure you're calling the API right.

Sometimes mocks are handy. Most of the time though, they are an unnecessary complication. Using them feels like it's saving you time when really it's just papering over fuzzy thinking. Stubs have just enough code to get the job done, and no more. At first glance, a stub may seem like more work. Using a mocking library seems like it will save effort: the library takes the real object and copies the relevant bits for you. Easy. In practice, I've found that this black magic rarely works as expected. I end up spending inordinate amounts of time working out what the mocks are actually doing, time I could have spent writing stubs and testing code. Writing stubs increases my understanding of what is actually going on. If a stub gets too complicated, it's usually a sign that I should be breaking the code into smaller pieces.

Run unit tests on the command line

Running tests in a browser has many disadvantages:

  • Browsers are slow to load. Even when using an automatic watcher to reload the page, the command line still feels faster.
  • If you're trying to automate testing, it's complicated to set up a headless browser. And again, it is always slower than just running the tests in Node.
  • When testing in the browser it's tempting to use window and document global variables. Unfortunately, relying on these makes code less portable and more difficult to reason about. If you can factor those out with stubs, your tests will be faster and your code will be easier to understand.

I will admit, this tactic is hard for beginners as it requires a change of mindset. Portable code keeps business logic separate from presentation. But writing code like this is a skill that takes practice to learn. I suggest two sub-tactics to get started down this path:

  1. If you're just starting out, try using tools like jsdom or cheerio to stub the DOM and/or jQuery. This way you can still create tests that check DOM changes for you. But, you will be able to avoid the overhead of a full browser environment.
  2. Once you're used to stubbing out the DOM, challenge yourself to see how much code you can write without it. In a similar way, see how much you can achieve by only adding or removing classes to change state.

Just to be clear, I am not saying you should never test in a browser. You should test in browsers often. But do it as part of a broader testing (and continuous integration) plan, not TDD.

Don't be afraid of synchronous file reads in tests

I need to say this carefully, because it is borderline Node heresy. Sometimes the fastest, simplest way to write a test will be to load data from a file synchronously. For example:

var fs = require('fs');

describe('#functionIWantToTest()', function() {
  it('should return a big array when passed a big JSON thing', function() {
    var input = fs.readFileSync('/path/to/big-JSON-thing.json'),
        expected = JSON.parse(fs.readFileSync('/path/to/big-array.json')),
        actual = functionIWantToTest(input);
    expect(actual).to.deep.equal(expected);
  });
});

If you can help it, never use fs.readFileSync in your application code. But for testing, in my opinion, it is OK. You have to read the data from disk at some point. Either it's from your test code file, or from another file. Yes, in theory, other tests could be running while waiting for the data to read from disk. But, that also adds complexity and time to creating the test. I would rather keep my tests simple. Save that kind of performance optimisation for the application code.

I realise this might sound contradictory. So far most of this advice has been about keeping tests fast. But this is a classic trade-off: time to write tests versus time to run tests. If your tests are getting slow, then by all means go through and refactor your tests. Remove the synchronous calls and replace them with asynchronous ones. Just be sure that file IO is actually the source of the problem before you do.

Remember the refactoring step

I have heard people argue that TDD makes them feel less creative. I suspect this is because many people don't always follow the TDD process in full. Kent Beck describes the TDD process as follows:

  1. Red: write a little test that doesn't work, perhaps doesn't even compile at first
  2. Green: make the test work quickly, committing whatever sins necessary in the process
  3. Refactor: eliminate all the duplication created in just getting the test to work

I suspect the way a lot of people actually practice TDD (including myself on a bad day) is like this:

  1. Red: write a medium-complexity test that doesn't work;
  2. Green: make the test work by writing the most elegant code I can come up with; and then
  3. Completely skip the refactoring step.

I find that working in this way does stifle my creativity because with TDD I work with single units of code. If I write an 'elegant' solution straight away, I limit the 'elegance' to that single unit. Sticking to the rules encourages two things:

  1. It encourages me to make my code only as complicated as it needs to be, no more.
  2. If I am refactoring as a separate step, it encourages me to look at the broader codebase, not just one unit.

Done right, refactoring is one of the more enjoyable parts of programming. Deleting huge swathes of code, eliminating duplication, making things run faster: these are a coder's most refined delights. And remember, you can refactor tests too. Don't fudge steps 2 and 3 thinking it will save you time. It may save a small amount in the short term, but you will build up more technical debt. Not to mention missing the fun of refactoring.


This is actually more of a strategy than a tactic, but I wanted to save it until last. Perhaps it's because I'm Australian but it seems to me that a lot of people take testing and TDD way too seriously. To (badly) paraphrase Jesus though: TDD is for the programmer, not the programmer for TDD.[6] If TDD helps you have more fun coding, then that's awesome. If it doesn't, then it's OK to leave it alone. People are different, and that's A Good Thing.

I hope these tips have been helpful. If you have any questions or corrections, then please do let me know via Twitter.
