Test-Driven Development - Why?


Automated tests

The core of test-driven development is a suite of automated tests. Why bother writing them at all?

  1. How does a developer know that the feature they wrote works? They could test it manually after finishing it. But what if:
    • It's a feature that relies on a complex and time-consuming set of steps just to get the software into the right state before it can be tested?
    • It needs to be tested in X browsers across Y devices?
    • The product is a back-end system without a user interface, only accessible through a machine-readable protocol?
    • It's a feature that is statistical in nature, requiring hundreds or thousands of steps before success or failure can be evaluated?
  2. Any change to a system can cause a previously developed feature to break. The only way to be certain this has not happened is to re-test all the features and edge cases after each change.

Common to both points is that manual testing takes a significant amount of time. It is simply not feasible to manually test anything but the most trivial products with an appropriate level of depth and scope. Automated tests become insurance against unintended change - if a feature is changed or broken by accident, the tests will catch it before the customer does.

A significant benefit of automated tests is that they provide documentation on how the product should function. This is especially important in complex products developed and maintained over many years, where developers cannot keep all of the product logic in their heads. Being able to read the tests and understand how the product is intended to work, without having to spend time talking to analysts, is a considerable time saver.
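
As a rough illustration of what that can look like in practice - a sketch in Python with pytest, using an order-discount rule invented purely for this example - descriptive test names can read like a specification of the business rule:

    # A sketch in Python with pytest. The Order type and the discount rule
    # are hypothetical, invented purely for this illustration.
    from dataclasses import dataclass

    @dataclass
    class Order:
        total: float
        customer_is_returning: bool

    def discounted_total(order: Order) -> float:
        # Business rule: returning customers get 10% off orders over 100.
        if order.customer_is_returning and order.total > 100:
            return round(order.total * 0.9, 2)
        return order.total

    def test_returning_customer_gets_ten_percent_off_large_orders():
        assert discounted_total(Order(total=200.0, customer_is_returning=True)) == 180.0

    def test_new_customer_pays_full_price():
        assert discounted_total(Order(total=200.0, customer_is_returning=False)) == 200.0

A developer who has never seen discounted_total can read the two test names and know the intended behaviour without opening a requirements document.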

Why write the tests first?

If the change to the production code is made before the test is written, the developer can't be sure whether the test passes because the production code is correct or because the test itself is defective - a test that has never been seen to fail may be incapable of failing. Therefore it is critical that the test is written first and observed to fail. Taking this further, if every change is driven by a failing test, test coverage will naturally be high, the development team will gain trust in the test suite, and changes can be deployed to customers quickly.
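
A minimal sketch of that red-green cycle, again assuming Python and pytest, with a hypothetical slugify function invented for the example:

    # Step 1 (red): write the test first, before slugify exists anywhere.
    # Running pytest at this point fails with a NameError, which proves the
    # test is actually capable of failing.
    def test_slugify_lowercases_text_and_replaces_spaces_with_hyphens():
        assert slugify("Hello World") == "hello-world"

    # Step 2 (green): write just enough production code to make the test pass.
    def slugify(text: str) -> str:
        return text.strip().lower().replace(" ", "-")

    # Step 3 (refactor): improve the implementation with the passing test
    # acting as a safety net against unintended changes in behaviour.

In a real project the test and the production code would live in separate files; they are shown together here only to keep the sketch self-contained.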

It also forces the developer to think about the actual business requirement. Writing a failing test requires a clear understanding of what needs to be done, which encourages the development team to clarify the requirements with the business and confirm that what is being developed is what was actually wanted. Failing tests focus developers’ attention and give a clear indication of when a feature is done. From a business perspective, test-driven development enables changes to be made effectively and with confidence, allowing the team to react quickly to market changes, customer feedback, competition and regulatory changes.

It's still just a tool

Having said all of the above, we do keep in mind that TDD is just a tool and should be used when appropriate. There are instances when it's not a good use of budget, for example when creating throwaway prototypes or exploring and learning how to use unfamiliar tools.

From the perspective of our customers, it's a question of whether they'd like a maintainable, robust product with a long expected lifetime, or a prototype they can show to investors in order to decide whether the idea is worth pursuing further.