Everything That's Not Tested Will Break

May 08, 2018


Full disclosure: I'm a big fan of testing, especially TDD. And, mostly, I am talking about professional software development (you actually get paid for what you code). This article is for all of those who either do not test or are not allowed to test. It still amazes me how many companies, departments or teams do not test their code properly.

But why should I test? Because everything that is not tested will break. It's just a matter of time.

To be very clear, I'm not saying 100% test coverage is the goal, because there is always some code that simply cannot break - like a simple entity class. But everything else should be covered at some point. I'm also not saying that all untested code will have bugs, but once you have a bug, it's very likely to be located in an untested piece of your code base. Since you don't know in advance where the next bug will appear, writing tests reduces the probability of it appearing at all.

To avoid doing that, there are plenty of battle-tested techniques I've seen so far:

Compiler-Driven-Design (CDD)

I mentioned that in my WTF article. When you are practicing CDD, you must be a very optimistic person. You look up stuff on Stack Overflow, copy-paste, modify and comment out code until it compiles.

Job done. Commit. Let's watch some cat videos.

Believe-Based-Design (BBD)

This is a variant of CDD where you are very, very sure that there are no bugs in your code because you looked at it and ran it on your PC once. It will definitely run in production.

Let's watch some cat pictures and rely on the idea of...

Let the Customer Test

Great idea! If you're lucky and a customer actually exists, let him test the software, finding and reporting bugs. Isn't that what warranty is for? Just make sure the customer feels bad by rejecting bugs for not using your issue template.

Manual Testing

Now we're getting pro. You actually perform some testing yourself! How exciting!

While watching cat pictures, you click on the button you've just implemented and it works! Nice job. Let's implement the next feature.

When the customer calls, you yell at him for being stupid - after all, you did test it - and hang up.

What you don't know yet is that, while implementing the second feature, you actually broke the new button.

Having learned it the hard way, you plan to test all the old features whenever a new one is implemented. You write down all the things to test as a reminder for next time. Let's call it a test specification.

But as the product grows, manual testing just takes more time than you can afford. Your manager yells at you because you're not writing features anymore.

Then, your product is about to be released, the customer is waiting, but you can't finish all the features including the tests. So you skip the tests, with a nagging feeling that something's wrong.

And you're right: as soon as the customer uses your product, it breaks badly. Finding the cause takes you a couple of days; fixing it takes two weeks.

We just saw some of the pitfalls of software development. What can we do? In my humble opinion: Testing.

But testing what?

Testing Scopes

There are different scopes you can test:

Unit Tests (Abstraction Level: Low, Cost: $)
They cover "units" of code, usually but not necessarily a class. Unit tests cover most parts of the system, especially all the edge cases and error conditions, as those are easiest to model at this scope. It's like taking a magnifying glass and checking the welding spots of a car.

Integration Tests (Abstraction Level: Medium, Cost: $$)
They take larger parts (i.e. multiple units) of the system as a whole. They usually test the interaction between multiple classes and external interfaces such as HTTP, databases, message queues etc. Those interfaces are either mocked or replaced by in-memory, in-process substitutes that allow running the tests on your computer without any external systems in place. Sped up this way, integration tests can even run within a unit-test suite. They are like making sure the doors actually fit into the chassis.

System Tests (Abstraction Level: High, Cost: $$$)
They test the whole system as a black box by interacting with external interfaces such as UI, HTTP, MQ etc. Those tests are the most expensive to write and the longest-running, which is why they usually only cover the main business use cases from end to end. It is very important to have an environment identical, or at least very close, to your production system, including server specs, OS, database etc. System tests check that your car is actually able to drive and that the brakes actually stop the car when you push down the pedal, without anything else breaking.

Acceptance Tests (Abstraction Level: Feature, Cost: $$$$)
They cover especially the main use cases that are relevant to the customer or product owner. The idea is that those tests can be written by the customer, and once they are green, the feature is accepted. You could view it as something like: my car can get me from home to work in 30 minutes with a mileage of 8 l/100 km (about 29 mpg).
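To make the cheapest scope concrete, here is a minimal unit-test sketch in Python's built-in unittest. The `parse_price` function is a hypothetical unit under test, not something from this article; note how the error case is just as cheap to cover as the happy path:

```python
import unittest

def parse_price(text):
    """Parse a price string like '12.50 EUR' into cents (hypothetical unit under test)."""
    amount, currency = text.strip().split()
    if currency != "EUR":
        raise ValueError("unsupported currency: " + currency)
    return round(float(amount) * 100)

class ParsePriceTest(unittest.TestCase):
    # The happy path ...
    def test_parses_euro_amount(self):
        self.assertEqual(parse_price("12.50 EUR"), 1250)

    # ... and an error condition, which is easy to model at unit scope.
    def test_rejects_unknown_currency(self):
        with self.assertRaises(ValueError):
            parse_price("12.50 USD")

if __name__ == "__main__":
    unittest.main()
```

The same behavior tested through the UI would be a system test: far more expensive, and no magnifying glass.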

There are a couple of other kinds of tests, like performance, endurance and smoke tests, just to name a few, which I will not cover here.

From top to bottom, the effort and thus the cost to write tests usually increases while the level of detail decreases. This is described in Martin Fowler's article on the Test Pyramid.

Testing scope is an important but difficult beast to master. It does not make sense to cover 100% of the code in all scopes; it's just too much effort. So you always need to decide what to cover in which scope. It's always a trade-off and a matter of experience, but it's highly unlikely that you can build software in just a single scope. Since unit tests are much easier to write, cheaper and faster, you should cover as much code as possible with unit tests (with an emphasis on possible, because there is a reason why you need higher-abstraction tests like integration tests).

See "Two unit tests and no integration test."

Automation & Speed

The most important thing about testing is that it needs to be automated and it needs to be fast.

Automation helps to remove the mistakes that humans make. The suite needs to be fast so it can run on each and every code change and give immediate feedback. A unit test should run in milliseconds, the whole suite in a few seconds. Integration tests should also run within a few seconds, up to 30 seconds. System tests may run a few minutes, but only if it is really necessary.
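That feedback loop can even be checked by the suite itself. A sketch, with hypothetical tests and an arbitrarily chosen 5-second budget:

```python
import io
import time
import unittest

class UnitSuite(unittest.TestCase):
    # Stand-ins for your real unit tests (hypothetical examples).
    def test_sums(self):
        self.assertEqual(sum([1, 2, 3]), 6)

    def test_prefix(self):
        self.assertTrue("button".startswith("but"))

def run_with_budget(max_seconds=5.0):
    """Run the suite and report whether it passed within the speed budget."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(UnitSuite)
    start = time.perf_counter()
    # Route the runner's output into a buffer to keep the console quiet.
    result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
    elapsed = time.perf_counter() - start
    return result.wasSuccessful() and elapsed <= max_seconds
```

Wire something like this into your CI and a slow suite becomes a failing build instead of a habit people stop running.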

Automation has the advantage that you can also run test coverage analysis, which will show you uncovered lines or branches in your code. As stated above, do not try to achieve 100% code coverage. Use a coverage tool as a help to see that you might have missed something, like an untested else branch, and apply some common sense.

Just as an indication, in my team, we usually have something between 60% and 80% coverage.

Regression Testing

Automation helps you especially with regression bugs. After implementing new features, modifications or refactorings, you can just let your test suite run to find out if you broke something that worked before. This is a big difference compared to manual testing. Testing a single feature manually after or while developing it might seem enough, but from time to time a new feature will break an old one.
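A typical pattern is to pin a fixed bug with a test so it can never silently return. A sketch (function, bug and tests are hypothetical examples):

```python
def format_username(name):
    """Normalize a display name (hypothetical feature)."""
    # split() without arguments collapses runs of whitespace,
    # which is exactly what the regression test below pins down.
    return " ".join(part.capitalize() for part in name.split())

# The original feature test, written when the feature was built.
def test_capitalizes_name():
    assert format_username("ada lovelace") == "Ada Lovelace"

# Regression test added after an "innocent" later change broke names
# containing extra whitespace; from now on every suite run re-checks it.
def test_ignores_extra_whitespace():
    assert format_username("  ada   lovelace ") == "Ada Lovelace"
```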

You may have tested something very thoroughly, and then, after everything seems OK, you just change one little innocent line of code or some configuration value that surely won't affect anything. This, too, introduces bugs from time to time, especially when you don't expect it.

Reducing stress

To me, this is the most important and most underrated benefit of an automated test suite. It will cover you when things get stressful, when a release is close and managers scream at you. I have been in such situations many times. Very early in my career, I did not have a comprehensive test suite and needed to quickly fix something that affected timing and might cause concurrency issues. I remember testing it very quickly, then shipping it to the customer with drops of sweat on my forehead, trying hard to think of all the consequences the change might have in my code base and of what could break.

And obviously, it failed - in production.

A couple of years later, I was in a similar situation. However, this time I had developed the whole software with TDD, resulting in a huge test suite. From the day I put the software into production, there were no bugs. After adding an urgent new feature, all tests were green and I was very relaxed when shipping it. Nothing broke.

Customer satisfaction & Maintenance Costs

Most customers and, unfortunately, most managers who are not familiar with software and software projects in particular would probably not want to spend money on testing if you asked them. However, if you ask them "Should this piece of software be of high quality?", most of them will probably say "yes". If you ask them whether they want to pay for maintenance, for bug fixing during a project's warranty phase, or for a product with reduced features, they will say "no".

So, after a while, you learn a couple of things:
