I use automated testing to some extent in almost every one of my projects: personal, freelance/consulting, and full-time. I never strive for 100% coverage. I believe testing the least amount possible, in the right places, is the best approach. Tests must be maintained alongside your code, so they're an investment. I've seen many test suites break down and fail because they were too verbose, annoying to update, or focused on total coverage at the expense of stopping production bugs in critical areas.
It's really hard to give specific advice without knowing the business requirements or what a codebase looks like. Thankfully, there are a few common areas to check when deciding what to test.
The most important areas to test are those that are business-critical. On an e-commerce site, that's anything surrounding the checkout, the cart, and adding/removing products. If any of these areas went down, you'd definitely be getting a PagerDuty request to fix a P1 issue. Say goodbye to relaxing at home on the weekend!
Another way to add test coverage is to look at existing error-reporting data. I'm generally able to trace a large number of user errors back to a single feature or chunk of a codebase. Those features/chunks can have tests added for their expected behavior, followed by bug fixes and refactors to stop the errors from being thrown in production.
One last thing that may warrant test coverage is a function or component that has many callers or a large surface area. If something breaks there, many things break downstream. That can also be a sign that more abstraction is needed: it may be time to break the logic down into smaller, more testable chunks.
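As a sketch of what that breakdown can look like (the names here are hypothetical, not from any real codebase): a pricing calculation buried inside a sprawling checkout component can be pulled out into a pure function that's trivial to unit-test on its own.

```javascript
// Hypothetical example: pull pure logic out of a large component.
// Before extraction, this math lived inside a big Checkout component
// and could only be exercised through the UI.

// After extraction it's a pure function with an obvious unit test.
function calculateCartTotal(items, taxRate) {
  const subtotal = items.reduce(
    (sum, item) => sum + item.price * item.quantity,
    0
  );
  // Round to cents to avoid floating-point noise in the result.
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}

// Trivial to test in isolation -- no rendering, no store, no mocks.
const total = calculateCartTotal(
  [{ price: 10.0, quantity: 2 }, { price: 5.0, quantity: 1 }],
  0.08
);
console.log(total); // 27
```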
Alright, this is all a bit abstract; I think a few examples would help. Within a React/Redux e-commerce app I'd start with the following:
- Test Redux Actions for “Checkout Process” (Enzyme)
- Test User Flow for “Checkout Process” (Cypress)
- Test Redux Actions for “Cart Actions — Product Add/Remove” (Enzyme)
- Test User Flow for “Cart Actions — Product Add/Remove” (Cypress)
- Test Redux Actions for “User Login” (Enzyme) [this has a different priority if login is required for purchase; the ordering here assumes guest checkout is possible]
- Test User Flow for “User Login” (Cypress)
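To make the "Cart Actions" item above concrete, here's a minimal sketch of what such a unit test covers. All the names (`addProduct`, `removeProduct`, `cartReducer`) are hypothetical; a real suite would exercise your store's actual actions through a test runner rather than bare assertions.

```javascript
// Hypothetical Redux-style cart slice, sketched for illustration.
const ADD_PRODUCT = 'cart/ADD_PRODUCT';
const REMOVE_PRODUCT = 'cart/REMOVE_PRODUCT';

// Plain action creators.
const addProduct = (id) => ({ type: ADD_PRODUCT, payload: { id } });
const removeProduct = (id) => ({ type: REMOVE_PRODUCT, payload: { id } });

// Reducer holding an array of product ids currently in the cart.
function cartReducer(state = [], action) {
  switch (action.type) {
    case ADD_PRODUCT:
      return [...state, action.payload.id];
    case REMOVE_PRODUCT:
      return state.filter((id) => id !== action.payload.id);
    default:
      return state;
  }
}

// The checks a test runner would make against expected behavior:
let state = cartReducer(undefined, { type: '@@INIT' });
state = cartReducer(state, addProduct('sku-123'));
state = cartReducer(state, addProduct('sku-456'));
state = cartReducer(state, removeProduct('sku-123'));
console.log(state); // [ 'sku-456' ]
```

The Cypress flows in the list would then cover the same behavior end to end, through the real UI, so the unit tests and the flow tests fail for different classes of bugs.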
This would prevent the most direct customer-facing issues, the ones that stop customers from completing a purchase. New tests can be added using the same criteria.
Sometimes it may seem like adding and maintaining these tests is a lot of work for nothing. You still see bugs hit production regardless of tests; there are always issues to fix. At times it may seem like your tests don't even work. I know it can also be exhausting to fight for testing time in some organizations; it can feel like a lost cause.
Let me describe a recent situation that really sold me on automated testing. I was already a staunch advocate; I didn't think it was possible to sell me on the concept any further.
Seemingly out of nowhere, we started to see a lot of new, high-occurrence errors. We had recently pushed out an update that we suspected might cause issues because of the number of areas the new feature touched. Multiple P1 and P2 issues were being filed daily, and we were playing whack-a-mole trying to keep up and fix them.
This went on for two weeks and didn't seem to be slowing down any time soon. I was unfortunately on call during those two weeks, and I lost many nights and weekends to it. I was also unable to get any normal day-to-day work done, since all of my time went to dealing with priority issues in our production environment.
Why was this happening? We had tons of automated test coverage running automatically in TravisCI, and it was all passing. Or so we thought…
During a checkup of our CI environment (we'd been making many improvements to our testing pipeline), we discovered that, due to a misconfiguration, Travis was only running 20% of our tests while reporting that they all passed. Once it was reconfigured, it turned out that many of our tests had been failing the entire time.
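I won't share our exact Travis configuration, but as a hypothetical sketch, here are two common ways a CI script can silently under-test: a glob that only matches part of the suite, and a swallowed exit code that always reports success. Neither is necessarily what bit us; they illustrate the class of misconfiguration.

```shell
#!/bin/sh
# Hypothetical sketch -- not our actual Travis config.

# Failure mode 1: the test command only globs part of the suite,
# so e.g. integration tests are never executed at all:
#   npm test -- "test/unit/**"

# Failure mode 2: a step once "unblocked" with `|| true` always
# exits 0, so CI reports green even when the command fails:
false || true
echo "exit status: $?"   # prints 0 -- the build is marked passing
```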
Wow. Once this was discovered, we were able to quickly fix issues based on the failing tests. Our error rates dropped back to normal: no more massive influx of P1/P2 issues, no more lost nights and weekends.
This two-week period, while terrible to experience, validated the benefits of testing to everyone, especially those outside the development team. The same people who had complained about features not being released quickly enough were now demanding that testing be extended, even at the cost of frequent feature releases. Testing is now a concrete standard within the organization.
The incident also showed the importance of regular maintenance and checkups on your testing/CI system, and of testing large features locally rather than depending entirely on automated systems. Things can always go wrong; it doesn't hurt to double-check before major deployments.
I'm always looking to improve my articles. If you'd like to leave feedback (I would love it if you did!), you can find the Google Form here. It's very short, I promise!