How I Kept Motivated To Use Test Driven Development

November 18, 2018

I love both Mocha and BenchmarkJS. I also think the Chrome performance analysis tools are fabulous. However, with a focus on test driven development in one of my larger efforts, I found myself spending more and more time managing two sets of test suites with differing APIs, and an awful lot of time digging through the Chrome performance console. As a result, I started to neglect building new tests and avoided performance tuning in favor of adding functionality … which made my neglect of building tests even worse! To keep my motivation up, I wanted to reduce the work required to manage test suites, so that I was not faced with a negative incentive around test driven development and could more rapidly zero in on the impact of performance tuning efforts.

I originally started by building a wrapper around BenchmarkJS to support some unit testing capability, but I found the Benchmark architecture ill suited to the task. (Note, the authors can't be blamed for this, since that was not one of their functional objectives.) I then inverted my approach and built a wrapper around Mocha. I got lucky: this turned out to be far simpler and perfectly aligned with my needs.

With Benchtest, performance testing can be added to browser-based Mocha testing in as little as one line of code and a modification to the titles of tests. Just wrap your call to mocha.run() inside a call to benchtest! Then add a # sign to the end of any test titles you wish to benchmark.



it("sleep 100ms #", function(done) {
    const startTime = Date.now();
    while (Date.now() < startTime + 100) { ; }
    done();
});


it("sleep 100ms", function(done) {
    const startTime = Date.now();
    while (Date.now() < startTime + 100) { ; }
    done();
});
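As a standalone illustration (plain JavaScript, independent of Benchtest or Mocha), here are the two timing styles the sample tests exercise: a synchronous busy-wait and a Promise-based sleep, the latter matching the "sleep 100ms Promise #" case in the results further down. The function names here are mine, not part of any library:

```javascript
// Synchronous busy-wait: blocks the thread until `ms` milliseconds pass.
function busyWait(ms) {
    const startTime = Date.now();
    while (Date.now() < startTime + ms) { /* spin */ }
}

// Promise-based sleep: resolves after `ms` milliseconds without blocking.
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}
```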

With Node.js, it takes as little as three lines of code.

const benchtest = require("benchtest");
afterEach(function () { benchtest.test(this.currentTest); });
after(() => benchtest.report()); // report() assumed here; the original line was truncated

Once your tests are enhanced, browser-based results will be augmented with ops per second, the +/- variation in ops, and the sample size used.

no-op # Infinity sec +/- 0 100 samples
sleep 100ms # 10 sec +/- 2 11 samples 108ms
sleep 100ms Promise # 10 sec +/- 2 11 samples 101ms
sleep random ms # 21 sec +/- 99 100 samples 45ms
loop 10000 # Infinity sec +/- 0 95 samples
use heap # Infinity sec +/- 0 95 samples
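As a side note, the Infinity readings above fall straight out of the arithmetic: when every sample completes in under a millisecond, the measured duration is 0 and ops per second becomes 1/0. A rough sketch of that arithmetic (my own illustration, not Benchtest's actual internals):

```javascript
// Illustrative only: derive ops/sec and sample size from raw sample
// durations in milliseconds. Not Benchtest's real implementation.
function summarize(samplesMs) {
    const meanSec = samplesMs.reduce((a, b) => a + b, 0) / samplesMs.length / 1000;
    return {
        opsPerSec: 1 / meanSec, // Infinity when every sample measures as 0ms
        sampleSize: samplesMs.length
    };
}
```

For example, samples averaging 100ms yield 10 ops per second, matching the sleep 100ms row above.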

On Node.js or in your browser console a Markdown table can be generated:

| Name                  | Ops/Sec  | +/- | Sample Size |
| --------------------- | -------- | --- | ----------- |
| no-op # | Infinity | 0 | 99 |
| sleep 100ms # | 10 | 7 | 21 |
| sleep 100ms Promise # | 10 | 26 | 21 |
| sleep random ms # | 21 | 96 | 35 |
| loop 10000 # | Infinity | 0 | 85 |
| use heap # | Infinity | 0 | 96 |
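Assembling such a table is straightforward; a minimal sketch (a hypothetical helper of my own, not part of Benchtest's API) might look like:

```javascript
// Hypothetical: render result rows into a Markdown table like the one above.
function toMarkdown(results) {
    const lines = [
        "| Name | Ops/Sec | +/- | Sample Size |",
        "| --- | --- | --- | --- |"
    ];
    for (const r of results) {
        lines.push(`| ${r.name} | ${r.opsPerSec} | ${r.margin} | ${r.sampleSize} |`);
    }
    return lines.join("\n");
}
```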

Benchtest takes a number of start-up options, similar to Benchmark.js, to control the minimum and maximum number of test cycles. You can also specify a sensitivity that tells Benchtest to stop testing once results are within a specified percentage of each other. See the documentation for more info.
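To make the sensitivity idea concrete, here is a rough sketch of such a stopping rule. The option names (minCycles, maxCycles, sensitivity) are my own assumptions for illustration; check the Benchtest documentation for the real ones:

```javascript
// Illustrative stopping rule: stop once the last few ops/sec readings
// agree to within `sensitivity` percent, bounded by min/max cycle counts.
// Option names here are assumed, not Benchtest's actual API.
function shouldStop(readings, { minCycles = 5, maxCycles = 100, sensitivity = 1 } = {}) {
    if (readings.length >= maxCycles) return true;   // hard upper bound
    if (readings.length < minCycles) return false;   // not enough data yet
    const recent = readings.slice(-minCycles);
    const min = Math.min(...recent);
    const max = Math.max(...recent);
    return ((max - min) / min) * 100 <= sensitivity; // spread as a percentage
}
```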

There are plenty of people who claim performance testing and functional unit testing are two separate things that should not be tied together. I tend to disagree. I find it useful to focus on functional testing first and then just add a # to the end of my test case names to get some performance data. I can then use this to rapidly test tuning attempts. I also find it useful for doing cross-browser comparisons, as in:

Optimizing Array Analytics In JavaScript: Part One — Iterating Functions

Optimizing Array Analytics In JavaScript: Part Two — Search, Intersection, and Cross-Products

If you do think functional and performance testing should be kept completely separate, there is nothing to stop you from continuing to maintain distinct files but leverage a single testing tool, Mocha, with the light-weight wrapper provided by Benchtest.

Give this article a clap to spread the word if you find the concept of Benchtest useful!
