Mocking HTTP requests with Nock

July 16, 2018




This is a ‘how to’ article on using Nock to mock HTTP requests during tests.

We will look at:

  • Why mock HTTP requests during testing?
  • What is Nock?
  • Code examples of both nock and nock.back

When dealing with code that depends on external services, maintaining test coverage and writing reliable tests can be challenging. Tests that make real HTTP requests to external services can be error-prone for a variety of reasons: the exact data returned may change on each request, network connectivity may fail, or requests may be rate limited.

Unless the test is explicitly designed to test an external service’s availability, response time, or data shape, then it should not fail because of an external dependency.

Intercepting and controlling the behaviour of external HTTP requests returns reliability to our tests. This is where Nock comes in.

Nock is an HTTP server mocking and expectations library for Node.js.
Nock can be used to test modules that perform HTTP requests in isolation.
Nock works by overriding Node’s http.request function. It also overrides http.ClientRequest to cover modules that use it directly.

Nock allows us to avoid the mentioned challenges by intercepting external HTTP requests and enabling us to either return custom responses to test different scenarios, or store real responses as ‘fixtures’, canned data that will return reliable responses.

Using canned data does come with risks, as it can go stale if not refreshed periodically. Without specific additional tests, or pinned API versioning, this could mean that a change in the shape of the data an API returned would not be caught. It is the developer’s responsibility to make sure practices are in place to avoid this.

An example from my current employer is seen in our end-to-end tests. These use Nock fixtures, as when run during our continuous delivery pipeline they would occasionally suffer from timeout failures. However, each time a developer runs these tests locally the fixtures are automatically deleted and regenerated, keeping them up to date.

Nock is currently used in two main ways:

  • Mocking individual responses specified by the developer, using nock
  • Recording, saving, and reusing live responses, using nock.back

Either can be used within individual tests. If both are used within the same test file then currently the nock.back mode must be explicitly set, and reset, before and after use. We will look at this in detail later.

Let’s set up a project, add Nock, then look at nock and nock.back with some code examples.

We will be building this example project. It contains some simplistic functions that call a random user generator API, perfect for testing out Nock. It uses Jest as its test runner and for assertions.

Three functions exist in the example to be tested: getting a random user, getting a random user of a set nationality, and getting a random user but falling back to default values if unsuccessful. One example:

const getRandomUserOfNationality = n =>
  fetch(`https://randomuser.me/api/?nat=${n}`) // URL illustrative; elided in the original
    .then(res => res.json())
    .catch(e => console.log(e));
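The fallback variant could look something like this (a sketch, not the project’s exact code; the function name, default values, and API URL are all illustrative, and it assumes Node 18+’s global fetch):

```javascript
// Illustrative canned defaults to fall back to.
const defaultUser = { results: [{ name: 'Default', nat: 'GB' }] };

// Fetch a random user, falling back to defaults on any HTTP or network failure.
const getRandomUserWithFallback = (url = 'https://randomuser.me/api/') =>
  fetch(url)
    .then(res => {
      if (!res.ok) throw new Error(`Request failed: ${res.status}`);
      return res.json();
    })
    .catch(() => defaultUser);
```

Keeping the failure handling inside the function is what makes the 500 test later on possible: the test only has to assert on the returned value.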

As we are using nock.back, a nock.js helper file is also needed; we will look at this later.

The Nock docs explain this pretty well. Many options are available to specify the alteration of the request, whether in the request matched or the response returned. Two examples of this would be the response returned from a successful request, and forcing a 500 response to test a function’s fallback options.

All that would need to be added to an existing test file to start using nock is const nock = require('nock'); / import nock from 'nock';.

In the first test we use a string to match the hostname and path, and then specify a reply code and body. We then add our assertion to the Promise chain of our function call. When the outgoing request from getRandomUser() is made it matches the Nock interceptor we just set up, and so the reply we specified is returned.

it('should return a user', () => {
  // Hostname and path are illustrative; match them to the API your code calls.
  nock('https://randomuser.me')
    .get('/api/')
    .reply(200, {
      results: [{ name: 'Dominic' }],
    });

  return query
    .getRandomUser()
    .then(res => res.results[0].name)
    .then(res => expect(res).toEqual('Dominic'));
});

Similarly, we mock a call with a specific nationality, though this time we use a RegExp to match the hostname and path.

it('should return a user of set nationality', () => {
  nock(/randomuser/)
    .get(/nat=GB/)
    .reply(200, {
      results: [{ nat: 'GB' }],
    });
  return query
    .getRandomUserOfNationality('GB')
    .then(res => res.results[0].nat)
    .then(res => expect(res).toEqual('GB'));
});

It’s important to note we are using afterAll(nock.restore) and afterEach(nock.cleanAll) to make sure interceptors do not interfere with each other.

Finally we test a 500 response. For this we created an additional function that returns a default value if the API call does not return a response. We use Nock to intercept the request and mock a 500 response, then test what the function returns.

it('should return a default user on 500', () => {
  nock(/randomuser/).get(/api/).reply(500);
  return query
    .getRandomUserWithFallback() // function name illustrative
    .then(res => expect(res).toMatchObject(defaultUser));
});

Being able to mock non-200 response codes, delayed connections, and socket timeouts is incredibly useful.

nock.back is used not just to intercept an HTTP request, but also to save the real response for future use. This saved response is termed a ‘fixture’.

In record mode, if the named fixture is present it will be used instead of making live calls; if it is not, a fixture will be generated and used for future calls.

In our example project only one HTTP call is being made per test, but nock.back fixtures are capable of recording all outgoing calls. This is particularly useful when testing a complex component that makes calls to several services, or during end-to-end testing where again a variety of calls can be made. A main advantage of using fixtures is that once created they are fast to access, reducing chances of timeouts. As they use real data then mocking the structure of the data is not necessary, and any change can be identified.

As mentioned earlier, it is important to delete and refresh fixtures regularly to ensure they do not go stale.

A current ‘feature’ of nock.back is that when used in the same test file as standard nock interceptors they can interfere with each other, unless any nock.back tests are bookended per test as so:

nock.back.setMode('record');

// your test

nock.back.setMode('wild');

This ensures that any following tests do not unintentionally use the fixtures just generated. If not done then, for example, the 500 response would not be given in our previous test, as the fixture contains a 200 response.

We must first set up a nock.js helper file. In the example this is doing three things:

  • Setting the path of where to save our fixtures to
  • Setting the mode to record so that we both record and use fixtures when tests are run, rather than the default dryrun that only uses existing fixtures but does not record new ones
  • Using the afterRecord option to perform some actions on our fixtures to make them more human readable

This is then accessible in test files using const defaultOptions = require('./helpers/nock'); / import defaultOptions from './helpers/nock';.

nock.back can be used with either Promises or async/await; the example project gives examples of each. Here we will look at the latter.

it('should return a user', async () => {
  const { nockDone } = await nock.back('user-data.json', defaultOptions); // fixture name illustrative
  const userInfo = await query.getRandomUser();
  expect(userInfo).toMatchObject({
    results: expect.any(Object),
  });
  nockDone();
});

We first mark the test as async, to allow us to use await. We set the mode to record. We pass in the name of the file we would like to save our fixtures as, and the defaultOptions set in our nock.js helper to make them more human readable. Awaiting this provides us with the nockDone function, to be called after our expectations have run.

After calling getRandomUser() we can now compare its result in our expectation. For simplicity of demonstration we just assert that it will contain results, that itself contains an Object.

After this we set the mode to wild, as in this case we want to ensure the fixture is not used by other tests.

The fixtures themselves can be seen in the directory specified in the nock.js helper, and are interesting to look at.

Nock provides a very powerful tool for adding reliability to tests that call external services, and allows greater test coverage, as tests that may previously have been seen as too flaky to implement can be reassessed.

As with any mocks, it is the developer’s responsibility to make sure that mocking does not go too far: it must still be possible for the test to fail on a change in functionality, or it is of no value!

Thanks for reading, I hope you found this useful 😁


