Node.js TypeScript #14. Measuring processes & threads with Performance Hooks

May 20, 2019


When writing Node.js code, we can approach tasks in many ways. While all of them might work fine, some perform better than others, and as developers, we should strive to build applications that are both readable and fast. In this article, we learn how to measure the performance of our code using Performance Hooks, also called the Performance Timing API. As examples, we use child processes and worker threads. Keep in mind that the Performance Timing API is still experimental and might change slightly in future versions of Node.

An overview of Performance Hooks

When thinking of measuring the time that our application needs to perform tasks, the first thing that comes to mind might be the Date object.
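The original snippet is not reproduced in this copy of the article; a minimal sketch of timing a 500 ms setTimeout with the Date object might look like this:

```typescript
// A sketch of timing with the Date object: record a timestamp before
// scheduling a 500 ms timeout, and another one when the callback runs
const start = new Date().getTime();

setTimeout(() => {
  const end = new Date().getTime();
  // Logs a number slightly above 500
  console.log(end - start);
}, 500);
```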

```
502
```

It is indeed a valid solution but might prove to be lacking in certain complex situations.

If you would like to know why the number above isn’t exactly 500, check out The Event Loop in Node.js

Entries in the Performance Timeline

The above functionality can also be achieved with the performance hooks. First, let’s investigate this piece of code:
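The snippet is missing from this copy of the article; based on the surrounding description, it presumably resembled this sketch (the mark names are assumptions):

```typescript
import { performance } from 'perf_hooks';

// Creates a PerformanceMark entry in the Performance Timeline
performance.mark('startSetTimeout');

setTimeout(() => {
  performance.mark('endSetTimeout');
  // Creates a PerformanceMeasure entry describing the time
  // between the two marks above
  performance.measure('setTimeout', 'startSetTimeout', 'endSetTimeout');
}, 500);
```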

The performance.mark function creates a PerformanceMark entry in the Performance Timeline. We can use such marks later to measure the time between them.

The performance.measure function adds a PerformanceMeasure entry to the Performance Timeline. We provide it with the name of the measurement and the names of two performance marks.

Listening for changes

The above does not yet give us information about the duration of the measurement. To listen for changes in the Performance Timeline, we use Performance Observers.
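The original observer snippet is also missing here; a sketch of subscribing to PerformanceMeasure entries could look like this:

```typescript
import { performance, PerformanceObserver } from 'perf_hooks';

// The callback runs whenever new entries of the observed types
// are added to the Performance Timeline
const observer = new PerformanceObserver((list) => {
  list.getEntriesByType('measure').forEach((entry) => {
    console.log(`${entry.name}: ${entry.duration}`);
  });
});

observer.observe({ entryTypes: ['measure'] });
```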

With Performance Observers, we can attach callbacks to changes made to the Performance Timeline. As we noticed in the previous paragraph, there are different types of entries that we can add to the Performance Timeline, and we can specify which of them we want to listen for. In the example above, we explicitly say that we provide a callback for the creation of PerformanceMeasure entries, and the function that we provide is called whenever such an event occurs.

The getEntriesByType function provides us with a list of all PerformanceEntry objects of the given type, in chronological order.

There are also other methods: getEntries, which returns all entries, and getEntriesByName, which filters them by name.

Because the PerformanceObserver creates additional performance overhead, we should call the disconnect function as soon as we no longer need the observer.

Let’s put all of the code together:
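The combined snippet is missing from this copy; a reconstruction consistent with the output shown below (mark names are assumptions):

```typescript
import { performance, PerformanceObserver } from 'perf_hooks';

const observer = new PerformanceObserver((list, obs) => {
  list.getEntriesByType('measure').forEach((entry) => {
    console.log(`${entry.name}: ${entry.duration}`);
  });
  // The observer adds overhead, so we disconnect once we are done
  obs.disconnect();
});

observer.observe({ entryTypes: ['measure'] });

performance.mark('startSetTimeout');

setTimeout(() => {
  performance.mark('endSetTimeout');
  performance.measure('setTimeout', 'startSetTimeout', 'endSetTimeout');
}, 500);
```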

```
setTimeout: 500.628893
```

Measuring the duration of functions

Performance Hooks give us more possibilities. For example, we can wrap a function to measure the time of its execution. Let's compare the time it takes to import two modules: Child Process and Worker Threads.

From the first part of this series, we know that TypeScript compiles the imports that we write in our code to CommonJS. This means that we need to attach a timer to the require function.

We can access the require function like that thanks to the way that Node.js handles modules. Using the knowledge from the first part of this series, we can figure out that our code is executed similarly to this:
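A sketch of the module wrapper that Node.js applies to every CommonJS file (the body comment stands in for our compiled code):

```typescript
// Node.js wraps every CommonJS module in a function like this one;
// require is just a parameter of that wrapper, which is why we can
// reassign it inside our module
(function (exports: any, require: any, module: any, __filename: string, __dirname: string) {
  // the compiled contents of our file end up in here
});
```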

The performance.timerify method wraps our function with a timer. Now, every time we use require, its execution time is measured. Our PerformanceObserver must subscribe to the 'function' entry type to access the timing details.

```
require('child_process') 0.801596
require('worker_threads') 0.458199
```

Unfortunately, here we notice that the PerformanceEntry interface that the Node.js typings provide is a bit lacking. Every entry holds the arguments passed to the function that we pass to timerify as indexed properties, but the PerformanceEntry type does not declare them, and this is why we need a type assertion to read them.

Comparing child processes and worker threads

In the previous parts of this series, we use child processes and worker threads to calculate a factorial. Using all of the knowledge from this article, let’s compare the time it takes for this task to complete.

main.ts
child.ts
worker.ts
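The three files are not included in this copy of the article. A single-file sketch of the worker-thread half of the comparison; the inlined worker code and the mark names are assumptions, not the article's original files:

```typescript
import { performance, PerformanceObserver } from 'perf_hooks';
import { Worker } from 'worker_threads';

const observer = new PerformanceObserver((list, obs) => {
  list.getEntriesByType('measure').forEach((entry) => {
    console.log(`${entry.name}: ${entry.duration}`);
  });
  obs.disconnect();
});

observer.observe({ entryTypes: ['measure'] });

// worker.ts boils down to something like this; it is inlined here
// with eval: true so that the sketch fits in one file
const workerCode = `
  const { parentPort, workerData } = require('worker_threads');
  function factorial(n) {
    return n > 1 ? n * factorial(n - 1) : 1;
  }
  parentPort.postMessage(factorial(workerData));
`;

performance.mark('workerThreadStart');

const worker = new Worker(workerCode, { eval: true, workerData: 20 });

worker.on('message', (result) => {
  console.log(`Result from worker thread: ${result}`);
  performance.mark('workerThreadEnd');
  performance.measure('worker thread', 'workerThreadStart', 'workerThreadEnd');
});
```

The child-process half follows the same pattern: a pair of marks around child_process.fork and a measure once the child sends its result back.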

```
Result from child process: 2432902008176640000
child process: 960.897252
Result from worker thread: 2432902008176640000
worker thread: 934.154146
```

As we can see, the difference is noticeable, but not that big. Things get very interesting if we don't use TypeScript for either the child process or the worker thread.

Now that is a significant performance boost! This is because the compilation of the additional files starts only when we execute the child process or the worker thread. The conclusion is that we should precompile our Child Process and Worker Thread files if we use TypeScript. We can do this, for example, in the package.json:
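The package.json fragment is missing from this copy; the idea is to compile first and only then start the application, for example like this (the script names and output directory are assumptions):

```json
{
  "scripts": {
    "tsc": "tsc",
    "start": "npm run tsc && node ./dist/main.js"
  }
}
```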

With the above, we can still write all of our code in TypeScript and have the best possible performance when splitting our application into multiple processes or worker threads.

Summary

In this article, we covered the Performance Timing API. We learned how to measure the time of the execution of different parts of our application. Thanks to that, we were able to find a potential performance boost by precompiling files for worker threads and child processes. Using that knowledge, we can write even better and faster code.
