Logging: Best Practices for Node.JS Applications - Bits and Pieces

September 16, 2019





What are logs and why are they important?

Logs are the events that reflect the various aspects of your application. If written correctly by the team, they are the easiest way to troubleshoot and diagnose your application.
When you start a Node.js server and the database is not running, has stopped abruptly, or the server port is already in use, without logs you would never know why the server failed.

Many times, as developers, we need to debug an issue, and we happily use a debugger and breakpoints to understand where and what broke.

What would you do when your application is running in production? Can you attach a debugger there and replay the bug? Obviously not. This is where logging helps you.

You achieve an ideal logging setup when, without using a debugger, you can find an issue by going through the logs and know what went wrong and where.

Best practices

1) Three important parts of logs

Application logs are for both humans and machines. Humans refer to logs to debug issues, while machines use them to generate dashboards and graphs, and for data analysis that produces conclusions about customer usage.
Every log should contain these three important parts:

  • Source of the log
    In a microservice architecture, knowing the source of a log becomes really crucial: information like the service name, zone, hostname, etc. (for more on managing common code in microservices, read here)
    Detailed metadata about the source is usually handled by the log shipper agent, which pushes logs from all the microservices to the centralized logging system. Filebeat from the ELK stack is one of the best options to consider as a log shipper.
  • Timestamp
    The moment when an event occurred or a log was generated is really important. Make sure every log has a timestamp so that logs can be sorted and filtered.
  • Level and context
    When going through the logs to find an error, it is very frustrating if a log doesn't provide enough information and you have to refer back to the code to understand it. Hence, every log should carry enough context. For example, a log without context would look like this:

    The operation failed!

    A better example with meaningful context would be:

    Failed to create user, as the user id already exists

2) What and how to log?

  • Log methods and inputs:
    While debugging an issue, knowing which function was called and what parameters were passed really helps.

While going through the logs, >>>> and <<<< markers give a high-level view of a function's entry and exit. This convention is inspired by git merge conflict markers.

You can use a library such as aspect.js to automate these function-level logs.
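An AOP library can generate such logs for you; the idea itself can be sketched by hand with a small wrapper function (this is an illustration, not the aspect.js API):

```javascript
// A minimal sketch of automated entry/exit logging, the kind of thing
// an AOP library can weave in for you. The wrapper logs ">>>>" on entry
// (with the arguments) and "<<<<" on exit (with the result).
function withEntryExitLogs(name, fn, log = console.debug) {
  return function (...args) {
    log(`>>>> ${name} called with ${JSON.stringify(args)}`);
    const result = fn.apply(this, args);
    log(`<<<< ${name} returned ${JSON.stringify(result)}`);
    return result;
  };
}

const add = withEntryExitLogs('add', (a, b) => a + b);
add(2, 3); // logs ">>>> add called with [2,3]" then "<<<< add returned 5"
```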

  • The log should not evaluate to throw an exception
    For example, if userService.getUser() can return null, then calling .getId() on its result inside a log statement can itself throw an exception. Please avoid these kinds of expressions in logging calls.
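A small sketch of the hazard, using a hypothetical userService stub:

```javascript
// Hypothetical userService for illustration: getUser() may return null.
const userService = {
  getUser: (id) => (id === 1 ? { getId: () => 1 } : null),
};

// BAD: if getUser() returns null, the log expression itself throws:
// console.log(`Fetched user ${userService.getUser(2).getId()}`);

// SAFER: guard the value (or use optional chaining) so logging can never throw.
const user = userService.getUser(2);
console.log(`Fetched user ${user ? user.getId() : 'unknown'}`);
```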

  • The log should not produce a side effect
    Logging should be stateless and should not produce any side effects. For example, a log statement that calls a function which writes to the database would create a new resource every time that log line runs.
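A sketch of the anti-pattern, with a hypothetical audit store standing in for the database:

```javascript
// Hypothetical audit store for illustration.
const auditRecords = [];
function createAuditRecord(event) {
  auditRecords.push(event); // writes to the "database"
  return `audit #${auditRecords.length}`;
}

// BAD: this log expression mutates state; a record would be created
// (or silently skipped) depending on whether debug logging is enabled:
// console.debug(`Saved ${createAuditRecord('user-login')}`);

// SAFER: log plain data that is already available, with no side effects.
const eventName = 'user-login';
console.debug(`Processing event ${eventName}`);
```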

  • Log errors with details
    When there is an error, be descriptive: mention what was attempted and why it failed.
    Log what failed and what you are doing next.

In case you are rethrowing the error in the catch block, please log which operation failed and mention that you are throwing the error as-is.
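A sketch of that catch-log-rethrow pattern, with hypothetical helper names:

```javascript
// Hypothetical payment-gateway call that fails, for demonstration.
async function submitToGateway() {
  throw new Error('gateway timeout');
}

// In the catch block: log which operation failed, note that the error
// is rethrown as-is, then rethrow so the caller's handler sees it too.
async function chargeCard(payment, log = console.error) {
  try {
    return await submitToGateway(payment);
  } catch (err) {
    log(`Failed to charge card for order ${payment.orderId}: ${err.message}. Rethrowing as-is.`);
    throw err; // rethrow unchanged; the caller decides what happens next
  }
}
```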

3) Sensitive information

The series of logs should reflect a user's activities in the application so that debugging is easier, and errors should be logged so that action can be taken as soon as possible. Logs contain pieces of information like which function was called with what inputs, and where and what error occurred.

While logging, we have to make sure we do not log sensitive information such as usernames and passwords, or financial information like card numbers and CVV numbers.
As developers, we should consult with the product team to prepare a list of sensitive information and mask it before logging.
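A minimal masking sketch; the list of sensitive field names is illustrative and should come from that agreed-upon list:

```javascript
// Field names considered sensitive (illustrative; agree on the real
// list with the product team).
const SENSITIVE_FIELDS = ['password', 'cardNumber', 'cvv'];

// Return a copy of the object with sensitive fields masked,
// leaving the original untouched.
function maskSensitive(obj) {
  const masked = { ...obj };
  for (const field of SENSITIVE_FIELDS) {
    if (field in masked) masked[field] = '****';
  }
  return masked;
}

const payload = { username: 'alice', password: 'hunter2', cardNumber: '4111111111111111' };
console.log(JSON.stringify(maskSensitive(payload)));
// the logged line contains username but no password or card number
```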

4) Correct usage of log levels

If a production application handles a reasonably high number of user transactions per day, even an ideal log setup might generate GBs of logs. Hence, we need to categorize our logs into multiple groups so that, depending on the audience, we can switch the log level at run time and get only the appropriate logs.

For example, if a product manager wants to see how many customer transactions succeeded or failed in our logging dashboard, they should not be shown cluttered information about function calls, which is only for developers. And when there is a bug in production, the developer should see detailed logs of which functions executed successfully and which failed, so that the issue can be identified and fixed as soon as possible.

To achieve this kind of setup, we need a good understanding of each log level.
Let's discuss the most important levels and their usage:

  • INFO: Important messages and event messages that describe when a task is finished. For example: New user created with id xxx
    These are just-for-information logs about progress.
  • DEBUG: This level is for developers; it is similar to logging the information you see while using a debugger or breakpoint, like which function was called and what parameters were passed. It should capture the current state so that it is helpful while finding the exact issue.
  • WARN: These logs are warnings and do not block the application from continuing; they provide alerts when something is wrong and a workaround is used, e.g. wrong user input, retries, etc. Admins should fix these warnings in the future.
  • ERROR: Something went wrong and should be investigated as a priority, e.g. the database is down, communication with another microservice was unsuccessful, or a required input was undefined.
    The primary audiences are system operators and monitoring systems.

Ideally, the production app should have close to zero error logs.
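The level mechanics above can be sketched with a tiny hand-rolled logger; only messages at or above the current threshold are emitted, and the threshold can be changed at run time. Real applications would use a library such as Winston rather than this sketch:

```javascript
// Numeric severity for each level: higher means more severe.
const LEVELS = { debug: 0, info: 1, warn: 2, error: 3 };

// Create a logger whose threshold can be reassigned at run time.
function createLogger(level = 'warn', sink = console.log) {
  const logger = { level };
  for (const name of Object.keys(LEVELS)) {
    logger[name] = (message) => {
      if (LEVELS[name] >= LEVELS[logger.level]) {
        sink(`[${name.toUpperCase()}] ${message}`);
      }
    };
  }
  return logger;
}

const log = createLogger('warn');
log.debug('ignored in production'); // suppressed at warn level
log.error('database is down');      // emitted
log.level = 'debug';                // switch at run time
log.debug('now visible');           // emitted
```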

5) Don't use console.log

Most developers have used the console module as their first tool to get logs or debug code, as it's easy to use, globally available, and requires no setup. In Node.js, the console is implemented differently than in browsers: the console module prints messages to stdout when you use console.log, and to stderr when you use console.error.

console.log, console.debug, and console.info all print to stdout, so we cannot switch debug and info off or on independently. Similarly, console.warn and console.error both print to stderr.
Switching between levels is hard in a production app.

We also need different kinds of configuration, like a standard format or JSON output to send to an ELK stack, and these are not available in the console out of the box.
To overcome all these issues we will use the Winston logging framework; a few other options, like Bunyan and Pino, are also available.

Why do we need a logging library like Winston?
As we discussed a few flaws of the console in the last section, let's list some important features provided by Winston:

  • Levels: Winston comes with a set of log levels and also prints the level as part of the log, which enables us to filter logs in a centralized dashboard.
    e.g. {message: "something wrong", level: "error"}
    You can also create custom levels if needed.
  • Formats: Winston has advanced configuration options, like coloring the logs, formatting as JSON, etc.
  • Dynamically changing the log level: We will have only the warning and error levels enabled in our production application, and when needed we want to change the log level to debug and back to error without restarting the application. The Winston logger comes with this feature out of the box.

We can also expose a REST API to change the level dynamically: in the handler, reassign the logger's level.
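A configuration sketch of that idea, assuming Winston is installed (`npm install winston`); the single `logger.level` assignment is what such a REST handler would execute:

```javascript
// Sketch (assumes `npm install winston`): create a logger at the quiet
// production level, then flip it at run time without a restart.
const winston = require('winston');

const logger = winston.createLogger({
  level: 'error',
  transports: [new winston.transports.Console()],
});

logger.debug('suppressed while level is error');
logger.level = 'debug'; // change at run time, no restart required
logger.debug('now emitted');
```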

  • Transports: For our production environment, we would like a centralized logging system where all our microservices push their logs, with dashboards, filtering, and searching of logs. This is the standard ELK setup or an equivalent.

Winston comes with configuration to write our logs to a file, so that any log shipper agent can push them to the centralized system.
That discussion is out of the scope of this post; we should cover it in detail in a separate post.
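A minimal file-transport configuration sketch, again assuming Winston is installed; the file name is illustrative:

```javascript
// Configuration sketch (assumes `npm install winston`): write JSON logs
// with timestamps to a file so a shipper agent such as Filebeat can
// forward them to a centralized system (e.g. an ELK stack).
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json(),
  ),
  transports: [
    new winston.transports.File({ filename: 'app.log' }), // path is illustrative
  ],
});

logger.info('New user created', { userId: 42 });
```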

6) Performance impact

If the application writes logs at a high frequency, it might directly impact the application's performance.
The debug and info levels contribute more than 95% of logs; that's why we should enable only the error and warning levels, change the level to debug when we want to figure out an issue, and switch it back to error afterwards.

Logs are a lifesaver when there is an issue in the app. If you currently don't have logging practices, frame one and make adding logs mandatory in your code review checklist.

Happy logging.
