Using Immutable.JS with React and Redux

July 20, 2018

A guide on when to use Immutable.js and how to use it with Redux

React and Redux are now mainstream web development tools, and they've brought the concept of immutability with them.

When I started getting into web development, immutability was something I understood in an abstract sense, but had never used in code.

My goal is to show you:

  • what immutability is
  • why it's important
  • why ImmutableJS is a great solution, and
  • how to use ImmutableJS (and when not to).

What's Immutability?

According to Merriam-Webster, immutable means "not capable of or susceptible to change." You can think of immutability as a label you can apply to things that absolutely do not change, and in fact are not capable of changing.

So what does this mean in code? Well, let's imagine you have a constant variable (we're using ES6 Javascript here):

const constantVar = 'foo';  

Now what happens if we attempt to reassign it?

constantVar = 'bar'; 

You should get an error: "constantVar" is read-only! (In plain Javascript this is actually a runtime TypeError, not a syntax error.)

That variable cannot be changed by you or anything else*, and is thus immutable.

* Okay so it's Javascript... but let's assume that for now

Why Immutability?

So why is this important? At face value, never being able to change things in your program sounds like a bad idea.

How can we have any effect on our page if the values never change?

That's where React and especially Redux come in.

Let's imagine in React you have a simple parent component that renders a few child components.

class Parent extends React.Component {
  render() {
    return (
      <div>
        Parent
        <Child />
        <Child />
      </div>
    );
  }
}

class Child extends React.Component {
  render() {
    return <div>Child!</div>;
  }
}

Now say you want to pass data down to several children (who might then pass it on to their children) and you want that data to stay in sync.

Well the obvious thing is just to pass an object down:

Passing down an Object

const data = { potentially: { deep: "data" } };

class Parent extends React.Component {
  render() {
    return (
      <div>
        Parent
        <Child name="One" data={data} />
        <Child name="Two" data={data} />
      </div>
    );
  }
}

class Child extends React.Component {
  render() {
    return (
      <div>
        <p>Child {this.props.name}!</p>
        <p>{this.props.data.potentially.deep}</p>
      </div>
    );
  }
}

So this works fine (and you can play with it here) but what if you want to change some of that data in a child?

If you're a bit new to React, you might think "let's change the props object!" You will then probably get beaten over the head with the mantra "Never Change Props!" So why is that such a bad idea?

Let's take a hypothetical situation:

Changing Props (please don't do this)

const data = { potentially: { deep: "data" } };

class Parent extends React.Component {
  update = () => {
    this.forceUpdate();
  };

  render() {
    return (
      <div>
        Parent
        <Child name="One" data={data} update={this.update} />
        <Child name="Two" data={data} update={this.update} />
      </div>
    );
  }
}

class Child extends React.Component {
  changeData = () => {
    this.props.data.potentially.deep = this.props.name;
    this.props.update();
  };

  render() {
    return (
      <div>
        <p>Child {this.props.name}!</p>
        <p>{this.props.data.potentially.deep}</p>
        <button onClick={this.changeData}>Change</button>
      </div>
    );
  }
}

Access it here if you want to play with it.

Here we have both children allowed to change the data object, and a force update button to see the results (React really doesn't like this pattern).

Note that this method is very fast. We have just one reference to this object so it updates everywhere easily!

So what are the issues here?

  1. If we have a large application, and we have a bug, it's very difficult to track down what made the change to our object.
  2. If we have any sort of asynchronous call, we might have our data object suddenly change when we didn't want it to.

Let's take a closer look at the second case:

Async Changes

const data = { potentially: { deep: "data" } };

class Parent extends React.Component {
  update = () => {
    this.forceUpdate();
  };

  render() {
    return (
      <div>
        Parent
        <Child name="One" data={data} isAsync update={this.update} />
        <Child name="Two" data={data} update={this.update} />
      </div>
    );
  }
}

class Child extends React.Component {
  changeData = () => {
    if (this.props.isAsync) {
      window.setTimeout(() => {
        this.props.data.potentially.deep = this.props.name;
        this.props.update();
      }, 1500);
    } else {
      this.props.data.potentially.deep = this.props.name;
      this.props.update();
    }
  };

  render() {
    return (
      <div>
        <p>Child {this.props.name}!</p>
        <p>{this.props.data.potentially.deep}</p>
        <button onClick={this.changeData}>Change</button>
      </div>
    );
  }
}

Check out the sandbox here.

Click child One's button, then child Two's. The data changes to "Two," but then, when the delayed call comes back, it flips back to "One." Changing the props object like this is what's called a side effect.

If we set the isAsync flag, our little changeData function will mess with our props object later, perhaps when we don't want it to!

These errors occur most commonly when making network requests (GET/POST/etc if you're using REST) and are some of the most difficult to debug.

One way to help prevent this scenario in React is to pass a callback to your children which they use to change your data. This makes sure you have one function that's doing the changing and can more easily isolate any issues. Let's also put that data into state so we don't have to force update:

Passing Callbacks

class Parent extends React.Component {
  state = { data: { potentially: { deep: "data" } } };

  changeData = newData => {
    this.setState({
      data: {
        ...this.state.data,
        potentially: {
          ...this.state.data.potentially,
          deep: newData
        }
      }
    });
  };

  render() {
    return (
      <div>
        Parent
        <Child name="One" data={this.state.data} changeData={this.changeData} isAsync />
        <Child name="Two" data={this.state.data} changeData={this.changeData} />
      </div>
    );
  }
}

class Child extends React.Component {
  changeData = () => {
    if (this.props.isAsync) {
      window.setTimeout(() => {
        this.props.changeData(this.props.name);
      }, 1500);
    } else {
      this.props.changeData(this.props.name);
    }
  };

  render() {
    return (
      <div>
        <p>Child {this.props.name}!</p>
        <p>{this.props.data.potentially.deep}</p>
        <button onClick={this.changeData}>Change</button>
      </div>
    );
  }
}

Check it out here

Much nicer! However this requires a few things:

  1. We have to have a root component to hold our data
  2. We have to pass the callback to whatever manipulates the data
  3. We have to ensure that's the only way it gets changed

Unfortunately, this can have some serious restrictions on how you build your app, and might make refactoring later difficult.

What if you had a large app that needed some piece of data everywhere? For example, if you are creating a Reddit clone, having access to the logged in user everywhere is pretty important. This is where Redux comes in.


Redux is one way to create an application-wide state that any component can access and update. It's a small library built around a strict design pattern, and it relies heavily on the concept of immutability.

Now I should note that if you just want to pass data through your application, React's new Context API does a pretty great job. On a smaller application, Redux's formalism may be overkill. However, Redux really shines when we get some of those nasty side effects and debugging starts to become a chore.

Redux (if used correctly) keeps a record of every change to your data, allowing you to essentially "travel back in time" to see what caused your bug.

How does it do this? Immutability!

Our app is getting a bit big now, let's look at an example Redux module.

import { createStore } from "redux";

const initialState = {
  data: { potentially: { deep: "data" } }
};

const CONSTANTS = {
  setData: "SET_DATA"
};

const reducer = (state = initialState, action) => {
  switch (action.type) {
    case CONSTANTS.setData:
      return {
        ...state,
        data: {
          ...state.data,
          potentially: {
            ...state.data.potentially,
            deep: action.payload.data
          }
        }
      };
    default:
      return state;
  }
};

export const setData = newData => ({
  type: CONSTANTS.setData,
  payload: { data: newData }
});

export const getData = state => state.data.potentially.deep;

export const store = createStore(reducer);

Full example is here

Now we have all the data for our app done up neatly in this module. The way Redux maintains immutability is through a contract with the developer. It assumes that what you return from your reducer is a clone of the previous state, with all the changes applied.

That is why we are using the spread operator (...) all over the place, to clone the previous object. However this quickly gets out of hand if you have deeply nested data. Using the normalization pattern is a great fix for that (and I highly recommend it). However, other tools can also make your life easier, without the additional work. This is where ImmutableJS comes in.


You might have noticed that we used a const primitive as our first immutable example, but then used a const object when using immutability with Redux. If you're familiar with ES6, you'll know that's actually a problem.

You see, the const declaration protects against reassignment but not against mutation. That's an important distinction, and one of the reasons we need libraries like ImmutableJS. It's also the reason we had to spread each level of our object. Otherwise we'd keep around old references and we'd lose our immaculate history of the past (which lets us do time travel debugging).
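To make that distinction concrete, here's a quick sketch (the variable names are mine, not from the article's examples) showing that const stops reassignment but happily allows mutation:

```javascript
// const prevents reassigning the binding...
const config = { retries: 3 };

// ...but the object it points to can still be mutated freely.
config.retries = 5;    // no error
config.timeout = 1000; // no error

// Only rebinding the name itself throws:
// config = {}; // TypeError: Assignment to constant variable.

console.log(config); // { retries: 5, timeout: 1000 }
```

This is exactly the loophole that deep-cloning (or a library like ImmutableJS) has to close.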

Cloning objects like this is a royal pain, and it quickly becomes difficult to determine exactly what is going on. It's also a bit slow. ImmutableJS was designed to address these issues.

ImmutableJS is a library that provides its own data structures and helper functions for manipulating immutable data. Just like any other library on npm, you'll install it and then import what you need for your project. It has a large community of users and has the support of industry giant Facebook (whose engineers created it).

ImmutableJS flat out doesn't let you change data created with it. All of its methods for changing data are designed to return a clone of the data, and they use some very cool structural sharing to make things really fast (it doesn't have to copy everything every time). It also allows for some neat things like lazy evaluation if your app does a lot of sequenced data manipulation.

How do you use it?

Let's convert our Redux module into an ImmutableJS one.

import { createStore } from "redux";
import { fromJS } from "immutable";

const initialState = fromJS({
  data: { potentially: { deep: "data" } }
});

const CONSTANTS = {
  setData: "SET_DATA"
};

const reducer = (state = initialState, action) => {
  switch (action.type) {
    case CONSTANTS.setData:
      return state.setIn(["data", "potentially", "deep"], action.payload.data);
    default:
      return state;
  }
};

export const setData = newData => ({
  type: CONSTANTS.setData,
  payload: { data: newData }
});

export const getData = state => state.getIn(["data", "potentially", "deep"]);

export const store = createStore(reducer);

Try it out here

You can see our nested spreading has been replaced by a single call to setIn. Also note that we didn't have to change any of the front-end code: since we set up a selector (getData) and an action creator (setData) and exported them, all of our data-manipulation logic stays right here.

This looks really nice, but it hides a few of the issues that using ImmutableJS causes. This is where the catch comes in (at least for Javascript devs):

  1. ImmutableJS objects are not Javascript objects, which means you can't use things like rest, spread, or passing data directly to React components.
  2. ImmutableJS uses an API very similar to ES6 Maps and Sets. While very clear (and quite comfortable for C# devs), it can feel obtuse to Javascript junkies.

There are also a few gotchas that can really waste your time, so let me list them out now.


Not using fromJS()

If you use Map() or Set() from the Immutable package, it only creates an ImmutableJS structure for the top-level data. That is, it creates an Immutable Map whose keys reference plain Javascript objects. This is only an issue if you have nested data, but it's something to be aware of.

For example, if you have a nested object and create it like this: Map({a: {b: 'b!'}}), then try a setIn(['a', 'b'], 'foo'), it'll throw an error, because setIn is trying to call an ImmutableJS method on the value at key a, which is a plain Javascript object.

fromJS(), on the other hand, takes a plain Javascript object or array and converts it into the proper ImmutableJS data structure all the way down. In our example, both the top-level object and the object at key a would be ImmutableJS Maps, and we'd have no errors.

Trying to use combineReducers() with ImmutableJS state.

If you have reducers like the one above that return an ImmutableJS object, combineReducers() (the standard way to combine reducers in Redux) will error out.

To solve this, you can use something like redux-immutable or wrap your ImmutableJS state inside a Javascript object before returning it. If you're starting a new project, I suggest the former. If you're adding onto an old one, the latter will likely be far easier.

Handy Methods

ImmutableJS also has some very nice features to offset those tradeoffs. Here are a few of my favorites.


equals()

equals() allows you to do a deep comparison of two ImmutableJS objects: map1.equals(map2) checks the actual data instead of just comparing the object references. Let's see an example in code:

import { fromJS } from 'immutable';

const objA = { a: { deep: 'object' } };
const objB = { a: { deep: 'object' } };
objA === objB; // false -- different references

const immA = fromJS({ a: { deep: 'object' } });
const immB = fromJS({ a: { deep: 'object' } });
immA.equals(immB); // true -- same data

This can be very handy when you really need to check complete equality.


Seq

Seq is a data structure that lets you make use of something called lazy evaluation. To steal from their docs, here's how it works:

const { fromJS, Seq } = require("immutable");

const numbers = fromJS([1, 2, 3, 4, 5, 6, 7, 8]);
const lazySeq = Seq(numbers)
  .filter(x => x % 2 !== 0)
  .map(x => x * x);

lazySeq.get(1); // 9

So what's special about this? Well lazySeq doesn't actually compute anything until it's called. Then, when it's actually called via .get, it only does what's needed. So in this case that means calling filter three times and map once.

One of the biggest advantages of this, is that no intermediate data structures are created. Instead of having it make a new array for the filter and run filter on it, then pass it to the map to run the map operation, they're all done on one data structure which makes for a more efficient program.

I won't go further into it here, but lazy evaluation is extremely powerful for intense data processing applications, so if that's your game you should look into this.


merge()

Before I found out about merge, I thought ImmutableJS was just as bad as spreading everything. So many setIn() and getIn() calls!

merge() and mergeDeep() are amazing: they allow you to merge a plain JS object into an ImmutableJS Map, which makes updating a whole subtree of data extremely easy:

import { fromJS } from 'immutable';

const mapA = fromJS({
  a: { updateMe: 'term', ignoreMe: 'otherterm' },
  b: 'top term'
});

const newMapA = mapA.mergeDeep({ a: { updateMe: 'newTerm' } });
// newMapA.toJS() is now:
// { a: { updateMe: 'newTerm', ignoreMe: 'otherterm' }, b: 'top term' }

Specialized functions for everything

Take a look at their docs, they have all sorts of nifty optimized functions. If you use ImmutableJS heavily it can be a great environment, but it does have a learning curve. I highly recommend reading the "how to read the docs" section.

If you'd like to play around with the examples above, here's a sandbox

Best Practice: Memoization

ImmutableJS is often touted as being fast, but if you aren't careful it can actually be slower than just using the spread operator. The reason for this is the often (mis)used toJS() function.

In fact, not only is the function toJS() itself slow, it returns a fresh new object every time, which means that if you're using it in mapStateToProps it will cause your component to re-render every time, no matter if the state changed or not!

In general, it's not fun to use the ImmutableJS API for all data everywhere in your app, and at some point you need to turn that data into data structures that React and your HTML understand.

The way to solve this is to:

  1. Delay converting to Javascript as long as possible (selectors are a great place for this)
  2. Memoize your toJS() results

Memoization is an incredibly powerful tool: it creates a little cache for a function, and if the function is called again with the same parameters, it returns the result from last time instead of recalculating everything.

This does mean that the function has to be pure (no side-effects, same results with same inputs every time), but memoization is perfect for our selectors.
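To demystify the technique before we use fast-memoize below, here's a minimal single-argument memoize sketch of my own (a real library handles multiple arguments and cache eviction):

```javascript
// A minimal memoizer for single-argument pure functions.
// It keys the cache on the argument itself, which pairs well with
// immutable data: a changed state means a brand-new reference.
const memoize = fn => {
  const cache = new Map(); // a plain JS Map, not an ImmutableJS one
  return arg => {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg)); // compute once per distinct argument
    }
    return cache.get(arg);
  };
};

let calls = 0;
const square = memoize(x => {
  calls += 1;
  return x * x;
});

square(4); // computes
square(4); // served from cache
console.log(square(4), calls); // 16 1
```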

First let's see what our module looks like when passing some more complicated data around:

import { createStore } from "redux";
import { fromJS } from "immutable";

const initialData = { child: "One" };

const initialState = fromJS({
  data: { potentially: { deep: initialData } },
  unrelatedData: { nothing: "yet" }
});

const CONSTANTS = {
  setData: "SET_DATA",
  setUnrelatedData: "SET_UNRELATED_DATA"
};

const reducer = (state = initialState, action) => {
  switch (action.type) {
    case CONSTANTS.setData:
      return state.setIn(
        ["data", "potentially", "deep"],
        fromJS(action.payload.data)
      );
    case CONSTANTS.setUnrelatedData:
      return state.set("unrelatedData", fromJS(action.payload.data));
    default:
      return state;
  }
};

export const setUnrelatedData = () => ({
  type: CONSTANTS.setUnrelatedData,
  payload: { data: { something: "totally unrelated data" } }
});

export const setData = newData => ({
  type: CONSTANTS.setData,
  payload: { data: newData }
});

export const getData = state => {
  console.log("getData called");
  return state.getIn(["data", "potentially", "deep"]).toJS();
};

export const store = createStore(reducer);

See the full thing here.

Note the toJS() call: it runs every time the selector is called, which, with a standard connect, means every time the Redux store changes! That can be a lot in a complex app, which can really slow things down.

To give you an idea of how much, I've made it console.log every time that function is called. Go ahead and open up the sandbox and run it. Change a child's data and notice how many times getData is called. Then click the "Set Unrelated Data" button and notice how many times it's called.

In both cases, getData is called two times! That's once for each of the child components we have rendered. As you might imagine, this can tend to get out of hand, especially if you have a large application that's changing the Redux store a lot.

Let's see if we can reduce those calls a bit with some very simple little modifications.

import memoize from "fast-memoize";

const getDataSlow = state => {
  console.log("getData called");
  return state.getIn(["data", "potentially", "deep"]).toJS();
};

export const getData = memoize(getDataSlow);

Here's the code

Now run things again and click on the same things you did last time.

Notice any difference? You can click around all you want, but getData is only called four different times total! The argument being passed to getData is the global state object, and there are four variations of that state:

  1. Initial setup
  2. Child One clicked
  3. Child Two clicked
  4. Unrelated data changed

In a larger app you'll want to minimize the possible varieties of state the function memoizes on (so you don't cache results for every single possible state of your app). I'll explain that later. But in our app it works great!

This, in my opinion, is the secret sauce that makes large apps continue to be blazing fast while using immutability and Redux. Memoization is a fantastic technique I highly recommend everyone check out, though do be careful of the cache overhead! Sadly, memory isn't infinite.

Other Alternatives

If your project doesn't support IE11 (or is fine with a bit slower performance there) and doesn't require a lot of the more powerful tools that ImmutableJS provides, Immer is a neat library that allows you to use mutations and preserves the immutability of the object for you. I wrote about it a while ago here.


Immutability is a powerful ally to any programmer in the web development world, and it especially aids those using the React and Redux combo. With memoization, it makes all of your state transitions blazing fast with minimum re-rendering or copying.

Immutable data can provide a great debugging experience when used with things like the Redux Dev Tools. Though if you use ImmutableJS make sure you set the dev tools up properly so you can see your state the way you expect.

While immutability is awesome, I'd advise you to play around with it a bit before you fully commit to any particular library or methodology. It can be difficult to get your head around at first, so starting out just spreading things in a test project can be a good way to understand the benefits libraries like ImmutableJS provide.

Here's a closer to "production ready" version of the code I've been writing so far. It organizes things a bit differently so you can expand your app and not have too many headaches. It uses redux-immutable and shows how to downselect the input to your memoization function. Go play around with it!

Let me know on twitter @Tetheta if this article was helpful, and let me know if there are other things you'd like to see covered in the web development world.

Additional Resources

ImmutableJS Docs

Redux's guide to ImmutableJS (great read, highly recommended)

A great read on memoization

A great blog by the Auth0 team on ImmutableJS and Functional Programming

Special thanks to Blake Sawyer for editing
