Progressive image loading is a technique to smartly load the images of your app on demand, showing small placeholders while the original image is lazy loaded. When loading finishes, a soft transition replaces the placeholder with the original picture. You have probably already seen this effect here on Medium.com: blurry images being replaced by their respective originals.
This strategy makes the page load faster and use fewer resources, as you load full-size images only after the first render, and only the images the user can actually see at the moment (those inside the browser viewport).
- How to set it up (with code examples);
- The IntersectionObserver Web API and how to use it to lazy load the images;
- Three placeholder strategies/types: LQIP, SVG Trace and Image Primitives;
- The best choice for each specific situation.
Everything is going to be demonstrated alongside a live example app and its open source project.
Note I: The code examples used here are in React and Sass, but the overall concept is easily replicable in any other framework or in vanilla JS, together with CSS.
Before analysing any strategy, we need to set up the application to properly do progressive image loading. Two things all strategies have in common are the image transition after the original picture loads and how/when to trigger the original image loading.
The transition is very simple: we make the placeholder fill its whole container and, after loading finishes, add the original image with absolute position below the placeholder. Then a CSS transition on the placeholder's opacity does the rest, and voilà!
I’ve made a React component to do that, but this idea is also simple to recreate with vanilla JS:
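Here is a vanilla JS sketch of that idea (the class names `thumb`, `full` and `hide` are my assumptions, not the article's exact code). The full-size image is inserted below the placeholder, and fading the placeholder out reveals it:

```javascript
// Swap a blurred placeholder for the original image once it has loaded.
function progressiveLoad(container, originalSrc) {
  const thumb = container.querySelector('img.thumb');
  const full = new Image();
  full.className = 'full';
  full.onload = () => {
    container.insertBefore(full, thumb); // original sits below the placeholder
    thumb.classList.add('hide');         // triggers the CSS opacity transition
  };
  full.src = originalSrc;                // start downloading the original
}
```

The React component wraps exactly this flow in lifecycle methods; the DOM mechanics are the same.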
With the component ready, we just need to set up the CSS transition on the element. When the img.thumb receives the “hide” class, it triggers the CSS transition, taking it from 1 to 0 opacity:
Note: we scale the image up by 10% because the blur filter makes the borders partially transparent; clipping the overflow hides this effect (this applies only to the LQIP strategy).
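A sketch of those styles, assuming the container/class names from my example above (not the article's exact Sass):

```css
.progressive-image {
  position: relative;
  overflow: hidden;          /* clips the scaled-up blurred edges */
}
.progressive-image img {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
}
.progressive-image .thumb {
  z-index: 1;                /* keep the placeholder above the original */
  filter: blur(10px);
  transform: scale(1.1);     /* hide the transparent borders the blur creates */
  transition: opacity 0.3s ease-out;
}
.progressive-image .thumb.hide {
  opacity: 0;
}
```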
For the lazy loading part, we need to check whether the image is visible to the user (i.e. inside the viewport). For that I’ve used the IntersectionObserver API. It is still an experimental Web API, but you can use a polyfill to run it “safely” in your application or, as a last resort, fall back to no IntersectionObserver and just load the images on the document ready state.
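That fallback can be sketched like this (a minimal illustration with assumed function names, not the article's code): when IntersectionObserver is missing and no polyfill is loaded, give up on laziness and load as soon as the document is ready.

```javascript
// Lazily load when possible; otherwise load on document ready.
function lazyLoad(element, load) {
  if (typeof IntersectionObserver === 'undefined') {
    if (document.readyState === 'loading') {
      document.addEventListener('DOMContentLoaded', load);
    } else {
      load();                  // document already parsed, load right away
    }
    return;
  }
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      load();                  // load once, then stop observing
      observer.disconnect();
    }
  });
  observer.observe(element);
}
```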
The usage isn’t hard, but for the example I’ve made a register function using Observables (RxJS) and a React Higher-Order Component (HOC) to pass the property intersecting down to the observed components.
First I’ve made a small function that registers the observed area and exposes an Observable of the isIntersecting value from that entry:
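Without pulling in RxJS, the shape of that register function can be sketched with a minimal observable-like object (names are my assumptions, not the article's exact code): it observes an element and pushes every isIntersecting change to its subscribers.

```javascript
// Register an element and expose a stream of isIntersecting values.
function registerIntersection(element, options = { threshold: 0 }) {
  const subscribers = [];
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) =>
      subscribers.forEach((fn) => fn(entry.isIntersecting)));
  }, options);
  observer.observe(element);
  return {
    subscribe(fn) { subscribers.push(fn); },
    unsubscribe() { observer.disconnect(); },
  };
}
```

With RxJS, the same thing becomes an `Observable` created around the observer callback, which is what the example project does.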
And then the Higher-Order Component registers its child component and passes down, through the props API, all the intersection changes (true if it is in the viewport, false if it isn’t).
Finally, we change the previously demonstrated ProgressiveImageLoading component to trigger the loading not on the componentDidMount method, but on the componentWillReceiveProps. After doing this, we encapsulate it using the HOC we just created:
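The gist of that change reduces to a small guard: start loading only the first time the intersecting prop flips to true (the prop and flag names here are assumptions, not the article's exact code).

```javascript
// Decide whether componentWillReceiveProps should kick off the download.
function shouldStartLoading(prevProps, nextProps, alreadyLoading) {
  return !alreadyLoading && !prevProps.intersecting && nextProps.intersecting;
}
```

Inside componentWillReceiveProps, the component would call this with the current and next props and start downloading the original image only when it returns true.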
export default withIntersectionObserver(ProgressiveImageLoading);
After all the setup we’re now able to use different strategies for the placeholders. Each one has its good/bad parts, but I’ll try to clarify which one is best for each occasion, based on my experience.
Each one of these big players has a different approach, but in the end they share the same intention: quickly load all the page structure and content while deferring the images (sometimes larger than the HTML+CSS of the page) to the end.
If you use Webpack you can easily use the url-loader to inline the base64 thumbnails directly in the image src attribute, making the placeholder load with the HTML, which is much faster for the first render. One way to add the blurred effect is the CSS blur filter; another is using canvas, as Medium does.
For generating the thumbnails I used a NodeJS lib called sharp. You can easily list all the images using node-glob and process your project’s images, setting the maximum desired height. In my example I’ve used a 64px thumbnail, but you can use even smaller ones.
Analysing the output image, we can see it has a very light size:
If your site has a lot of images, this is the best solution. As these tiny images can be inlined in base64 format, your page’s first render performs much better than directly loading all the original images, and you only spend your user’s network when they actually reach an image, triggering the original image loading. This is also the best solution when you don’t have direct access to the files in your project (e.g. users upload images to your app), as the thumbnails can be easily and quickly generated by your server.
This strategy is about transforming the image into a two-color SVG that traces the image’s silhouette.
In terms of size, this is a middle-ground option. If you resize the original image to a smaller version before tracing it into an SVG, you can still achieve good visual results, like I’ve done in the example: tracing the 128px version of the original image resulted in a 21kb file (6.8kb gzipped). Smaller files can be achieved by processing even smaller images before tracing them, but the final result may visually degrade. In the end, the threshold is defined by your gut feeling and your app’s needs (performance vs. tracing quality).
Analysing the SVG file we notice that it is generated using a path with a background fill (rect):
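The output has roughly this shape (a hand-written illustration of the structure, not the actual generated file):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 128 96">
  <!-- background color fills the whole frame -->
  <rect width="128" height="96" fill="#c5b8a5"/>
  <!-- a single path traces the silhouette in the second color -->
  <path d="M0 96 L14 70 L30 74 …" fill="#3a3226"/>
</svg>
```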
As said before, this is a middle-ground solution. If you don’t have many images on your site/app, this is definitely a good option. It gives a nice visual effect no matter the original size of the picture and fits really well on retina screens, as the output is a vectorized image. It also doesn’t add many nodes to the DOM (SVG elements are part of the document object model too). Although its final size after compression is similar to the primitives approach, it is a better choice for processing user-uploaded files, as the tracing algorithm doesn’t consume much resource on the backend.
This strategy consists in recreating the image using primitive shapes, such as triangles, ellipses and rectangles. The final SVG is defined by how many of these shapes you want to use to recreate the original image: more shapes mean better quality, but a bigger SVG.
For me, this is the placeholder with the best visual effect. The thumbnail is a big blurred mix of the original colors and the SVG trace is a two-color silhouette, both sometimes giving few clues about what the image is before it loads; with primitives, you can easily spot what the original image is if you define a good amount of shapes. Even with a “not so small” version of the original image (512px), with 500 shapes I’ve been able to produce a high quality placeholder of 42kb (7.8kb gzipped).
To generate this kind of placeholder, I’ve used the Go library primitive, as I wasn’t able to find a NodeJS solution. This library is pretty good not only in results but also in performance, and it is really easy to use (it ships as a command-line client).
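The invocation for the example above would look roughly like this (my sketch of the CLI usage; `-n` is the number of shapes and `-m 1` selects triangle mode). The guard only runs the command when the binary is installed:

```shell
# Recreate a 512px version of the photo with 500 triangles.
# Install the binary with: go install github.com/fogleman/primitive@latest
CMD="primitive -i photo-512.jpg -o photo.svg -n 500 -m 1"
if command -v primitive >/dev/null 2>&1; then
  $CMD
else
  echo "primitive not installed; would run: $CMD"
fi
```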
If we analyse the SVG output, we can see that the image is composed of many polygons (in this case, 500 triangles), as explained before:
If your app/site has big picture sections, like banners and heroes, and you want to maintain a high visual quality, this is the best option. Although you can’t load it as fast as a smaller version like the image trace, or present it directly in the first render by inlining it like the LQIP, you don’t get an “ugly” visual effect with this kind of placeholder.
The biggest downside is the resource consumption to generate such a file. If you have direct access to the image files in your project, you can just run the algorithm and commit the resulting files; but if you try to do this on the server, processing user-uploaded images or pre-caching placeholders for images coming from API calls (or any other source), it is going to have a high cost.
Combining the three libraries above, I’ve put together a NodeJS script that runs recursively over the project’s assets directory, processing each image and generating the respective placeholders. You can check the script on Github. Feel free to change and modify it as you need.
Images are, on average, more than half of the entire page size, and this is a bottleneck for many applications. If you’re doing performance upgrades in your app, this is a good place to start. Smartly loading images improves not only the visual user experience of your application but also the actual resources (network/memory) consumed on the user’s device.
I hope you liked this article! To show some support, don’t forget to give some claps 👏 and share it with your front-end mates!