Electron memory usage

This past week I was working on an image browser I’ve been building in Electron, namely trying to figure out why it would hold onto a bunch of memory.

I use this browser for reference images – high res illustrations, renders or photographs – I’ve collected. As you navigate directories, the browser loads all the images into a mosaic tile display. And because this is local, it’s snappy.

Image Browser Screenshot

The problem is, when I would run a search that pulls up all the images, the RSS memory shoots up: from a ~300MB steady state to 800MB–1GB. Navigating away doesn’t flush this memory. Notable context: this application has very little state (UI state plus the list of current images); the images are the only thing of relevance.

The cheap answer: it’s Electron. And I gave up a few times along the way: there are many layers of huge technologies, each doing its own optimizations, and not being familiar with any of them makes it difficult to reason about what’s happening.

The first thing I did: open the developer tools and try everything!

Composite Dev Tools Summary

Things I’ve learned here:

  • Memory Patterns:
    • Memory doesn’t grow indefinitely with use – so not a classic memory leak.
    • Activity Monitor shows the app holds onto memory, even when navigating away from
      searches/directories with images.
    • If I refresh with cmd-R, everything is flushed, including the memory used.
  • Heap:
    • The JavaScript heap is “only” 7MB, including what seems to be closures, compiled
      code, and representations of DOM elements.
    • The Allocations view shows the largest object types barely total a few megabytes.
    • Between loads of a heavy directory, I see kilobytes of allocations, not megabytes.
  • Layers hold ~150MB? What’s up with that?
  • Site data is all empty; clearing it does nothing.
  • Frames holds a reference to every image I’ve loaded so far?

My hunch is it’s a caching issue or unreleased garbage. Navigating across pages retains the
memory; it grows as images load, but levels off. Given this is a loosely coded prototype, I’m
skeptical that I haven’t been shooting myself in the foot without realizing it.

I spent a lot of time hunting dead ends, thinking it was me retaining references without
realizing it. For example: why don’t listeners drop when navigating away? Am I handling
onclick functions correctly? Maybe, because I’m throwing closures at every problem, I’m
creating lots of compiled-code junk. Varying my code made no difference, however.

process.memoryUsage, forced gc (global.gc)

Here I looked into trying to force a GC to see what was being retained.

This requires --expose-gc to be enabled, e.g. by running electron --js-flags="--expose-gc", and then I can add the following to the renderer:

function logBytes(x) {
  console.log(x[0], x[1] / (1000.0 * 1000), "MB")
}
function getMemory() {
  Object.entries(process.memoryUsage()).map(logBytes)
}
setInterval(getMemory, 1000)

Showing

rss 859.414528 MB  
heapTotal 13.651968 MB  
heapUsed 7.385920 MB  
external 142.377168 MB  

This is really just another view of the JS heap, which we yet again see is relatively tiny. Nevertheless, I tried a periodic forced garbage collection:

function gc() {
  global.gc()
  console.log("gc")
}
setInterval(gc, 10000)

This dropped the heap (in one attempt from 17MB to 10MB), but otherwise had no effect. Between this, process.memoryUsage(), and the allocation profiles, I think we have a good sense that the problem is neither the JS heap nor references I’m unintentionally retaining.

I also found a suggestion from another user with image caching problems to use clearCache, which led me to running the following on the main process (after creating the window):

let remote = require('remote')
function clearCache() {
  remote.getCurrentWindow().webContents.session.clearCache(function() {
    console.log("Cache Cleared")
  })
}
setInterval(clearCache, 10000)

Which had zero effect. Likewise using session.clearStorageData() had no effect.

Looking at the documentation for session shows it is a property of WebContents, which renders and controls web pages. Even though that seems promising, session “Manage[s] browser sessions, cookies, cache, proxy settings, etc.”, which sounds much more like the context for web requests (like login sessions in cookies), and not rendering.

This is confirmed by the Electron source for session, which points to the Chromium source for HttpCache, where the caching logic follows RFC 7234 (HTTP/1.1 caching).

I’m not totally sure this caching excludes images, but in my case the files are being read from disk, so in retrospect it makes sense that I didn’t see a difference from calling these methods.

External Memory

At this point, I wanted to know what was up with the 142MB of external memory from process.memoryUsage(). From the Node.js documentation:

heapTotal and heapUsed refer to V8’s memory usage. external refers to the memory usage of C++ objects bound to JavaScript objects managed by V8.

Perhaps the open files, like images, were being buffered? Maybe the memory was held open by
something in Chromium, like those frames in the dev tools that referenced the full images.

I took a look at what options are available for memory under node --v8-options, and eventually
turned on --trace-gc and --trace-gc-verbose to get this log:

# Start 
Memory allocator, used: 8192 KB, available: 2121728 KB
New space, used: 793 KB, available: 213 KB, committed: 2048 KB
Old space, used: 1314 KB, available: 28 KB, committed: 1424 KB
Code space, used: 1232 KB, available: 0 KB, committed: 1648 KB
Map space, used: 190 KB, available: 0 KB, committed: 532 KB
Large object space, used: 0 KB, available: 2121207 KB, committed: 0 KB
All spaces, used: 3530 KB, available: 2121449 KB, committed: 5652 KB
External memory reported: 8 KB
Total time spent in GC : 1.9 ms
... # Load search page (problem)
Fast promotion mode: false survival rate: 6%
10447 ms: Heap growing factor 1.1 based on mu=0.970, speed_ratio=16419 (gc=772398, mutator=47)
10447 ms: Grow: old size: 7078 KB, new limit: 13154 KB (1.1)
10448 ms: Mark-sweep 7.0 (19.0) -> 6.9 (12.5) MB, 11.4 / 1.5 ms  testing GC in old space requested
Memory allocator, used: 12800 KB, available: 2117120 KB
New space, used: 4 KB, available: 1003 KB, committed: 1024 KB
Old space, used: 5231 KB, available: 532 KB, committed: 6032 KB
Code space, used: 1551 KB, available: 0 KB, committed: 2160 KB
Map space, used: 295 KB, available: 0 KB, committed: 1044 KB
Large object space, used: 0 KB, available: 2116599 KB, committed: 0 KB
All spaces, used: 7082 KB, available: 2118134 KB, committed: 10260 KB
External memory reported: 140376 KB
Total time spent in GC : 34.2 ms
12651 ms: Memory reducer: high alloc, background
12651 ms: Memory reducer: started GC #1
# Following GC
Fast promotion mode: false survival rate: 5%
12702 ms: Heap growing factor 1.1 based on mu=0.970, speed_ratio=3159 (gc=347424, mutator=110)
12702 ms: Grow: old size: 28995 KB, new limit: 39950 KB (1.1)
12703 ms: Memory reducer: finished GC #1 (will do more)
12703 ms: Mark-sweep 31.5 (52.3) -> 28.4 (37.8) MB, 10.2 / 1.8 ms (+ 39.2 ms in 115 steps since start of marking, biggest step 2.5 ms, walltime since start of marking 52 ms) finalize incremental marking via task GC in old space requested
Memory allocator, used: 38696 KB, available: 2091224 KB
New space, used: 100 KB, available: 906 KB, committed: 1024 KB
Old space, used: 20641 KB, available: 571 KB, committed: 25488 KB
Code space, used: 5746 KB, available: 0 KB, committed: 6256 KB
Map space, used: 810 KB, available: 0 KB, committed: 1556 KB
Large object space, used: 1796 KB, available: 2090703 KB, committed: 1832 KB
All spaces, used: 29095 KB, available: 2092180 KB, committed: 36156 KB
External memory reported: 6351 KB

My impression here: used refers to currently live objects (not garbage), available is
something like the remaining virtual address space, and committed is what has already been
requested from the OS.

The things that stick out to me here:


  • All spaces (mostly driven by old space) continues to grow (larger than my previous heap measures?)
  • External memory jumps up and then back down

Thus began the Googling. From a link in a comment on a GitHub issue (which I sadly cannot
find anymore), I learned RSS is a bad measure: allocators try not to return memory to the OS
when they can avoid it (which makes sense if requesting memory is expensive). A tour of V8:
Garbage Collection was helpful here. Amongst many other things, it mentions that V8 sweeps
lazily:

All this memory doesn’t necessarily have to be freed up right away though, and delaying the
sweeping won’t really hurt anything. So rather than sweeping all pages at the same time, the
garbage collector sweeps pages on an as-needed basis until all pages have been swept.


I resigned myself to believing that these images get buffered in external memory, that V8
freely takes as much memory as the OS will give it even when it holds garbage, and that even
after cleaning up garbage, it does not release the requested memory back to the OS.

Note: In hindsight, now that I know what was actually wrong, I think it’s fair to question these
very hand-wavy assumptions.

Reading the Documentation

A few weeks passed and I came back to the problem, with a nagging feeling that I didn’t
really understand what was happening. After many failed attempts at getting node-heapdump
set up, I did another sweep of Googling for similar problems, which led me to read various
pages of the available Electron APIs:

  • I dug into the on-disk storage to see if there was any disk cache that was getting pulled into
    memory.
  • For a brief moment, I thought NativeImage might be
    an optimization for my use case, instead of file:// references.
  • I retried getting the memory usage for both the main and renderer processes.

I found Electron has its own Process APIs on top of
Node’s, including process.getProcessMemoryInfo() and process.getSystemMemoryInfo(). I
logged these the same way as process.memoryUsage(), but note they return kilobytes:

# Start 
workingSetSize 58.612 MB
peakWorkingSetSize 0 MB
privateBytes 21.852 MB
sharedBytes 86.444 MB
total 8388.608 MB
free 3179.868 MB
# Continued usage
workingSetSize 1113.160 MB
peakWorkingSetSize 0 MB
privateBytes 496.812 MB
sharedBytes 621.408 MB
total 8388.608 MB
free 16.728 MB
# Tabbing to a different app, but otherwise doing nothing
workingSetSize 705.212 MB
peakWorkingSetSize 0 MB
privateBytes 489.312 MB
sharedBytes 211.476 MB
total 8388.608 MB
free 980.384 MB

From this I found:


  • Electron will truly eat free memory (which the documentation defines as “The total amount
    of memory not being used by applications or disk cache.”) until it gets pressure from the
    OS. Maybe some of this is disk cache from loading the images, since the working set size
    and my previous measures of RSS don’t match up with how much of free gets used.
  • Tabbing out seems to return a fair amount of memory.

I was ready to resign myself to my previous hypothesis again: this is just an aggressive allocator.

Web Frames


The next morning, still frustrated, I did a last check on links I had saved, and went
through the devtools again. I’d had no luck before, but I thought I’d try figuring out
what’s up with those “frames” and their links to full-res images.

Here, How Chromium Displays Web Pages was enlightening, especially for understanding the
layers of abstraction in Chromium (and Electron) – useful context. What came of it was me
looking further into web frames, and ultimately Electron’s WebFrame API.

What luck: here I have another resource usage API webFrame.getResourceUsage():

const {webFrame} = require('electron')

function getMemory() {
  // format() omitted: pads/limits each field to 15 characters for the output
  function logMemDetails(x) {
    function toMb(bytes) { return (bytes / (1000.0 * 1000)).toFixed(2) }
    console.log(
      format(x[0]),
      format(x[1].count),
      format(toMb(x[1].size) + "MB"),
      format(toMb(x[1].liveSize) + "MB")
    )
  }
  console.log(format("object"), format("count"), format("size"), format("liveSize"))
  Object.entries(webFrame.getResourceUsage()).map(logMemDetails)
  console.log('------')
}
setInterval(getMemory, 5000)

# Start
object          count  size       liveSize
images          0      0.00MB     0.00MB
scripts         0      0.00MB     0.00MB
cssStyleSheets  1      0.01MB     0.01MB
xslStyleSheets  0      0.00MB     0.00MB
fonts           0      0.00MB     0.00MB
other           0      0.00MB     0.00MB
# After searching
object          count  size       liveSize
images          694    1022.10MB  879.37MB
scripts         0      0.00MB     0.00MB
cssStyleSheets  1      0.01MB     0.00MB
xslStyleSheets  0      0.00MB     0.00MB
fonts           0      0.00MB     0.00MB
other           1      0.00MB     0.00MB

Okay, this is finally looking REALLY promising. Luckily for me, Electron also exposes
webFrame.clearCache():

// Manually clear by pressing c
document.addEventListener('keydown', e => {
  if (e.key == 'c') {
    webFrame.clearCache()
    console.log('CLEARED')
  }
})

Result, printing with RSS now:

# Just images, after exiting search, invoking webFrame.clearCache 
rss 885.157888 MB
object count size liveSize
images 695 2083.41MB 1940.65MB
... CLEARED
rss 533.31968 MB
object count size liveSize
images 0 0.00MB 0.00MB
...

Yay, it’s something! My suspicion is that, this being a single-page app, there is no
navigating away to flush the current page’s cache. I’ve since added this call on every
directory change, and I’ve seen better drops in RSS: usually from 900MB to 300MB, though in
the test I ran for this blog post I’m actually still seeing 150MB kept in external memory.
From extra tests I ran, I think these buffers are temporary, however.

From the Electron source for webFrame, this method actually deals with the WebKit WebCache.
There’s a SetCapacity() method that isn’t currently exposed by Electron. I wonder if it would be useful
here, to stop images from ever being cached beyond what’s currently displayed. I looked a bit
further to try to find the logic for image caching, but called it a day.

I at least have a decent idea where this memory is coming from. The experience has given me a few
thoughts (mainly more questions):

  • Is memory (RSS) even a performance measure?:

    I only noticed this because I went looking for it. As a user, I can’t tell whether this
    cache flushing is enabled or disabled. Desert Golf, with 18MB of RSS, taxes my computer
    more than this application.

    Now, I get frustrated with the Slack desktop app as much as the next guy, but I feel like
    I’m missing something with RSS as a performance measure. I keep seeing screenshots of
    excessive memory use, but if the memory is available, what’s the harm? Maybe it takes away
    from disk cache, maybe it causes extra paging (though what if you never reach your memory
    limits?), maybe it evicts useful data from CPU caches because of all the garbage created.
    I’m not sure how much an in-memory cache of unused images would affect these things,
    though. If anyone has thoughts, I would really appreciate learning more about this.

  • Electron experiences: The good and The Layers of Complexity:

    My experience with Electron has been really positive. I set out with the idea that this
    would be a prototype, and that I would move to something else when I needed to. So far I
    haven’t needed to, and the space of cross-platform GUIs is not as easy, approachable, or
    carefree as flippant comments about Electron make it seem.

    The biggest problem for me was navigating the separate layers of complexity: Chromium,
    Node, V8 and Electron. Each of these is a huge domain in itself, and (clearly, from the
    rambles of this post) I fumbled around looking for what exists where and who owns what.
    Now I have a much better sense, and can say I mistakenly looked at views of heap usage
    again and again when it clearly wasn’t the problem. But to be fair, I think the prospect
    of having to deal with another performance issue, say CPU, would be daunting.

    I think it’s fair to say such a simple app could easily be moved to a lighter platform.

  • I should have asked:

    As I can see now, I was a developer working in a domain I had no experience in. Clearly
    there are people who work on the internals of Chromium, Node and Electron. I really should
    have sought those people out and asked on IRC, instead of flailing around without a
    fundamental overview of the pieces involved.

  • These numbers don’t line up:
    You’ll notice in the last log output, RSS is ~900MB and image
    live size is over 1.9GB. Not to mention, if I do the same search the app does, I see

    > find . -iname 'tb*' | xargs -I % cp % ~/Desktop/tmp/ 
    > du -chs ~/Desktop/tmp/
    78M ~/Desktop/tmp/
    78M total

    I’m not sure how 78MB of images becomes hundreds along the way. The external memory still
    nags at me too.