I have a small VM where I host a growing number of small TWs. Last night I found that the VM runs out of memory during routine tasks, and the main hog is TW.
In terms of stored size, all my TW instances are small, human-written text; the largest so far is about 1.5 MB.
At runtime, each 'tiddlywiki' process takes up about 33 MB. Okay.
But the memory available on the OS side shrinks by over 100 MB with each instance. I haven't been able to track down the other 60-odd MB per instance, but watching 'top', it's clear that available memory closely tracks each TW instance starting and stopping.
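For anyone who wants to check my numbers: below is a rough, untested Node sketch (Linux only, and it needs /proc/&lt;pid&gt;/smaps_rollup, so kernel 4.14 or later) that compares Rss and Pss for every running 'tiddlywiki' process. Pss divides shared pages among the processes that map them, so if the instances already share the node binary and libraries, the Pss total should come in well under the sum of the Rss figures that top shows.

```js
// check-tw-mem.js -- untested sketch, Linux only (needs kernel 4.14+
// for /proc/<pid>/smaps_rollup). Pss splits shared pages among the
// processes mapping them, so it's a fairer per-instance cost than Rss.
const fs = require("fs");

for (const pid of fs.readdirSync("/proc").filter((d) => /^\d+$/.test(d))) {
  let cmd;
  try {
    cmd = fs.readFileSync(`/proc/${pid}/cmdline`, "utf8").replace(/\0/g, " ");
  } catch {
    continue; // process exited, or not readable by this user
  }
  if (!cmd.includes("tiddlywiki")) continue;
  try {
    const rollup = fs.readFileSync(`/proc/${pid}/smaps_rollup`, "utf8");
    const rss = Number(/Rss:\s+(\d+) kB/.exec(rollup)[1]);
    const pss = Number(/Pss:\s+(\d+) kB/.exec(rollup)[1]);
    console.log(`pid ${pid}: Rss ${(rss / 1024).toFixed(1)} MiB, Pss ${(pss / 1024).toFixed(1)} MiB`);
  } catch {
    // smaps_rollup missing on this kernel; skip
  }
}
```

If the Pss total comes in much lower than the Rss total, the processes are already sharing a fair amount and top is partly double-counting; it's also worth remembering that 'available' memory includes reclaimable page cache, so it can swing by more than the processes themselves account for.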
Looking ahead: if a TW instance needs, say, 40 MB, and this VM isn't doing much else, it should be able to handle around 40-50 instances of TW; more conservatively, call it 20. Instead, we're hitting trouble when the count passes about six or seven.
Two questions…
- Are there ways to make instances of TW share memory? If they all read from the same base libraries, could that 30 MB-ish size go down some? (A sketch of one idea follows after both questions.)
- That aside: if running one TW instance does take 30-35 MB, where is the rest of that ~100 MB going? Is there a number somewhere that I might turn down? Or maybe I've done something wrong that's causing this? (See the second sketch below.)
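On the first question, one idea I haven't tried: TiddlyWiki is documented to be usable as an npm module, so in principle several wikis could be booted inside a single Node process and share one copy of node and V8. A sketch, assuming the documented require("tiddlywiki").TiddlyWiki() entry point; the folder names and ports are made up, and I don't know whether multiple boots coexist cleanly in one process:

```js
// multi-wiki.js -- untested sketch using the documented
// "TiddlyWiki as an npm module" entry point. The wiki folders and
// ports below are placeholders.
const { TiddlyWiki } = require("tiddlywiki");

const wikis = [
  { folder: "./wiki-one", port: "8081" },
  { folder: "./wiki-two", port: "8082" },
];

for (const { folder, port } of wikis) {
  const $tw = TiddlyWiki(); // a fresh, independent $tw object per wiki
  $tw.boot.argv = [folder, "--listen", `port=${port}`];
  $tw.boot.boot(); // boots the wiki and starts its server
}
```

Even if that works, each $tw object presumably loads its own copy of the core tiddlers, so the saving would mostly be the Node/V8 baseline rather than the whole 30 MB-ish.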
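On the second question, the only knob I know of is V8's heap ceiling: node accepts a --max-old-space-size flag (in MB), e.g. node --max-old-space-size=64 $(which tiddlywiki) mywiki --listen, though I have no idea how low TW can safely go. To see how much of a process's footprint is actually JS heap versus runtime baseline, a snippet like this (standard Node APIs) could be run under plain node or pasted into a REPL:

```js
// heap-stats.js -- prints V8 heap figures plus process RSS, using the
// standard v8 and process APIs. Run under a bare `node` to see the
// baseline cost of the runtime before TW has loaded anything.
const v8 = require("v8");
const mib = (n) => (n / 1048576).toFixed(1) + " MiB";

const stats = v8.getHeapStatistics();
console.log("used_heap_size :", mib(stats.used_heap_size));
console.log("total_heap_size:", mib(stats.total_heap_size));
console.log("heap_size_limit:", mib(stats.heap_size_limit)); // roughly the --max-old-space-size ceiling
console.log("process rss    :", mib(process.memoryUsage().rss));
```

If the bare-node RSS alone is 25-30 MB, that would explain most of the per-instance 'tiddlywiki' figure, and the remaining question becomes what the OS-side difference is made of.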
Thanks.