Which fields should be added to make a single-file TiddlyWiki lazy-load its tiddlers?

Dear friends

In a single-file TiddlyWiki, which field should be set on a tiddler so that it is only rendered when its entry is clicked in the standard or advanced search? That would speed up how quickly the browser loads the single-file wiki.

I would like TiddlyWiki to render only the tiddlers that are in the story river.

This would significantly speed up loading for larger single-file wikis.

Ideally the DOMContentLoaded time would be under 100 ms, or even under 50 ms, because my single-file wiki needs to be reloaded frequently. Every reload adds to the accumulated load time, and that slows down the speed at which I take notes.

Alternatively, does the single-file TiddlyWiki expose any open service interfaces? If the front end and back end of a single-file wiki could be separated, I could try to develop a fast Python server that lazy-loads tiddlers.

Could the interface be something like `$tw.<MethodName>`?

I have already tried a Python-based local server designed specifically for the single-file wiki, combining asynchronous loading, gzip, and browser caching. However, the DOMContentLoaded time did not improve significantly.
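Roughly, the server does something like this (a stripped-down sketch, not the actual code; the file name and port are placeholders):

```python
# Minimal sketch: serve the single-file wiki with gzip compression and
# cache-validation headers. Assumes the wiki lives at ./mywiki.html.
import gzip
import hashlib
from http.server import HTTPServer, BaseHTTPRequestHandler
from pathlib import Path

WIKI = Path("mywiki.html")  # hypothetical file name

class WikiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        raw = WIKI.read_bytes()
        etag = hashlib.sha1(raw).hexdigest()
        # Let the browser reuse its cached copy if the file is unchanged.
        if self.headers.get("If-None-Match") == etag:
            self.send_response(304)
            self.end_headers()
            return
        body = gzip.compress(raw)
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Encoding", "gzip")
        self.send_header("ETag", etag)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), WikiHandler).serve_forever()
```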

Any reply would be highly appreciated

This request sounds confused to me. Perhaps I’m missing something.

Logically, lazy loading’s limited¹ to server-side processing. A single-file wiki has no option but to download everything, to parse the small HTML wrapper, and to parse the tiddler store. But initially it only renders the main UI and the default tiddlers. The others, as you would like, are not rendered until something launches them in the story river.

If you have a single-file wiki plus external content using _canonical_uri, that external content (usually images, pdfs, etc.) should only be downloaded when it’s needed for rendering.

I think that’s all you can do for lazy loading of a single-file wiki right now.

But it’s interesting to imagine a save mechanism that looked at the list of default tiddlers and their recursive dependencies, and combined those—together with the core UI—into one bundle that was immediately parsed, storing the rest in an unparsed blob that is only parsed once the initial rendering is complete. I have almost never had performance issues with my (usually fairly small) wikis; the only time I did, folks here were able to give me some simple hints to fix it. So I have not spent any real time looking at performance characteristics. I don’t know how much this imaginary mechanism might be able to speed things up. If most of the load time for a large single-file wiki is over-the-wire, it couldn’t help much. But if a large amount of time were spent in JS parsing and JS runtime, then it might be worth investigating.




¹ And all allowable alliterative allusions :wink:

  • How many tiddlers do you have?
  • How big are the tiddlers on average?
  • How big is the whole html file?

There is some lazy-loading info, but we first need to make sure that you really need it.

We are talking about 10,000 to 20,000 tiddlers, which a single-file wiki can usually handle well, provided it is not bloated with unnecessary plugins and slow custom filters.

For a single-file wiki, there is no actual lazy loading.

Instead, what you do is externalise the images that take up a lot of space. You do this by saving them locally, preferably to a relative directory, ./files.

To manually create an external image just create the tiddler with the appropriate image content type, and add a _canonical_uri field with a URI pointing to the actual image location. (from TiddlyWiki.com)

The reason to prefer the ./files directory is that it is the directory used to serve files if you switch to using Node.js.
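If you later switch to the Node.js layout, where tiddlers live on disk as .tid files, the manual step could even be scripted. A rough sketch (paths, title, and content type are all hypothetical):

```python
# Rough sketch of the manual step above: move an image into ./files and
# write a stub .tid that points at it with _canonical_uri, so only the
# small stub stays in the wiki.
import shutil
from pathlib import Path

def externalise_image(image_path, tiddler_title,
                      files_dir="files", tiddlers_dir="tiddlers"):
    image_path = Path(image_path)
    Path(files_dir).mkdir(exist_ok=True)
    Path(tiddlers_dir).mkdir(exist_ok=True)
    # 1. Copy the binary into ./files
    shutil.copy(image_path, Path(files_dir) / image_path.name)
    # 2. Write a stub tiddler with the image content type and _canonical_uri
    stub = (
        f"title: {tiddler_title}\n"
        f"type: image/png\n"                      # match the real content type
        f"_canonical_uri: ./files/{image_path.name}\n"
        "\n"
    )
    (Path(tiddlers_dir) / f"{tiddler_title}.tid").write_text(stub, encoding="utf-8")

externalise_image("diagram.png", "MyDiagram")  # hypothetical example
```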

I will add that there is a great tool that takes a standard wiki with embedded images and, as a “batch” process, converts them to external files, generating a zip file to unzip in the correct location.

  • With this you can just add images as desired and occasionally export images to keep the wiki small.

See Externalising Image Files with JSzip, using tools you probably already have, as discussed in Towards a friendlier workflow for images in notes - #39 by TW_Tones

From the work of @Mark_S

My single-file TiddlyWiki now contains over 1.1 million characters.

99.9% of the content in my single-file wiki is plain text, including bilingual notes in both Chinese and English. I haven’t embedded many PNG images into the wiki, apart from “$:/favicon.ico”.

Perhaps use “_canonical_uri” to externalize infrequently used large plain text tiddlers


There are 5,018 tiddlers. Given how large and frequently updated my single-file wiki is, I expect the count to reach around 10,000.

Tiddlers whose main text runs from several hundred up to tens of thousands of characters account for 40%; those with over 2,000 characters account for 59%; and those with tens of thousands of characters account for 1%.

The whole HTML file is 7.24 MB.

Using $:/plugins/tiddlywiki/jszip to export tiddlers as a zip and then importing that zip back into the single-file wiki does not seem to support searching the contents of the zip. It could optimise the wiki’s loading speed, but the cost is more limited tiddler retrieval.

I can try the Node.js version of TiddlyWiki again. I feel that the version-control function of the Node.js version is not as strong as that of the single-file version. 99.9% of the content in my single-file wiki is plain text.

Given 1.1 million characters and 5,018 tids, that’s an average of 226 characters per tid. I’d have expected a much higher proportion to be small - in the “tens to a couple hundred characters” range.

With node, the various git saver modules are still listed. I don’t know if they work, but if they do then I think the built-in version-control function is much the same as with the single-file version.

Beyond that, for roll-your-own version control outside of TW, I think the options with node are vastly superior. Mainly because version control in general (and git especially) and the surrounding tools tend to be designed for multi-file projects. The downside is that it means learning the basics of a version-control tool!

However, every time I add an important tiddler of several thousand words, the DOMContentLoaded time of the single-file wiki increases by a few milliseconds. I still need to add a lot of such notes (mainly high-reuse records of various programming topics). Currently my note-taking is focused on tiddlers of several thousand words. I hope the other tiddlers can be lazily loaded while advanced search still covers them.

The key metric is DOMContentLoaded. I hope the maximum load time for the single file does not exceed 500 ms, and ideally stays at 100 ms or even 50 ms.

Could using git with the Node.js version to manage the files in TiddlyWiki’s ./files directory completely replace TiddlyWiki’s native version-control mechanism?

I previously learned basic git usage, such as git push, git commit, git pull… In my case, these commits would only need to go to the local repository rather than being uploaded to a remote one.

If you have a splash screen set up to provide notice at load time, pushing TiddlyWiki to more than 20 MB has been fine. Once loaded it runs well. I would continue to use it as long as you are happy; you can return and address it if and when it gets too slow.

  • If saving interferes, you can turn off autosave.

It’s easy to externalise, move to Node.js, convert to skinny tiddlers, target oversized inclusions, etc. - but only when you need to.

If a single-file TiddlyWiki is split into multiple single-file wikis, how can the two-way links across the files be kept in sync? $:/plugins/flibbles/relink probably can’t do it.

I tested out lazy loading on one of my node wikis, but the loss of search was a showstopper for me, and I switched to using _canonical_uri for all binary content. (On that one, ./files has 11 MB for 7 files, whilst ./tiddlers has 1.2 MB for 236 files, and 126k characters by your filter count (very neat that one btw!))

In terms of initial load time, DOMContentLoaded on a node TW on my home network is about 1.3 seconds. So yeah, that’s awful compared to what you’re used to (it’d undoubtedly be faster if I ran node on my desktop with an SSD, versus it running in a VM on another machine, which mounts the data over NFS from a third system, with the data ultimately stored on spinning rust HDDs).

BUT: once they’re loaded they’re client-side, and the only traffic when saving a new tid is the data of that changed tid. I just added a 78 kB file to a new tid as a test, and it took 15 ms for the PUT to complete - so plenty fast :slight_smile:
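For illustration, that save boils down to a single HTTP PUT against the server’s TiddlyWeb-compatible API. A minimal Python sketch, assuming the default recipe name and a server on localhost:8080:

```python
# Sketch: save one tiddler to a running Node.js TiddlyWiki server via its
# TiddlyWeb-compatible HTTP API. Only this tiddler's data goes over the wire.
import json
import urllib.request
from urllib.parse import quote

def put_tiddler(title, fields, base="http://127.0.0.1:8080"):
    """PUT a single tiddler into the default recipe."""
    url = f"{base}/recipes/default/tiddlers/{quote(title, safe='')}"
    body = json.dumps({"title": title, **fields}).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="PUT", headers={
        "Content-Type": "application/json",
        # The server rejects writes that lack this header.
        "X-Requested-With": "TiddlyWiki",
    })
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    status = put_tiddler("Test note", {"text": "A few thousand words go here...",
                                       "tags": "notes"})
    print("PUT returned HTTP", status)
```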

Yup. I have ./tiddlers and ./files and tiddlywiki.info all version-controlled by git, outside the awareness of TW itself in any way. I just run the equivalent of `git add *` and `git commit -a -m "latest commit"` when needed (usually every few days by default, sometimes extra if I’m playing with a plugin and want a rollback point).

My current git log summary looks like this (5 days? I’m due a new commit!)

(`git log --graph --abbrev-commit --decorate --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(auto)%d%C(reset)' --all`)

afaik, they can’t. But this is also a different proposal from any that have been discussed so far. A node setup is still a single TW from the browser’s perspective; it just has different load and save mechanisms.


Perhaps you can try the Node.js-based version of TiddlyWiki, along with some other popular Node.js frameworks, to solve this problem. Or directly modify the source code of the single-file TiddlyWiki.

Do not directly hack the code of the single-file wiki. This will lead to extremely hard-to-find problems, or it will be overwritten with the next save. If done wrong, it can brick your wiki or cause data loss without you even noticing it.

You could have an index of your tiddlers and then search that. @Mohammad has some code that does indexing of TW files (to share searches across multiple TW’s, but I’m sure it could be adapted for this).
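This is not @Mohammad’s code, just a minimal sketch of the idea in Python. It assumes a recent single-file wiki (5.2 or later) that keeps its tiddlers in JSON store areas, and the file name is a placeholder:

```python
# Sketch: pull tiddler titles and text out of a single-file wiki's HTML and
# build a simple in-memory index you can search without opening the wiki.
import json
import re
from pathlib import Path

STORE_RE = re.compile(
    r'<script class="tiddlywiki-tiddler-store" type="application/json">(.*?)</script>',
    re.DOTALL,
)

def index_wiki(path):
    """Return {title: text} for every tiddler found in the wiki file."""
    html = Path(path).read_text(encoding="utf-8")
    index = {}
    for block in STORE_RE.findall(html):
        for tiddler in json.loads(block):
            index[tiddler.get("title", "")] = tiddler.get("text", "")
    return index

def search(index, term):
    """Case-insensitive substring search over titles and text."""
    term = term.lower()
    return [t for t, text in index.items()
            if term in t.lower() or term in text.lower()]

if __name__ == "__main__":
    idx = index_wiki("mywiki.html")   # hypothetical file name
    print(search(idx, "lazy load"))
```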


I could, but that was also before I understood (or maybe before I even encountered the concept of) _canonical_uri and how it worked. It’s the better system for this case and for any similar TW I can imagine myself setting up in the future.

Indexing across multiple TWs is what @XYZ was just asking about, though, so maybe they’ll find it of value.

Well, it seems I took a note about this before: use $:/plugins/kookma/searchwikis. However, automatically updating bidirectional links across single-file wikis still seems infeasible; if a two-way link that spans two single-file wikis is updated, the index lists of both wikis need to be updated manually.

The hard drive in my computer is an SSD. By repeatedly refreshing and monitoring the DOMContentLoaded time in the Edge browser console, I see it fluctuate between 800 ms and 1.2 s. May I ask whether your solution is to use a Node.js server to accelerate the single-file wiki? Is this natively supported by TiddlyWiki? If not, `npm install…`?

Just to clarify: externalising or making skinny tiddlers is mostly of value for media. The tiddler that does this can be tagged, given other fields, etc. This is where your searchable metadata can reside, but I would avoid externalising text, because its value lies in being in memory.

  • If you were to build multiple wikis you would still need to curate your data so you know which wiki to look in.

I expect we can implement methods to support this; I have played with this to a great extent. However, there are arguably better solutions available by using MWS.

Personally, with a splash screen, load time worries me little; once loaded, even large wikis often run very fast. There are also many ways to improve performance without major changes, but externalising media makes a lot of sense when an image can be 2 MB, while 2,000,000 characters of text holds a lot of information.

One approach I have explored for images is to ask an LLM what each one contains and to save that text alongside the image, to make it searchable.