Which fields do I need to add to make a single-file TiddlyWiki lazy-load these tiddlers?

Could that be externalized through this plugin, by using the '_canonical_uri' field, or by some other method?
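
For example (if I understand the _canonical_uri mechanism correctly), a lazy-loaded image tiddler would carry only a type and a _canonical_uri field pointing at the external file, something along these lines, where the title and path are just placeholders:

title: MyExternalImage
type: image/jpeg
_canonical_uri: images/photo.jpg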

Ideas from other technology stacks: use a database such as MySQL to speed up access to the note data in TiddlyWiki, use a graph database such as Neo4j to visualize the bidirectional-link relationships between tiddlers, or use vector databases and knowledge-reasoning engines to process the note data. Artificial-intelligence techniques may also be applicable to the note data in TiddlyWiki, which is rarely updated.

Sorry, I don't follow.

I tested the DOMContentLoaded time of the externalization plugin again. The loading speed was the same as the single-file version, but the combined size of the several files was a bit smaller than the single file.

I exclusively use node TW with its paradigm of multiple files on the back-end.

Reading back over your original description of the problem though, I'm a little confused by your workflow - and maybe this is just my inexperience with single-file TiddlyWikis.

Surely the file only needs to be loaded rarely? Writing and saving changes don’t require a reload by my understanding (or just-now quick testing with a fresh empty.html).

In short, I'm wondering if you can explain your workflow that has you loading the single file so often that its load times are an issue?

Good idea! I can frequently update a lot of tiddler content in an empty.html file [approximately 5,000 tiddlers still need to be added manually], because empty.html has a very fast initial loading speed. After the updates are done, I import the note data from empty.html back into my current TiddlyWiki, which takes 800ms to load. The loading time increases after the import, but this still doesn't solve the problem of the slow initial load of a single-file TiddlyWiki. Perhaps I need to manually set up a caching mechanism for the single-file TiddlyWiki data in the Edge browser.

uh, that’s not what I suggested at all. My use of an empty.html was only checking that edits and saves could be made without reloads.

When I load a TW, I load it once. Then it stays loaded, and no matter how many edits and additions I make, I don't reload again. It stays loaded for days at a time (however long I leave Firefox running). An initial load can take several seconds, but I don't care because it's rare. But you're writing as if you're getting a reload time every time you save a note, and I'm trying to understand if and why that's the case, or what I'm misunderstanding about your workflow.

As an aside, for my taste in node TW workflows, I'd be looking at ways to add those 5,000 tiddlers by writing compatible .tid files directly to the ./tiddlers directory (possibly via some local operating-system scripting - that's very dependent on where the data originates), after which restarting the node TW will pick them all up for use. Depending on the origin of your data (and your comfort writing a script to create those files), this may or may not be suitable. (I recently created about 30,000 (!!) .tid files for a node TW via some scripting, for testing exactly this kind of thing.)

We’re all wondering why you need to load so often :wink:

The loading time tends to be proportional to the total size of the TW file plus any additional rendering that has to happen.

For instance, if you open with a full story river including images and lots of tiddlers, that’s going to take longer. Also, if you have the “recent” tab opened, then there’s lots of rendering if you have a lot of “recent” items. So I try to select some other tab (like “open”).

You can reduce the size of your local wiki by about 2 megs by externalising the core. But you have to do this on every device where you will use the file. In theory, once you’ve separated the core, the browser will remember it and cache it which may or may not help with loading speed.

I need to update a large number of code blocks (there are thousands of them, previously recorded in a folder), and each code block needs its own tiddler. Every time a tiddler is recorded, its consistency must be guaranteed. I'm afraid that accidental mistakes might creep in when recording dozens or hundreds of tiddlers at once, making the notes invalid. Or, while making a few updates to a single-file TiddlyWiki, my laptop might hit an unexpected situation (such as a black screen or a freeze), which would mean re-entering a large number of tiddlers manually.

Must it be every time though?

You can save your TW after each input, to ensure you have a point to return to if a later check shows there was an error, or if the system crashes.

But if you can check every 10th time, then you cut your reload issues to 1/10th their current scale - your timing goes from as high as 800ms, to an average better than 80ms. And if there was an issue, you only have to go back at most 10 previous TW saves.

This sounds like each one is already its own file? Which sounds to me ideally suited to using node. Convert each file, as it currently is, into a node-compatible .tid file (or another format if that's more suitable; "code" is vague, but I understand you may not want to go into detail about the contents). That would get the content into TW, and then you can edit within TW from there. Depending on the content and your workflows, this may shortcut a lot of tedious data entry.

Any text file (regardless of content) can be made into a valid node-compatible .tid file with three lines at the top (the third line is blank):

title: __unique_title_here__
type: text/plain

That would potentially get everything into node very quickly, after which further editing and changing the type would be a smaller piece of work, I think?
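
As a rough sketch only (the folder names and the filename-as-title scheme below are assumptions to adapt to your own layout), a small Python script could prepend that three-line header to every file in a folder and drop the results into ./tiddlers:

import pathlib

SOURCE_DIR = pathlib.Path("code-blocks")   # assumed folder holding the raw code files
TARGET_DIR = pathlib.Path("tiddlers")      # the node TW ./tiddlers directory

TARGET_DIR.mkdir(exist_ok=True)

for src in sorted(SOURCE_DIR.glob("*.txt")):
    title = src.stem                        # file name (minus extension) becomes the tiddler title
    body = src.read_text(encoding="utf-8")
    # Three header lines (title, type, blank line), then the original content verbatim.
    out = TARGET_DIR / (title + ".tid")
    out.write_text("title: " + title + "\ntype: text/plain\n\n" + body, encoding="utf-8")
    print("wrote", out)

Restart the node TW afterwards and it should pick up all the new .tid files; titles would obviously need to be unique across the wiki.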
