TiddlyWiki Integration / Automation

I’m trying to understand my options for integrating TiddlyWiki with my other software and automations. I’m using various RPA techniques to generate files to be imported into TiddlyWiki, and I’m trying to automate the process as much as possible. Of note is that I’m using TiddlyWiki App (file-per-tiddler) as my platform of choice right now.

For instance, I’m using Power Automate to export Outlook calendar information into a JSON file that I manually import into TW each day. Weekly, I run a job to build a JSON version of my company’s Active Directory and bring that into TW to see if any of my contacts (managed in TW) have had any change in their details. Monthly, I also do things like moving bank transaction logs into a budgeting wiki. There are other examples, but that gives a flavor.

As TiddlyWiki App is file-per-tiddler based, I’d hoped I could just put the file in the folder and it would be picked up, but that doesn’t seem to work. I have to wait until I completely reload the app, which I generally only do weekly or so. Regular Node.js does this within minutes and BOB does it instantly, but I’d like to stick with TiddlyWiki App if possible.

One alternative I’m trying to think how to build would be a button that would essentially run the import & overwrite steps “silently” when pressed; I could label it “refresh” or something. Not automated, but less hassle than what I’m doing.

Is anyone else doing anything like this, or do you have some suggestions?

This is a classic personal workflow. For such requirements, I typically use certain HTTP APIs of TiddlyWiki to enable import features. For instance, if I want to save or clip content in the browser, I create a custom browser extension or app that fits my personal workflow.

Here I have experimented with tm-http-request to retrieve stock-related data from a Google Sheet using opensheets, based on discussions here. Similarly, could a Google Sheet be used as the middleman between TiddlyWiki and your data?

Without specific knowledge of the implementation of the Node server and TiddlyWiki App: it is quite normal to minimise access to and from the file system, so having the server keep an eye on the file system for changes made outside the server would be an expensive process, given that almost every check would find no change.

  • I would not be surprised if others have had to deal with this, as I think a lot of people are driven to use the TiddlyWiki node server versions for similar reasons.

Consider this:

  • If it’s a manual process, why not import to the wiki directly (in the browser or TiddlyWiki App window)?
  • If the process is automatically updated files, then perhaps you can add a process to refresh the wiki along with the transfer process.
    • I would imagine there is a command-line server command that can be issued to do exactly this. See this command as a good possibility.

I’m wondering if I’m misunderstanding you here, or if you’ve got a plugin on the node.js version which picks up changes to the filesystem? I have to restart my node version of TW for any changes on the filesystem to be reflected into the web view of things. Waiting minutes or hours makes no difference.

(I’d LIKE it to be instant, but even within a few min would likely be fine for my use cases)

In general, I think TW<->other software/automation is definitely best served by file-per-tiddler and TW picking up the changes from the filesystem. I’m hoping MWS will provide a nice trivial way to import to its DB for the same sort of automation use cases.

Under Linux at least, the “inotify” system allows an application to ask the kernel to notify it when various filesystem events occur, so it consumes no resources for the app. Even a more primitive system that simply polls the filesystem periodically and reloads anything newer on disk than its internal memory cache would, I doubt, be expensive, at least at any personal scale (maybe not if you were Tiddlyhost or similar!). I’m positive Windows and macOS have similar notification APIs for this type of application need.
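To make the primitive polling idea concrete, here is a minimal Python sketch of mtime-based change detection (hypothetical code, not part of any TiddlyWiki server; the `tiddlers` directory name and 5-second interval in the commented loop are assumptions):

```python
import os
import time

def scan_mtimes(root):
    """Return a dict mapping each file path under root to its mtime."""
    mtimes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtimes[path] = os.stat(path).st_mtime
    return mtimes

def changed_files(previous, current):
    """Paths that are new, or have a newer mtime than the cached value."""
    return [p for p, m in current.items() if previous.get(p, -1) < m]

# Hypothetical polling loop a server could run:
# cache = scan_mtimes("tiddlers")
# while True:
#     time.sleep(5)
#     now = scan_mtimes("tiddlers")
#     for path in changed_files(cache, now):
#         print("reload", path)  # here the server would re-read the tiddler
#     cache = now
```

Each poll is just one directory walk and a dict comparison, which supports the point that at personal scale this is cheap; inotify-style kernel notification only becomes important with very large trees or many watchers.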

I wasn’t aware of this command before. I just had a tinker with it: it takes a file from an arbitrary location and saves it into the TW path (adding a .meta file if needed), but the node process still needs a restart to be aware of it.

my testing:

$ tiddlywiki ~/test/testingTW --load ~/mpv-shot0001.jpg
syncer-server-filesystem: Dispatching 'save' task: /home/nemo/mpv-shot0001.jpg
syncer-server-filesystem: Dispatching 'save' task: $:/StoryList

resulted in these files in the wiki’s tiddlers directory:

-rw-r--r-- 1 nemo 607K Jul 14 19:33  _home_nemo_mpv-shot0001.jpg
-rw-r--r-- 1 nemo   50 Jul 14 19:33  _home_nemo_mpv-shot0001.jpg.meta

And the meta file has this as its content:

title: /home/nemo/mpv-shot0001.jpg
type: image/jpg

Restarting the node service is what I currently do when I import a batch of tid and other files from an external process. I don’t have any regular automated import needs like stobot mentions - mine (so far) are once-off bulk imports of notes from different note-taking apps I’ve previously used (tw2, tomboy, macOS stickies, Linux stickies, etc.)

(edit: addendum: because I’m bulk importing from prior notes, I’m also pre-processing them into valid .tid files (or txt/meta, or md/meta), so I get by with just a simple file copy. For image and other media files, I prefer to use _canonical_uri so they are loaded when required from within the files/ path - so a bulk import via a bespoke local script ends up being the better plan for me there too - at least, for what I’m aware of so far of my bulk import options!)

It looks like you need the tiddlywiki app to add a menu item “reboot”.

This is why I wonder if you are best off importing into your interactive wiki - or rather wonder why you are not, because then the server will do all the work for you, creating files etc…

Maybe I just don’t know enough TW tricks, but I cannot think of a way that is easier than what I’m doing.

Relevant background, though: I’m a Linux admin, and the old Unix philosophy that “everything is a file” is pretty core to how I approach many IT problems. When I went looking (about 18 months ago) for a better note-taking system, my basic essential criteria were:

  • wiki/hyperlinking nature (I’m addicted to wikis)
  • markdown or similar “readable” markup formatting (tomboy format was xml, and eugh to that)
  • per-file based (for import ease, and also git history saving. Stretch goal was that it used git internally to handle history)
  • relatively easy to setup / fun to use

Realising that the node implementation of TW covered the third point is the only reason I’m here. (Noting that there were options that covered the first three points, including the stretch git goal, but failed the fourth. I figured from TW Classic history that TW would still score very high on “fun to use”.)

So far I’ve imported 78 .tid (or txt+meta or md+meta) files (mostly from historic TW Classic), and have an estimated 650 to go (mostly from a ccTiddlywiki I still run, but also a few hundred from tomboy notes; each presents its own challenges that I’m slowly working through).

If it were a matter of importing those into TW5 through the interface? I don’t know what my options to do that would even be, beyond wrapping everything into JSON (which is not native to my thinking - it feels like extra work to create data I’m less confident is correct, just for TW to turn it into files anyway).

Incidentally, per-file also suits my “save history in git” preference - since git is file-based, tracking changes to content through git review tools becomes trivial (and because I’m keen on preserving the history that already exists, my import process actually involves scraping historic backups of data and saving those into git with date-accurate historic dates… but that’s REALLY getting into my personal idiosyncrasies!)

Thanks for the feedback everyone, sorry for the delayed response

@oeyoews: If there’s an easy way to implement this I’m interested, but unfortunately my programming knowledge is limited to data science / R and doesn’t translate well to the web.

@arunnbabu81: Adding google sheets sounds like more work than I’m doing today, but will keep it in mind.

@TW_Tones: BOB does this with websockets and I don’t think the demand is very expensive - it’s not like it’s just constantly polling, but no other solution has implemented this.

@nemo: I think you’re probably right and I probably misspoke. I was really thinking about the speed at which edits made in one browser window propagate to another, which is different from the file watching. I apologize for the confusion there.

@tomzheng: That would be better than nothing, but as I’m not the only source of new information I need to import, I’d be restarting it repeatedly throughout the day.

@TW_Tones: I know this was directed at nemo, but I’m currently doing everything through drag and drop, and that’s what I’m trying to improve upon.

I’ve been trying various AI generated ideas and almost thought a combination like:

<$action-sendmessage $message="tm-import-tiddlers"/>
<$action-sendmessage $message="tm-perform-import"/>

could be the answer, but it requires the use of a file picker. I need to be able either to hard-code a list of file names to loop through, or to point it at a folder to run over.

ChatGPT thinks it can write a custom action-widget to perform this but I’m a little skeptical. Might try it to see. Open to other creative ideas.

I find that pushing data into a Node.js TiddlyWiki instance via the HTTP API is the most convenient way to operate. There is no need to restart the server, and the changes will automatically synchronise to any connected clients.

It is such a useful and common pattern that it would be helpful to have some examples in the docs, perhaps covering Bash and JavaScript. As to R, there appear to be libraries called httr and httr2 that can make outgoing HTTP requests. I got ChatGPT to write some sample R code, but I’m unable to verify that it works.
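As one more illustration of the pattern (not official sample code), here is a minimal Python sketch that builds such a PUT request against the standard `/recipes/default/tiddlers/<title>` endpoint, assuming a Node wiki listening on 127.0.0.1:8080; the function name is my own invention, and the `X-Requested-With: TiddlyWiki` header mirrors what TiddlyWiki’s own sync adaptor sends:

```python
import json
import urllib.parse
import urllib.request

def build_put_request(base_url, title, fields):
    """Build a PUT request that creates or overwrites one tiddler.

    The title goes in the URL (percent-encoded); the body is a single
    JSON object of field name/value pairs.
    """
    url = (base_url.rstrip("/")
           + "/recipes/default/tiddlers/"
           + urllib.parse.quote(title, safe=""))
    body = json.dumps(fields).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        # Without this header the server refuses the write (CSRF guard).
        "X-Requested-With": "TiddlyWiki",
    }
    return urllib.request.Request(url, data=body, headers=headers, method="PUT")

# Usage (assumes a Node wiki actually running on 127.0.0.1:8080):
# req = build_put_request("http://127.0.0.1:8080", "My imported tiddler",
#                         {"text": "Hello from Python", "tags": "imported"})
# urllib.request.urlopen(req)
```

Any language that can make an HTTP PUT (R via httr, Bash via curl, Power Automate via an HTTP action) can follow the same shape.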


A little bit of quick and dirty testing, and this has worked for me from the linux commandline:

$ curl 'http://127.0.0.1:8080/recipes/default/tiddlers/A%20wild%20tid-from-bash' -X PUT -H 'Content-type: application/json' -H 'X-Requested-With: TiddlyWiki' -T "tid.json"

with tid.json looking like so (I put some extra custom fields in for demonstration purposes):

{
  "title": "A wild testing tid",
  "color": "blue",
  "created": "20250715114555840",
  "modified": "20250715115147940",
  "my field name": "my field value",
  "tags": "scratch",
  "type": "text/vnd.tiddlywiki",
  "text": "Like magic, a tid arrives in your wiki from the ~~wilderness~~ land of shell",
  "revision": "0",
  "bag": "default"
}

Two things of note:

  • The JSON file format is not quite compatible with the JSON obtained via the export tiddler > JSON file menu option - the saved version has surrounding square brackets ( [] ). Remove those from the saved file, and then it’s compatible for upload this way.
  • The title field in the JSON is ignored, with the one provided in the URL being respected, both for the filename and for the title value within the saved file (note that the URL and title disagree in my example).
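That bracket-stripping step can be automated rather than done by hand. A small Python helper (the function name is my own invention) that converts a single-tiddler export into a body suitable for the PUT call above might look like:

```python
import json

def export_to_put_body(exported_json):
    """Convert TiddlyWiki's 'export tiddler > JSON file' output (a JSON
    array of tiddler objects) into the single JSON object the PUT
    endpoint expects as a request body."""
    tiddlers = json.loads(exported_json)
    if len(tiddlers) != 1:
        raise ValueError("expected exactly one tiddler in the export")
    # Re-serialising the lone object drops the surrounding [ ] safely,
    # unlike naive text editing of the file.
    return json.dumps(tiddlers[0])

# exported = '[{"title": "A wild testing tid", "text": "hello"}]'
# export_to_put_body(exported) -> a JSON object without the brackets
```

Parsing and re-serialising is safer than deleting the first and last characters, since it also copes with whitespace around the brackets.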

That makes me wonder, though, if the node version could get an option to watch for file changes in its known directories, performing as it does with the HTTP API when a file changes. I mostly do my edits on my Node wiki tiddlers from within the UI. But if they’re JS-heavy, I sometimes switch to vscode. It would be useful to not have to remember to restart in these cases.

You can use the --watch-path argument in newer versions of node.js and set that up as a dev script: Command-line API | Node.js v24.4.0 Documentation

Yes, watching the files is clear enough. But that method would simply restart the Node server. I was looking for something a little more subtle. I’ve never used the TW API except as called by the UI, but I believe Jeremy was saying that when it is used, all connected users get updates without a need for a server restart. I was hoping to get the same behavior when Node sees a file change. Of course, if this involves JS tiddlers, perhaps a reload would still be required, but I would hope the same notification that happens on load of JS tiddlers would still apply.

Node.js offers APIs for watching file directories, but they are full of caveats and platform specific oddities (see the docs for fs.watch). I had expected that these issues would be resolved over time, but that hasn’t happened.

One possibility might be for the server to watch what used to be called a “dropbox” directory, and now perhaps better called an “inbox” directory. Any tiddler files deposited there would be dynamically imported into the wiki. Perhaps the file would be deleted once it has been imported. Something simple like that would open the door to a bunch of useful scenarios, and the semantics should be a good deal simpler.
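As a rough illustration of how simple the semantics could be, here is a hedged Python sketch of such an “inbox” drain, independent of TiddlyWiki itself (the function name and callback are hypothetical; a real implementation would live in the server or call the HTTP API):

```python
import json
import os

def drain_inbox(inbox_dir, import_tiddler):
    """Import every .json tiddler file found in inbox_dir, then delete it.

    import_tiddler is a callback taking (title, fields); in a real setup
    it would PUT to the wiki's HTTP API or add to the server's store.
    Returns the list of titles imported, in filename order.
    """
    imported = []
    for name in sorted(os.listdir(inbox_dir)):
        if not name.endswith(".json"):
            continue  # ignore anything that isn't a tiddler drop
        path = os.path.join(inbox_dir, name)
        with open(path, encoding="utf-8") as f:
            fields = json.load(f)
        # Fall back to the filename if the drop has no title field.
        title = fields.get("title", os.path.splitext(name)[0])
        import_tiddler(title, fields)
        os.remove(path)  # delete once imported, per the proposal above
        imported.append(title)
    return imported
```

The delete-after-import convention sidesteps the fs.watch caveats entirely: the inbox can be polled on a timer, and anything still present is by definition not yet imported.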


That would be VERY helpful for my integrations!

Yes, I’ve seen that, but I’ve built several tools that successfully used watch. However, they were all single-platform (Linux), although I often developed and tested them on either Windows or Mac. I don’t know if I only had simple requirements that didn’t trigger any of these complexities, if the support teams simply chalked off any errors to unknown, non-repeatable glitches, or if I merely got lucky.

That’s an interesting approach. I can see the advantages. But it wouldn’t help with the workflow I’ve developed, though: I do most of my editing inside the TW UI. When I have complex JS changes, where I really want a full-featured editor (brace matching, syntax highlighting, error-reporting, jump-to-declaration, etc.) I open the tiddler file in vscode, edit it there, restart the server and refresh the page. If file watchers worked as I would like, then I could skip the restart step, and I would (I hope) be notified in the UI of the need to reload.

But an inbox would definitely be useful.

  • See how powerful the interactive environment is; import and HTTP GET are also equivalent to drag and drop.

I just want to clarify that I have built large systems with per file/folder access both personal and professional and I understand where one would come from to desire this.

  • However, I am a convert to the logic, structure, metadata, automation and parsing available within an interactive wiki: turn it into a tiddler and process it.

It is important to understand that logically we have the tiddler, whether it is an independent file or an “object” within the interactive TiddlyWiki. Keep in mind that in an interactive wiki:

  • We have drag and drop
  • Import file or files
  • HTTP actions
  • Javascript access to the wiki store eg via bookmarklets
  • Command line access to actions against a wiki

Now, as soon as data is imported, be it a single tiddler or multiple tiddlers, we gain access to the wiki’s interactive environment; relevant examples include JSON Mangler and custom parsers:

  • It is here we can build simple or sophisticated tools to process incoming data, and if built on top of a node wiki, logical tiddlers become independent files.
  • Consider a tiddler a logical file with metadata; on node it is also a physical file.

I will also add that with TiddlyWiki it’s trivial to use a customised instance of TiddlyWiki to import and process data, creating native tiddler formats to be added to the wiki using that data.

  • If the data import is a once-off or upfront task, a custom wiki allows you to document the process, test and review it, and clone it into multiple batch imports, allowing others to see how you did it.

You describe it as powerful, but I interpreted stobot’s desire for an improvement as an indication that for this use case, drag and drop is tedious.

Exactly. And physical files make it super easy for me to bulk import them via file creation outside the TW software itself.

I’m performing a one-off conversion of files that are in a range of different formats into either TW wikitext, or md+meta, or txt+meta. Doing that outside TW for a once-off import is relatively trivial. Doing it inside TW for a once-off import sounds like the worst thing to me, or at least for me and my current skills. (If that’s not what you’re advocating, then I’m lost as to what you are advocating.)

I plan on writing up my process and sharing my code. It’s just linux shell code, not wikitext code.