Now that I am almost ready to commit, I see that I will need a .gitignore entry for the roles.csv file in each wiki directory. Other than that, I should be good to go.
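A pattern like this in the top-level .gitignore should do it (assuming the wikis live in subdirectories of the repo; adjust to your layout):

# ignore roles.csv in every wiki directory
**/roles.csv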
I developed TidGi (short for Tiddly Git) for automatic git backup. It uses GitHub - tiddly-gittly/git-sync-js: JS implementation for Git-Sync, a script that backs up notes in a git repo to the remote. So I just double-click to open the app and use it like Notion, with no need to touch the dirty things under the hood (the terminal and the git CLI).
Here is a script I use to back up my long-running Node.js TiddlyWiki. I made a directory in my wiki directory called wiki_backups and put the script in there, then run it from cron daily.
This keeps up to the 12 latest daily copies of the wiki.
#!/bin/bash
# Resolve the directory this script lives in (the wiki_backups directory)
dir=$(cd "$(dirname "$0")" && pwd)
backup_file=$(date "+wiki-%Y-%m-%d.tar.gz")
# The wiki directory sits one level above wiki_backups
pushd "$dir/.." >/dev/null
tar c wiki | gzip > "$dir/$backup_file"
echo "Backup saved to $backup_file"
popd >/dev/null
# Prune old backups: ls -t sorts newest first, so keep the 12 newest
pushd "$dir" >/dev/null
ls -t wiki-*.tar.gz | tail -n +13 | xargs -r rm -fv
popd >/dev/null
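For reference, a crontab entry like this runs it daily (the path is illustrative; adjust to wherever your wiki_backups directory lives):

# run the wiki backup every day at 03:00
0 3 * * * /home/user/wiki/wiki_backups/backup.sh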
Using git with TiddlyWiki on Node is a good idea, but if you have a separate data file and meta file for each tiddler, renaming a tiddler leaves you with some annoying work to do yourself. For instance, for the fubar tiddler you will get something like fubar.tw5 and fubar.tw5.meta. If you rename fubar to horror, you have to rename both files. But you should have renamed them with git mv (and restarted node); if you did the rename inside TiddlyWiki instead, you have to do it manually.
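Concretely, the rename means two commands, not one (paths assume a tiddlers/ subdirectory; the file names are just the example above):

# rename both the data file and its meta file together
git mv tiddlers/fubar.tw5 tiddlers/horror.tw5
git mv tiddlers/fubar.tw5.meta tiddlers/horror.tw5.meta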
This is why I don’t put the tiddlers directory under git; instead I maintain a git-operated copy of it where I manually keep all the changes in sync. That’s not very pleasant.
For my use cases, that’s not a problem. Any such changes I do programmatically, so if I need a rename, I can do it for the matching .meta file alongside the main tiddler.
I don’t quite follow what you do here, but it doesn’t sound pleasant! My workflow seems relatively simple for my use cases.
What does this mean? Why not just auto-commit everything? My git repo is 2 GB after importing from Evernote, so I don’t care about a little mess in it.
Why are you renaming files in your git repo? Git commits snapshot your files at a specific time, and if you rename them later in TiddlyWiki, or even manually outside of TiddlyWiki, then the next git commit will show those files renamed. I just don’t understand what you are doing with git, because commits should generally not be messed with once made; any changes should be recorded with a new commit, so I am baffled about what you are referring to. Why are you using the git mv command? Those commands are there for those who really need them for specific purposes, but generally, especially when backing up a TiddlyWiki, you should not be altering your commits, but rather simply making new commits with the changes. The way you are doing your backups makes me wonder why you are even using git in the first place. You might as well just do manual archive backups.
You can use the git mv (move) command to move files into subdirectories; it does not mess with the commit history. There is nothing wrong with git mv.
After moving a file, git status identifies the action as “renamed”.
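For example, assuming a subdirectory aa already exists, the move itself would be:

PS E:\temp\asdf> git mv test.txt aa/test.txt

and git status then reports: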
PS E:\temp\asdf> git status
On branch master
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        renamed:    test.txt -> aa/test.txt
If you move a file with the file explorer from one directory to another, git status identifies it as one file “deleted” and a new file “untracked”:
PS E:\temp\asdf> git status
On branch master
Changes not staged for commit:
  (use "git add/rm <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        deleted:    new.txt

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        aa/new.txt
So technically there is a difference. BUT in the end, in a visual history, git identifies both actions as “renaming” a file, so practically there is no difference.
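You can check this yourself: rename detection happens when git computes a diff, not when the change is recorded, so both workflows show up the same. For example (standard git commands; the file name is from the example above):

# follow the file across the rename in the log
git log --follow --oneline -- aa/test.txt
# ask diff explicitly to detect renames between two commits
git diff --stat -M HEAD~1 HEAD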
So @jypre, a git mv is not necessary. You can move your files around using the file explorer; it should be much less manual work.
Then two git commands should be just fine:
git add .
git commit -m "some useful description"
hope that helps
Yes, I am aware of that, which is why I said:
Those commands are there for those who really need them for specific purposes, but generally, especially when backing up a TiddlyWiki, you should not be altering your commits.
I completely agree. There is no practical reason to make changes to files using git commands when backing up a TiddlyWiki. It just adds another unnecessary manual step, which the OP said is “not very pleasant”.
Well, sorry for the *sneeze* late reply, but I guess I forgot about this, and guess what happened today-ish? What’s so funny is that when I went looking for the answer, I found… my previous self. (How do you do?) Apparently I never finished the thought.
Somehow I was trying to split one tiddler into a few more and lost the (not yet moved) data when the original was saved. I was thinking it saved each tiddler individually on save; however, I guess the way some other save methods used to work, at least for the full file, must be where I got confused. Is there any way to do that? Because I guess this is going to be the most common issue (me).
I had some script stuff for watching for file changes, so I may hopefully look into that as well, but since I’m here again and have a renewed interest, I thought I’d say something…
I saw this thread and was halfway through reading the first dozen replies before realising it was over 2 years old. Oh! Time to scroll to the new stuff.
So, uhm, while you’re trawling your backups (good luck, I’ve been there too), I’ll make note of my over-thought node backup solution.
First up - it’s a git repo, and my normal rate of usage is updating two or three tids per day, one of them the same one each day (wordle log). So I’ve a script which checks how many files have changed (ignoring “Draft of…”, StoryList, and probably a few others): one point for each. It also counts how many days (multiples of 24 hours) since the last git commit: one point for each of those. If there is at least one changed file and the total score is six or more, then the final check occurs: that no files are newer than 6 hours. If that passes, it runs a git commit (backdating the time to the last edit) with an automatic git message summarising the changes.
The final six-hour window is so my git commit doesn’t land in the middle of a bunch of reorganising work, but instead in what I can reasonably expect to be a stable state of things.
In practice, this rolls over a new commit roughly every two days, which is slightly more often than when I was doing it manually.
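For the curious, the logic boils down to something like this (a simplified sketch of what I described above, not the actual script; the path, ignore list, and thresholds are illustrative):

#!/bin/bash
# Simplified sketch of the scoring heuristic described above
cd "$HOME/wiki" || exit 1

# One point per changed file, ignoring drafts and StoryList
changed=$(git status --porcelain tiddlers/ | grep -cvE 'Draft of|StoryList')

# One point per full 24 hours since the last commit
last=$(git log -1 --format=%ct)
days=$(( ( $(date +%s) - last ) / 86400 ))

# Need at least one changed file and a total score of six or more...
if (( changed >= 1 && changed + days >= 6 )); then
  # ...and no file edited within the last 6 hours (a stable state)
  if [ -z "$(find tiddlers/ -type f -mmin -360)" ]; then
    # Backdate the commit to the newest file's modification time
    newest=$(find tiddlers/ -type f -printf '%T@\n' | sort -n | tail -1)
    stamp=$(date -d "@${newest%.*}")
    git add tiddlers/
    # The commit message mirrors git's own shortstat summary
    GIT_AUTHOR_DATE="$stamp" GIT_COMMITTER_DATE="$stamp" \
      git commit -m "$(git diff --cached --shortstat)"
  fi
fi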
My last dozen commits thus look like this:
* 23c0610 - (Thu, 8 Jan 2026 19:48:10 +1000) 6 files changed, 45 insertions(+), 14 deletions(-) - Nemo Thorx (HEAD -> main)
* ba0fcbb - (Tue, 6 Jan 2026 17:32:06 +1000) 5 files changed, 57 insertions(+), 23 deletions(-) - Nemo Thorx
* 42789bd - (Sun, 4 Jan 2026 22:16:25 +1000) 5 files changed, 36 insertions(+), 10 deletions(-) - Nemo Thorx
* 8c41096 - (Sat, 3 Jan 2026 01:23:13 +1000) 9 files changed, 56 insertions(+), 12 deletions(-) - Nemo Thorx
* 88eed66 - (Thu, 1 Jan 2026 02:14:02 +1000) 8 files changed, 66 insertions(+), 15 deletions(-) - Nemo Thorx
* 2926de4 - (Wed, 31 Dec 2025 01:53:21 +1000) 7 files changed, 71 insertions(+), 13 deletions(-) - Nemo Thorx
* afbc36d - (Tue, 30 Dec 2025 03:47:14 +1000) 6 files changed, 25 insertions(+), 8 deletions(-) - Nemo Thorx
* 7f9907e - (Sun, 28 Dec 2025 04:50:53 +1000) 9 files changed, 70 insertions(+), 18 deletions(-) - Nemo Thorx
* 4a0a21a - (Sat, 27 Dec 2025 01:23:43 +1000) 7 files changed, 92 insertions(+), 10 deletions(-) - Nemo Thorx
* b495e77 - (Fri, 26 Dec 2025 00:32:27 +1000) 6 files changed, 67 insertions(+), 16 deletions(-) - Nemo Thorx
* 1acf2cf - (Wed, 24 Dec 2025 00:05:49 +1000) 5 files changed, 43 insertions(+), 14 deletions(-) - Nemo Thorx
* 550b3fc - (Mon, 22 Dec 2025 11:04:47 +1000) 4 files changed, 24 insertions(+), 4 deletions(-) - Nemo Thorx
I’m happy to clean up my code and share it if anyone is interested. (Linux bash script)
@nemo … I think it would be nice to make your script available. Since it seems you rely on it for everyday work, it should be “battle tested”. So I think it would be nice if you published it in a new thread under the Tips & Tricks category, e.g. “Batch Scripts for Node.js Backups” or something similar.
Well… it’s battle tested insofar as it works for my specific use case over the last couple of months (and running in tmux!), and it is one function within a larger TW helper script (several other helper functions remain on the TODO list).
But yes, I will aim to clean up the last key things for public visibility, get it onto GitHub, and post a Tips & Tricks thread for it.
TidGi Desktop’s latest pre-release supports AI-generated backup commit messages, searching commits via a history GUI, and a configurable 30-minute auto-backup.
So I can easily search the history of changes with human-readable messages.
Well, I’m currently trying to back up against my own mistakes more so than filesystem problems. Say I delete most of a tiddler while trying to re-organize; this is a new wiki, so I don’t really know what I’m trying to back up.
I’m not even using a full Linux build (right now I’m trying to keep things simple enough to understand), so I don’t want to rely on complicated stuff like ZFS or mounted filesystems.
Right now I’m using rsync’s --backup-dir option, with a variable to enumerate 10 changes per file, plus a mv that sticks the first change of the day for each file (in the first hour) into a folder for archival, for shipping off to the backup server.
I’m thinking I’m lucky to have one good thought a day and don’t expect it to take longer than an hour to save it. The enumeration stuff above keeps 10 previous versions (per tid), so it’s not part of a backup as such, but I can manually go and fix a mistake as long as I haven’t changed the file more times than that; the daily first-change move/archive is meant to mitigate against that.
So the idea is I can catch mistakes daily within those 10 enumerated versions, and have a worst-case yesterday backup for each file.
And then probably rsync daily to a mirror for periodic backups, separately, for the “everything else” scenario.
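In rough shape, the versioning part looks something like this (a sketch, not my actual script, and using per-run dated directories rather than my per-file enumeration; the paths and retention count are made up):

#!/bin/bash
# Mirror the tiddlers and shunt overwritten versions into dated dirs
SRC="$HOME/wiki/tiddlers/"
DST="/mnt/backup/tiddlers/"
# --backup-dir (relative to the destination) collects the previous
# version of any file this run would overwrite
rsync -a --backup --backup-dir="../versions/$(date +%F-%H%M)" "$SRC" "$DST"
# Keep only the 10 newest version directories
ls -dt /mnt/backup/versions/*/ 2>/dev/null | tail -n +11 | xargs -r rm -rf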
This is the most complicated “simple script” I had hoped for, sadly; I’m still a Linux beginner. There are probably lots of bugs (maybe it deletes your data), but it seems to be working OK so far. I’d put it out there if it sounds interesting, but it’s a first draft and probably not pretty.
I’m surprised there isn’t a “last tiddler save” or revert option in the Node.js version. That would be really cool, and would probably stop a good majority of my faux pas by itself.
Potentially the reason there isn’t a Node.js option for this is that it’s something any user of TW may desire, regardless of node vs single-file.
@Yaisog has this plugin which adds internal revisions. It works on both single file and node (well, I assume so anyway; I don’t use it myself):
https://yaisog-patches.tiddlyhost.com
From discussion here: In-Wiki Incremental Revisions -- Now Even Better!
That seems a very powerful option, but it’s complicated enough that I don’t know how much time I’d want to spend trying it right now if Node.js may be a gotcha. I’ve got maybe 20 wiki tids, if that, and I don’t really understand how TiddlyWiki still works without Node.js these days, with all the extension combinations needed for each browser/client. But that’s interesting, if just a simple revert/copy wouldn’t get too complicated to think about.
Plugins should work the same, since they operate browser-side. Except for saving (whole file to local disk vs individual tids over the network), and maybe some other corner cases that escape my awareness at the moment, TW in the browser doesn’t really care where the page came from, be it a local large single file, or over the network from a node instance that assembles basically the same thing from its local collection of many files.
In short, I’d be quite surprised if there WAS a difference in behaviour. I only added the disclaimer because I’m aware of the limits of my awareness on the topic in depth.
You should be able to test it with a simple drag-and-drop onto your TW (and if it doesn’t work, removal should be similarly easy; worst case, make a backup before testing and then revert to that backup). (Sometimes that’s what I use backups for: not disaster recovery, just “easier than the other also-technically-possible ways to get a system back to an earlier state”.)
It looks really complicated, and I’d hate to add that much complexity right now. Maybe if the backups get larger than a megabyte. …one thing I miss about the single web page option… one day it’s all overly complicated. I don’t know how Node.js works… but it does…