Auto Wiki by


15 hours later…


Auto Wiki? If you say so.

Move along, nothing to see here…


I was able to get it to load, and it’s actually pretty impressive.

More so than asking ChatGPT, IMO.

When I typed in the repo I clicked on the recommended one, and the browser I'm currently on is Chrome on Windows 10.

Right now reading on the saver:



Hi @CodaCoder @Justin_H – I tried because I saw people talking about it. It is superficially impressive, and simultaneously a good demonstration of why Large Language Models don’t do what people think they do, and can be extremely dangerous outside of playing with them for fun.

For example, at the very top of the page there is a reasonably coherent summary of how TiddlyWiki is based on the concept of tiddlers. But that apparent understanding doesn't extend to the rest of it. When it analyses the source files, it doesn't realise that they are tiddlers, and so uses misleading filenames instead of the actual titles of the modules.

There’s an old saying that it takes 10 times more effort to refute BS than it takes to generate it. With AI it seems to be more like a factor of 1000. I dread receiving AI generated pull requests and bug reports, and having to enumerate their errors and shortcomings.

I hadn’t meant to crash the party with such a negative take, but this is an area I have increasingly strong feelings about.

I have a lot of experience with ChatGPT, and find it useful in certain narrow situations (eg I’ve been learning SQL with its help). But I’ve also been burned where it has given me superficially correct answers that have resulted in bugs that have taken significant effort to resolve. I think that for me to be effective there is no shortcut to me having to thoroughly learn the tools and techniques I want to use. ChatGPT can help me with that learning, but I can’t rely on it to be correct.

I’ve had fun with DALL-E image generation too, but I would never use the results in a professional setting because there’s a dreadful, schmaltzy and hollow aesthetic to the results that to me looks flat, dead and unimaginative. These systems seem to be gigantic homogenisers, flattening everything to the same plodding, unimaginative bland soup.

40 years ago, I was having great fun on my BBC Micro discovering what I later learned to be called Markov Chains. It’s a ridiculously simple technique for generating text that is statistically similar to a corpus of what we would now call training data. There’s a JavaScript implementation I call “Mimic” here - What I learned then was that the threshold for humans to perceive meaning and purpose is surprisingly low. When tweaked just right, Mimic is fantastic at, say, generating plausible London tube station names. Human brains seem to want to find meaning and purpose, and like pareidolia, we find it wherever we look.


No worries on my part, I share your sentiments about LLMs, however I can't deny that the promise of making things easier and more accessible is always a temptation. I guess AI content will be my apple of knowledge.

That being said, I personally don't approve of AI art on the grounds that I spent 10+ years grinding away at fine arts and digital arts, and the fact that these models are trained on other artists' works without permission left a very bad first impression on me.

But those things said, what interests me about this LLM is that in the end these are tools; maybe tools we can talk to to a basic degree, but tools all the same. Just as you're learning SQL and I have used GPT-4 to learn batch, I hope one day there is a tool that can answer questions about TiddlyWiki that isn't, as you put it fairly perfectly, superficial.

I'd love for TW to have the fanbase that something like ObsidianMD does.

I think I read two paragraphs before my hackles started to rise. And based on our previous conversation about these tools, I pretty much knew what your take would be. To me, it seems to favour quantity over quality – and regarding accuracy, as you pointed out/alluded to, it's "lost" when it comes to grasping the nitty-gritty of TiddlyWiki.

Thanks for the mention of Markov Chains and the link to Mimic ← new to me. I did something similar on a Sinclair and a Commodore 64 back in the day. Enormous fun despite the limitations of those CPUs and RAM (read: :exploding_head:). Never did much with the BBC Micro despite having built them for a while back in (guessing) 83-5?

A worthwhile read from AI luminary Douglas Hofstadter:

He talks about how bad some of the LLM output can be.

But he also talks about how scary LLMs can be in this video:


I am not too experienced with comparing the different models on their programming ability, but in my experience using Poe with youtube-dl, for example, it tends to hallucinate options that do not exist, so it is not surprising that it struggles with TiddlyWiki.