MCP would allow you to use natural language to perform actions in TiddlyWiki.
Example prompt to an AI agent:
“Find all tiddlers that contain a fruit word, then send a list of links to example@example.com and copy each of them to Google Drive, with a unique folder for each”
This would select tiddlers with “apple” in the description field, tiddlers tagged “orange”, and tiddlers with “banana” in the title, and then interact with other services such as SMTP and Google Drive.
Simple prompt -> complex task, all made possible by MCP.
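To make the selection step concrete, here is a toy sketch of the kind of matching the prompt implies, using plain dicts in place of real tiddlers. The field names and matching logic are illustrative only; a real agent would express this as TiddlyWiki filter expressions through MCP tools:

```python
# Toy stand-in for tiddlers: plain dicts with illustrative field names.
FRUIT_WORDS = {"apple", "orange", "banana"}

def matches_fruit(tiddler):
    """True if any fruit word appears in the title, description, or tags."""
    haystacks = [
        tiddler.get("title", ""),
        tiddler.get("description", ""),
        " ".join(tiddler.get("tags", [])),
    ]
    text = " ".join(haystacks).lower()
    return any(word in text for word in FRUIT_WORDS)

tiddlers = [
    {"title": "Pie recipes", "description": "Uses apple and cinnamon"},
    {"title": "Citrus notes", "tags": ["orange"]},
    {"title": "Banana bread"},
    {"title": "Unrelated", "description": "Nothing fruity here"},
]

# Collect matching titles; the agent would then hand this list to the
# email and Google Drive tools.
selected = [t["title"] for t in tiddlers if matches_fruit(t)]
```

The point is the decomposition: one vague prompt becomes a filter step followed by calls to other services, each behind its own MCP tool.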
I think an MCP server for TiddlyWiki would be very useful for people developing widgets, procedures, and similar components.
There’s often a lot of trial and error involved in getting everything right.
An LLM could help — it can read the TiddlyWiki docs and generate wikitext.
But the feedback loop isn’t closed yet. The human still has to paste the wikitext into a TiddlyWiki instance and bring the results back to the LLM.
An MCP server that can at least do two things would go a long way toward closing that loop and unlocking the potential for LLM-assisted TiddlyWiki development:
Edit or create a tiddler with a specified title and fields.
Return the rendered output of a specified tiddler.
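The two tools above could be sketched as follows. This is a toy, in-memory sketch; the tool names and the placeholder renderer are my own invention, since a real server would delegate both storage and rendering to TiddlyWiki itself:

```python
# Hypothetical MCP tool handlers over an in-memory wiki store.
# A real server would call into TiddlyWiki (e.g. its Node.js API) to
# store tiddlers and produce rendered output; render_tiddler below only
# fakes the rendering step to show the intended tool contract.
wiki = {}  # title -> dict of fields

def edit_tiddler(title, fields):
    """Create or update a tiddler with the given title and fields."""
    tiddler = wiki.setdefault(title, {"title": title})
    tiddler.update(fields)
    return tiddler

def render_tiddler(title):
    """Return the 'rendered' output of a tiddler (placeholder renderer)."""
    tiddler = wiki.get(title)
    if tiddler is None:
        return None
    # Real rendering would be done by TiddlyWiki's own wikitext engine.
    return "<div>%s</div>" % tiddler.get("text", "")

edit_tiddler("HelloThere", {"text": "Welcome", "tags": "greeting"})
html = render_tiddler("HelloThere")
```

With just these two calls, the LLM can write wikitext, see what it actually renders to, and iterate without a human copy-pasting in between.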
For implementation, we could build on top of an MCP server such as Chrome DevTools MCP or Playwright MCP.
For example, run the wiki in a browser instance and perform actions within the wiki page by executing JavaScript code that emits TiddlyWiki Core Messages (see Core Messages).
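As a sketch of that idea, the helper below only builds the JavaScript string such a server might hand to a browser MCP's evaluate tool. For simplicity it calls the `$tw.wiki.addTiddler` API directly rather than dispatching a core message; the helper name is hypothetical, and the snippet assumes the page exposes the `$tw` global, as a running TiddlyWiki does:

```python
import json

def create_tiddler_js(title, text):
    """Build a JavaScript snippet that, when evaluated inside a TiddlyWiki
    page (e.g. via a browser MCP's evaluate tool), creates or replaces a
    tiddler using the $tw global that a running wiki exposes."""
    # json.dumps handles quoting/escaping of the field values for JS.
    fields = json.dumps({"title": title, "text": text})
    return "$tw.wiki.addTiddler(new $tw.Tiddler(%s));" % fields

snippet = create_tiddler_js("HelloThere", "Welcome to //my// wiki")
```

The same pattern works for the read direction: evaluate a snippet that returns rendered HTML for a given tiddler and pass the result back to the model.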
I feel compelled to add a few notes and cautions about the overall process:
There’s no substitute for learning TiddlyWiki yourself. Personally, the difference was night and day after I worked through the excellent Grok TiddlyWiki by Soren Bjornstad. Before that, I now realize, I had simply been wandering in the dark.
Once you know TiddlyWiki well, AI-driven development can feel painfully slow for simple tasks. Still, it can serve as a helpful second pair of eyes—especially when you’re stuck on something small or silly.
For more complex challenges, a hybrid approach works best: use tools like Perplexity.ai for targeted research and guidance, keep the human in the lead for overall design, development, and review, and bring in AI agents to fill the gaps only where it makes sense.
You can use a browser MCP to let the LLM test the result, and Copilot with Claude 4.5 to write widgets. With the proper infrastructure, the browser page auto-refreshes and the AI tests the result through the browser MCP.
https://simplifai.tiddlyhost.com/ already implements the Gemini API, so it could be modified to use Gemini's built-in RAG. It looks straightforward but would require some work, and there are open questions, such as:
How would a tiddler be presented to the AI: would it be rendered, converted to Markdown, reduced to plain text, or left as wikitext?
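As a toy illustration of the "reduce to plain text" option, here is a deliberately naive reducer for a couple of wikitext constructs; a real implementation would reuse TiddlyWiki's own parser rather than regexes like these:

```python
import re

def wikitext_to_plain(text):
    """Naively reduce a few wikitext constructs to plain text.
    Illustrative only: it mishandles URLs, tables, macros, and much else."""
    # [[Displayed text|Target]] -> Displayed text
    text = re.sub(r"\[\[([^|\]]+)\|[^\]]+\]\]", r"\1", text)
    # [[Target]] -> Target
    text = re.sub(r"\[\[([^\]]+)\]\]", r"\1", text)
    # Strip '' (bold) and // (italic) emphasis markers
    return text.replace("''", "").replace("//", "")

plain = wikitext_to_plain("''Apples'' are listed in [[the fruit list|FruitList]]")
```

Each choice trades something away: plain text loses links and structure, rendered HTML loses the source the model would need to write edits, and raw wikitext assumes the model reads it reliably.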
“All large language models, by the very nature of their architecture, are inherently and irredeemably unreliable narrators,” said Grady Booch, a renowned computer scientist. At a basic level, they’re designed to generate answers that sound coherent, not answers that are true. “As such, they simply cannot be ‘fixed,’” he said, because making things up is “an inescapable property of how they work.”
Hi all! I implemented an MCP server plugin for TiddlyWiki5 Node.js-hosted servers:
It lets AI agents access your tiddlers via list, search, read, write, and delete tools.
So far I’ve tested it with Gemini CLI, Claude Code, Claude.ai, and Simtheory and it’s working very well for my purposes. I have a personal Zettelkasten and journal TiddlyWiki and I was able to interactively chat with Gemini / Claude about my writing and have it add and modify tiddlers.