Develop MCP Server for TW?

A quick search in this forum about “MCP” didn’t return any entries, so here is the first one.

I think that sooner rather than later, MCP will become as widespread and fundamental as REST is today. What do you think?

To dig further, John Capobianco has some interesting experiments with this new protocol: John Capobianco’s YouTube

All I see is the Tron MCP.

What would you want a TW MCP to do?

Protocolize actions within wikis, so an LLM can create, edit, modify, tag, link, and so on without hallucinating.

If TW had an MCP server, you would be exposing TW tools so AI agents could effortlessly interact with your wikis.

Hallucination is part of LLM systems at the moment and has nothing to do with a communication protocol.

Agree,

MCP would allow you to use natural language to perform actions in TW.

Prompt Example to an AI agent:
“Find all tiddlers that do have a fruit word in it, then send a list of all links to example@example.com and copy them all in google drive with a unique folder for each”

This would select tiddlers with “apple” in the description field, ones tagged “orange”, and ones with “banana” in their titles, and then interact with other services such as SMTP and Google Drive.
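
For what it’s worth, that selection maps naturally onto a single TiddlyWiki filter expression, since space-separated filter runs are unioned. A sketch of the string an agent might emit (assuming standard core filter syntax):

```typescript
// A filter string an agent could emit for the "fruit" prompt above.
// In TiddlyWiki filter syntax, the three runs are unioned (OR), so this
// matches tiddlers satisfying any one of the conditions.
const fruitFilter =
  "[search:description[apple]] [tag[orange]] [search:title[banana]]";

console.log(fruitFilter);
```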

Simple prompt -> complex task. All possible thanks to MCP.

I’m working on it in https://github.com/tiddly-gittly/TidGi-Desktop . I also want to create tiddlers by AI voice command: STT → LLM → MCP → TidGi’s Node.js wiki
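
As a rough sketch of how those stages might be glued together (every name here is illustrative, not TidGi’s actual API):

```typescript
// Hypothetical glue for the STT -> LLM -> MCP pipeline described above.
// All function names and shapes are assumptions for illustration only.
type Transcript = string;

interface ToolCall {
  tool: string;                  // e.g. "create_tiddler"
  args: Record<string, string>;  // e.g. { title, text }
}

async function voiceToTiddler(
  audio: Uint8Array,
  transcribe: (a: Uint8Array) => Promise<Transcript>, // STT stage
  plan: (t: Transcript) => Promise<ToolCall>,         // LLM picks an MCP tool call
  callTool: (c: ToolCall) => Promise<string>,         // MCP server edits the wiki
): Promise<string> {
  const text = await transcribe(audio); // speech -> text
  const call = await plan(text);        // text -> structured tool call
  return callTool(call);                // tool call -> wiki change
}
```

Each stage is passed in as a function, so the STT engine, the LLM, and the MCP client can be swapped independently.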

Let me know how this goes. I was also exploring the possibility of something like an MCP agent integration for TW.

I think an MCP server for TiddlyWiki would be very useful for people developing widgets, procedures, and similar components.
There’s often a lot of trial and error involved in getting everything right.
An LLM could help — it can read the TiddlyWiki docs and generate wikitext.
But the feedback loop isn’t closed yet. The human still has to paste the wikitext into a TiddlyWiki instance and bring the results back to the LLM.

An MCP server that can at least do two things would go a long way toward closing that loop and unlocking the potential for LLM-assisted TiddlyWiki development:

  • Edit or create a tiddler with a specified title and fields.
  • Return the rendered output of a specified tiddler.
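
A minimal sketch of what those two tools could translate to against the WebServer API of a stock `tiddlywiki --listen` setup (the recipe name `default` and the host below are assumptions; the MCP plumbing itself is omitted). Note the second request returns the tiddler’s stored fields as JSON; getting fully rendered output is where the browser-based approach comes in:

```typescript
// Request builders for the two proposed MCP tools, targeting TiddlyWiki's
// Node.js WebServer API. Assumes the default recipe at localhost:8080.

interface TiddlerFields {
  title: string;
  [field: string]: string;
}

// Tool 1: edit or create a tiddler -> PUT /recipes/default/tiddlers/<title>
function buildPutTiddlerRequest(host: string, fields: TiddlerFields) {
  const { title, ...rest } = fields;
  return {
    method: "PUT" as const,
    url: `${host}/recipes/default/tiddlers/${encodeURIComponent(title)}`,
    headers: {
      "Content-Type": "application/json",
      // TiddlyWiki's server rejects PUTs without this header (CSRF guard)
      "X-Requested-With": "TiddlyWiki",
    },
    body: JSON.stringify(rest),
  };
}

// Tool 2 (partial): fetch a tiddler's stored fields as JSON
function buildGetTiddlerRequest(host: string, title: string) {
  return {
    method: "GET" as const,
    url: `${host}/recipes/default/tiddlers/${encodeURIComponent(title)}`,
  };
}

const req = buildPutTiddlerRequest("http://localhost:8080", {
  title: "Hello World",
  text: "A tiddler created by an MCP tool call",
  tags: "demo",
});
console.log(req.url); // → http://localhost:8080/recipes/default/tiddlers/Hello%20World
```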

For implementation, we could build on top of an MCP server such as Chrome DevTools MCP or Playwright MCP.
For example, use a browser instance running the wiki and perform actions within the wiki page by executing JavaScript code that emits TiddlyWiki Core Messages (see Core Messages).
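
For instance, the code evaluated inside the page might look roughly like this (a sketch: `tw` stands in for the page’s global `$tw` object, and the function would be shipped into the browser via something like Playwright’s evaluate mechanism):

```typescript
// Sketch of code meant to run inside the wiki page (e.g. via a browser
// automation "evaluate" call). `tw` is a stand-in for the global `$tw`.
function upsertAndRender(
  tw: any,
  fields: { title: string; text: string },
): string {
  // Create or overwrite the tiddler; for this purpose it has the same
  // effect as dispatching the tm-new-tiddler / tm-save-tiddler messages.
  tw.wiki.addTiddler(new tw.Tiddler(fields));
  // Hand the rendered HTML back so the agent can inspect the result.
  return tw.wiki.renderTiddler("text/html", fields.title);
}
```

Returning the rendered HTML closes the feedback loop: the agent can compare what it intended to produce against what the wiki actually rendered.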

I am happy to share a simple demonstration I put together that uses Playwright MCP.

The source code is here: GitHub - jjkavalam/tiddly-wiki-mcp

A video is here: https://www.youtube.com/watch?v=EDGX9WVcKp4

Feel free to remix as you wish. I’d greatly appreciate it if you could share that too.

The gist of it is this custom agent definition: https://github.com/jjkavalam/tiddly-wiki-mcp/blob/main/.github/agents/Tiddlywiki.agent.md

I feel compelled to add a few notes and cautions about the overall process:

  • There’s no substitute for learning TiddlyWiki yourself. Personally, the difference was night and day after I worked through the excellent Grok TiddlyWiki by Soren Bjornstad. Before that, I now realize, I had simply been wandering in the dark.

  • Once you know TiddlyWiki well, AI-driven development can feel painfully slow for simple tasks. Still, it can serve as a helpful second pair of eyes—especially when you’re stuck on something small or silly.

  • For more complex challenges, a hybrid approach works best: use tools like Perplexity.ai for targeted research and guidance, lead the overall design, development, and review yourself as the human, and bring in AI agents to fill in the gaps only where it makes sense.
