Develop MCP Server for TW?

A quick search in this forum about “MCP” didn’t return any entries, so here is the first one.

I think that sooner rather than later MCP will become as widespread and core as REST. What do you think?

To dig further, John Capobianco has some interesting experiments with this new protocol: John Capobianco’s YouTube

All I see is the Tron MCP.

What would you want a TW MCP to do?

Protocolize actions within wikis, so an LLM can create, edit, modify, tag, link, and so on without hallucinating.

If TW had an MCP server, you would be exposing TW tools so AI agents could effortlessly interact with your wikis.

Hallucination is part of LLM systems atm and has nothing to do with a communication protocol.

Agreed.

MCP would allow you to use natural language to perform actions in TW.

Example prompt to an AI agent:
“Find all tiddlers that contain a fruit word, then send a list of all the links to example@example.com and copy them all to Google Drive, with a unique folder for each”

This would select tiddlers with “apple” in the description field, the ones tagged “orange”, and the ones with “banana” in their titles, and then interact with other services such as SMTP and Google Drive.

Simple prompt → complex task. All possible thanks to MCP.
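
Under the hood, the agent would have to translate that prompt into TiddlyWiki’s own filter language. Here is a hypothetical sketch of just the retrieval step, assuming a default Node.js wiki on port 8080 with external filters enabled (the filter string and URL are my assumptions, not an existing tool):

```ts
// Hypothetical sketch: the single TiddlyWiki filter behind the prompt above.
const filter =
  "[search:description[apple]] [tag[orange]] [search:title[banana]]";

// The Node.js server can return matching tiddlers as JSON, but only honours
// ?filter= when external filters are enabled (e.g. by setting the
// $:/config/Server/AllowAllExternalFilters tiddler to "yes").
const url =
  "http://localhost:8080/recipes/default/tiddlers.json?filter=" +
  encodeURIComponent(filter);

const tiddlers: { title: string }[] = await (await fetch(url)).json();
// Titles the agent would then hand off to the email and Google Drive tools.
console.log(tiddlers.map((t) => t.title));
```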

I’m working on this in https://github.com/tiddly-gittly/TidGi-Desktop. I also want to create tiddlers by AI voice command: STT → LLM → MCP → TidGi’s Node.js wiki.


Let me know how this goes. I was also exploring the possibility of something like an MCP agent integration for TW.

I think an MCP server for TiddlyWiki would be very useful for people developing widgets, procedures, and similar components.
There’s often a lot of trial and error involved in getting everything right.
An LLM could help — it can read the TiddlyWiki docs and generate wikitext.
But the feedback loop isn’t closed yet. The human still has to paste the wikitext into a TiddlyWiki instance and bring the results back to the LLM.

An MCP server that can at least do two things would go a long way toward closing that loop and unlocking the potential for LLM-assisted TiddlyWiki development:

  • Edit or create a tiddler with a specified title and fields.
  • Return the rendered output of a specified tiddler.
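
To make this concrete, here is a minimal sketch (not an existing project; the tool names, port, and recipe path are my assumptions) of one way such a server could look, using the official TypeScript MCP SDK against the HTTP API of a Node.js wiki:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const WIKI = "http://localhost:8080"; // assumed Node.js wiki address

const server = new McpServer({ name: "tiddlywiki", version: "0.1.0" });

// Tool 1: edit or create a tiddler with a specified title and fields.
server.tool(
  "put_tiddler",
  { title: z.string(), fields: z.record(z.string()) },
  async ({ title, fields }) => {
    await fetch(`${WIKI}/recipes/default/tiddlers/${encodeURIComponent(title)}`, {
      method: "PUT",
      // The TiddlyWeb-style API rejects writes without this header (CSRF guard).
      headers: {
        "Content-Type": "application/json",
        "X-Requested-With": "TiddlyWiki",
      },
      body: JSON.stringify({ ...fields, title }),
    });
    return { content: [{ type: "text", text: `Saved "${title}"` }] };
  },
);

// Tool 2: return the rendered output of a specified tiddler. By default the
// Node.js server serves a static HTML rendering of a tiddler at /<title>.
server.tool("render_tiddler", { title: z.string() }, async ({ title }) => {
  const res = await fetch(`${WIKI}/${encodeURIComponent(title)}`);
  return { content: [{ type: "text", text: await res.text() }] };
});

await server.connect(new StdioServerTransport());
```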

For implementation, we could build on top of an MCP server such as Chrome DevTools MCP or Playwright MCP.
For example, use a browser instance running the wiki and perform actions within the wiki page by executing JavaScript code that emits TiddlyWiki Core Messages (see Core Messages).
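
As a rough sketch of that browser-driven loop, here is the same idea driving Playwright directly for brevity (a browser MCP would issue equivalent commands). The URL and tiddler names are placeholders, and `$tw.wiki.addTiddler` is used as a store-level alternative to dispatching core messages:

```ts
import { chromium } from "playwright";

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto("http://localhost:8080"); // assumed wiki URL

// Create (or overwrite) a tiddler from inside the page via the $tw global.
await page.evaluate(([title, text]) => {
  const tw = (window as any).$tw;
  tw.wiki.addTiddler(new tw.Tiddler({ title, text }));
}, ["AgentScratch", "Hello from an agent"]);

// Read back the rendered HTML of that tiddler to close the feedback loop.
const html = await page.evaluate(
  (title) => (window as any).$tw.wiki.renderTiddler("text/html", title),
  "AgentScratch",
);
console.log(html);

await browser.close();
```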

Here are the links to the relevant MCP servers:

  • Chrome DevTools MCP: https://github.com/ChromeDevTools/chrome-devtools-mcp
  • Playwright MCP: https://github.com/microsoft/playwright-mcp

I am happy to share a simple demonstration I put together that makes use of Playwright MCP.

The source code is here: GitHub - jjkavalam/tiddly-wiki-mcp

A video is here: https://www.youtube.com/watch?v=EDGX9WVcKp4

Feel free to remix as you wish; I’d greatly appreciate it if you could share that too.

The gist of it is this custom agent definition: https://github.com/jjkavalam/tiddly-wiki-mcp/blob/main/.github/agents/Tiddlywiki.agent.md

I feel compelled to add a few notes and cautions about the overall process:

  • There’s no substitute for learning TiddlyWiki yourself. Personally, the difference was night and day after I worked through the excellent Grok TiddlyWiki by Soren Bjornstad. Before that, I now realize, I had simply been wandering in the dark.

  • Once you know TiddlyWiki well, AI-driven development can feel painfully slow for simple tasks. Still, it can serve as a helpful second pair of eyes—especially when you’re stuck on something small or silly.

  • For more complex challenges, a hybrid approach works best: use tools like Perplexity.ai for targeted research and guidance, lead the overall design, development, and review yourself as the human, and bring in AI agents to fill in the gaps only where it makes sense.


@jjkavalam No “TiddlyWiki MCP” is needed.

You can use a browser MCP to let the LLM test the result, and use Copilot + Claude 4.5 to write widgets. With the proper infrastructure, the browser page will auto-refresh and the AI will test it with the browser MCP.

All tools are already there.

My “check website online or offline as a badge” widget was done entirely by GitHub Copilot + VSCode; Claude 4.5 knows everything about TiddlyWiki widgets. No need for an MCP to add prompts to it.


My solution is badly named. If you look at my demo project, you will find that it doesn’t really create a new MCP server.

That said, I have not compared what happens if we actually have an MCP; it may or may not be helpful.

Yes, testing AI support for TW is welcome.

I just let VSCode Copilot write the code, and use the chrome-devtools-mcp to let it test the result itself.

And if the AI happens not to know TW, it can use the Context7 MCP to learn about it, though the TW docs on Context7 are poorly written. So I usually set up a VSCode workspace with the core TW repo (GitHub - TiddlyWiki/TiddlyWiki5: A self-contained JavaScript wiki for the browser, Node.js, AWS Lambda etc.) cloned locally, so I can search for related code examples.

This might change things: Introducing the File Search Tool in the Gemini API

I didn’t try it yet, but it seems you can feed it custom docs (the TW documentation in this case) and make the LLM learn them through RAG.

https://simplifai.tiddlyhost.com/ already implements the Gemini API, so it could be modified to use Gemini’s built-in RAG. It looks straightforward but would require some work, and there are open questions, like:

How would a tiddler be presented to the AI? Would it be rendered, converted to Markdown, reduced to plain text… or left as wikitext?
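
For what it’s worth, the wiki can already produce most of those candidate representations itself. A small sketch, assuming code running where the `$tw` global is available (e.g. a server-side module; the tiddler title is a placeholder):

```ts
declare const $tw: any; // provided by the TiddlyWiki runtime

const title = "HelloThere";

// Candidate representations of one tiddler for a RAG corpus:
const wikitext = $tw.wiki.getTiddlerText(title);           // raw wikitext
const html = $tw.wiki.renderTiddler("text/html", title);   // rendered HTML
const plain = $tw.wiki.renderTiddler("text/plain", title); // plain text
// Markdown would need an extra conversion step; as far as I know the core
// ships no wikitext-to-Markdown exporter.
```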

Usually you don’t need this; VSCode GitHub Copilot already reads files.

Simply drag the TiddlyWiki project and your project into VSCode to form a workspace, and the AI can read both and perform searches.

Or, simpler, just let it read GitHub - TiddlyWiki/TiddlyWiki5: A self-contained JavaScript wiki for the browser, Node.js, AWS Lambda etc. and it will automatically search for anything in the repo.

Only heard of it the other day, because I was removing a library for it from a project (remote connections are undesirable due to bandwidth costs).

Compared to the average bear? A: “for myself”

But don’t ask me about “any” TypeScript compilation doc :carrot:

I guess this observation speaks for itself really :crossed_fingers:

ever the optimist eh!

:clown_face:

“All large language models, by the very nature of their architecture, are inherently and irredeemably unreliable narrators,” said Grady Booch, a renowned computer scientist. At a basic level, they’re designed to generate answers that sound coherent — not answers that are true. “As such, they simply cannot be ‘fixed,’” he said, because making things up is “an inescapable property of how they work.”