WikiLabs - TW MCP Server Plugin - EXPERIMENTAL

Hi folks,
I want to introduce my take on a TW MCP server, which I personally use to develop new TW code and especially TW wikitext. I use this server in conjunction with TW devtools [1], which has detailed info about internals and variables “in context”.

Works with TW >= v5.4.0

Plugin: TW MCP Server — connects LLMs to your wiki

What is it?

TW MCP is a plugin that turns any TiddlyWiki into an AI-accessible knowledge base. It uses the Model Context Protocol (MCP) – an open standard that lets AI assistants talk to external tools. Once enabled, assistants like Claude, Gemini, and others can read, search, render, and edit your wiki directly. No copy-pasting, no exporting.

Who is this for? You’ll need Node.js and an MCP-compatible AI client – desktop apps like Claude Desktop work, as do CLI tools like Claude Code or Gemini CLI. If you’re comfortable running tiddlywiki from a terminal, you’re good.

After plugin installation, one command is all it takes:

tiddlywiki ./mywiki --mcp

Your wiki is now an MCP server. Any MCP-compatible AI client can connect to it.

What can the AI do with this MCP server?

Read & explore

  • Get tiddlers – fetch metadata or full content
  • List & search – browse by tag, run filter expressions, search across your wiki
  • Render – render tiddlers, fields, or arbitrary wikitext to plain text or HTML, including through the ViewTemplate cascade (so the AI sees what the browser shows)
  • Inspect – analyse widget trees, trace source positions, inspect variable scopes, and navigate TiddlyWiki’s internal $tw object
  • Wiki info – plugins, themes, tiddler counts, settings at a glance

Write (opt-in)

Writing is disabled by default. You explicitly enable it with --mcp rw.

  • Create, edit, delete tiddlers – the AI can modify your wiki when you allow it
  • Upload files – images, PDFs, and other binary files can be uploaded with an “external” command, tw-upload, which does not use any tokens
  • Save as folder – export the wiki as a Node.js folder structure, ready for version control or deployment. The export is executed directly by the mcp-server, so it needs minimal tokens
  • Build HTML – render the entire wiki as a single HTML file the mcp-server can produce on demand

Key features

Token-aware by design

AI assistants charge by the token, so the server is careful about how much data it sends:

  • Metadata first – when you ask “show me the tiddler GettingStarted”, the AI gets field names, tags, and dates – but not the full text. Only when you say “show me the full contents” or “I want to edit it” does it fetch the body.
  • Small edits, small cost – changing a line in a long tiddler sends only that line, not the whole tiddler.
  • Large results stay manageable – browsing deep internal structures or running broad filters automatically caps the output size, so you don’t burn tokens on data you didn’t ask for.

Built-in conflict detection means two editors (human and AI, or two AI clients) can’t silently overwrite each other.

Readonly by default

The server starts in readonly mode. Safe for exploration, analysis, and Q&A. Enable writes only when you need them.

Multiple clients at once

Several AI clients can connect to the same wiki simultaneously. The first process becomes the primary server; others relay through it automatically. No conflicts, no stale data.

Want to see changes in the browser too? Start with --mcp rw listen – the HTTP server and MCP server share the same wiki store. Browser edits and AI edits see each other instantly.

Safety built in

  • Existing tiddlers aren’t overwritten unless explicitly requested
  • System tiddlers ($:/) are excluded by default
  • File operations are restricted to safe directories
  • Internal communication between tools is authenticated

Getting started

Quick start

tiddlywiki ./mywiki --mcp

Or with browser access:

tiddlywiki ./mywiki --mcp rw listen port=8080

Got a single-file wiki? Import it into a prepared wiki folder:

tiddlywiki ./mywiki --mcp file=path/to/wiki.html

The mcp-server analyzes the tiddler structure and proposes a directory layout before writing anything to disk.

For full setup instructions (plugin installation, client configuration, environment setup), see the Installation Guide.

Demo

A real conversation – you ask in plain language, the AI does the rest:

User: Fix the typo in the intro of GettingStarted

AI: Let me read the current content...

    1#VW: !! Introduction
    2#MV: 
    3#TR: ''Works with TW >= v5.4.0''
    4#SW: 
    5#TY: The ''TW MCP'' plugin adds a ... standarsied interface.

    I see "standarsied" on line 5. I'll fix that.
    [sends only line 5 with the corrected spelling]
    
    Done -- fixed "standardised" in GettingStarted.

The AI reads your tiddler, identifies the problem, and sends only the changed line. The 1#VW: prefixes are content hashes – if someone else edited the tiddler since the AI read it, the hashes won’t match and the edit is safely rejected. No silent overwrites.

Current status

Feedback, bug reports, and ideas welcome. Try it out and let me know what you think!

[1] TW devtools plugin


Link to the DevTools thread: WikiLabs DevTools Plugin — Browser-Side Developer Tools for TiddlyWiki

Cool, I believe this needs some advertisement on Reddit or somewhere, so people can learn that LLM memory solutions are not only about Obsidian.

By the way, in the MCP config, you seem to be using the global TiddlyWiki command. Will it start an empty wiki? How will it find the project wiki that the user is using?

I usually use https://github.com/tiddly-gittly/Modern.TiddlyDev and use VS Code Copilot to directly modify content in the ./src (plugin TS code) or ./wiki (documents) folder; it will then auto-reload the server to reveal the change. I usually work concurrently on 3~7 projects, and each has its own wiki.

The --mcp rw label=<who-started-it> is a TW command. It initialises the $tw. structure as a server would.

tiddlywiki <wiki> --mcp rw listen label=<who-started-it> also starts a server and an mcp-server, which share the same $tw. memory. So no server restart is needed if you change content from the mcp-client, like VSCode or Claude Code CLI.

With e.g. “claude mcp add …” another mcp-server will be started. If it uses the same <wiki> directory, it will find the server that was started with --mcp listen and will itself work as a proxy. So it has no $tw. object; it uses the server.

So many mcp clients can work with the same wiki store. That’s also one reason why it is experimental. It needs more testing.

So all of them work with the same TW memory. That’s cool stuff. The edition has a lot of docs that explain everything in detail.

@pmario I want to try this out. How do I install TW 5.4.0 using Node.js, since we don’t have the release yet?

Prerequisites: git and Node.js. For Node.js, install the latest LTS version.

Then IMO the easiest way is

cd /your/experiments
git clone https://github.com/TiddlyWiki/TiddlyWiki5.git
cd TiddlyWiki5

npm link # will make this branch the default tiddlywiki command

npm unlink  # Will undo the link command. Globally installed tiddlywiki command takes over 

npm link will make this directory the default tiddlywiki command. So

tiddlywiki --version

should give you:

5.4.0-prerelease

Then

  • Modify the tiddlywiki.info in ./editions/tw5.com-server
  • Add the wikilabs/tw-mcp to the plugins section

To be able to do that:

cd /your/experiments
git clone https://github.com/wikilabs/plugins.git

This will clone all wikilabs plugins. Now you will need to set the TIDDLYWIKI_PLUGIN_PATH environment variable to /your/experiments/plugins

If you do that, all wikilabs plugins will be visible as e.g. wikilabs/tw-mcp in tiddlywiki.info files.

Your tw5.com-server/tiddlywiki.info should look similar to this

{
	"description": "Server configuration of the tw5.com edition",
	"plugins": [
		"tiddlywiki/tiddlyweb",
		"tiddlywiki/filesystem",
		"tiddlywiki/highlight",
		"tiddlywiki/internals",
		"wikilabs/tw-mcp"
	],
	"themes": [
		"tiddlywiki/vanilla",
		"tiddlywiki/snowwhite"
	],
	"includeWikis": [
		"../tw5.com"
	],
	"config": {
		"default-tiddler-location": "../tw5.com/tiddlers"
	}
}

Now the following command will start the latest tw-mcp server in read/write mode.

It is important to read the docs at: TW MCP Server — connects LLMs to your wiki
Especially : MCP Installation

cd /your/experiments/TiddlyWiki5
node tiddlywiki.js ./editions/tw5.com-server --mcp rw listen 

Thank you for the detailed explanation…I will try these when I am back on my desktop

There are two typos. This is the corrected code

node tiddlywiki.js ./editions/tw5.com-server --mcp rw listen

Thank you @pmario I got it working

Make sure you have the latest version of the plugin. It is still under development.

I did take care that the mcp-server uses as few tokens as possible. Depending on the size of your tiddlers, token usage can still be “heavy”. …

So feedback is very welcome.

Hi,

Congrats @pmario it is a really interesting plugin.

Is it possible to send natural language instructions to the LLM within the Wiki?

Probably you would need a websocket between the wiki and the Terminal that is running Claude? Do you think is it possible to make it more straightforward?

best,

The MCP protocol is well defined. It is intended to allow communication between an LLM in a CLI and an otherwise unknown system. Not the other way around. An MCP server can be started and stopped from the LLM CLI side.

So by default the mcp server is not active, until it is used.

Currently the MCP specification has no flow that would go from Wiki → mcp-server → LLM client in CLI → local or cloud LLM … and back.

There is a draft spec that defines a mode we could probably use in the future. But to me it looks extremely wasteful in terms of token usage. And it has a mechanism they call “human in the loop”, which would trigger 2 additional prompts in the CLI for security reasons.

Ok probably I am missing something…

What I thought would be useful is a chat window within TW where you could give natural language instructions to the MCP.

Example: “create a tiddler about XYZ”

Now I can achieve this result, but I have to send the instruction in the CLI, which is inconvenient because I have to be on the same machine that runs the Node.js server.

Yea, but that’s not the use case MCP is designed for.

LLMs have their own APIs, which can be accessed directly from the wiki. There are several plugins in the forum here that go that way. But they implement a chat UI, where you would have to copy-paste the LLM response into a new tiddler.

As I wrote. Even with the draft spec, it needs the physical presence of a human at the CLI, for security reasons.

-written with some AI help-

Going back to the “chat inside TW” idea…

Today, to use TW-MCP you need an MCP client living in a terminal (Claude Code, Gemini CLI, or a custom one). That forces the user to be on the same machine running Node.js and to juggle two contexts (browser + terminal).

The reason is that the plugin currently exposes only stdio and named pipe transports. Both are local IPC mechanisms. Browsers can’t speak either of them by design, for security reasons. So any MCP client has to be a process outside the browser, and if you want a UI inside the wiki you need a bridge process (typically a small HTTP server) between the browser and that local client. Extra moving parts.

What if we added a third transport to the plugin: HTTP, specifically Streamable HTTP, which is already part of the MCP spec?

In practice this means the server, on top of stdio and named pipe, would also expose something like POST http://localhost:8090/mcp, speaking the same JSON-RPC messages (initialize, tools/list, tools/call, etc.) that already flow through the other transports. The protocol doesn’t change, only the medium. Stdio and named pipe stay exactly as they are for existing CLI clients.

Why this would matter: an HTTP endpoint is reachable from the browser via fetch(). So a TW plugin running inside the wiki could be a real MCP client of the wiki’s own MCP server — no intermediate process, no terminal open.
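To make the idea concrete, here is a sketch of what such a browser-side call could look like. This is a proposal, not an existing API: the endpoint URL, port, and bearer token are assumptions, and only the JSON-RPC 2.0 envelope itself comes from the MCP spec:

```javascript
// Build a standard JSON-RPC 2.0 request envelope (this part is per the spec).
function buildJsonRpcRequest(id, method, params) {
  return { jsonrpc: "2.0", id, method, params };
}

// Hypothetical browser-side client for the proposed HTTP transport.
// Endpoint and auth header are illustrative assumptions.
async function mcpCall(endpoint, token, id, method, params) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer " + token,
    },
    body: JSON.stringify(buildJsonRpcRequest(id, method, params)),
  });
  const reply = await res.json();
  if (reply.error) throw new Error(reply.error.message);
  return reply.result;
}

// e.g. await mcpCall("http://localhost:8090/mcp", token, 1, "tools/list", {});
```

The point is that nothing browser-hostile is involved: it is plain fetch() with JSON bodies, which any TW plugin can issue.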

How a “Chat IA” plugin would look

Once HTTP is available, the plugin could live entirely inside the wiki:

  • A chat UI built with TW widgets.

  • Config tiddlers for the LLM API key, MCP endpoint URL, auth token, model, etc.

  • A small JS MCP client: fetch to initialize, fetch to tools/list, cache the tools.

  • A small JS LLM client: fetch to the LLM API with the prompt + tool definitions translated to the LLM’s tool_use format.

  • The agent loop: when the LLM returns a tool_use, the plugin fetches the MCP endpoint to execute it, feeds the result back to the LLM, repeats until end_turn.

  • Render the final reply in the chat panel.
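The agent loop described in the last bullets can be sketched as follows. The LLM and MCP calls are injected as functions so only the control flow is shown; all names here are illustrative assumptions, not an existing implementation:

```javascript
// Sketch of the agent loop: keep calling the LLM, executing any tool it
// requests via MCP, until the model signals end_turn. Illustrative only.
async function agentLoop(prompt, callLLM, callTool) {
  const messages = [{ role: "user", content: prompt }];
  for (;;) {
    const reply = await callLLM(messages);            // ask the model
    if (reply.stop === "end_turn") return reply.text; // final answer: done
    // The model requested a tool: execute it via MCP, feed the result back.
    const result = await callTool(reply.tool, reply.args);
    messages.push({ role: "tool", content: result });
  }
}
```

In a real plugin, callLLM would POST to the LLM API and callTool would POST a tools/call message to the MCP endpoint; the loop itself stays this simple.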

All wiki reads and writes still go through TW-MCP, with all the safety guarantees and token-efficiency features intact. No second Node.js process. No CLI.

With HTTP transport on the plugin, the proxy (or Gemini CLI/Claude Code) collapses away:

Browser → TW chat plugin → HTTP → LLM API
                              ↕
                         HTTP → MCP server → wiki

What it would take

  • A small HTTP server in the plugin, bound to 127.0.0.1 by default.

  • A /mcp endpoint accepting JSON-RPC POSTs, dispatching to the same handlers that stdio and named pipe already use.

  • SSE for streamed responses, as the spec describes.

  • Auth: a bearer token (similar to the 256-bit token you already generate for the named pipe) sent in the Authorization header.

  • CORS configured so only the wiki’s origin can hit it.

  • A decision on coexistence with --listen: same port, different path (/mcp/), or separate port.

The tool logic itself is already written; it’d be a new entry point reaching the same internals.

Potential problems

  • API key for the LLM in the browser. If the plugin calls the LLM API directly, the key has to be reachable from the browser. Storing it in a tiddler is risky (sync, share, leak).

  • HTTP endpoint security. More attack surface than the named pipe (which relied on filesystem permissions). Bearer token + CORS + localhost-default mitigate it.

  • Coexistence with --listen.

I don’t know if the convenience gains justify the added complexity.

Are there design objections I’m not seeing? Or would this be better as a separate “MCP-over-HTTP bridge” plugin rather than something inside TW-MCP itself?

That’s why it is implemented that way by design.

I know, but I do not want to deal with the side effects, like authorisation and authentication.

That’s right, but this only opens a 3rd path to

| Manually triggered client request over HTTP | ↔ | MCP & TW server managing Wiki-store | ↔ | Browser |

Since this is only one line, it looks simple. It is not.

As I wrote: I do not want to deal with that. I do not plan to use any 3rd-party dependency that would make everything hard to maintain.

That’s exactly the summary of why there is currently no HTTP endpoint.

Coexistence with --listen is solved. The TW server + mcp-proxy is started as the first command. The mcp-server is a TW server-side plugin.

tiddlywiki ./editions/... --mcp rw listen label=I-am-a-TW-server

An mcp-“client” is started with:

tiddlywiki ./editions/... --mcp rw label=I-am-mcp-only

Final decision:

What I could think about, and what I would like to implement, is a decentralised P2P approach to access an MCP-server and/or a locally running TW server.

Interesting…

Is that idea about allowing other people to interact with wikis that do have the TW MCP server plugin installed, and natural language instructions sent from an outside wiki?

If so, you might have a user interface similar to the “IA chat”, because you would use TW as a platform to send natural language instructions.

That will open the door to editing wikis collaboratively while maintaining tone and cohesion, because the MCP server will follow the same rules on edit/write, even if the request comes from different users. It might even act as a gatekeeper and reject troll requests based on the semantic meaning of the proposed edit.