Anything LLM in TiddlyWiki

Recently I came across Anything LLM, an all-in-one desktop & Docker AI application with full RAG and AI Agent capabilities. It installed with one click on my laptop and runs smoothly for my research purposes, embedding the literature I manage in TiddlyWiki.

I found the main challenge is linking sources back into TiddlyWiki from the Anything LLM interface.

I quickly implemented a widget to communicate with the Anything LLM API. You can find the source and demo here: https://tw-anythingllm.bangyou.me/ (no live demo, as there is no public Anything LLM server).

The interface is very simple at this stage.
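
Under the hood the widget essentially posts the chat message to a workspace chat endpoint. Below is a simplified sketch only; the endpoint path, port, and response field should be checked against the Developer API documentation and the plugin source:

```typescript
// Simplified sketch of chatting with an AnythingLLM workspace over HTTP.
// The port, endpoint path, and response field are assumptions; check the
// Developer API documentation and the plugin source for the real details.
const BASE_URL = "http://localhost:3001/api/v1"; // local AnythingLLM instance (port may differ)
const API_KEY = "YOUR-API-KEY"; // generated in AnythingLLM's Developer API settings

async function chatWithWorkspace(slug: string, message: string): Promise<string> {
  const response = await fetch(`${BASE_URL}/workspace/${slug}/chat`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ message, mode: "chat" }),
  });
  if (!response.ok) throw new Error(`AnythingLLM returned ${response.status}`);
  const data = await response.json();
  return data.textResponse; // field name assumed; inspect the JSON you get back
}
```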

Thanks for any advice.


Thanks @Zheng_Bangyou, I hadn’t come across AnythingLLM; very interesting. I will try to integrate it with the AI tools plugin.


Hello @Zheng_Bangyou, hello @jeremyruston,
great to have these possibilities coming to TiddlyWiki, and also great to have a plugin which can integrate ChatGPT and local machines at the same time.
It would be great if there were additional options to use the API:
- Transclude material like texts and images into the prompt from tiddlers
- Choose a language.

Hello @Zheng_Bangyou, hello @jeremyruston, it would also be great if you could integrate a free API like HuggingChat into your AI-Tools: Hugging Face – The AI community building the future.

The developer docs for HuggingChat assume the use of their Python library. We can’t use that in TiddlyWiki; we would need to make direct HTTP API calls instead. At some level that must be possible, but I don’t see the documentation for it. I had the same problem when trying to add support for Google Gemini.


Thanks for trying! Perhaps this can be achieved by a detour through anything-llm.

This thread got me experimenting with local LLMs and TiddlyWiki. I feel there’s lots of exciting potential here that can be realized with further development around integrating TW content and LLMs.

First I checked out and successfully ran the AI tools plugin @jeremyruston linked. I was able to get an AI Conversation going with Llamafile from within my TiddlyWiki quite easily.

However, some further exploration led me to LM Studio (Getting Started | LM Studio Docs) and IMO it’s a much more convenient option for discovering LLMs, running them locally, and also for integrating with the AI tools plugin, for a few reasons:

  1. It has a convenient UI that includes a catalog of available LLMs that you can browse and download with a click.
  2. Once downloaded you can easily run them in LM Studio using a familiar chat interface.
  3. GPU offload is supported.
  4. You can start an HTTP server with OpenAI-compatible endpoints, so from an API perspective it’s a local drop-in alternative to the OpenAI API: Local LLM Server - Running LLMs Locally | LM Studio Docs. I was able to just clone the AI tools plugin openai server adapter ($:/plugins/tiddlywiki/ai-tools/servers/openai), change the caption, and change the url to http://localhost:1234/v1/chat/completions (a sketch of the resulting request follows below).
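
To illustrate point 4, here is roughly the request that adapter ends up making. The URL is LM Studio's documented default; the payload follows the standard OpenAI chat completions shape, and the model name is just a placeholder:

```typescript
// Minimal OpenAI-style chat completions call against a local LM Studio server.
// The URL is LM Studio's documented default; the model name is a placeholder
// for whatever model is currently loaded.
async function askLocalModel(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder; LM Studio serves the model you loaded
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content; // standard OpenAI response shape
}
```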

@JanJo you may find LM Studio + TiddlyWiki a useful combination.

Now Anything LLM and its RAG (Retrieval Augmented Generation) capabilities feel like the ideal bridge between LLMs and a TiddlyWiki workflow, because RAG lets you upload your own docs, PDFs, and text files, so you can query the LLM on both its original trained knowledge and your personal files. This is obviously the type of workflow @Zheng_Bangyou was experimenting with, and it feels like there’s huge potential there.

I am running my TiddlyWiki on the NodeJS server so individual tiddlers are stored on the filesystem as *.tid files. I experimented a bit with uploading all of my Journal tiddler *.tid files into an Anything LLM workspace and asking it questions like “What was the first day I started working on project xyz, and how many days have I been working on it?” I also uploaded a datasheet PDF for an IC chip and asked for some details from the datasheet.

Note: I had to make a minor modification to Anything LLM code so that it would accept *.tid files and treat them as plain text.
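
If you would rather not patch Anything LLM, one workaround (just a sketch, untested) is to copy the *.tid files to *.txt before uploading, since .tid files are plain text anyway. The paths below are hypothetical:

```typescript
// Copy *.tid files to *.txt so AnythingLLM's uploader accepts them as plain
// text, instead of modifying AnythingLLM itself. Paths are hypothetical;
// adjust them to your wiki layout.
import { promises as fs } from "node:fs";
import * as path from "node:path";

async function tidToTxt(srcDir: string, destDir: string): Promise<void> {
  await fs.mkdir(destDir, { recursive: true });
  for (const name of await fs.readdir(srcDir)) {
    if (path.extname(name) === ".tid") {
      const base = path.basename(name, ".tid");
      await fs.copyFile(path.join(srcDir, name), path.join(destDir, `${base}.txt`));
    }
  }
}

tidToTxt("./wiki/tiddlers", "./llm-upload").catch(console.error);
```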

Some of this initial exploration of workflow options was outside of TiddlyWiki, but Anything LLM can use LM Studio (among many other choices, including OpenAI) as the LLM backend, and the TW AI tools plugin can also use LM Studio as its backend, so it seems there are quite a few ways for workflows between TiddlyWiki, Anything LLM, and LM Studio (or other LLM backends) to work together.

Anything LLM has the concept of workspaces where you can upload your own documents, and then those documents become part of the context in your AI conversations / LLM chats.

Enhancing the AI Tools plugin to be able to directly add TiddlyWiki content (e.g. maybe upload all tiddlers with a certain tag) to an Anything LLM workspace could open up some really interesting possibilities. I haven’t looked closely at @Zheng_Bangyou’s plugin yet, it may be doing exactly this.
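
I haven't tried it, but here is a rough sketch of what that might look like from inside a TiddlyWiki plugin. The filterTiddlers/getTiddlerText calls are standard TiddlyWiki plugin APIs; the AnythingLLM endpoint, port, and payload shape are guesses that would need to be checked against the Developer API documentation:

```typescript
// Sketch: push every tiddler with a given tag into an AnythingLLM workspace.
// $tw.wiki.filterTiddlers / getTiddlerText are real TiddlyWiki plugin APIs;
// the endpoint path, port, and payload field names below are assumptions,
// to be verified against the AnythingLLM Developer API documentation.
declare const $tw: any; // provided by the TiddlyWiki runtime

async function uploadTaggedTiddlers(tag: string, apiKey: string): Promise<void> {
  const titles: string[] = $tw.wiki.filterTiddlers(`[tag[${tag}]]`);
  for (const title of titles) {
    const text = $tw.wiki.getTiddlerText(title) ?? "";
    await fetch("http://localhost:3001/api/v1/document/raw-text", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        textContent: text,
        metadata: { title }, // assumed field names
      }),
    });
  }
}
```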

For anyone curious here’s a pretty easy to follow introduction to setting up Anything LLM and LM Studio: How to implement a RAG system using AnythingLLM and LM Studio | Digital Connect

FWIW I’m doing this on Linux, and I’ve been playing with the Mistral Nemo 2407 (12B parameter model).


@oveek Welcome.

Thanks for sharing your info, very interesting. But you did not tell us whether the LLM actually answered your questions about your journals correctly. Did it answer the questions about the IC datasheet correctly?

Hi @oveek and welcome to the community.

I have now installed AnythingLLM and LMStudio and have started to experiment with them. I would like to integrate them with the AI tools plugin.

Integration with LMStudio seems straightforward: it looks like we can just use the same handling as the existing Llamafile service.

But the AnythingLLM API is more interesting, and would require deeper integration to do well. One issue is that I couldn’t find the docs for the API endpoints @Zheng_Bangyou is using in their plugin.


@jeremyruston you can click “Open Settings” at the bottom left, then find the Developer API at the bottom of the settings. There is a link to “Read the API documentation”.

The more interesting feature of AnythingLLM is workspaces. I can define a filter in TW to list the documents for each workspace.
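
For example, a workspace could be defined by a filter like [tag[bibtex-entry]tag[project-x]] (project-x is just a hypothetical tag), and the external script embeds whatever documents that filter returns.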

Good question. I’ve only done minimal testing so far, but my initial results were quite mixed. A few times the first response to my question was incorrect, and I had to keep refining the prompt to get the right answer.

It actually got the question about the project start date wrong. I think only 50-60% of my questions about the journal tiddlers and the datasheet got “right answers” in terms of correctness and completeness. I get the impression this can be improved a lot, though.

From what I’ve read, the quality of results with RAG depends on a number of factors that can be tuned, including of course the specific model being used.

Chat with Documents - Running LLMs Locally | LM Studio Docs provides some useful definitions and distinguishes between RAG and full document in context (where the entire document you want to query fits in the LLM’s context window).

I’m going to try out one of the larger popular models: Mixtral-8x7B-Instruct, and am expecting better results. I’ll report back.

Not sure if you caught this, but I was able to use the AI Tools openai server adapter as-is (cloned it and just changed the URL to http://localhost:1234/v1/chat/completions).

I was able to find the Swagger / OpenAPI spec by following @Zheng_Bangyou’s instructions. It is found under Settings > Tools > Developer API > Read the API Documentation.

You’re right, a TW filter would be the most flexible way to select the tiddlers to include in an Anything LLM workspace.


Here is a short summary of my workflow and use case, from text cleaning to chat in TW.

I am a research scientist, mainly interested in using LLMs for literature reading/review, idea development, manuscript writing, etc.

  • My setup is TiddlyWiki using node.js under Windows 11 with an NVIDIA GPU (Dell laptop), plus AnythingLLM with the Llama3 8B model.
  • Download/manage literature using Zotero, which has great converters for accessing standard metadata from multiple publishers.
  • Quickly copy BibTeX from Zotero into TiddlyWiki with Refnotes to manage literature in TW under the tag bibtex-entry.
  • Extract full text with R scripts from the full HTML generated by the Chrome extension SingleFile. SingleFile output is easy to process with an external script (it may also be achievable with the Chrome extension itself). Store the full text in txt format under files/llm of the TW root (a rough sketch of this step appears below).
  • Inject the full text into AnythingLLM and embed it into the related workspaces (defined by filters in TW) with R scripts.
  • Chat with the LLM in TW via the tw-anythingllm plugin.

The external R scripts run as a daily scheduled task and on demand, as I am more familiar with R.
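
For illustration, here is the extraction step as a rough Node sketch (my real scripts are in R, and a proper HTML parser would be more robust than these regexes; paths are examples only):

```typescript
// Naive sketch of the extraction step (the actual workflow uses R scripts):
// strip scripts, styles, and tags from a SingleFile HTML capture and save the
// plain text under files/llm. Paths are hypothetical examples.
import { promises as fs } from "node:fs";

async function htmlToTxt(htmlPath: string, txtPath: string): Promise<void> {
  const html = await fs.readFile(htmlPath, "utf8");
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<[^>]+>/g, " ")  // drop remaining tags
    .replace(/&nbsp;/g, " ")
    .replace(/\s+/g, " ")      // collapse whitespace
    .trim();
  await fs.writeFile(txtPath, text, "utf8");
}

htmlToTxt("./captures/paper.html", "./files/llm/paper.txt").catch(console.error);
```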

PS: TW is now the central platform for managing all my research information. Thanks for the great tool.


I would love to have a TiddlyWiki-AnythingLLM Dockerfile.

I’ve implemented a plugin wrapping the aiHorde crowdsourced LLM tool (https://aihorde.net/), but I haven’t found a UI implementation I truly like (should each AI prompt/response be its own tiddler? How to display/query prompts? How to retry or continue a prompt?). I’ve been waiting to see how the AI tools plugin ends up, to follow conventions there, but if there’s interest, I can try to clean up and post my aiHorde plugin.

To be clear, I’m happy to contribute the code I’ve written towards the AI tools plugin if it’s desired. Just waiting till it feels more mature before I try to rewrite/merge it.
