DeepSeek and TiddlyWiki

Has anyone tried DeepSeek with TiddlyWiki? There are many discussions online suggesting that it surpasses ChatGPT.

The code is available on GitHub: DeepSeek.

1 Like

The latest version of AnythingLLM supports DeepSeek through Ollama.

I tried DeepSeek 8B and 11B on my laptop. Both models respond at a reasonable speed, but the 11B seems to give better answers to domain-specific questions.

Comparing the online versions of DeepSeek and ChatGPT, ChatGPT gives more correct answers to TiddlyWiki-related questions, e.g. generating wikitext and JS macros/widgets.

2 Likes

Yes, DeepSeek can be run locally on Ollama. I will probably be running a model based on DeepSeek sometime this year: the training strategy is far more efficient, and it's likely the best open-source model one could pick at this time.
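For anyone who wants to try this locally: the distilled DeepSeek-R1 models are published in the Ollama model library, so getting one running is two commands (the `deepseek-r1:8b` tag matches the 8B model mentioned above; the exact tags available may change over time, and the prompt is just an illustration):

```shell
# Download a distilled DeepSeek-R1 model from the Ollama library
ollama pull deepseek-r1:8b

# Run it interactively, or with a one-shot prompt
ollama run deepseek-r1:8b "Write TiddlyWiki wikitext that lists all tiddlers tagged 'Task'"
```

Once the model is pulled, AnythingLLM can be pointed at the local Ollama instance as its LLM provider.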

That said, although the training strategy is far more efficient, I have tried both, and neither their reasoning model nor their advanced LLM beats the equivalent ChatGPT models. They also censor politically inconvenient content.

2 Likes

I’ve just tried the free o3-mini version with TW5 tonight. I wanted to see how it could do the stuff I had recently worked on, without giving it any code. It does great! It needs some input and guidance, but man, this is really usable. Quite a bit better than ChatGPT-4 for this kind of stuff. And it generates code and some decent bits of docs/reasoning at the same time! I have glanced over it, and it seems about OK.

It does not produce a full TiddlyWiki, though, just bits of code and organization (naming conventions mainly) that you have to incorporate manually. It won’t replace the coder, but it really seems to boost productivity. That’s the first time I can say such a thing about this kind of technology.

Caveat: you will be better at appreciating it if you have tried doing your stuff without AI beforehand. I shall have a go at assembling what it wrote, really check it, and report here.

o3 is well suited to coding; R1 is not made for that (from what I have read). Sure enough, we may see an R3 mini cloning the capabilities of o3-mini, which would instead be very good for the free software movement.

What is cool with o3 is that I described how I wanted my tiddlers, and now every new iteration is automatically exported as a JSON file. I can import it directly into TiddlyWiki and see what works, what doesn’t, and how.
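For anyone wanting to reproduce this workflow: TiddlyWiki's import dialog accepts a JSON array of tiddler objects, so you can ask the model to emit files in that shape and drag them straight into the wiki. A minimal sketch of the format (the title, tag, and text here are just illustrative):

```json
[
  {
    "title": "ExampleGeneratedTiddler",
    "tags": "GeneratedContent",
    "type": "text/vnd.tiddlywiki",
    "text": "This tiddler was produced outside the wiki and imported as JSON."
  }
]
```

Each object in the array becomes one tiddler, and any extra keys become custom fields, which makes it a convenient round-trip format for iterating with an LLM.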

1 Like

Yes, I have also found o3 (high effort) to be very good at building view templates if you give it a couple of examples first :slight_smile:

I have not implemented this yet, but I too use AnythingLLM to run local LLMs, and I also run Gemini through its API via AnythingLLM, in order to make use of AnythingLLM’s RAG capabilities. My intent is to use AnythingLLM to build a vector database of TiddlyWiki knowledge. I was thinking today that it might even be possible to build a shareable community database, though I don’t know how big such a database would get.

RAG, by the way, stands for Retrieval Augmented Generation: you build a database of documents and have the LLM reference that before falling back on its own training. AnythingLLM has a feature that lets you load documents, stores them in a vector database, and then lets you direct the LLM to use it.
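To make the idea concrete, here is a minimal sketch of the retrieval step in Python. It uses a toy bag-of-words embedding and cosine similarity instead of a real embedding model, and the sample documents and `build_prompt` wording are purely illustrative, not AnythingLLM's actual internals:

```python
# Minimal RAG retrieval sketch: embed documents, rank them against a
# query by cosine similarity, and prepend the best match to the prompt.
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny "vector database": documents stored alongside their embeddings.
docs = [
    "A widget in TiddlyWiki is a JavaScript component rendered into the DOM",
    "A macro in TiddlyWiki is a named snippet of wikitext with parameters",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

def build_prompt(query):
    """Prepend retrieved context so the LLM consults it before its training."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I write a TiddlyWiki macro?"))
```

A real setup would swap `embed` for a proper embedding model and the list for a vector store, but the retrieve-then-prompt flow is the same.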

Try DeepSeek on https://deploy-preview-8966--tiddlywiki-previews.netlify.app , which is a preview of AI tools more server by linonetwo · Pull Request #8966 · TiddlyWiki/TiddlyWiki5 · GitHub

2 Likes