Has anyone built (trained) an AI knowledge base locally by feeding a TW into it?
Any details on the process?
I’m not sure exactly what you are trying to achieve, but “Feeding a TW” would not be enough data to train a model.
Hey Rich,
Do you mean more of a RAG (Retrieval-Augmented Generation) implementation using AI? That is where your local content is indexed (typically into a vector database) so that a pre-built large language model can retrieve the relevant pieces and answer natural-language questions about it.
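A minimal sketch of the retrieval half of that idea, assuming the tiddlers are exported as plain-text `.tid` files in a `tiddlers/` folder and using the `sentence-transformers` package (both are my assumptions, not anything from this thread):

```python
# Illustrative RAG-style retrieval over exported tiddlers (sketch only).
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# 1. Index: embed every tiddler once and keep the vectors in memory.
tiddlers = {p.name: p.read_text(encoding="utf-8") for p in Path("tiddlers").glob("*.tid")}
names = list(tiddlers)
vectors = model.encode([tiddlers[n] for n in names], normalize_embeddings=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k tiddler names most similar to the question (cosine similarity)."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = vectors @ q
    top = np.argsort(scores)[::-1][:k]
    return [names[i] for i in top]

# 2. Generation (not shown): the retrieved tiddler texts would be pasted into the
#    LLM prompt as context, which is the "augmented generation" part of RAG.
print(retrieve("What did I write about spaced repetition?"))
```

The model itself is never retrained here; the wiki content only feeds the retrieval index, which is why this is usually more practical than "training on a TW".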
I’m considering this:
[ext[Upcoming AI features will land in TidGi (之后AI功能在太记里更新) | TiddlyWiki Chinese forum (太微中文论坛)]]
I plan to add the following features to both the mobile and desktop versions of TidGi:
This way, the AI can comment on notes. When I'm slacking off, the AI can remind me that there are many interesting ideas that were only proposed but never implemented. It can also start discussions about certain ideas, which promotes thinking (similar to spaced repetition, except that it also discusses the idea with you while you review it).
The reason I don't develop this as a TiddlyWiki plugin is as follows. First, this approach lets me draw on knowledge from multiple wikis when discussing an issue. Second, much of the AI's chat content is disposable and never reviewed later, so I'll store it in a dedicated SQLite database for AI memory to avoid over-contaminating the wiki I use daily. The same goes for the AI's comments on tiddlers; they won't be saved in my own wiki, to prevent the wiki from containing more AI content than human-generated content. Third, TiddlyWiki (太微) core updates are rather conservative, and changing TiddlyWiki's AI Tools plugin through a PR would be too slow; it's faster to develop this independently.
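For the separate AI-memory store mentioned above, a rough sketch of what such a SQLite side database could look like; the table and column names here are my own guesses, not TidGi's actual schema:

```python
# Sketch of a dedicated SQLite store for disposable AI chat/comments,
# kept separate from the wiki files themselves. Schema is hypothetical.
import sqlite3

conn = sqlite3.connect("ai-memory.sqlite")
conn.executescript("""
CREATE TABLE IF NOT EXISTS ai_comments (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    wiki       TEXT NOT NULL,          -- which wiki the tiddler lives in
    tiddler    TEXT NOT NULL,          -- tiddler title the comment refers to
    role       TEXT NOT NULL,          -- 'ai' or 'user'
    content    TEXT NOT NULL,
    created_at TEXT DEFAULT (datetime('now'))
);
""")

# The wiki itself stays untouched; only this side database grows.
conn.execute(
    "INSERT INTO ai_comments (wiki, tiddler, role, content) VALUES (?, ?, ?, ?)",
    ("notes-wiki", "Some unfinished idea", "ai",
     "You proposed this but never implemented it."),
)
conn.commit()
conn.close()
```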
I mean the part in WikiSage -- Your Tiddlywiki AI Companion - #35 by linonetwo
Oh, sorry, I thought you were replying to me.