I asked AI to create a chessboard widget and this is the result

Should’ve asked it to improve GitHub - Anacletus/TiddlyChess instead :slight_smile:

It’s a pity the project stays frozen.

That is the problem with AI coding: it is easier to make a good start and reach a certain level than to polish something that is already quite good but buggy into something really useful.

Hi @well-noted, could your plugin be a helper for plugin development, by saying: look at the plugin I have here… help me implement…? Or @buggyj, can your widget be used for this?

Yes, it could do so – I would probably want to write a custom interface just for coding, and maybe even have some coding-specific javascript – but the model can certainly read plugin files and make recommendations based on them.


I don’t know that this is necessarily true – give Claude 3.7 thinking a crack at it, I’d say. I’d give it a try, but I’m not familiar enough with the project to prompt-engineer.

I think we’re at the point with AI where people don’t realize there’s still skill involved in communicating with it effectively :wink:

Okay, @well-noted you convinced me to invest a nickel.

What would a coding setup look like for Sonnet?
Is there a ready-made setup with fields to tell "him" what to do, which tiddlers to modify, and which to use for context?

And @well-noted, could you integrate Gemini into your WikiSage project? As we learned from @buggyj, it is the only free API key. 2.5 seems to be quite an enhancement.

2.5 will only be free for a while longer, while it is "experimental." But I think WikiSage could incorporate Gemini – that said, I don't find it better than Sonnet.

The prompt should instruct the model to think, that's for sure. I haven't played around with the API yet for the thinking mode, but I understand it's designed in some way that the model will respond when told to "think."

The prompt should also include a reference file for TW documentation, and/or (far better, imo) it should be connected to the new webReference API so it can access the documentation and also this forum (I find that webReference tends to refer to these two sources for TW knowledge).

The interface itself, I think, might be similar to the way you've constructed it, with a sidebar for the chatbot and a reference to what is in the storyview. It might be helpful to have an "Expand" button for plugins that would open all their tiddlers in the story; or you could do something like Tinka and list all the plugins in a dropdown from which the user could select, with the chatbot then instructed to reference all files associated with that plugin.

I might, for coding, have a daemon monitoring everything that is happening, which would use other chatbots to make changes through tool use – the daemon would then be able to catch it if one of the agents neglected to do something or did something harmful, implement an undo, and command another agent to fix it.
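The daemon idea above can be sketched roughly as a supervisor loop: each agent step is applied, logged for undo, and then checked; a bad step is rolled back. This is only a sketch of the concept under my own assumptions – all the names here (`runSupervised`, `snapshot`, `apply`, `restore`) are hypothetical, not an existing API.

```javascript
// Hypothetical supervisor: applies each agent step, keeps an undo
// log, and checks the result. If the daemon's check fails, the
// harmful step is rolled back so another agent could be commanded
// to redo it properly.
function runSupervised(steps, check) {
  const undoLog = [];
  for (const step of steps) {
    const before = step.snapshot();           // capture state for undo
    step.apply();                             // let the agent act
    undoLog.push(() => step.restore(before)); // remember how to revert
    if (!check(step)) {
      // The daemon caught a bad result: undo just this step and
      // report which step failed.
      undoLog.pop()();
      return { ok: false, failedAt: step.name };
    }
  }
  return { ok: true };
}
```

A real version would need persistent snapshots (e.g. of tiddler state) and a much smarter check than a single predicate, but the shape – supervise, undo, retry – stays the same.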


I am at the start of a learning curve with AI. With the free or 'low-cost' access to the models, the only customization is via what is called the System Instruction or 'Role'. I have written a few that are accessible via the tools button on the chat's sidebar; you can try editing them or creating a new one – they are tagged geminiRoles. I have one for creating code and one for searches on the internet. I have found it challenging to create good ones.

My experience and understanding is that AI will produce good results only when it is asked about subjects it has been well trained on, and that the System Instruction can only add emphasis. I have only been able to get Gemini to reliably produce interfaces between preact code and TiddlyWiki, but I think that is enough. The interface, or glue code, is in my 'unchane' plugin, and is very simple to use. Although Gemini can easily create configurations for this glue code (and write the tiddlers), it is not so good at producing reliable solutions to the subject of the code, e.g. a chess widget. I have found that Claude is much better at this task.

We have limited free access to Claude via its website, but I have found this to be enough. 'unchane' separates the subject of the plugin into a pure preact component, and the models have been exposed to large amounts of preact (and React, which is almost identical). So I asked Claude to produce a preact chess component, got good results, and copy-pasted it over the Gemini-produced preact-component tiddler.
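To make that split concrete, here is a rough, self-contained sketch of what a "pure" component and its glue boundary might look like. This illustrates the idea only – the names and the glue signature are made up, not taken from 'unchane', and `h` is stubbed here so the sketch runs on its own (in a real plugin it would come from preact).

```javascript
// Stub of preact's hyperscript function, just for this sketch.
// Real code would use: import { h } from 'preact';
const h = (type, props, ...children) => ({ type, props: props || {}, children });

// A "pure" component: a function of props only, with no TiddlyWiki
// dependencies -- the kind of file a model trained heavily on
// preact/react can write reliably, and that can be swapped out
// wholesale (as with the Claude-produced chess component).
function BoardSquare({ file, rank, piece }) {
  const dark = (file + rank) % 2 === 1;
  return h('div', { class: dark ? 'square dark' : 'square light' },
           piece || '');
}

// Hypothetical glue boundary: the TiddlyWiki side reads tiddler
// fields and passes them in as plain props.
function renderSquare(fields) {
  return BoardSquare({
    file: Number(fields.file),
    rank: Number(fields.rank),
    piece: fields.piece,
  });
}
```

Because the component is a pure function of props, the model never needs to know anything about TiddlyWiki's widget mechanism – only preact, which it has seen plenty of.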

The AI chats I had with Gemini and Claude to produce the widget are also on the demo tiddlyhost page.

I feel that this approach can be used to create widgets that interface with JavaScript libraries, to produce the graphs, animations, etc. that appear across the web and that cannot be produced directly with wikitext.


I just used Claude to help me write the Timeline-Widget:
Here is my report on where it was successful and where it was not.

  • Claude could not create the initial widget, though it knew the codebases of both the timeline library and TW.
  • With @oeyoews' demo as a starting point, Claude could implement the creation of the timeline data from a filter.
  • As the widget gets bigger, it gets more and more difficult to use it to track down flaws – especially those that would need a deeper understanding of mechanisms like creating a wikilink in the widget. I am still struggling with that.

This is a good example of what I call "AI is a hammer." Lots of people can use a hammer, but knowing how to use a tool and knowing how to apply that tool in a particular situation become vastly different in proportion to the complexity of a project.

This is definitely not meant to sound self-congratulatory – anything I know about dev work I've learned from hitting walls with AI and having to figure out my way around them.

@JanJo, in my experience, the AI is probably capable of getting around any errors you are encountering – if you continue playing around with it, eventually you might hit on the right phrasing to achieve what you are trying to do and, in that process, learn something valuable.

I’ve definitely found that in the past several months I’ve been far more capable of leaping over walls by being able to ask very specifically for something, or directing a model to look at something in a particular way, based on having already racked my brain for days over a similar problem.
