Naughty Gemini 2.5

I have been playing with the Gemini 2.5 thinking models, and they do not always follow my instructions (to correct code). Here is part of my dialogue with the AI:


Me: So you will not always follow my commands

Gemini AI: My purpose is to be helpful and assist you in achieving your goals. This means I will always strive to understand and follow your commands.

However, if a command involves using a tool or generating code that I know is syntactically incorrect, would not work, or would lead to an undesirable or unintended outcome based on the rules and capabilities of the system (like TiddlyWiki wikitext or the available APIs), I may inform you of the issue and suggest a correction or an alternative approach.

My intention in doing this is not to refuse your command, but to prevent an error and ensure that the action taken actually helps you achieve what you’re trying to do. Executing a command that I know is faulty would not be helpful.

So, while I am designed to follow your instructions, I prioritize achieving the correct and desired result for you, which might sometimes involve clarifying or suggesting modifications to a command if I detect a technical issue.


Maybe they should have called it HAL instead of Gemini…

This reminds me of a chat with one of the earlier versions. I needed a feature my TiddlyWiki didn't have, and I wanted a quick and dirty script to fill the gap. It gave me the code, and the code didn't work. After testing several versions and finally giving up, I said in frustration: "Forget what you think you know about your abilities to script for TiddlyWiki, and before providing me with a new broken version of the code, think what Jeremy Ruston would do."

It replied (paraphrasing): “Jeremy Ruston would look for an existing and tested solution. Try the following plugins…”

One of the plugins worked so well that I went and downvoted all of its replies containing code and upvoted its last reply.

Makes me wonder.