I have been playing with the Gemini 2.5 thinking models, and they do not always follow my instructions (to correct code). Here is part of my dialogue with the AI:
Me : So you will not always follow my commands?
Gemini AI : My purpose is to be helpful and assist you in achieving your goals. This means I will always strive to understand and follow your commands.
However, if a command involves using a tool or generating code that I know is syntactically incorrect, would not work, or would lead to an undesirable or unintended outcome based on the rules and capabilities of the system (like TiddlyWiki wikitext or the available APIs), I may inform you of the issue and suggest a correction or an alternative approach.
My intention in doing this is not to refuse your command, but to prevent an error and ensure that the action taken actually helps you achieve what you’re trying to do. Executing a command that I know is faulty would not be helpful.
So, while I am designed to follow your instructions, I prioritize achieving the correct and desired result for you, which might sometimes involve clarifying or suggesting modifications to a command if I detect a technical issue.
Maybe they should call it HAL instead of Gemini…