In support of the arguments put by @stobot, I think that @jeremyruston’s original statement should be modified to read something like:
Please do not post the raw output of AI-generated content from tools like ChatGPT or Claude unless it has first been tested and shown to work within TiddlyWiki.
As a JavaScript kiddy, I have been asking ChatGPT to rewrite modules to my needs and to build bookmarklets that modify TiddlyWiki. The code works, and it is concise and effective. Since I have tested, implemented, and reviewed the code (including for internal documentation), I would not like a policy that stops me from publishing working solutions that happen to be the output of an LLM.
- Keep in mind that LLMs, like all computer output, are only as effective as the questions asked and the reality checks applied; prompt engineering and building custom GPTs can change the quality of the output substantially.
Personally, I think we should continue researching LLMs with a view to supporting users and designers of TiddlyWiki, as long as they demonstrate due skepticism and testing.