Discussion about AI-created banner

We had some discussion last year about an AI policy:

My first thought was to add it to the Code of Conduct, but I think it’s not a good fit as I’ve written it. I think it particularly applies to this forum, so perhaps it should just be posted as an admin announcement?

Right. And I found it very well measured in the complex situation of asserting that less is sometimes more. Poor prolix AI badly fouls the TW footpath.

Perhaps, though, make it more explicit that it applies to all AI-generated media—images too, not only code or docs?

Best, TT

If AI is “slop”, then why is it necessary to put one’s thumb on the scales? Surely the results will manifest in the voting.

I agree that there are real downsides to AI-generated code that hasn’t been properly tested, as it could mislead users and cause confusion, but I can’t understand why the use of AI-generated images would be a problem in a thread asking for customized images. It seems like a great use case to me. The overlap between programmers and artists is probably relatively small, and I fail to see the same potential harm as with code.


Because, I guess, the visual is a composition and thereby subject to critical analysis too. As appropriate.

Yes, it is different from words. But any old image is not the same as an image reflecting a clear objective.

It is perfectly valid to interrogate meaning in images.

Bingo!

Of course. I fell into the Mesozoic trope of shadowy outlines of past beasts. Being an oldie too. Fangless in winterland beholden to the newbie.

The meteor of the new (the Christ) does appeal. Metaphorically, the dinosaur age definitively ends with a bombastic roar.

Could you do a better transition?

Interesting. If I’m understanding you correctly, you’re asserting that using AI to assist in generating an image makes it more generic, or less specific and tailored. Is that right? If so, that doesn’t ring true to me personally: having gone through the process, it’s quite an iterative “shaping” exercise, as if you were guiding and correcting an artist. The prompt was likely not just “make a TiddlyWiki banner”; you have to provide themes, setting, and so on. Regardless, that’s an understandable view if that is in fact what you intended.

To add to my code vs. image contrast: code is mostly objectively good or bad, while art is mostly subjectively good or bad. If someone posts code that’s bad, it should be addressed as such, but when people propose art, whether they “drew” it, had a friend make it, or used AI, they’re essentially telling you what they like. Harsh criticism of someone telling you what they like reads more like a personal attack, which I think explains the pushback your comment received. You may not have intended it that way, but that’s how it may have felt.

I find these conversations quite interesting as we’re in a new time and all adapting to a world with AI.


Is the change really that dramatic? Is it also unplanned, like the asteroid?

It took the planet about 10 million years to recover from the K-T extinction event. Hopefully TW will do better.


Not quite that. Good prompting with a good engine and a clear remit (i.e. scope) may sometimes produce an output image that is okay for the purpose. The issue is relevance to the aim.

Regarding the OP, there seems to be a need to prioritize relevance if going the AI route. Just any image won’t do. And no, it is not just a my-thing vs. their-thing question: in the context of the requirement, things need to make some sense in relation to that requirement.

Disclosure: FYI, part of my life is in the arts (painting, film, photography), so I’m kinda watching AI visuals with some trepidation.

Thanks for your thoughtful post!

TT

In support of the arguments put by @stobot, I think the original statement of @jeremyruston’s should be modified to read something like this:

Please do not post the raw output of AI-generated content from tools like ChatGPT or Claude unless it has first been tested and is working within TiddlyWiki.

As a JavaScript kiddie, I have been asking ChatGPT to rewrite modules [Edit] to my needs and to build bookmarklets that modify TiddlyWiki. The code works, and is concise and effective. Since I have tested, implemented, and reviewed the code for internal documentation etc., I would not like a policy that stops me publishing working solutions that happen to be the output of an LLM.

  • Keep in mind that LLMs, like all computer output, are only as effective as the questions asked and the reality checks applied; prompt engineering and building custom GPTs can change the quality of the output substantially.

Personally, I think we should continue researching LLMs with a view to supporting users and designers of TiddlyWiki, as long as they demonstrate due skepticism and testing.

Nudge:

“unless you explain that you have a reasonable understanding of why and how the code works, and you attest to having confirmed that it seems to work well.”


If it’s a recognisably AI image (as many are, and I had my suspicions about the sunset one before reading and checking the cited link that confirmed it), then it acts as a signal (and MUCH more of one than generated code) that we are an AI-accepting community, which will be a positive for some potential users, and a negative to others.

I’d advocate that the AI-image policy be basically in alignment with the AI-code policy.

I’m a bit surprised by this. The clarity and legitimacy of code (and a good-faith sense that we’re among others who are sharing from their place on a learning curve they care about) is central to the identity and purpose of our community here.

Extra vigilance and above-board-ness to make sure our time isn’t wasted by nonsense code is urgent, and everyone needs to know whether it makes sense to ask a follow-up question to the person who provided the code. (This doesn’t mean giant anonymous pattern-recognition engines have no use. It’s just that we need to know what kind of use people are making of them, in order to go forward wisely with our shared projects.)

With images, I also think we have reason to be concerned, but they’re not as mission-critical to the forum’s reason for existing.

For broad general and ethical reasons, I concur that it makes sense for all people to be up-front about their reliance on automated engines even with images (Yeah, I resist calling it AI — that’s a topic for another day.) Still, the risks and problems are a bit different.


I think @nemo hit the nail on the head here:

While I don’t have any hard numbers, I do know that a non-zero number of people are actively eschewing any service or software that makes AI an element of its branding. A number of people also consider generated images to be tantamount to art theft, as most commercial models don’t bother to obtain licenses for every work included in their training data. Personally, I think the AI-assisted banners submitted so far are so generic that it’s difficult to point to any individual who might be harmed by their use… but people who fall into the second group (and likely also the first) would object purely on principle.

Frankly, I don’t know how to measure (or whether it would even be possible to measure) how many potential TW users fall into this group, or how many would write it off entirely if they encountered an AI-generated logo. But on the other hand, I can’t imagine that even the most dedicated AI aficionado would reject a program because it didn’t use AI in its promotional materials. So—particularly since the version banner is the first thing you see when visiting TW-com—I think we stand to lose more by using recognizably-AI images than by avoiding them.

P.S.

I agree with the distinction, but I don’t know a better shorthand. Machine learning…? If I saw “ML image” in the wild, I doubt I’d understand the referent myself.


Without ignoring @etardiff’s comments above: a few decades ago we could have asked, if an image was photorealistic, whether it had been photoshopped, or whether an existing design had been modified and passed off as one’s own.

I expect the questions we are asking here are currently being wrestled with all over the world at this point in time.

As I said above, code should be tested, or it should be indicated that it has not been tested (if it is posing a question), and it should be disclosed if it is the output of an LLM.

  • Even when writing the code yourself, you should indicate whether it has been tested or still needs to be tested.

For images, we can ask that they have no copyright implications, and, because we don’t know the copyright implications of machine learning, that the use of ML be announced.

Nicely put. And from both an ethical and an aesthetic point of view, the “recognizably AI” look is not what we should want. I very much hope such an image would not be chosen for our public-facing banner image.

I think my point above about the asymmetry (between LLM-generated TiddlyWiki code and artificially-generated images posted at our forum) was out of place in this context (banner image contest).

If someone relied on something like Midjourney for an image that was a mockup of, say, a desired interface result (that the person would like help implementing in TiddlyWiki), that’s where it seems the use of such technology wouldn’t so directly undermine our forum’s purpose (though we still might have good reasons to resist it). But the public-facing nature of a version banner makes for a different kind of high stakes.

(I suspect that as time goes on, many software tools will become so intertwined with big-pattern-automation — and many real artists will become adept at using these tools in a not-grotesque-looking way… So in the future we may find less of an obvious line between those whose workflow benefits from this technology and those who resist relying on the all-consuming technology. Still, that complexity will call for even more thinking-through of accountability, not less…)


Thank you for the fascinating discussion. I’m embarrassed to have only just realised that a couple of years ago I used generative AI to make the banner image for the newsletter in the “Find Out More” tiddler. I’ll try to come up with something better, but contributions are welcome.


I believe that using AI to create images is acceptable, provided that the generated images are thoroughly checked to ensure they carry no negative connotations. Only on that basis should we consider whether the images are aesthetically pleasing and appealing enough. The TiddlyWiki community has a chronic shortage of promotional images, which makes it harder to let others know about TiddlyWiki. Even if people don’t use TiddlyWiki, as long as they are aware of its existence, the community can attract new users.


@stobot Could you include a disclaimer, as @pmario did above, about your use of energy-guzzling and artist-labor-cannibalizing big-data-driven image generation tech for the steampunk-vs-aerodynamic train image? :wink:

If that is the policy of Jeremy and/or the site administrators, I’ll be happy to comply. They’re also welcome to delete my post if they choose. I apologize for getting off topic here, but I genuinely am trying to understand people’s views in a community I love.

I work in data science and have been building and using algorithms and ML (AI) models for more than 25 years. My use is primarily in business across many domains, whether optimizing service-vehicle routing for a large fleet to reduce carbon emissions and fuel usage, or optimizing where products are stored across the country to minimize shipping and its associated costs and environmental impact. Given my experience, I find it discouraging to see the vilification of methods and technologies that do a lot of good.

Even modern spell check runs on ML (AI), and most image-editing packages like Photoshop, which produced all of these banners, contain “AI” tools (even good old area selection, color fixing, etc.). I can’t imagine people would want a disclaimer every time spell check found a spelling or grammar issue.

To create the image I posted, I sketched it in pen on paper, scanned it, converted it into a painting using an image-generator API, and then edited it in Paint.NET to clean up some artifacts and the version number, change the coloring, and get it to an appropriate resolution.

Where would you draw the line for shaming the contributor? Is it the technology entirely, a specific tool, the company providing the tool, or something else?