Discussion about AI-created banner

Smells like AI rubbish.
A cat outline at sunset.

So?

It doesn’t communicate anything much about TW or this release.

P.S. I guess a sunset vaguely refers to the “end of the previous” (i.e. towards 5.4) but, if so, there are no clues as to whether it is sun-up or sun-down.

Who wrote this? Who does it refer to? What is it for? Where do you reply to it?

I did add the admin banner, because I thought the comment was a bit harsh.

In hindsight, a PM may have been more appropriate. It’s also a learning curve. I moved this discussion into its own thread, so we can let it go stale.

2 Likes

Right. It was. In a way.

On the other hand, overuse of AI images doesn’t help us.

I think the poster was genuine but too naive.

That is maybe the problem: AI easily creates engaging but irrelevant images.

Overall, I am personally very sceptical of too-easy AI image solutions.

This is a side note.

TT

But dinosaurs do? But I digress.

Anyway, everyone likes cats, and sunsets.

And by “everyone”, of course I mean vampires.

1 Like

We had some discussion last year about an AI policy:

My first thought was to add it to the Code of Conduct, but I think it’s not a good fit as I’ve written it. I think it particularly applies to this forum, so perhaps it should just be posted as an admin announcement?

Right. And I found it very well measured in the complex situation of asserting that less is sometimes more. Poor, prolix AI badly fouls the TW footpath.

Perhaps, though, make it more explicit that it applies to all AI-generated media: images too, not only code or docs?

Best, TT

If AI is “slop”, then why is it necessary to put one’s thumb on the scales? Surely the results will manifest in the voting.

I agree that there are real downsides to AI-generated code that hasn’t been properly tested, as it could mislead users and cause confusion, but I can’t understand why the use of AI-generated images would be a problem in a thread asking for customized images. It seems like a great use case to me. The overlap between programmers and artists is probably relatively small, and I fail to see the same potential for harm as with code.

1 Like

Because, I guess, the visual is a composition and thereby subject to critical analysis too. As appropriate.

Yes, it is different from words. But any old image is not the same as an image reflecting a clear objective.

It is perfectly valid to interrogate meaning in images.

Bingo!

Of course. I fell into the Mesozoic trope of shadowy outlines of past beasts. Being an oldie too. Fangless in winterland beholden to the newbie.

The meteor of the new (the Christ) does appeal. Metaphorically, the dinosaur age ends definitively and bombastically, with roars.

Could you do a better transition?

Interesting. If I’m understanding you correctly, you’re asserting that using AI to assist in generating an image makes it more generic, less specific or tailored. Is that right? If so, that doesn’t ring true to me personally: having gone through the process, it’s quite an iterative “shaping” exercise, much as if you were guiding and correcting an artist. The prompt was likely not just “make a TiddlyWiki banner”; you have to provide themes, setting, and so on. Regardless, that’s an understandable view if it is in fact what you intended.

To add to my code-versus-image contrast: code is mostly objectively good or bad, while art is mostly subjectively good or bad. If someone posts bad code, it should be addressed as such; but when people propose art, whether they “drew” it, had a friend make it, or used AI, they’re essentially telling you what they like. Harsh criticism of someone telling you what they like feels more like a personal attack, which I think explains the pushback against your comment. You may not have intended it that way, but that’s how it might have felt.

I find these conversations quite interesting as we’re in a new time and all adapting to a world with AI.

1 Like

Is the change really that dramatic? Is it also unplanned, like the asteroid?

It took the planet about 10 million years to recover from the K-T extinction event. Hopefully TW will do better.

1 Like

Not quite that. Good prompting, with a good engine and a clear remit (i.e. scope), may sometimes produce an output image that is okay for the purpose. The issue is relevance to the aim.

Regarding the OP, there seems to be a need to prioritize relevance if going with AI. Just any image won’t do. And no, it is not simply a matter of my thing versus their thing: in the context of the requirement, things need to make some sense in relation to that requirement.

Disclosure: FYI, part of my life is in the arts (painting, film, photo), so I’m watching AI visuals with some trepidation.

Thanks for your thoughtful post!

TT

In support of the arguments put by @stobot, I think the original statement of @jeremyruston’s should be modified to read something like this:

Please do not post the raw output of AI-generated content from tools like ChatGPT or Claude unless it has first been tested and confirmed to work within TiddlyWiki.

As a JavaScript kiddy, I have been asking ChatGPT to rewrite modules to my needs and to build bookmarklets that modify TiddlyWiki. The code works, and is concise and effective. Since I have tested, implemented, and reviewed the code for internal documentation etc., I would not like a policy that stops me publishing working solutions that happen to be the output of an LLM.
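For concreteness, here is a minimal sketch of the kind of bookmarklet I mean. It assumes a TiddlyWiki is open in the current browser tab; the tiddler title, text, and tag are placeholders I made up, but `$tw.wiki.addTiddler()` and `$tw.Tiddler` are real core APIs:

```js
// Hypothetical bookmarklet sketch: creates a new tiddler in the
// currently open TiddlyWiki. Field values below are placeholders.
javascript:(function () {
  // Only works on a page that actually is a TiddlyWiki,
  // which exposes the global $tw object.
  if (typeof $tw === "undefined") {
    alert("This page is not a TiddlyWiki.");
    return;
  }
  // Create (or overwrite) a tiddler with the given fields.
  $tw.wiki.addTiddler(new $tw.Tiddler({
    title: "Note created " + new Date().toISOString(),
    text: "Added by a bookmarklet.",
    tags: "Bookmarklet"
  }));
})();
```

Whether ChatGPT wrote it or I did, a one-liner like this is easy to test and review before publishing, which is exactly the standard I’d like the policy to reflect.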

  • Keep in mind that LLMs, like all computer output, are only as effective as the questions asked and the reality checks applied; prompt engineering and building custom GPTs can change the quality of the output substantially.

Personally, I think we should continue researching LLMs with a view to supporting users and designers of TiddlyWiki, as long as they demonstrate due skepticism and testing.

Nudge:

“unless you explain that you have a reasonable understanding of why and how the code works, and you attest to having confirmed that it seems to work well.”

5 Likes

If it’s a recognisably AI image (as many are, and I had my suspicions about the sunset one before reading and checking the cited link that confirmed it), then it acts as a signal (and MUCH more of one than generated code) that we are an AI-accepting community, which will be a positive for some potential users, and a negative to others.

I’d advocate that the AI-image policy be basically in alignment with the AI-code policy.

I’m a bit surprised by this. The clarity and legitimacy of code (and a good-faith sense that we’re among others who are sharing from their place on a learning curve they care about) is central to the identity and purpose of our community here.

Extra vigilance and above-board-ness to make sure our time isn’t wasted by nonsense code is urgent, and everyone needs to know whether it makes sense to ask a follow-up question to the person who provided the code. (This doesn’t mean giant anonymous pattern-recognition engines have no use. It’s just that we need to know what kind of use people are making of them, in order to go forward wisely with our shared projects.)

With images, I also think we have reason to be concerned, but they’re not as mission-critical to the forum’s reason for existing.

For broad general and ethical reasons, I concur that it makes sense for all people to be up-front about their reliance on automated engines, even with images. (Yeah, I resist calling it AI; that’s a topic for another day.) Still, the risks and problems are a bit different.

2 Likes

I think @nemo hit the nail on the head here:

While I don’t have any hard numbers, I do know that a non-zero number of people are actively eschewing any service or software that makes AI an element of its branding. A number of people also consider generated images to be tantamount to art theft, as most commercial models don’t bother to obtain licenses for every work included in their training data. Personally, I think the AI-assisted banners submitted so far are so generic that it’s difficult to point to any individual who might be harmed by their use… but people who fall into the second group (and likely also the first) would object purely on principle.

Frankly, I don’t know how to measure (or whether it would even be possible to measure) how many potential TW users fall into this group, or how many would write it off entirely if they encountered an AI-generated logo. But on the other hand, I can’t imagine that even the most dedicated AI aficionado would reject a program because it didn’t use AI in its promotional materials. So—particularly since the version banner is the first thing you see when visiting TW-com—I think we stand to lose more by using recognizably-AI images than by avoiding them.

P.S.

I agree with the distinction, but I don’t know a better shorthand. Machine learning…? If I saw “ML image” in the wild, I doubt I’d understand the referent myself.

1 Like

Without ignoring @etardiff’s comments above: a few decades ago, the questions to declare would have been whether a photorealistic image had been photoshopped, or whether an existing design had been modified and passed off as one’s own.

I expect the questions we are asking here are currently being wrestled with all over the world.

As I said above, code should be tested, or it should be indicated that it has not been tested (if it is posing a question) and whether it is the output of an LLM.

  • Even when writing the code yourself, you should indicate whether it has been tested or still needs to be tested.

For images, we can ask that they have no copyright implications; and because we don’t know the copyright implications of ML (machine learning), we can ask for the use of ML to be announced.