Nicely put. And from both an ethical and an aesthetic point of view, the “recognizably AI” look is not what we should want. I very much hope such an image would not be chosen for our public-facing banner image.
I think my point above about the asymmetry (between LLM-generated TiddlyWiki code and artificially-generated images posted at our forum) was out of place in this context (banner image contest).
If someone relied on something like Midjourney for an image that was a mockup of, say, a desired interface result (that the person would like help implementing in TiddlyWiki), that’s where it seems the use of such technology wouldn’t so directly undermine our forum’s purpose (though we still might have good reasons to resist it). But the public-facing nature of a version banner makes for a different kind of high stakes.
(I suspect that as time goes on, many software tools will become so intertwined with big-pattern-automation — and many real artists will become adept at using these tools in a not-grotesque-looking way… So in the future we may find less of an obvious line between those whose workflow benefits from this technology and those who resist relying on the all-consuming technology. Still, that complexity will call for even more thinking-through of accountability, not less…)
Thank you for the fascinating discussion. I’m embarrassed to have only just realised that a couple of years ago I used generative AI to make the banner image for the newsletter in the “Find Out More” tiddler. I’ll try to come up with something better, but contributions are welcome.
I believe that using AI to create images is acceptable, provided that the generated images are thoroughly checked to ensure they carry no negative connotations. Only on that basis should we then consider whether the images are aesthetically pleasing and appealing enough. The TiddlyWiki community always has a shortage of promotional images, which hinders our ability to let others know about TiddlyWiki. If people who don’t yet use TiddlyWiki are at least aware that it exists, the community has a chance of attracting new users.
@stobot Could you include a disclaimer, as @pmario did above, about your use of energy-guzzling and artist-labor-cannibalizing big-data-driven image generation tech for the steampunk-vs-aerodynamic train image?
If that is the policy of Jeremy and/or the site administrators, I’ll be happy to comply. They’re also welcome to delete my post if they choose. I apologize for getting off topic here, but I genuinely am trying to understand people’s views in a community I love.
I work in data science and have been building and using algorithms and ML (AI) models for over 25 years. My use is primarily in business across many domains, from optimizing service-vehicle routing for a large fleet to reduce carbon emissions and fuel usage, to optimizing where products are stored across the country to minimize shipping and its associated costs and environmental impact. Given my experience, I find it discouraging to see the vilification of methods and technologies that do a lot of good.
Even modern spell check runs on ML (AI), and most image-editing packages like Photoshop that created all of these banners include “AI” tools (even good old area selection, color fixing, etc.). I can’t imagine people would want a disclaimer every time spell check found a spelling or grammar issue.
To create the image I posted, I sketched it in pen on paper, scanned it and converted it into a painting using an image-generator API, and then edited it in Paint.NET to clean up some artifacts and the version number, change the coloring, and get it to an appropriate resolution.
Where would you draw the line for shaming the contributor? Is it the technology entirely, a specific tool, the company providing the tool, or something else entirely?
AI is a tool that people can master - just like GitHub is a tool, or even TW is a tool. It improves people’s productivity and capabilities, but it doesn’t replace their creativity and imagination (maybe emphasise that as the most valuable aspect of any contribution, regardless of the tools used).
Avoid words in this discussion and in the policy like “slop” and “rubbish”. While I agree that AI can produce poor quality at the moment, ‘it’ doesn’t care what you think of it - but the user of the tool will care what you think of what is, after all, their contribution.
The policy should ask for disclosure from any contributor whose piece of work involved AI - that’s why I mentioned my use of Sora, as I believe it is ethical to be honest about that. If I hadn’t credited AI, I wonder whether anyone would have been as critical - or maybe they would have used a more human-friendly tone?
Lastly - try to remember that we all volunteer contributions to this community because it is rewarding to do so… I posted my image in the spirit that it is a fun competition…
As stated previously, code may still need validation or testing when posted here. Images are subject to review and taste, but they are not breakable - just aesthetic. Although it may be fair to declare their source.
Gosh, my post really didn’t hit the right tone [and EDIT: your post seems to have been edited, so that it no longer shows the words I’m quoting?]. (Despite the wink and gesture of crossing out the most judgmental phrase, I clearly erred on the side of sounding sharp.) Shame is certainly not the point.
Thanks for offering these details. They’re very helpful.
For what it’s worth - possibly not much - here’s my view on this:
Images: I think this is a hopeless battle and a waste of time to fight it. More or less ALL images will be AI generated soon.
Code: basically the same argument. Please just ensure that if you post here, you don’t waste other people’s time, just as you shouldn’t post your own code if it is not properly curated and on topic. AI is already better than ALL of us here at coding (but one needs access to the models and to know how to prompt them properly).
Posting in general: If anyone naturally posts nuggets of knowledge - great! And if anyone is really a moron but still posts nuggets thanks to AI - great!
Bottom line: AGI → ASI → Singularity is approaching like a train and we cannot do anything about it. It’ll be hell or heaven. I’m scared, but let’s make the best of it. If you can make a great image, or macro, or whatever with AI, of course you should go ahead.
Just in case anyone here is not following how fast AI is developing, here’s a video from one of the channels I follow. As you can see, it is simply an (embarrassing) mistake to bluntly say “AI images are bad”.
Actually, I know a few people who produce images professionally or passionately, and I think they will keep doing so and produce results that machine learning will not match in a hurry.
I welcome the opportunity to put more imagery into what I do.
I am confident that all we have for now is some machine learning, and that we are a lot further from that than we think. Although this does not mean we will not find problems in society as a result of what is here and what is coming soon.
Did someone say “AI images are bad”? Are you attributing this claim to me, in particular? Images don’t have moral character; decisions do — and moral character never reduces to the binary good/bad, on my view.
Of course images aren’t really generated by “AI” anyway… they’re generated (upon indirectly-paid request) through neural-net circuitry which digests and churns through vast input of images not generated by AI (and eventually will be re-digesting prior output of its own kind)… And this technology can be leaned on to greater and lesser degrees, with greater and lesser skill, and greater and lesser transparency about the role of this tech.
Of course there’s more and less interesting use, more and less responsible use. Clearly, at least two of our maximalist version-banner entries have engaged in a process that takes considerable initiative, time and tinkering, and the results (quite different as they are, despite being similarly maximalist in detail!) reflect on the image-tech users as individuals to that degree.
My point is not to “shame” (as it apparently came across to @stobot above) in a blanket way, but to ask for transparency and reflection.
And also, I admit I’m trying to read the room.
I’m a bit of a geeky luddite; I enjoy working with graphic design tools such as bezier curves (including the fonts built on these), gradients, compositional shape-play, and alpha transparency effects. “Such quaintly pre-2022 stuff!” people may now say.
In response to Jeremy’s suggestion (that it would be nice for the banner to include more than typography), I’d like to put more effort into my germ of an idea, but I’m not sure it’s worthwhile…
Over the years, I’ve put many hours into generating candidates for version-banners, and I don’t regret any of it! I have cheered on dozens of entries that have left my entries in the dust… I think this site dedicated to version banner contest entries shows that enthusiasm.
But entries like pmario’s image don’t just leave me in the dust the way a faster runner would; they take the wind out of my sails, so to speak, the way a flash flood would.
I feel a bit as if I’d be showing up to a coffee-house with my acoustic guitar, while others arrive with an amplified app that instantly transforms their voice into pitch-corrected Stevie Wonder or Taylor Swift in real time, with an automatic beat-box over an infinite array of popular chord-changes at the press of a button.
It’s not that multi-track beat boxes and voice-transformation apps (or people who use them) are intrinsically bad, or that there’s no skill involved in using them well. It’s just that when the environment is so amped-up, it no longer feels like the place to show up with an acoustic guitar and a trained-but-mortal voice.
Of course, this is just a version-banner contest. It’s not a Big Deal. But our discussion is a microcosm of the discussions happening all around us.
Why send a selfie that shows your warts, if you can send an enhanced one? Why write your own love poem when you can tap into curated algorithms of seduction? Why learn a language with your ears and mouth when you could have an instant bidirectional translation app? Why learn to cook when your spouse wants the AI-robotics kitchen gadget designed to serve you fusion Mexican-Indian culinary delights on cue, based on your Data-Distilled Taste Profile? Why should we all bother setting up a community poll and asking participants to look at entries and vote on them, when the entries could simply be submitted to a super-smart evaluation engine to decide instantly which is the best one? Is democracy, too, not bumblingly inefficient, quaint, naively un-ambitious?
I’m going to choose to disengage on this topic and hide it as it’s taking away my enjoyment of this community. I appreciate you all, and wish you best of luck in your goals.
@Springer, nope, not at all, and certainly not to you in particular. Since you ask, I was triggered to post after reading this unconstructive criticism, but also by the general gist of negativity towards AI images. I’m not saying they’re necessarily “good” - I’m saying we should embrace the new world because we can’t escape it.
Specifically, when it comes to banners here, I side with @Mark_S when he says
If AI is “slop”, then why is it necessary to put one’s thumb on the scales? Surely the results will manifest in the voting.
Now, of course, this doesn’t have to take away the joy of tinkering - making images in Photoshop (which, by the way, totally killed the hand-made-drawings industry) or plinking on a piano (which, by the way, totally killed the harpsichord). I actually think that in an ASI world, tinkering is pretty much what we’ll all do, for the joy of it.
Yes, a concern - but I expect AI to solve the energy problem within a few years. (It is a purely technological matter and we have failed thus far because we’re not smart enough!)
Why do you pick up your acoustic guitar when you can just turn on Spotify? Why write a love poem when Lord Byron already did it better, even before you were born?
I have thought a lot about what an ASI world will look like (or, I should say, I’ve been reading and listening to hundreds of experts on the matter over the years). While I am afraid of the uncertainty (especially the transitional period to such a world) I find great comfort in that much - perhaps even most - of what we humans do is not really done with the intention of producing something of value. It is typically done just for the intrinsic joy (or relief) of doing it. I think this will continue.
TBH this discussion illustrates that concerns about AI imagery are not trivial.
They reach way beyond TW itself.
On the other hand, I have no interest in bogging down TW discussions in philosophies about the relation of ideologies to actions, relevant as they are.
Who is saying that here? To say, more modestly, “most AI is weak” is to affirm that the slick outpourings very evident are a gambit rather than a realization. Seems true.
You are right in questioning.
But any blind acceptance that “AI is good” won’t do.
Are you saying that?
Get your case studies in gear, broheim, to make the point you haven’t yet.