AI Generated Content on talk.tiddlywiki.org

The notion of bullshit as I meant it refers to that article, which paraphrases Frankfurt to say

“The models are in an important way indifferent to the truth of their outputs.”

And I’m afraid that I believe – as a very outside observer – that prompt engineering is similar both to the psychic’s con and to the nonsense about knowledge from previous lives that Plato promotes in the Socratic dialogue Meno. With enough triangulation from questions, you can arrive at an answer that suits you, but that has no relationship whatsoever with what’s actually true or what’s actually known.

But the subject at hand is community standards. I don’t think posting small chat outputs here is a big deal, so long as they’re properly labelled and not passed off as original work. Unless the discussion is about the LLMs, though, I would find it much more helpful if the poster synthesized the information in their own words and did enough research to verify that it’s not hallucinations/bullshit.

Such posters should also recognize that in using this material, they may be losing some not insignificant portion of their potential audience. I read most everything you post here, @TW_Tones, but I skipped that one, entirely on the basis of your sourcing from ChatGPT.

2 Likes

It’s fair to say, as you do here, that there’s no source of information we should approach in an absolutely uncritical way.

In at least one way, though, LLMs are very much unlike trusted friends and human experts. They are impervious to our interest in accountability, and their developers have shown no interest in building genuine accountability into their interactions.

Pattern-recognition (what LLMs do) is a matter of what the logician and semiotician C.S. Peirce calls “firstness” (registering similarities and resemblances, which are always a matter of degree), while an orientation toward distinguishing the true from the false, the appropriate from the inappropriate (etc.) involves engaging with signs in their third (most normative) dimension.

My friends, and human experts, do participate in this field of thirdness (being accountable for registering what’s ok vs what’s not). For this reason, there’s such a thing as developing some appropriate degree of trust in them. If I mislead you by relying on them (in precisely the areas where they’ve committed to being reliable), I can apologize to you, but I can also turn around and hold them to account.

Meanwhile, there is no appropriate degree of trust (normative trust) in an LLM — though of course there’s such a thing as making good probability bets. How I interact with LLMs (or share their output, etc.) involves a distinct responsibility that can only lie with me and with other socially-connected beings who show me that they give a damn about not screwing up.

(I don’t imagine that we’re disagreeing on anything of substance here, @TW_Tones — I’m just hoping to articulate one qualitative difference that is easily overlooked in the "just as with other sources of information… " line of argument.)

5 Likes

Bravo, @Springer

The universe AI cares not.

3 Likes

Yes, @Springer, I think we are in agreement.

@Springer, yes, my point was that even if you trust a friend you need to be sceptical; if I couldn’t trust them at all, they would not be a friend. I was not saying LLMs are a trusted friend; they are an untrusted tool. If they were useless, I would not use them at all. I don’t trust hammers either; it’s the way I use them that matters.

I have made use of LLMs to great effect, as a tool. You could say it takes fuzzy search to new heights. It is in fact important to ask the right questions and be critical of the result.

  • I don’t think I need to argue here that they have many, and effective, applications.

Until such time as real A.I. exists, we use LLMs as tools, and we don’t hold our tools accountable; we ourselves are accountable for what we do with our tools. In fact, when I produce something, even with the advice of trusted humans, I need to be accountable for what advice I relied on.

  • I think this points to a negative consequence of claiming LLMs are A.I.: some are inclined to assign responsibility to the LLM and ignore their own.
  • Yes, because they are LLMs, not A.I. or even I.
  • Even the title of this thread makes the error of suggesting there is any A.I. in LLMs.
    • As if we needed proof, I could demonstrate this by listing what LLMs can’t, don’t, or will not do.

It is true that some people will rely too heavily on LLMs as a tool and pass the output off as their own work when it is not, and yes, this should be discouraged. Like any information taken without sufficient scepticism, it will be unreliable.

I think your larger statement, “The universe cares not,” is the more reliable one. This is also clear in evolutionary science: to start with, we need a “theory of mind” at a minimum.

  • Care is perhaps only potentially valid between individuals and in social species.

I am sure there are people out there who don’t know what they are talking about, but there is a serious and new science around prompt engineering, with approaches that have very high success rates. We ignore that at our peril. Prompt engineering is about setting the context, shaping the input, applying filters, and asking the right questions (a rough sketch of what that can look like follows the list below).

  • Not-so-fuzzy questions, asked of a fuzzy data/knowledge base.
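
To make “setting the context” concrete, here is a rough sketch of what a context-setting prompt can look like in code. It assumes the pre-1.0 openai Python package and an API key in the OPENAI_API_KEY environment variable; the model name, system message, and temperature are placeholders chosen for illustration, not a recommendation.

    import openai  # pre-1.0 client; reads the OPENAI_API_KEY environment variable

    # The system message sets the context, and the temperature acts as a crude
    # "filter" on how speculative the answers are. All wording is illustrative.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model name
        temperature=0.2,        # lower values give more conservative answers
        messages=[
            {"role": "system",
             "content": "You are an experienced TiddlyWiki developer. "
                        "Answer only with wikitext that works in TiddlyWiki 5.2+, "
                        "and say 'I am not sure' rather than guessing."},
            {"role": "user",
             "content": "Show the wikitext to list all tiddlers tagged 'Task'."},
        ],
    )
    print(response.choices[0].message.content)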

I would suggest that ignoring a post based on an arbitrary classification is an errant approach, especially if you normally find my content of use. However, it may not have been content of use to you; I scan some content for importance or relevance all the time.

But I am ready to read your content. I am not ready to read ChatGPT’s content.

With a limited amount of time to invest on any subject, I might briefly scan widely, but I will invest time in sources that seem more reliable. During the height of Covid, for instance, I paid far more attention to serious medical researchers than to the likes of Joseph Mercola.

ChatGPT is emphatically not a reliable source. Its training material contains plenty that is true, but also contains vast swaths of falsehoods. And it has no mechanism to distinguish them. The only way it might get a relatively high truth rating is if the material it’s trained on is substantially more truthful than the Internet as a whole. And that would require enormous amounts of expertise.

Because such LLMs are trained on more correct information than disinformation, they are useful to suggest areas of study, to help with brainstorming, and so on. But their raw output is not worth much more than that. I choose not to invest my time with them.

Had you synthesized the information yourself, done some research into those techniques, and only used LLM output as a jumping-off point, then I would have happily read your post.

Yes, I’ve been following the LLM journey somewhat closely. As the backlash seems to be setting in, I’m guessing we’ll see much more of a realization that prompt engineering is just another example of trying to fit people to the limitations of software rather than crafting software designed for people. I’m guessing it will end up a short-lived fad. I’ll bet my NFTs on it. :wink:

No, there are plenty of intelligences indifferent to the truth; we call them bullshitters. For at least current-crop LLMs, there is simply no notion of truth to consult. I agree that “AI” is misapplied to LLMs, but pretty much every historical application of the term to some field of technology has been problematic. LLMs’ level of (un-)intelligence has little to do with their reliability.

2 Likes

I urge further investigation into this; for example, with prompt engineering, if you ask for the answers that a qualified professional would give, you change the probabilities so that you get better answers.

As I said, it is about the quality of the questions you ask.
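
For instance, here are two phrasings of the same question, written out as plain strings. The wording is purely illustrative; the claim is only that the second version, which asks for the answer a qualified professional would give and invites the model to admit uncertainty, shifts the probabilities toward more careful output.

    # Two ways of asking the same thing. Neither guarantees a correct answer;
    # the engineered version only changes the probabilities, as noted above.
    plain_prompt = "How do I save my wiki automatically?"

    engineered_prompt = (
        "Answer as an experienced TiddlyWiki developer would. "
        "Explain how to set up automatic saving for a single-file TiddlyWiki, "
        "name the specific saver or plugin involved, "
        "and flag anything you are unsure about instead of guessing."
    )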

I have already created a custom chat session that includes specific TiddlyWiki source material in its corpus; for example, the word “wikitext” returns valid wikitext. It is not perfect, because it needs more training or references.

  • It also gives code examples you can test.

So do you have suggestions on how a prompt engineer would have updated your prompt so as to get a more correct result… without any foreknowledge of the deprecation in its second suggestion or the incorrect code in the third one?

The result was reasonable as it stood; have a look at the [Edit] comments by @Mario, who did the due diligence.

  • My question set the scope by saying “website”, which meant HTML, JavaScript, and CSS were relevant.
  • I did not expect perfection, and I did not pretend it was perfect.

Here we see that I, ChatGPT, and @Mario have collaborated to generate usable information and code. Now we need to create some action triggers and make them into modules, which is currently just beyond my reach.

I think we’re going to have to agree to disagree here.

From my point of view, it required outside expertise to correct two of the five suggestions, and there is no obvious way those errors could have been caught using prompt engineering or other current techniques. Therefore the original chat response was wrong far too often for me to want to take it seriously. Even 1 in 10 would be awfully high.

But clearly you see it differently. I don’t think there’s much chance of changing one another’s minds.

But if I get a vote, I’m in favor of Jeremy’s suggestion in the OP.

3 Likes

Yes, we will agree to disagree; it is not a matter of producing perfect output from the LLM, but appropriate output. Remember, these calls in plain JavaScript need to be read in the TiddlyWiki context, and the post was made to facilitate collaboration, which has occurred.

  • In fact, the way I presented it, it may very well be usable in another context, independent of TiddlyWiki.
  • As soon as one sets hard, fast, and somewhat inflexible rules, the community has started a slow death, in my opinion.
  • Remember all the caveats: I prefaced it with a discussion, explained its source, quoted it in full, and invited comment, which I got, and @Mario tested it.
    • So if that was doing something wrong, then you are suggesting inflexible rules or harsh criticism.

It’s interesting how I take a short break from substantial contributions, then return and quickly find grief. This is not the way to encourage and maintain participation.

Please, TW_Tones, I think you are taking this far too personally. You are appreciated, very active, and helping a lot of us. Thank you!
Read the original post once more. It is about not copying AI-generated content, not about you using it and commenting on it.
I often have problems due to a lack of language skills, and, to be honest, also from not having enough knowledge about TiddlyWiki. I often find that I remember a thread at a later date and understand it better.
What I really did understand is the wish to avoid this forum becoming yet another echo chamber for other AIs to be trained from. As a user, that is also in your interest.
Yes, your post was mentioned as an example: the post, not you! If you had not done it, somebody else would have.
The discussion has been interesting to read!

1 Like

Thanks for your comment @Birthe

That is correct, and my “responsible post” was given as an example, implying it was breaking the proposed rule. Subsequent comments supported that position, and I think it is WRONG.

  • This is not about it being personal; it is about it being difficult to put forward a reasonable argument.
  • It is me saying that I think the proposed moderation is inappropriate, especially if my post falls under the rule; it was, after all, given as an example.

Everything is easy if I agree, but if I want to have a second opinion respected, then it seems like grief.

I don’t understand why this is being discussed at all.

Just like two years ago, if someone posts…

  1. outright crap, they are banned
  2. bad quality (illegible, often stupid suggestions, etc), they are ignored
  3. good stuff, people read it

…and the same applies to any stuff you copy, AI generated or not!

Obviously if someone just copy-pastes any gob without curating it then that is just a bad member and will get treated as such.

How is this not obvious?

:thinking:

Twice recently, someone has posted LLM-generated stuff (without putting this fact up front) where I have been the first (as far as I know) to respond with a “smells like LLM” reply or flag. I do think it’s inappropriate to post that way. (One of the posts was inappropriate in other ways as well.)

However, I would not go from “that was a bad post” to saying each of these is “just a bad member.”

And I think it’s far from obvious what it means to say that, once someone is deemed “a bad member,” they “will get treated as such.”

Some violations do warrant being barred, but barring can only be reasonable if the guidelines are clear enough to be applied even-handedly. (Even barring a person doesn’t require judging that the person is a bad person; it just requires recognizing that a civil community needs to be able to rely on some clear boundary expectations.)

I will flag posts that seem inappropriate, so that administrators here can sort things out. And I welcome this conversation partly because flagging a post for community-norm violations should feel like a pretty clearcut matter. I flag the post, not the person.

As an ordinary participant here, I do not want to be tracking the question of whether there are “bad members” here. I want to be able to treat people with the benefit of the doubt. I want to trust that we have clear-enough norms that we can gesture toward them with confidence.

Mostly, @twMat, I’ve felt a spirit of constructive engagement from you over time. I love your contributions here! Yet this post here came across as exasperated. Fair enough, we all have such moments. But I don’t think that exasperation will serve us well in the LLM landscape. Both curiosity and moral caution are appropriate — as well as patience with one another.

2 Likes

@Springer

Respectfully, you’ve misunderstood my post - probably because I didn’t express it clearly.

The point I was making comes here, paraphrasing my earlier sentence:

Obviously if someone just copy-pastes any gob without curating it then that is just a bad post and will get treated as such.

In other words, if a post is just a bad one it will get treated as such. If it is, say, outright offensive it will probably be deleted. If it is gibberish, people will point it out - and if the poster doesn’t edit it, it might be deleted or curated by someone else.

But in my mind that is not different from being a “bad member”; I say “member”, not “human being”. A member is anyone who posts here; that pretty much defines one’s identity as a member. This does not deal with anyone’s worth as, say, an individual human being. (I mean, how would anyone here know anything about that for someone else!?) To “treat accordingly” simply means there is a natural reaction to it, not that it (a post) is necessarily deleted, nor that they (a person) are necessarily banned. Wonderful posts are also “treated accordingly”. (And, for that matter, wonderful humans too, but that is typically beyond the scope of this digital message board, IMO.)

By the way: you’re doing us all a favour when you flag posts. IMO it doesn’t make much difference whether it concerns some unedited LLM gob or any other gibberish; I appreciate it regardless.

Hope that makes my point clearer. If not, well I tried, and the intention was/is certainly not to insult anyone.

1 Like

Well, I appreciate the clarification that you meant to target posts, not members per se!

Still, I think that there’s little that’s obvious about the idea that bad posts “will be treated as such”. I think it’s important to have some places to articulate what counts as a bad post. Even better: we want to help people formulate posts so that they are less likely to waste other people’s time (or mislead or offend, etc.).

The ability to recognize and steer around “bad posts” does require some level of attention as well as background knowledge. People who come here looking for help with novice TiddlyWiki problems will find the forum helpful in inverse proportion to how much “noise” there is. (Noise can be LLM slop pasted as if it were advice from someone who could stand behind the content, but it can also be personal comments popping up in threads whose titles look like they’re about this or that technical issue, etc. Alas, the former is less obvious and less easy to steer around for a newbie.)

Mostly, I’ve found talk.tiddlywiki to be fairly low-noise, and I can usually see temperament-related traffic jams from far enough away to steer clear. Notably, tensions related to the convictions of people who really do give a damn about TiddlyWiki are quite different from the less predictable and more insidious noise that comes from slop. That risk seems likely to increase moving forward.

I think we maintain a relatively low-noise helpful forum by being proactive as challenges emerge. As much as I wish handling LLM tools responsibly were an “obvious” matter of common sense, my experience (here and elsewhere, in ways that resonate with @Scott_Sauyet’s comments) suggests that many people are confused on this theme. Hence the need for this thread. :slight_smile:

1 Like

That’s a good point!

1 Like

Framed that way, I certainly agree… though the guideline proposal by @jeremyruston didn’t strike me as inflexible. Still, if it strikes you that way — and I do value all the energy and initiative you bring to the community, @TW_Tones — I wonder whether there are ways to help nudge us all toward being on the same page here.

First, let’s see whether we all agree that there’s a world of difference between these:

  1. Cases where @TW_Tones, or anyone else, structures a post around taking LLM output (clearly flagged as such) seriously — to which some people respond by scrolling past the GPT output to look for any original commentary, or by giving up on the thread, slightly put off by the impression of being asked to listen to a blabbering automaton. :roll_eyes:
  2. Cases in which someone posts, “Try this:” or “Here, I made this:” and then pastes LLM output, unmarked. :grimacing:

Most of my comments so far have been about type 2, which seems less difficult to fix, since demanding attribution cannot be misconstrued as shutting down content. Now we’re turning to type 1.

It seems that the “waste of my time” reaction to explicitly excerpted LLM output (when it is clearly marked as such) happens when such output is swimming along in the very same content-stream as posts in which people are muddling through things in their own words/attempts at code (with or without excerpted code, attributed to other human authors, as needed — or conceivably an LLM-generated snippet when it’s just serving as the focus of a technical question).

So: What about having a category for threads that attempt to mine LLMs for TiddlyWiki-related value? Even there, certain norms make sense (such as making it clear what content is pasted from the LLM and what’s not). But it would be an “enter-at-your-own-risk” area for those who are rubbed the wrong way by having LLM output taken seriously as substantive content.

We currently have a post category “cafe” that encourages people to post social or silly stuff that would otherwise irritate folks for being off-topic. We see “cafe” and know that if we open such a thread, we may find free association, bad puns, personality quirks, music we might like or hate, etc. I imagine a new category that is not “Discussion” (all human voices here) and not the “Cafe” category (humans, but without any seriousness filter), but something like an “LLM in the room” category.

If there’s such a neighborhood here for such threads, then @Scott_Sauyet and @jeremyruston — and I myself, at least in most moods — can steer clear without seeming to shut down LLM-engaged posts “inflexibly”. Folks who find it worthwhile to read such stuff may have adventures that they find entirely agreeable, but without the risk of the rest of us feeling that we’re at a dinner party being re-introduced to that one narcissistic cousin who must be tuned out to retain our sanity. And who knows, perhaps someone who hangs out in the “LLM-welcome room” will learn amazing stuff that they can then share, with the rest of us, in their own words?

3 Likes

I understand the intent of this policy, and I appreciate the commitment to fostering a culture of meaningful and original contributions! But I do wonder whether it may be helpful to carve out an exception for AI content that is posted specifically to illustrate a problem with the AI output, as I’d intended to do here. I’m concerned that if people only ever see the “corrected” versions (and not how many changes were needed to turn the AI output into valid TW syntax), they’ll come away with some misapprehensions about the general accuracy and reliability of AI-generated code.

Is it reasonable to share such cautionary examples as screenshots, to minimize the likelihood that they’ll get scraped by bots or that someone will try to incorporate them into their own wiki?

2 Likes