@Springer yes, my point was that even if you trust a friend you need to be sceptical; if I can't trust them at all they would not be a friend. I was not saying LLMs are a trusted friend, they are an untrusted tool. If they were useless, I would not use them at all. I don't trust hammers either; it's the way I use them.
I have made use of LLMs to great effect, as a tool. You could say it takes fuzzy search to new heights. It is in fact important to ask the right questions and be critical of the result.
I don't think I need to argue here that they have many effective applications.
Until such time as real A.I. exists, we use LLMs as tools, and we don't hold our tools accountable; we ourselves are accountable for what we do with our tools. In fact, when I produce something, even with the advice of trusted humans, I need to be accountable for what advice I relied on.
I think this points to a negative consequence of claiming LLMs are A.I.: some are inclined to assign responsibility to the LLM and ignore their own.
Yes, because they are LLMs, not A.I. or even I.
Even the title of this thread makes the error of suggesting there is any A.I. in LLMs.
As if we need it, I could demonstrate this by listing what LLMs can't, don't or will not do.
It is true that some people will rely too heavily on the LLM as a tool and pass its output off as their own work when it is not, and yes, this should be discouraged. Like any information taken without sufficient scepticism, it will be unreliable.
I think your larger statement, "The universe cares not", is the more reliable one. This is also clear in evolutionary science; to start with, we need a "theory of mind" at a minimum.
Care is perhaps only potentially valid between individuals and in social species.
I am sure there are people out there who don't know what they are talking about, but there is a serious and new science around prompt engineering with very successful approaches. We ignore that at our peril. Prompt engineering is about setting the context, the input, filters, and asking the right questions.
Not-so-fuzzy questions, asked of a fuzzy data/knowledge base.
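For example - this is my own illustration, not a quote from any prompt-engineering guide - compare a fuzzy question with one that sets the context, scope and expected output up front:

```
Fuzzy:  How do I make a list in TiddlyWiki?

Scoped: Acting as an experienced TiddlyWiki 5 developer, write wikitext
        (not HTML) that lists all tiddlers tagged "Project", sorted by
        title. If you are unsure of the syntax, say so rather than guess.
```

The second form narrows what the model draws on, which is exactly what I mean by setting the context and filters before asking.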
I would suggest that ignoring content based on an arbitrary classification is an errant approach, especially if you normally find my content of use. However, it may not have been content of use to you; I scan some content for importance or relevance all the time.
But I am ready to read your content. I am not ready to read ChatGPT's content.
With a limited amount of time to invest on any subject, I might briefly scan widely, but I will invest time in sources that seem more reliable. During the height of Covid, for instance, I paid far more attention to serious medical researchers than to the likes of Joseph Mercola.
ChatGPT is emphatically not a reliable source. Its training material contains plenty that is true, but also contains vast swaths of falsehoods. And it has no mechanism to distinguish them. The only way it might get a relatively high truth rating is if the material it's trained on is substantially more truthful than the Internet as a whole. And that would require enormous amounts of expertise.
Because such LLMs are trained on more correct information than disinformation, they are useful to suggest areas of study, to help with brainstorming, and so on. But their raw output is not worth much more than that. I choose not to invest my time with them.
Had you synthesized the information yourself, done some research into those techniques, and only used LLM output as a jumping-off point, then I would have happily read your post.
Yes, I've been following the LLM journey somewhat closely. As the backlash seems to be setting in, I'm guessing we'll see much more of a realization that prompt engineering is just another example of trying to fit people to the limitations of software rather than crafting software designed for people. I'm guessing it will end up a short-lived fad. I'll bet my NFTs on it.
No, there are plenty of intelligences indifferent to the truth; we call them bullshitters. For at least the current crop of LLMs, there is simply no notion of truth to consult. I agree that "AI" is misapplied to LLMs, but pretty much every historical application of the term to some field of technology has been problematic. LLMs' level of (un-)intelligence has little to do with their reliability.
I urge further investigation into this. For example, with prompt engineering, if you ask for the answers that a qualified professional would give, you change the probabilities in favour of getting better answers.
As I said, it is about the quality of the questions you ask.
I have already created a custom chat session that includes specific TiddlyWiki source material in its corpus; for example, the word "wikitext" returns valid wikitext. It is not perfect because it needs more training or references.
The result was reasonable as it stood; have a look at the [Edit] comments by @Mario, who did the due diligence.
My question set the scope by saying "website", which meant HTML, JavaScript, and CSS were relevant.
I did not expect perfection, and I did not pretend it was so.
Here we see that I, ChatGPT and Mario have collaborated to generate usable info and code. Now we need to create some action triggers and make them into modules, which is currently just beyond my reach.
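For anyone who wants to take that step, my understanding is that a TiddlyWiki JavaScript module is just a tiddler of type application/javascript with a module-type field. The following is an untested sketch with placeholder names of my own (a macro module is the simplest case; I believe action widgets follow the same tiddler-as-module pattern with module-type: widget):

```js
/*\
title: $:/my-plugins/example/website-info.js
type: application/javascript
module-type: macro

Illustrative sketch only - the title, macro name and logic are placeholders.
\*/
(function(){
"use strict";

exports.name = "website-info";
exports.params = [];

// Wrap a plain JavaScript call (like the ChatGPT-generated ones discussed above)
// so it can be invoked from wikitext.
exports.run = function() {
    return window.location.href;
};

})();
```

Once saved and the wiki reloaded, the macro should be callable as <<website-info>>, though I have not verified this particular snippet.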
I think we're going to have to agree to disagree here.
From my point of view, it required outside expertise to correct 2/5ths of the suggestions, and there is no obvious way to have found that using prompt engineering or other current techniques. Therefore the original chat response was wrong far too often for me to want to take it seriously. Even 1 in 10 would be awfully high.
But clearly you see it differently. I don't think there's much chance of changing one another's minds.
But if I get a vote, I'm in favor of Jeremy's suggestion in the OP.
Yes, we will agree to disagree; it is not a matter of producing perfect output from the LLM but appropriate output. Remember, these calls in plain JavaScript need to be read in the TiddlyWiki context, and the post was made to facilitate collaboration, which has occurred.
In fact, the way I presented it, it may very well be usable in another context independent of TiddlyWiki.
As soon as one sets hard, fast and somewhat inflexible rules, the community has started a slow death in my opinion.
Remember all the caveats: I prefaced it with a discussion, explained its source, quoted it in full and invited comment, which I got, and @Mario tested it.
So if you are saying I did something wrong there, then you are suggesting inflexible rules or harsh criticism.
It's interesting how I take a short break from substantial contributions, then return and quickly find grief. This is not the way to encourage and maintain participation.
Please, TW_Tones, I think you take this far too personally. You are appreciated, very active, and helping a lot of us. Thank you!
Read the original post once more. It is about not copying AI-generated content, not about you using it and commenting.
I often have problems due to a lack of language skills, and, to be honest, also not having enough knowledge about TiddlyWiki. I often find that I return to a thread at a later date and understand it better.
What I really did understand is the aim of keeping this forum from becoming yet another echo chamber for other AIs to be trained on. As a user, that is also in your interest.
Yes, your post was mentioned as an example - the post, not you! If you had not done it, somebody else would have.
The discussion has been interesting to read!
That is correct, and my "responsible post" was given as an example, implying it was breaking the proposed rule. Subsequent comments supported that position, and I think it is WRONG.
This is not about it being personal; it is about it being difficult to put forward a reasonable argument.
It is me saying that I think the proposed moderation is inappropriate, especially if my post falls under the rule; after all, it was given as an example.
Everything is easy if I agree, but if I want a second opinion to be respected, then it seems like grief.
Twice recently, someone has posted LLM-generated stuff (without putting this fact up front) where I have been the first (as far as I know) to respond with a "smells like LLM" reply or flag. I do think it's inappropriate to post that way. (One of the posts was inappropriate in other ways as well.)
However, I would not go from "that was a bad post" to saying each of these is "just a bad member."
And I think it's far from obvious what it means to say that once someone is deemed "a bad member" they "will get treated as such."
Some violations do warrant being barred, but barring can only be reasonable if the guidelines are clear enough to be applied even-handedly. (Even barring a person doesn't require judging that the person is a bad person; it just requires recognizing that a civil community needs to be able to rely on some clear boundary expectations.)
I will flag posts that seem inappropriate, so that administrators here can sort things out. And I welcome this conversation partly because flagging a post for community-norm violations should feel like a pretty clearcut matter. I flag the post, not the person.
As an ordinary participant here, I do not want to be tracking the question of whether there are "bad members" here. I want to be able to treat people with the benefit of the doubt. I want to trust that we have clear-enough norms that we can gesture toward them with confidence.
Mostly, @twMat, I've felt a spirit of constructive engagement from you over time. I love your contributions here! Yet this post here came across as exasperated. Fair enough, we all have such moments. But I don't think that exasperation will serve us well in the LLM landscape. Both curiosity and moral caution are appropriate, as well as patience with one another.
Respectfully, you've misunderstood my post - probably because I didn't express it clearly.
The point I was making comes here, paraphrasing my earlier sentence:
Obviously if someone just copy-pastes any gob without curating it then that is just a bad post and will get treated as such.
In other words, if a post is just a bad one it will get treated as such. If it is, say, outright offensive it will probably be deleted. If it is gibberish, people will point it out - and if the poster doesn't edit it, it might be deleted or curated by someone else.
But in my mind that is no different from being a "bad member"; I say "member", not "human being". A member is anyone who posts here, and their posts pretty much define their identity as a member. This says nothing about anyone's worth as a human being. (I mean, how would anyone here know anything about that for someone else!?) To "treat accordingly" simply means there is a natural reaction to it, not that it (a post) is necessarily deleted, nor that they (a person) are necessarily banned. Wonderful posts are also "treated accordingly". (And, for that matter, wonderful humans too - but that is typically beyond the scope of this digital message board IMO.)
By the way: You're doing us all a favour when you flag posts. IMO it doesn't make much difference if it concerns some unedited LLM gob or any other gibberish; I appreciate it regardless.
Hope that makes my point clearer. If not, well I tried, and the intention was/is certainly not to insult anyone.
Well, I appreciate the clarification that you meant to target posts, not members per se!
Still, I think that there's little that's obvious about the idea that bad posts "will be treated as such". I think it's important to have some places to articulate what counts as a bad post. Even better: we want to help people formulate posts so that they are less likely to waste other people's time (or mislead or offend, etc.).
The ability to recognize and steer around "bad posts" does require some level of attention as well as background knowledge. People who come here looking for help with novice TiddlyWiki problems will find the forum helpful in inverse proportion to how much "noise" there is. (Noise can be LLM slop pasted as if it's advice from someone who could stand behind the content, but it can also be personal comments popping up in threads whose titles look like they're about this or that technical issue, etc. Alas, the former is less obvious and less easy to steer around for a newbie.)
Mostly, I've found talk.tiddlywiki to be fairly low-noise, and I can usually see temperament-related traffic jams from far enough away to steer clear. Notably, tensions related to the convictions of people who really do give a damn about TiddlyWiki are quite different from the less predictable and more insidious noise that comes from slop. That risk seems likely to increase, moving forward.
I think we maintain a relatively low-noise, helpful forum by being proactive as challenges emerge. As much as I wish handling LLM tools responsibly were an "obvious" matter of common sense, my experience (here and elsewhere, in ways that resonate with @Scott_Sauyet's comments) suggests that many people are confused on this theme. Hence the need for this thread.
Framed that way, I certainly agree... though the guideline proposal by @jeremyruston didn't strike me as inflexible. Still, if it strikes you that way - and I do value all the energy and initiative you bring to the community, @TW_Tones - I wonder whether there are ways to help nudge us all toward being on the same page here.
First, let's see whether we all agree that there's a world of difference between these:
1. Cases where @TW_Tones, or anyone else, structures a post around taking LLM output (clearly flagged as such) seriously - to which some people respond by scrolling past the GPT output to look for any original commentary, or by giving up on the thread, slightly put off by the impression of being asked to listen to a blabbering automaton.
2. Cases in which someone posts, "Try this:" or "Here, I made this:" and then pastes LLM output, unmarked.
Most of my comments so far have been about type 2, which seems less difficult to fix, since demanding attribution cannot be misconstrued as shutting down content. Now we're turning to type 1.
It seems that the "waste of my time" reaction to explicitly excerpted LLM output (when it is clearly marked as such) happens when such output is swimming along in the very same content-stream as posts in which people are muddling through things in their own words/attempts at code (with or without excerpted code, attributed to other human authors, as needed - or conceivably an LLM-generated snippet when it's just serving as the focus of a technical question).
So: what about having a category for threads that attempt to mine LLMs for TiddlyWiki-related value? Even there, certain norms make sense (such as making it clear what content is pasted from an LLM and what's not). But it would be an "enter-at-your-own-risk" area for those who are rubbed the wrong way by having LLM output taken seriously as substantive content.
We currently have a post category "Cafe" that encourages people to post social or silly stuff that would otherwise irritate folks for being off-topic. We see "Cafe" and know that if we open this thread, we may find free association, bad puns, personality quirks, music we might like or hate, etc. I imagine a new category that is not "Discussion" (all human voices here) and not the "Cafe" category (humans, but without any seriousness filter), but something like an "LLM in the room" category.
If there's such a neighborhood here for such threads, then @Scott_Sauyet and @jeremyruston - and I myself, at least in most moods - can steer clear without seeming to shut down LLM-engaged posts "inflexibly". Folks who find it worthwhile to read such stuff may have adventures that they find entirely agreeable, but without the risk of the rest of us feeling that we're at a dinner party being re-introduced to that one narcissistic cousin who must be tuned out to retain our sanity. And who knows, perhaps someone who hangs out in the "LLM-welcome room" will learn amazing stuff that they can then share, with the rest of us, in their own words?
I understand the intent of this policy, and I appreciate the commitment to fostering a culture of meaningful and original contributions! But I do wonder whether it may be helpful to carve out an exception for AI content that is posted specifically to illustrate a problem with the AI output, as I'd intended to do here. I'm concerned that if people only ever see the "corrected" versions (and not how many changes were needed to turn the AI output into valid TW syntax), they'll come away with some misapprehensions about the general accuracy and reliability of AI-generated code.
Is it reasonable to share such cautionary examples as screenshots, to minimize the likelihood that they'll get scraped by bots or that someone will try to incorporate them into their own wiki?