Wondering what @Springer thinks…
My bet is on dumb parents tricked by a cunning lawyer into suing. If they win, I predict that later in life one of them will get tricked by the same lawyer into suing for divorce.
I tend to lean towards a fairly optimistic view of the development and use of AI, so it’s interesting to see this play out. I think the benefits of AI are overstated but it is absolutely a tool and resource that should be used responsibly. The student clearly didn’t do enough to obscure his use of ChatGPT, but it would be interesting to see if the generated text ended up being correct or not.
Very few educational writing assignments are concerned with a perspective-neutral difference between “correct” and “incorrect.”
In my field, philosophy, the task is usually something like: think through a problem and show your own critical understanding of various approaches to it. The output of any LLM literally cannot get that task right, any more than the LLM can articulate my reflections after a first date.
Students are welcome to consult the mediocre word-salad oracle while studying. It may produce good verbal fodder more reliably than their roommate, without getting as easily irritated.
But the next time a student submits AI slop and expects me to comment on it, I think I’ll offer my feedback in the form of a random newspaper horoscope.
Right, BUT …
I do think that AI, like so much of computing history, is already deeply “intertwingled” with competitive capitalism and with obfuscation of the obvious questions about knowledge rights and “right knowledge”.
My concern, put affirmatively: any ROLLER-COASTER of endorsements is way too early.
Notes …
(1) - The Real was subverted some time ago. Merely an “Infinite Homonym”.
Right!
THE “problem” with LLMs was well sung years ago by Laurie Anderson in “Only an Expert” …
The positioning of LLMs by advocates, promoting them as “EXPERTS”, is seriously dangerous IMO.
J, x
An LLM is an expert only in the sense that it can be an expert tool. Much like a torque wrench is an expert wrench in that you can dial in the exact torque you need to tighten the nut. But a torque wrench is not fit for purpose for removing lug nuts to change your tire.
In my schooling days we were punished for using calculators. A friend of mine had his TI-59 confiscated in an exam because it was programmable. I figured at the time if he could program the TI to give him the answers he certainly had a grasp of the course material. A lot has changed since then …
I quote that exact thing often. “One day”, this will all blow over, but we’re not there yet.
Btw, I used to work for TI, many moons ago. Great company to work for.
Of course “only an expert” (that is, one who knows the actual person plus various technical details — one recent “technical” threshold being how human hands actually look) is good at distinguishing synthetic photo imagery from real.
Finding and checking with those who can tell the difference has never been “quick,” and the problem is not in principle new.
Only an expert in a particular painter, plus various technical details of how paint works, is good at distinguishing a forgery…
Only someone who has read and interpreted Kant closely can tell that Eichmann (or the character on The Good Place) is paraphrasing Kant in a misleading way…
Only an expert in what undergraduate writing looks like, including the particular undergraduate submitting an assignment, has a good eye for LLM counterfeits…
The real difficulty is still not whether and how the most well-positioned person can generally tell the difference. The difficulty is in the increased bandwidth-noise, resource-costs, and attention-burdens caused by needing to have such conversations so frequently, in a world where so many interactions lack the dense social familiarity that helps us track trustworthiness and credibility.
I don’t think an LLM alone should be used to complete assignments. Neither is it a replacement for research. I agree more with the comparison to a calculator: LLM chatbots are tools to help with, not replace, the work of thinking.
Most of the articles do not go into great detail about how much the users have refined the chatbot they were working with. It would be interesting to see whether submitting relevant sources or papers directly to the knowledge base, and adjusting the prompt as well, produces output that is better than the generic slop an untuned chatbot would normally produce.
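To make the idea concrete, here is a minimal, hypothetical sketch of what “refining” can mean at the prompt level: the same question asked bare versus asked with user-supplied source excerpts and a style instruction prepended. All names here (`build_prompt`, the excerpt strings) are made up for illustration; real chatbot products expose this differently (system prompts, file uploads, retrieval settings).

```python
# Hypothetical sketch: a bare prompt vs. one refined with source
# excerpts ("knowledge base") and a style hint.
def build_prompt(question, sources=None, style_hint=None):
    """Assemble a chat prompt; sources and style_hint are optional refinements."""
    parts = []
    if sources:
        parts.append("Use ONLY the excerpts below; cite them by number.")
        for i, src in enumerate(sources, 1):
            parts.append(f"[{i}] {src}")
    if style_hint:
        parts.append(f"Style: {style_hint}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

# The generic version: no grounding, no constraints.
bare = build_prompt("Summarise the causes of X.")

# The refined version: grounded in specific papers, with output constraints.
refined = build_prompt(
    "Summarise the causes of X.",
    sources=["Excerpt from Smith 2021...", "Excerpt from Jones 2023..."],
    style_hint="undergraduate essay, hedged claims",
)
```

The point of the comparison is that the refined prompt narrows what the model can plausibly say, which is exactly the variable most articles leave unmeasured.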
Right! Literally.
Positive: An AI Expert Diagnostician might help me solve my health problems.
But. The current rabid lingo around AI of all sorts mixes up so many things it gets scary.
In the pot are helpful expert tools (e.g. health & law); extreme scam machines; quick, sort-of-useful though error-prone search summaries; ideological stuff around the “singularity” (conscious machines); serious worries about a small number of companies creating a market that is theirs … etc.
IMO, right now, we need to DIFFERENTIATE the multiple aspects much better.
I liked your post.
Just comments
J
Right, at best.
I see you are very succinct and to the point on the limits of LLMs in context.
Needed!
Id est divinum. (=Nice one !)
Where am I wa/ondering?
In the context of TW I’m very interested in how AI (generic) may eventually OBFUSCATE what TW does well by implying “AI” does it better.
Later
J
Totally agree.
IMO the “existential threat” of AI (generic) is exactly like that.
A runaway train producing information at a scale (BOTH useful & delusional/distracting) whose sheer volume (expensive: it needs special electricity supplies to power) is testament to the need to ask: “What is happening?”
I thought your points v. useful because they give a front-line understanding of the issues. Especially in learning!
Regarding the TW … @jeremyruston, I think, understood well, when he created TW, the central problem of “how to get out of the way yet help the mind find expressive structure”—without pretence.
IMO, LLM philosophy is ANTITHETICAL to the thinking that is behind TW. And pretentious.
I hope that AI won’t damage TW.
The real threat of AI is the same as that of Dunning-Kruger.
Essentially, the more you know, the more you know you don’t know. If you only know a little, you are aware of only a little that you don’t know. This leads to a false belief in “expertness”. Using AI and thinking it “makes you an expert” leads to incorrect conclusions, because you lack the knowledge to tell the good ones from the bad. And of course your sense of self-expertise will shield you from any criticism, no matter how valid. It reinforces not bothering to learn.
Somewhere along the way we lost faith in the real experts in the field (all of them) and that could prove disastrous to us all.
Well, the threat strikes me as only half made of Dunning-Kruger phenomena. The other half is the incentive structure of capitalism.
For a corporate ad exec (or film producer or pulp-fiction publisher) to resort to AI for copy or images or video effects, they don’t have to think it’s as good as what attentively conscientious art / writing / animation pros would do. They only have to think the much greater up-front cost of employing a human (not only in salary, but in turnaround time and contractual burdens) is not worth the greater quality of output vis-a-vis the immediate market appetite as they (the “haves”) judge it.
The real kicker is that many people become less aware, over time, of the difference, while the AI engines are increasingly gobbling up other AI mash as their own cheap input.
That in itself is a continuation of the democratization of expertise that the internet has created (or perhaps rather accelerated). It is easier now to create artistic content and self-publish. There has been a plethora of substandard ebooks available on Kindle for years now, not to mention all the copycat apps I see all the time on Google Play, many of which are nothing more than ad farms.
And all that is of course a carryover from the snake oil salesmen of yore preying on the gullible.
The bar for entry just keeps getting lower and lower, and AI helps to lower it even more. So much AI-generated YouTube …
I do believe that AI can be an expert tool in the hands of existing experts. The danger comes from thinking that AI is the expert, or its use makes you the expert.
This!
Not wishing to branch the topic but…
So-called “influencers” and their base bug the hell out of me. It’s not so much them, but their followers, the “influenced”. Sheeple. I blame education – lack of critical thinking skills being taught in high schools. Broad brush, I know, but it’ll have to stay like that – this is not the platform and I really don’t have the time or energy or influence(!) to do anything about it. <shrug>
Pretty sure Douglas Adams shipped them off on the B Ark. Problem “solved”. <chuckle>
I’m looking at management folks at GigantiCorp (and many of its peers, to be fair) who obviously see GenAI as an eventual cost-cutting measure. But as a programmer, I see it as job security. They’re going to need people who actually understand code to figure out and fix all the messes created by these AI programming tools. Prompt engineers will some day be recognized as a ridiculous layer of unnecessary management.
I could be wrong about this. But I hope I’m out of the industry by the time that happens.
Don’t get me started…
@Ste_W Same here. It’s a never-ending topic in my household. The whole (I almost typed “industry” !) institution of education needs a top to bottom shakeup.
But anyway, life’s waaay too short…