That paper, in turn, appeals to Frankfurt’s philosophical account of bullshit.
And indeed, the reason I’m suspicious of LLM output is not so much because it’s “dumb machine circuits” rather than “smart organic humans” but because of how LLMs have been trained up. They’ve been trained to throw gobs of stuff at the wall repeatedly until something sticks.
When our children do that, we send a clear visceral signal that there are lines that must not be crossed. Throw spaghetti at the wall one more time, and you’ll be regretting it for a while.
When we are in a learning role, we need a sense of finitude, humility, social deference to those who are likely to know what we don’t, and shame (being mortified, taken aback, and rendered at least briefly dumb with soul-searching when we really mess up, hit the third rail, encounter the silent treatment, etc.).
Alas, those who have been training up our LLMs chose an initial recipe that is 100% shameless exuberance and 0% critical capacity (awareness of one’s own ignorance), though the latter is equally vital to intelligence. LLM disclaimer boilerplate (about their limits) and “taboo patches” (censorship around certain words or patterns) are tacked on at the edges, rather than built into the core of their handling of meaningful stuff.
As Socrates realized (trying to make sense of the oracle that proclaimed no Athenian was wiser than he): The first step in wisdom is recognizing one’s own ignorance.

