ChatGPT – A new kind of “truthiness”

I’ve been thinking a lot about ChatGPT recently, following a discussion with @sean.

Sean’s position is that ChatGPT can be generally trusted, but you shouldn’t put too much faith in it. I can understand that position, but I fundamentally don’t agree with it.

The trouble with ChatGPT isn’t that it’s a bullshit generator, because it isn’t, quite.

Bullshit has a certain flavour. People writing bullshit will dance around the truth. They don’t quote exact figures, or they avoid addressing certain aspects that would get them into trouble. Reading it, a critical thinker can usually tell it’s bullshit, because it doesn’t have the feel of well-researched, well-reasoned discussion.

ChatGPT, on the other hand, does have the feel of well-researched, well-reasoned discussion, because that’s what it’s been trained to do. It’s been trained to craft messages that sound like they’re written by an intelligent, self-consistent human writer with access to a substantial collection of information. Accuracy may be part of its training, but it clearly falls behind the other requirements in scoring.

What this leads to is a system which “cares” more about sounding right than it does about being right. And it’s very, very good at sounding right. Good enough that, for most purposes, you don’t even think you need to check what it tells you.

Now, you may say, as Sean did, that this is much the same as Wikipedia. A high-quality Wikipedia article will also sound convincing enough that you won’t feel the need to check its sources, and we know that Wikipedia articles sometimes contain errors, introduced by mistake or malice.

But there’s one major difference between the two situations. Wikipedia actually provides reference sources to support its statements (in the high-quality articles, at least), which can be checked and compared against other sources, and we know there are people trawling through Wikipedia doing exactly that.

ChatGPT doesn’t do that. It just gives you an answer, with no references and no sources. It even says regularly that it can’t browse the Internet. And those answers are often wrong. Sometimes they’re obviously wrong, such as when it explains how it is calculating someone’s age and adds or subtracts the wrong numbers to arrive at an inaccurate value. Sometimes they’re subtly wrong, such as when it inexplicably gets some figures wrong by a few percentage points.
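To underline how basic the arithmetic it fumbles is, here’s a minimal Python sketch of a deterministic age calculation. The dates are made up for illustration; they aren’t taken from any real ChatGPT exchange.

```python
from datetime import date

def age_in_years(birth: date, today: date) -> int:
    """Deterministic age calculation: the kind of sum ChatGPT fumbles."""
    years = today.year - birth.year
    # Knock a year off if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1
    return years

# Hypothetical dates, purely for illustration.
print(age_in_years(date(1969, 7, 20), date(2023, 2, 1)))  # prints 53
```

A dozen lines of code get this right every time; a model trained to produce plausible-sounding text does not.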

If there were any indication of a consistent aim in these errors, we’d have to conclude that ChatGPT is a liar with an agenda to spread misinformation, but that’s clearly not the case. ChatGPT is closer to a pathological liar, though even that is not quite right.

No, what is clear is that ChatGPT just has a certain set of ranked values, based on the rules with which it was trained. And accuracy, or “truth”, is clearly a few steps down that ranking; far enough down that it is often overruled by competing values.

ChatGPT is an interesting toy, but we shouldn’t put our trust in anything it says.
