Here’s a new way to lose an argument online: the appeal to AI

Over the course of the last 20ish years spent as a journalist, I have seen and written about a number of things that have irrevocably changed my view of humanity. But it was not until recently that something made me simply short-circuit.

I am talking about a phenomenon you might also have noticed: the appeal to AI.

There’s a good chance you have seen somebody use the appeal to AI online, or even heard it aloud. It’s a logical fallacy best summed up in three words: “I asked ChatGPT.”

  • I asked ChatGPT to help me figure out my mystery illness.
  • I asked ChatGPT to give me tough love advice they think I need the most to grow as a person.
  • I used ChatGPT to create a custom skin routine.
  • ChatGPT provided an argument that relational estrangement from God (i.e., damnation) is necessarily possible, based on abstract logical and metaphysical principles, i.e. Excluded Middle, without appealing to the nature of relationships, genuine love, free will, or respect.
  • So many government agencies exist that even the government doesn’t know how many there are! [based entirely on an answer from Grok, which is screenshotted]

Not all examples use this exact formulation, though it’s the simplest way to summarize the phenomenon. People might use Google Gemini, or Microsoft Copilot, or their chatbot girlfriend, for instance. But the common element is placing reflexive, unwarranted trust in a technical system that isn’t designed to do the thing you’re asking it to do, and then expecting other people to buy into it too.

And every time I see this appeal to AI, my first thought is the same: Are you fucking stupid or something? For some time now, “I asked ChatGPT” as a phrase has been enough to make me pack it in — I have no further interest in what that person has to say. I’ve mentally filed it alongside the logical fallacies, you know the ones: the strawman, the ad hominem, the Gish gallop, and the no true Scotsman. If I still commented on forums, this would be the kind of thing I’d flame. But the appeal to AI is starting to happen so often that I am going to grit my teeth and try to understand it.

I’ll start with the simplest: The Musk example — the last one — is a man advertising his product and engaging in propaganda simultaneously. The others are more complex.

Mostly, I find these examples sad. In the case of the mystery illness, the writer turns to ChatGPT for the kind of attention — and answers — they have been unable to get from a doctor. In the case of the “tough love” advice, the querent says they’re “shocked and surprised at the accuracy of the answers,” even though the answers are all generic twaddle you can get from any call-in radio show, right down to “dating apps aren’t the problem, your fear of vulnerability is.” In the case of the skin routine, the writer might as well have gotten one from a women’s magazine — there’s nothing especially bespoke about it.

As for the argument about damnation: hell is real and I am already here.

Systems like ChatGPT, as anyone familiar with large language models knows, predict likely responses to prompts by generating sequences of tokens, one after another, based on statistical patterns in their training data. There is a huge amount of human-created information online, and so these responses are frequently correct: ask it “what is the capital of California,” for instance, and it will answer with Sacramento, plus another unnecessary sentence. (Among my minor objections to ChatGPT: its answers sound like a sixth grader trying to hit a minimum word count.) Even for more open-ended queries like the ones above, ChatGPT can construct a plausible-sounding answer based on training data. The love and skin advice are generic because countless writers online have given advice exactly like that.
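If you want to see what “predict likely responses based on patterns” means mechanically, here is a minimal sketch: a toy bigram model in Python, with an invented three-sentence corpus standing in for the training data. (This illustrates the statistical idea only; it is nothing like ChatGPT’s actual architecture.)

```python
import random
from collections import Counter, defaultdict

# Toy "training data": stands in for the web-scale text a real LLM learns from.
corpus = (
    "the capital of california is sacramento . "
    "the capital of france is paris . "
    "the capital of texas is austin ."
).split()

# Count which word follows which word: the crudest possible version of
# "patterns in the training data."
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt_word: str, length: int = 6) -> str:
    """Repeatedly emit a statistically likely next word. No lookup, no facts,
    just frequencies."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Sample in proportion to how often each continuation was seen.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("capital"))
# One possible output: "capital of california is sacramento . the"
# A right answer produced for the wrong reason: the model has no concept
# of a capital, only of what tends to come next.
```

A real model predicts subword tokens with a neural network conditioned on far longer context, but the epistemic situation is the same: it optimizes for likelihood, not truth.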

The problem is that ChatGPT isn’t trustworthy. ChatGPT’s text sounds confident, and the answers are detailed. This is not the same as being right, but it has the signifiers of being right. It’s not always obviously incorrect, particularly when it comes to answers — as with the love advice — where the querent can easily project. Confirmation bias is real and true and my friend. I’ve already written about the kinds of problems people encounter when they trust an autopredict system with complex factual questions. Yet despite how often these problems crop up, people keep doing exactly that.

How one establishes trust is a thorny question. As a journalist, I like to show my work — I tell you who said what to me when, or show you what I’ve done to try to confirm something is true. With the fake presidential pardons, I showed you which primary sources I used so you could run a query yourself.

But trust is also a heuristic, one that can be easily abused. In financial frauds, for instance, the presence of a specific venture capital fund in a round may suggest to other venture capital funds that someone has already done the due diligence required, leading them to skip doing the intensive process themselves. An appeal to authority relies on trust as a heuristic — it’s a practical, if sometimes faulty, measure that can save work.

The person asking about the mystery illness is making an appeal to AI because humans don’t have answers and they’re desperate. The skincare thing seems like pure laziness. With the person asking for love advice, I just wonder how they got to the point in their lives where they had no human being to ask — how it was they didn’t have a friend who’d watched them interact with other people. With the question of hell, there’s a whiff of “the machine has deemed damnation logical,” which is just fucking embarrassing.

The appeal to AI is distinct from “I asked ChatGPT” stories about, say, getting it to count the “r”s in “strawberry” — it’s not testing the limits of the chatbot or engaging with it in some other self-aware way. There are maybe two ways of understanding it. The first is “I asked the magic answer box and it told me,” in much the tone of “well, the Oracle at Delphi said…” The second is, “I asked ChatGPT and can’t be held responsible if it is wrong.”

The second one is lazy. The first is alarming.

Sam Altman and Elon Musk, among others, share responsibility for the appeal to AI. How long have we listened to captains of industry say that AI is going to be capable of thinking soon? That it’ll outperform humans and take our jobs? There’s a kind of bovine logic at play here: Elon Musk and Sam Altman are very rich, so they must be very smart — they are richer than you are, and so they are smarter than you are. And they are telling you that the AI can think. Why wouldn’t you believe them? And besides, isn’t the world much cooler if they are right?

There’s also a big attention reward for an appeal-to-AI story; Kevin Roose’s inane Bing chatbot story is a case in point. Sure, it’s credulous and hokey — but watching pundits fail the mirror test does tend to get people’s attention. (So much so, in fact, that Roose later wrote a second story where he asked chatbots what they thought about him.) On social media, there’s an incentive to put the appeal to AI front and center for engagement; there’s a whole cult of AI influencer weirdos who are more than happy to boost this stuff. If you provide social rewards for stupid behavior, people will engage in stupid behavior. That’s how fads work.

There’s one more thing and it is Google. Google Search began as an unusually good online directory, but for years, Google has encouraged seeing it as a crystal ball that supplies the one true answer on command. That was the point of Snippets before the rise of generative AI, and now, the integration of AI answers has taken it several steps further.

Unfortunately for Google, ChatGPT is a better-looking crystal ball. Let’s say I want to replace the rubber on my windshield wipers. The Google Search results for “replace rubber windscreen wiper” show me a wide variety of junk, starting with the AI Overview. Next to it is a YouTube video. If I scroll down further, there’s a snippet; next to it is a photo. Below that are suggested searches, then more video suggestions, then Reddit forum answers. It’s busy and messy.

Now let’s go over to ChatGPT. Asking “How do I replace rubber windscreen wiper?” gets me a cleaner layout: a response with sub-headings and steps. I don’t have any immediate link to sources and no way to evaluate whether I’m getting good advice — but I have a clear, authoritative-sounding answer on a clean interface. If you don’t know or care how things work, ChatGPT seems better.

The appeal to AI is the perfect example of Arthur C. Clarke’s third law: “Any sufficiently advanced technology is indistinguishable from magic.” The technology behind an LLM is sufficiently advanced because the people using it have not bothered to understand it. The result has been an entire new, depressing genre of news story: person relies on generative AI only to get made-up results. I also find it depressing that no matter how many of these there are — whether it’s fake presidential pardons, bogus citations, made-up case law, or fabricated movie quotes — they seem to make no impact. Hell, even the glue-on-pizza thing hasn’t stopped “I asked ChatGPT.”

That this is a bullshit machine — in Harry Frankfurt’s philosophical sense, speech produced with indifference to whether it’s true — doesn’t seem to bother a lot of querents. An LLM, by its nature, cannot determine whether what it’s saying is true or false. (At least a liar knows what the truth is.) It has no access to the actual world, only to written representations of the world that it “sees” through tokens.
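If you’re curious what those tokens look like, OpenAI publishes its tokenizer as the open-source tiktoken package; here is a quick sketch, assuming you have it installed. (The exact IDs don’t matter; the point is that the model never sees letters.)

```python
import tiktoken  # OpenAI's tokenizer library: pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4-era models

tokens = enc.encode("strawberry")
print(tokens)                             # a short list of integer IDs
print([enc.decode([t]) for t in tokens])  # the text chunk each ID stands for

# The model consumes these integer IDs, not letters, which is one reason
# counting the "r"s in "strawberry" is harder for it than it looks.
```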

So the appeal to AI, then, is the appeal to signifiers of authority. ChatGPT sounds confident, even when it shouldn’t, and its answers are detailed, even when they are wrong. The interface is clean. You don’t have to make a judgment call about what link to click. Some rich guys told you this was going to be smarter than you shortly. A New York Times reporter is doing this exact thing. So why think at all, when the computer can do that for you?

I can’t tell how much of this is blithe trust and how much is pure luxury nihilism. In some ways, “the robot will tell me the truth” and “nobody will ever fix anything and Google is wrong anyway so why not trust the robot” amount to the same thing: a lack of faith in the human endeavor, a contempt for human knowledge, and the inability to trust ourselves. I can’t help but feel this is going somewhere very dark. Important people are talking about banning the polio vaccine. Residents of New Jersey are pointing lasers at planes during the busiest travel period of the year. The entire presidential election was awash in conspiracy theories. Besides, isn’t it more fun if aliens are real, there’s a secret cabal running the world, and the AI is actually intelligent?

In this context, maybe it’s easy to believe there’s a magic answer box in the computer, and it’s totally authoritative, just like our old friend the Oracle at Delphi. If you believe the computer is infallibly knowledgeable, you’re ready to believe anything. It turns out the future was predicted by Jean Baudrillard all along: who needs reality when we have signifiers? What’s reality ever done for me, anyway?
