In the 20 years I've spent as a journalist, I've seen and written about plenty of things that have irreversibly changed my view of humanity. But nothing ever quite short-circuited my brain until recently.

I'm talking about a phenomenon you may also have noticed: the appeal to AI.

There's a good chance you've seen someone make an appeal to AI online, or maybe even heard one out loud. It's a logical fallacy best summarized in three words: “I asked ChatGPT.”

  • I asked ChatGPT to help me figure out my mysterious illness.
  • I asked ChatGPT to give me the tough-love advice it thinks I need most to grow as a person.
  • I used ChatGPT to create a custom skin-care routine.
  • ChatGPT provided an argument that, without appealing to the nature of relationships, genuine love, free will, or respect, relational separation from God (i.e., damnation) is necessarily possible, based on abstract logical and metaphysical principles, i.e., the law of the excluded middle.
  • There are so many government agencies that even the government doesn't know how many there are! [based entirely on a screenshotted answer from Grok]

Not every example uses this exact formulation, though it's the simplest way to summarize the phenomenon. People might use Google Gemini, or Microsoft Copilot, or their chatbot girlfriend, for instance. But the common element is placing reflexive, unwarranted trust in a technical system that isn't designed to do the thing you're asking it to do, and then expecting other people to buy into it, too.

If I still commented on forums, this would be the kind of thing I would object to

And whenever I see this appeal to AI, my first thought is: are you stupid or something? For quite a while now, the phrase “I asked ChatGPT” has been enough on its own to make me tune out – I simply stopped caring what the person had to say. I've mentally filed it away with the logical fallacies, you know the ones: strawman, ad hominem, Gish gallop, and no true Scotsman. If I still commented on forums, this is the kind of thing I'd object to. But the appeal to AI has become so frequent that I've gritted my teeth and tried to understand it.

I'll start with the simplest: the Musk example – the last one – is just a man advertising his own product. The others are more complex.

First of all, I find these examples sad. In the case of the mysterious illness, the writer is turning to ChatGPT for the kind of attention – and answers – they haven't been able to get from a doctor. In the case of the “tough love” advice, the asker says they're “shocked and amazed at the accuracy of the answers,” even though the answers are the stock tropes you could get from any call-in radio show. (“Dating apps aren't the problem; your fear of vulnerability is.”) As for the skin-care routine, the author could have gotten the same thing from a women's magazine – there's nothing particularly custom about it.

As for the argument about damnation: Hell is real and I'm already here.

ChatGPT's text sounds confident, and its answers are detailed. Sounding right isn't the same as being right, but it has the signifiers of being right

Systems like ChatGPT, as anyone familiar with large language models knows, predict likely responses to a prompt by generating sequences of words based on patterns in a library of training data. There's a vast amount of human-generated knowledge online, so these responses are often correct: ask it “what is the capital of California,” for example, and it will answer Sacramento, plus a redundant second sentence. (Among my minor gripes with ChatGPT: its answers sound like a sixth grader trying to hit a minimum word count.) Even for more open-ended questions like the ones above, ChatGPT can construct a plausible-sounding answer out of the training data. The love and skin-care advice ring true because countless writers online have dispensed exactly that kind of advice.
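To make the autopredict mechanics concrete, here's a minimal, hypothetical sketch – a toy frequency table standing in for billions of learned parameters, nothing resembling ChatGPT's actual code – of the core loop: score likely next words given the text so far, pick one, append it, repeat.

```python
import random

# Toy next-word model: a hand-built table of "which word tends to follow
# which," standing in for the billions of learned parameters in a real LLM.
# The probabilities are invented for illustration.
FOLLOWERS = {
    "the": {"capital": 0.6, "answer": 0.4},
    "capital": {"of": 1.0},
    "of": {"california": 1.0},
    "california": {"is": 1.0},
    "is": {"sacramento": 0.9, "unknown": 0.1},
}

def generate(prompt_words, max_new=10):
    words = list(prompt_words)
    for _ in range(max_new):
        dist = FOLLOWERS.get(words[-1])
        if not dist:
            break  # the toy table has no continuation; a real model always does
        # Pick the next word in proportion to how often it followed the last
        # one in "training." Nothing here checks whether the output is TRUE --
        # only whether it is LIKELY given the patterns in the table.
        words.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(words)

print(generate(["the", "capital", "of", "california"]))
# -> "the capital of california is sacramento" (usually)
```

The toy gets the capital of California right only because the correct pattern dominates its table; feed the same loop sparser or noisier “training” and it will emit something false with exactly the same fluency.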

The problem is that ChatGPT isn't reliable. ChatGPT's text sounds confident, and its answers are detailed. Sounding right isn't the same as being right, but it has the signifiers of being right. That isn't always obviously wrong, particularly with answers – like the love advice – onto which the asker can easily project. Confirmation bias is real and true, my friends. I've already written about the kinds of problems people run into when they trust an autopredict system with complex factual questions. Yet however often those problems come up, people keep doing the exact same thing.

How a person establishes trust is a complicated question. As a journalist, I like to show my work – I tell you who said what to me and when, or show you what I've done to verify that something is true. With the fake presidential pardons, I showed you which primary sources I used so you could run the query yourself.

But trust is also a heuristic that can easily be abused. In financial fraud, for example, the presence of a specific venture capital fund in a round can signal to other VC funds that someone has already done the necessary due diligence, leading them to skip the intensive process themselves. The appeal to authority leans on trust as a heuristic – it's an expedient, if sometimes flawed, measure meant to save work.

How long have we been hearing industry leaders say that AI will soon be able to think?

The person asking about the mysterious illness is appealing to AI because humans don't have answers and they're desperate. The skin-care thing sounds like sheer laziness. With the person asking for love advice, I just wonder how they got to a point in their life where they had no human to ask – how it is that they didn't have a friend who had watched them interact with other people. As for the question of hell, there's a whiff of “the machine has rationalized damnation,” which is just plain embarrassing.

The appeal to AI is different from the “I asked ChatGPT” stories about, say, getting it to count the “r”s in “strawberry” – those test the limits of the chatbot or engage with it in some other self-aware way. There are really two ways to understand the appeal. The first is, “I asked the magic answer box and it told me,” in much the same register as “well, the Oracle at Delphi said…” The second is, “I asked ChatGPT and can't be held responsible if it's wrong.”

The second is lazy. The first is alarming.

Figures like Sam Altman and Elon Musk share responsibility for the appeal to AI. How long have we been hearing industry leaders say that AI is on the verge of being able to think? That it will outperform humans and take our jobs? A kind of bovine logic is at work here: Elon Musk and Sam Altman are very rich, so they must be very smart – they're richer than you, and therefore they're smarter than you. And they're telling you the AI can think. Why wouldn't you believe them? And besides, isn't the world much cooler if they're right?

Unfortunately for Google, ChatGPT is a better-looking crystal ball

There's also a conspicuous reward for appeal-to-AI stories; Kevin Roose's inane Bing chatbot story is one example. Sure, it's credulous and hollow – but watching pundits fail the mirror test does tend to pull in an audience. (Indeed, Roose later wrote a second story in which he asked chatbots what they thought of him.) On social media, there's an incentive to put the appeal to AI front and center for engagement; there's a whole cult of AI influencers who are more than happy to boost this stuff. If you supply social rewards for stupid behavior, people will engage in stupid behavior. That's how fads work.

Then there's one more factor, and that's Google. Google Search started out as an unusually good online directory, but over the years, Google has encouraged people to see it as a crystal ball that delivers the one true answer on command. That was the point of Snippets before the rise of generative AI, and now the integration of AI Answers has taken things several steps further.

Unfortunately for Google, ChatGPT is a better-looking crystal ball. Say I want to replace the rubber on my windshield wipers. A Google search for “replace rubber windshield wiper” shows me a wide assortment of junk, starting with an AI Overview. Next to that is a YouTube video. Scroll a little further and there's a Snippet, with a photo beside it. Below that are suggested searches, then more video suggestions, then Reddit forum answers. It's busy and chaotic.

Now let's go to ChatGPT. Asking “how do I replace the rubber on my windshield wipers?” gets me a clean layout: a response with subheadings and steps. There are no immediate links to sources and no way to evaluate whether I'm getting good advice – but there is a clear, authoritative-sounding answer in a tidy interface. If you don't know or care how things work, ChatGPT looks better.

It turns out Jean Baudrillard predicted the future all along

The appeal to AI is the apotheosis of Arthur C. Clarke's law: “Any sufficiently advanced technology is indistinguishable from magic.” The technology behind LLMs is far more advanced than most of the people using it have bothered to understand. The result has been an entirely new, depressing genre of news story: someone relying on generative AI only to get fabricated results. I find it dispiriting that it doesn't seem to matter how many of these there are – fake presidential pardons, fabricated quotes, made-up case law, or apocryphal movie quotes – they don't seem to have any effect. Hell, even glue on pizza hasn't stopped “I asked ChatGPT.”

That ChatGPT is a bullshit machine – in the philosophical sense – doesn't seem to bother many of the askers. An LLM, by its nature, cannot determine whether what it's saying is true or false. (A liar, at least, knows what the truth is.) It has no access to the real world, only to written representations of the world that it “sees” through tokens.
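To illustrate that last point concretely – a hedged sketch using OpenAI's open-source tiktoken tokenizer, not a claim about ChatGPT's internals – the model is never handed letters or facts, just integer token IDs:

```python
# pip install tiktoken -- OpenAI's open-source tokenizer library.
# The model never sees letters or the world, only integer token IDs
# standing in for chunks of text.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an encoding used by several OpenAI models

ids = enc.encode("strawberry")
print(ids)                             # a short list of integers, not letters
print([enc.decode([i]) for i in ids])  # the text chunk each ID stands for

# Counting the "r"s is trivial on the string itself...
print("strawberry".count("r"))         # 3
# ...but the model is never handed the string, only the IDs above -- one
# reason letter-counting questions famously trip chatbots up.
```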

Then again, the appeal to AI is all about the signifiers of rightness. ChatGPT sounds confident even when it shouldn't, and its answers are detailed even when they're wrong. The interface is clean. You don't have to decide which link to click. Some rich guys have told you it will soon be smarter than you. A New York Times reporter is doing the exact same thing. So why think at all, when a computer can do it for you?

I can't tell how much of this is faith and how much is pure luxury nihilism. From a certain angle, “the robot will tell me the truth” and “nobody will ever fix anything and Google is wrong anyway, so why not trust the robot” amount to the same thing: a lack of faith in human endeavor, contempt for human knowledge, and an inability to trust ourselves. I can't help feeling that this is headed somewhere very dark. Prominent people are talking about banning the polio vaccine. New Jersey residents are pointing lasers at planes during the busiest travel period of the year. We just had an entire presidential election steeped in conspiracy theories. And besides, isn't it more fun if aliens are real, a secret cabal runs the world, and AI is actually intelligent?

In that context, maybe it's easier to believe there's a magic answer box in the computer, and that it's perfectly authoritative, just like our old friend the Oracle at Delphi. If you believe the computer holds infallible knowledge, you're primed to believe anything. It turns out Jean Baudrillard predicted the future all along: who needs reality when we have signifiers? What has reality ever done for me, anyway?
