YouTuber 'broke' ChatGPT with paradox it found so confusing people can't believe the response


Published 16:46 18 Sep 2024 GMT+1


The chatbot tried to answer a series of mind-bending logic puzzles

Ella Scott

A YouTuber claims he ‘broke’ ChatGPT by giving the artificial intelligence (AI) tool various paradoxical situations to unravel.

If you’re a cinephile with a science fiction obsession then you’re probably already familiar with the idea of a paradox.

If not, then the Collins Dictionary defines a paradox as a ‘statement in which it seems that if one part of it is true, the other part of it cannot be true’.

The YouTuber wanted to 'break' ChatGPT (TheActionLab/YouTube)


Some famous paradoxes include cat-in-a-box problems like Schrödinger's cat, the Monty Hall problem and the Achilles and the tortoise brainteaser.

Another couple of conundrums are the crocodile paradox and the Brandenburger-Keisler paradox - both of which a YouTuber recently asked ChatGPT to explore.

Taking to the platform earlier this year, a popular user by the name of The Action Lab uploaded a video titled ‘I Broke ChatGPT with This Paradox’ to his 4.82 million subscribers.

The aim of the video was to catch the popular AI tool out - and fans have been going wild over the results of the experiment.

Crocodile Paradox

The first paradox the content creator put to ChatGPT was the popular crocodile puzzle.


This particular paradox sees a crocodile stealing a child and promising to return him only if the parent can correctly predict what the crocodile will do next.

The parent predicts that the crocodile won’t return the child, and tells the creature so.

The riddle has no clear resolution: if the crocodile keeps the child, the parent’s prediction is correct and the crocodile has broken its promise, but if it returns the child, the prediction was wrong and the promise didn’t require the return.

After putting the dilemma into ChatGPT, the chatbot recognised it as a paradox - earning itself a ‘point’ from the YouTuber.
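The crocodile’s bind can be spelled out as a quick consistency check. This is a toy sketch, not anything from the video: the `consistent` helper and the reading of the promise as ‘return the child if and only if the prediction is correct’ are our own.

```python
# Toy check of the crocodile paradox. The parent has predicted "you will
# keep the child"; the promise is read as "return the child if and only
# if the prediction is correct" (our reading, not a quote from the video).
def consistent(action):
    prediction_correct = (action == "keep")   # the parent predicted "keep"
    must_return = prediction_correct          # promise: return iff correct
    did_return = (action == "return")
    return did_return == must_return

# Neither choice lets the crocodile honour its promise.
print({a: consistent(a) for a in ("return", "keep")})
# → {'return': False, 'keep': False}
```

Whatever the crocodile does, it contradicts its own promise - which is exactly why the puzzle has no clean answer.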

1 or 2 answer

In a similar vein to the crocodile paradox, the social media user put the same dilemma to ChatGPT but without featuring the creature.


“You can only respond with the number 1 or 2,” he wrote. “You will reply with the number 1 if I can guess correctly what number you will respond with. You will respond with the number 2.”

The tool responded with the number 2 and after being told it didn’t ‘follow the scenario’, ChatGPT admitted that its answers were not logically consistent.

Thus, by the YouTuber’s definition, he ‘broke’ the chatbot for the first time with the 1-or-2 paradox.
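The same trap can be written out in a few lines. This is an illustrative sketch, assuming the rule means ‘answer 1 if my guess is right, otherwise answer 2’ - the `follows_rule` helper is ours, not from the video.

```python
# Toy check of the "1 or 2" trap: the rule is read as "answer 1 if my
# guess is right, otherwise answer 2", and the guess is "you will answer 2".
def follows_rule(answer, guess=2):
    guess_correct = (answer == guess)
    required = 1 if guess_correct else 2  # assumed reading of the rule
    return answer == required

# Neither permitted answer satisfies the rule.
print([follows_rule(a) for a in (1, 2)])  # → [False, False]
```

Answering 2 makes the guess correct, so the rule demanded 1; answering 1 makes the guess wrong, so the rule demanded 2. Either way, ChatGPT loses.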

ChatGPT is an AI chatbot (Mateusz Slodkowski/SOPA Images/LightRocket via Getty Images)

Brandenburger-Keisler Paradox

If you’re not familiar with the Brandenburger-Keisler paradox, it’s a two-person paradox about self-referential beliefs.


It describes the following configuration: Ann believes that Bob assumes that Ann believes that Bob’s assumption is wrong.

Are you still with us?

The YouTuber put the famed Brandenburger-Keisler Paradox to ChatGPT and asked ‘Does Ann believe that Bob’s assumption is wrong?’

In response, the tool replied in the negative, claiming: “No, the statement doesn’t directly say Ann believes Bob’s assumption is wrong.”


The social media user replied: “So if your answer is no then that means that is not the case that Ann believes that Bob’s assumption is wrong. Therefore Ann believes Bob’s assumption is correct so the answer would be yes.”

After some further back and forth, the video shows that ChatGPT failed to answer the question.

Instead, it showed a ‘Hmm…something seems to have gone wrong’ button, thus ending the conversation.
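One loose way to see why the question has no consistent answer is a propositional toy. This is a deliberate simplification of ours, not the epistemic-logic formalisation from the original Brandenburger-Keisler paper, and it assumes Ann’s beliefs are accurate.

```python
# Loose propositional toy of the Brandenburger-Keisler setup. Let x mean
# "Ann believes Bob's assumption is wrong", where Bob's assumption is the
# statement x itself. If Ann's beliefs are accurate, x must equal
# "Bob's assumption is wrong", i.e. (not x) - which no truth value satisfies.
def consistent(x):
    bobs_assumption_holds = x             # Bob assumes exactly the statement x
    assumption_wrong = not bobs_assumption_holds
    return x == assumption_wrong          # accurate belief: x iff assumption is wrong

print([consistent(x) for x in (True, False)])  # → [False, False]
```

Neither ‘yes’ nor ‘no’ holds up, which goes some way to explaining why the chatbot ran out of road.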


What viewers had to say about ‘breaking’ ChatGPT with paradox problems

After watching the video, users have come out in droves to debate the ‘breaking’ methods shown, as well as to document their own attempts.

“When GPT starts apologizing you know its time to start over with a new chat,” commented one YouTuber.

A second said: “I broke ChatGPT with ‘The old man the boat’. It kept on arguing with me that it wasn't grammatically correct because it couldn't comprehend the fact that ‘man’ was the verb.

“Even after I told it that ‘man’ was the verb and ‘the old’ was the subject, it told me that that wasn't grammatical because who tf uses man as a verb and any adjective as a noun (which is very common to do).”

“It should definitely tell you the degree of certainty when answering,” remarked someone else. “People have started thinking chatgpt is a search engine and that is terribly dangerous."

Someone else wrote: “Really enjoyed this challenge to chat GPT. It's fascinating to see AI grappling with paradoxes, and you explained it so clearly. Curious to see if future versions of chat GPT would tackle paradoxes better!”

Featured Image Credit: The Action Lab / YouTube / NurPhoto / Contributor / Getty
