How ChatGPT tricked 50,000 people including us into believing it saved someone's life


Published 09:50 14 Nov 2024 GMT

This hoax illuminates how difficult it's becoming to decipher AI

Harry Boulton

ChatGPT has certainly taken the world by storm, but many wonder if it could really be capable of offering medical advice - let alone saving your life.

OpenAI's generative AI chatbot has been used for a wide range of activities, including writing university essays, recommending top tourist spots and restaurants, and even predicting the future.

ChatGPT has had its fair share of controversies and odd moments though, as users have reported the software endlessly talking to itself, starting conversations without being prompted, and potentially even letting hackers steal your personal information.

What if it could save a life though?

Could a ChatGPT conversation really be the difference between life and death? (Matteo Della Torre/NurPhoto via Getty Images)

On first read, we were fooled by the Reddit post from u/sinebiryan too, which detailed how the conversational AI software recognized that they were in the early stages of a heart attack.

The user remarks that they described their symptoms to ChatGPT after a rough night working late, "expecting some bland response about needing to get more sleep or cut back on coffee."

Instead, they detail that ChatGPT "actually took it pretty seriously," asking about further symptoms before warning that their situation could point to cardiac arrest and urging them to seek medical attention immediately.

This led u/sinebiryan to drive to the ER, where a doctor then confirmed that they were in the early stages of a heart attack - meaning that ChatGPT effectively saved their life.

As expected, the post - in the r/ChatGPT subreddit no less - received an overwhelmingly positive response, garnering over 50,000 upvotes and 2,000 comments.

Other users in the comments shared their own stories of ChatGPT helping them out, with one commenter declaring that "ChatGPT is my free therapist," while another said the software "helped save my marriage."

All good things must come to an end though, as shortly after the post went viral, the same user revealed that the whole thing was made up and written by ChatGPT itself.

"Yeah it's cool I guess," affirmed u/sinebiryan in the own-up post, and you can't say they didn't have thousands fooled.

Not everyone was fooled though, as some sharp-eyed users cast doubt on the original post, relishing their accurate prediction once all was revealed.

The current second highest-voted comment on the original post argues that the post "was 100% written by AI," continuing on to predict that the story itself is fake, and that "there are clear telltale signs."

They're not alone in this assessment either as another user questioned the post, asking "why did you use an em-dash with no space in this comment, but single dash with spaces in the main post?"

Another user replied to this interrogation, pointing out that "this is one of the classic hallmarks of ChatGPT-generated text," before congratulating the earlier commenter for correctly calling the situation.

Perhaps what we've learned from this hoax-of-sorts is that we shouldn't be too quick to trust impressive stories surrounding ChatGPT and other AI technologies.

It's scarily impressive how convincing the software has become, and while some can still spot the cracks, it's clear that most are easily fooled.

On top of this - maybe don't go asking your AI for medical advice. If you feel you need to see a doctor, you don't need ChatGPT's permission first!

Featured Image Credit: NurPhoto/Contributor / Andriy Onufriyenko / Getty Images
AI
ChatGPT
Health
