How ChatGPT tricked 50,000 people including us into believing it saved someone's life


Published 09:50 14 Nov 2024 GMT


This hoax illuminates how difficult it's becoming to decipher AI

Harry Boulton

ChatGPT has certainly taken the world by storm, but many wonder if it could really be capable of offering medical advice - let alone saving your life.

OpenAI's generative AI chatbot has been used for a wide range of activities, including writing university essays, recommending top tourist spots and restaurants, and even predicting the future.

ChatGPT has had its fair share of controversies and odd moments though, with users reporting the software endlessly talking to itself, starting conversations without being prompted, and potentially even letting hackers steal personal information.

What if it could save a life though?


Could a ChatGPT conversation really be the difference between life and death? (Matteo Della Torre/NurPhoto via Getty Images)

On first read, we too believed the Reddit post from u/sinebiryan, which detailed how the conversational AI software recognized that they were in the early stages of a heart attack.

The user remarks that they mentioned their symptoms to ChatGPT after a rough night working late, "expecting some bland response about needing to get more sleep or cut back on coffee."

Instead, they detail that ChatGPT "actually took it pretty seriously," asking them about further symptoms before warning that their situation could point to a cardiac arrest and urging them to seek medical attention immediately.

This led u/sinebiryan to drive to the ER, where a doctor then confirmed that they were in the early stages of a heart attack - meaning that ChatGPT effectively saved their life.

As expected, the post - in the r/ChatGPT subreddit, no less - received an overwhelmingly positive response, garnering over 50,000 upvotes and 2,000 comments.

Other users in the comments shared their own stories of ChatGPT helping them out too, with one commenter declaring that "ChatGPT is my free therapist," while another said the software "helped save my marriage."

All good things must come to an end though: shortly after the post went viral, the same user revealed that the whole thing was made up - and written by ChatGPT itself.

"Yeah it's cool I guess," wrote u/sinebiryan in the post owning up to the hoax - and you can't say they didn't have thousands fooled.

Not everyone was fooled though, as some sharp-eyed users did cast doubt on the original post, relishing their accurate predictions once all was revealed.

The second-highest-voted comment on the original post argues that the post "was 100% written by AI," going on to predict that the story itself is fake and that "there are clear telltale signs."

They're not alone in this assessment either, as another user questioned the post, asking: "why did you use an em-dash with no space in this comment, but single dash with spaces in the main post?"

Another user replied to this interrogation, pointing out that "this is one of the classic hallmarks of ChatGPT-generated text," before congratulating the earlier commenter for correctly calling the situation.

Perhaps what we've learned from this hoax-of-sorts is that we shouldn't be too quick to trust impressive stories surrounding ChatGPT and other AI technologies.

It's scary how convincing and undetectable the software has now become, and while some are able to see through the cracks, it's clear that most are easily fooled.

On top of this - maybe don't go asking your AI for medical advice. If you feel you need to go to the doctor, you probably don't need to ask ChatGPT for permission first!

Featured Image Credit: NurPhoto/Contributor / Andriy Onufriyenko / Getty Images
AI
ChatGPT
Health

