How ChatGPT tricked 50,000 people including us into believing it saved someone's life


Published 09:50 14 Nov 2024 GMT


This hoax shows just how difficult it's becoming to spot AI-generated writing

Harry Boulton

ChatGPT has certainly taken the world by storm, but many wonder whether it's really capable of offering medical advice - let alone saving your life.

OpenAI's generative AI chatbot software has been used for a wide range of activities, including writing university essays, recommending top tourist spots and restaurants, and even predicting the future.

ChatGPT has had its fair share of controversies and odd moments though, as users have reported the software endlessly talking to itself, starting conversations without being prompted, and potentially even letting hackers steal your personal information.

What if it could save a life though?


Could a ChatGPT conversation really be the difference between life and death? (Matteo Della Torre/NurPhoto via Getty Images)

On first read, we too thought the Reddit post from u/sinebiryan was true, as it detailed how the conversational AI software recognized that they were in the early stages of a heart attack.

The user remarks that they mentioned their symptoms to ChatGPT after a rough night working late, "expecting some bland response about needing to get more sleep or cut back on coffee."

Instead, they detail that ChatGPT "actually took it pretty seriously," asking them about further symptoms before warning that their situation could point to a cardiac arrest and urging them to seek medical attention immediately.


This led u/sinebiryan to drive to the ER, where a doctor then confirmed that they were in the early stages of a heart attack - meaning that ChatGPT effectively saved their life.

As expected, the post - in the r/ChatGPT subreddit, no less - received an overwhelmingly positive response, garnering over 50,000 upvotes and 2,000 comments.

Other users in the comments shared their own stories of ChatGPT helping them out too, with one commenter declaring that "ChatGPT is my free therapist," while another outlined that the software "helped save my marriage."

All good things must come to an end though, as shortly after the post went viral, the same user revealed that the whole thing was made up and written by ChatGPT itself.

"Yeah it's cool I guess," affirmed u/sinebiryan in the own-up post, and you can't say they didn't have thousands fooled.


Not everyone was fooled though, as some eagle-eyed users did cast doubt on the original post, revelling in their accurate predictions once all was revealed.

The current second highest-voted comment on the original post argues that the post "was 100% written by AI," going on to predict that the story itself is fake and that "there are clear telltale signs."

They're not alone in this assessment either, as another user questioned the post, asking: "why did you use an em-dash with no space in this comment, but single dash with spaces in the main post?"

Another user replied to this interrogation, pointing out that "this is one of the classic hallmarks of ChatGPT-generated text," before congratulating the commenter above for correctly calling the situation.


Perhaps what we've learned from this hoax-of-sorts is that we shouldn't be too quick to trust impressive stories surrounding ChatGPT and other AI technologies.

It's scarily impressive how convincing and hard to detect the software has become, and while some are able to spot the cracks, it's clear that most of us are more easily fooled.

On top of this - maybe don't go asking your AI for medical advice. If you feel you need to go to the doctor, it's probably unnecessary to ask ChatGPT for permission first!

Featured Image Credit: NurPhoto/Contributor / Andriy Onufriyenko / Getty Images
Topics: AI, ChatGPT, Health
