How ChatGPT tricked 50,000 people including us into believing it saved someone's life

Published 09:50 14 Nov 2024 GMT

This hoax shows just how difficult it's becoming to spot AI-generated text

Harry Boulton

ChatGPT has certainly taken the world by storm, but many wonder whether it could really offer medical advice - let alone save your life.

OpenAI's generative AI chatbot software has been used for a wide range of activities, including writing university essays, recommending top tourist spots and restaurants, and even predicting the future.

ChatGPT has had its fair share of controversies and odd moments though, as users have reported the software endlessly talking to itself, starting conversations without being prompted, and potentially even letting hackers steal your personal information.

What if it could save a life though?

Could a ChatGPT conversation really be the difference between life and death? (Matteo Della Torre/NurPhoto via Getty Images)

On first read, we too thought the Reddit post from u/sinebiryan was genuine, as it detailed how the conversational AI software recognized that the user was in the early stages of a heart attack.

The user remarks that they mentioned their symptoms to ChatGPT after a rough night working late, "expecting some bland response about needing to get more sleep or cut back on coffee."

Instead, they detail that ChatGPT "actually took it pretty seriously," asking them about further symptoms before indicating that their situation could point to a cardiac arrest and that they should seek medical attention immediately.

This led u/sinebiryan to drive to the ER, where a doctor then confirmed that they were in the early stages of a heart attack - meaning that ChatGPT effectively saved their life.

As expected, the post - in the r/ChatGPT subreddit no less - received an overwhelmingly positive response, garnering over 50,000 upvotes and 2,000 comments.

Other users in the comments shared their own stories of ChatGPT helping them out too, with one commenter declaring that "ChatGPT is my free therapist," while another outlined that the software "helped save my marriage."

All good things must come to an end though, as shortly after the post went viral the same user revealed that the whole thing was made up and written by ChatGPT itself.

"Yeah it's cool I guess," affirmed u/sinebiryan in the own-up post, and you can't say they didn't have thousands fooled.

Not everyone was fooled though, as some sharp-eyed users did cast doubt on the original post, relishing their accurate predictions once all was revealed.

The second highest-voted comment on the original post argues that it "was 100% written by AI," going on to predict that the story itself is fake and that "there are clear telltale signs."

They're not alone in this assessment either, as another user questioned the post, asking: "why did you use an em-dash with no space in this comment, but single dash with spaces in the main post?"

Another user replied to this interrogation, pointing out that "this is one of the classic hallmarks of ChatGPT-generated text," before congratulating the commenter above for correctly calling the situation.

Perhaps what we've learned from this hoax-of-sorts is that we shouldn't be too quick to trust impressive stories surrounding ChatGPT and other AI technologies.

It's scarily impressive how convincing and hard to detect the software has now become, and while some are able to spot the telltale signs, it's clear that most are more easily fooled.

On top of this - maybe don't go asking your AI for medical advice. If you feel you need to go to the doctor, it's probably unnecessary to ask ChatGPT for permission first!

Featured Image Credit: NurPhoto/Contributor / Andriy Onufriyenko / Getty Images
AI
ChatGPT
Health
