• News
    • Tech News
    • AI
  • Gadgets
    • Apple
    • iPhone
  • Gaming
    • Playstation
    • Xbox
  • Science
    • News
    • Space
  • Streaming
    • Netflix
  • Vehicles
    • Car News
  • Social Media
    • WhatsApp
    • YouTube
  • Advertise
  • Terms
  • Privacy & Cookies
  • LADbible Group
  • LADbible
  • UNILAD
  • SPORTbible
  • GAMINGbible
  • Tyla
  • FOODbible
  • License Our Content
  • About Us & Contact
  • Jobs
  • Latest
  • Topics A-Z
  • Authors
Facebook
Instagram
X
TikTok
Snapchat
WhatsApp
Submit Your Content
Hacker plants false memories in ChatGPT to prove how easy it is to steal user data

Home> News> AI

Published 12:04 11 Nov 2024 GMT

Hacker plants false memories in ChatGPT to prove how easy it is to steal user data

Exploit in OpenAI's chat software could cause troubling circumstances

Harry Boulton

Harry Boulton

ChatGPT and other AI models have been accused of plagiarizing content since their popularity boom, but you might now need to be worried about them stealing your data.

Since its launch in 2022, OpenAI's ChatGPT has become synonymous with AI and machine learning, allowing users to generate text, translate information, and even build a conversation with the software.

It's inevitable that the service has expanded and improved over time, and it's even got to the point where it can rather creepily message users first.

Advert

However, one dedicated hacker has revealed an exploit in ChatGPT's new 'Memory' technology that not only allows you to implant false information into its storage, but also export that to an exterior source, effectively 'stealing' user data.

Exploit in ChatGPT could let hackers steal your data (Sebastien Bozon/AFP via Getty Images)
Exploit in ChatGPT could let hackers steal your data (Sebastien Bozon/AFP via Getty Images)

As reported by Ars Technica, cybersecurity researcher Johann Rehberger initially reported a vulnerability with ChatGPT's 'Memory' feature that was widely introduced in September 2024.

The feature in question allows ChatGPT to story and effectively 'remember' key personal information between conversations that has been discussed by the user. This can include their age, gender, philosophical beliefs, and much more.

Advert

OpenAI claim that this "makes future chats more helpful," as it means that you don't have to repeat the same information and context every time you start a new conversation as the software can intelligently 'remember' who you are.

The issue with this is that Rehberger realized that you could create and permanently store new fake memories within ChatGPT through a prompt injection exploit.

He managed to get ChatGPT to believe that he was 102 years old and lived in the Matrix, alongside having the chatbot convinced that the earth is flat - something even flat earthers aren't even good at!

The troubling aspect beyond this is that Rehberger, in an extensive proof of concept, was able to export these fake memories to an external website, effectively stealing the data that would otherwise remain private.

Advert

While OpenAI initially dismissed Rehberger's report showing the ability to create false memories, the company has since issued a patch that prevents ChatGPT from moving information from it's server. The ability to create false memories, however, still remains.

This issue raises continued concerns about the security of AI software like ChatGPT, and that sentiment is shared across social media too.

A recent post on the r/ChatGPT subreddit expresses these worries too.

The poster posits the question of whether anyone else is "concerned about how much ChatGPT (and more importantly, OpenAI) know about you," and this recent security flaw certain emboldens these fears.

Advert

Some are willing to gloss over any issues though, with one commenter claiming that "it crosses my mind on occasion, but given the internet already knows so much about me, I think the good ship Privacy has already sailed."

Considering there's worries that even our air fryers are selling our data, perhaps ChatGPT isn't the only place we should be looking.

Featured Image Credit: SEBASTIEN BOZON / Contributor / Chad Baker / Getty
Cybersecurity
ChatGPT
AI
Tech News

Advert

Advert

Advert

Choose your content:

11 hours ago
12 hours ago
  • Instagram/@taylorswiftInstagram/@taylorswift
    11 hours ago

    MrBeast makes bizarre prediction following Taylor Swift's engagement to Travis Kelce

    Travis Kelce reportedly proposed to Taylor Swift around two weeks ago in a garden at Lee’s Summit, Missouri

    News
  • 3alexd/Getty3alexd/Getty
    12 hours ago

    Research reveals US is missing out on a hidden goldmine that could power millions of EVs

    The wasted elements could change the nation's position in the EV market

    News
  • Jakub Porzycki/NurPhoto via Getty ImagesJakub Porzycki/NurPhoto via Getty Images
    12 hours ago

    Man asked ChatGPT to count to 1,000,000 and was shocked with the chatbot's reply

    The AI chatbot seems to skirt around the question

    News
  • Raine Family via TODAY YouTubeRaine Family via TODAY YouTube
    12 hours ago

    ChatGPT chatbot's surprising response after parents of teen who died by suicide sue OpenAI

    OpenAI has come out with a statement on the tragic case

    News
  • Shocking data reveals 'true cost' of being polite to ChatGPT
  • ChatGPT urges user to warn the public as it makes shock admission that it's trying to 'break' people
  • Terrifying signs that the AI 'bubble' is about to burst
  • Reddit user shares 'wild' ChatGPT prompt that gives you a full CIA intelligence report about your life