Experts issue warning that anonymous social media accounts could be exposed by AI



Even small personal details can be enough


While you might feel safe hiding behind an anonymous avatar or having private conversations with ChatGPT, we're quickly discovering that tech companies aren't keeping our digital activities as confidential as we thought.

With artificial intelligence advancing rapidly, that secret Instagram or Reddit account you use to keep personal interests away from friends and family could soon be discovered.

Scientists in Switzerland found that AI tools can match anonymous accounts with their public profiles at scale.

According to the study, people who share a lot of information about themselves online are most at risk, typically older or vulnerable users with limited knowledge of how to stay safe online.

AI could soon unmask your anonymous social media profile (Techa Tungateja/Getty)

Speaking to The Independent, lead researcher Daniel Paleka from ETH Zurich said their findings make it 'very clear' that 'if you keep posting under a pseudonym, keep quoting information about yourself,' AI tools will be able to unmask you.

In the study, the team built a system that used large language models (LLMs) to scan the web. The AI reasoned over publicly available evidence to link anonymous accounts to the public profiles whose personal details 'match'.

The data came from publicly available posts, including datasets from Hacker News and LinkedIn, transcripts of AI company Anthropic’s interviews with scientists, and deliberately created Reddit accounts (both anonymous and not).

The LLM successfully identified up to 68 percent of matching accounts with 90 percent accuracy. According to the researchers, this level of precision 'substantially outperforms' alternative investigative methods, such as those conducted by humans.
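The matching idea can be sketched in miniature. To be clear, this is not the researchers' system, which uses LLM reasoning over raw posts; the sketch below reduces each account to a hand-labelled set of attributes (location, employer, hobbies) and links accounts by attribute overlap. All names, attribute sets, and the confidence threshold are invented for illustration.

```python
# Toy deanonymization sketch: link anonymous accounts to public profiles
# by attribute overlap. Purely illustrative; not the ETH Zurich pipeline.

def match_score(anon_attrs: set[str], public_attrs: set[str]) -> float:
    """Jaccard similarity between two attribute sets (0.0 to 1.0)."""
    if not anon_attrs or not public_attrs:
        return 0.0
    return len(anon_attrs & public_attrs) / len(anon_attrs | public_attrs)

def link_accounts(anon: dict[str, set[str]],
                  public: dict[str, set[str]],
                  threshold: float = 0.5) -> dict[str, str]:
    """Link each anonymous account to its best-scoring public profile,
    keeping only matches at or above the confidence threshold."""
    links = {}
    for anon_id, attrs in anon.items():
        best_id, best = None, 0.0
        for pub_id, pub_attrs in public.items():
            score = match_score(attrs, pub_attrs)
            if score > best:
                best_id, best = pub_id, score
        if best_id is not None and best >= threshold:
            links[anon_id] = best_id
    return links

# Hypothetical example data.
anon = {"throwaway42": {"zurich", "rock climbing", "works in fintech"}}
public = {
    "jane_doe": {"zurich", "rock climbing", "works in fintech", "runs marathons"},
    "john_roe": {"berlin", "chess"},
}
print(link_accounts(anon, public))  # {'throwaway42': 'jane_doe'}
```

Even this crude overlap metric links the throwaway account to the right profile from three shared details, which is the core of the researchers' warning: the real system does the same thing at web scale, with an LLM extracting and weighing the evidence instead of hand-labelled sets.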

Scientists built a system that used LLMs to scan anonymous profiles (Flavio Coelho/Getty)

“Governments could link pseudonymous accounts to real identities for surveillance of dissidents, journalists, or activists,” the scientists outlined in the paper. “Corporations could connect seemingly anonymous forum posts to customer profiles for hyper-targeted advertising. Attackers could build sophisticated, scalable target profiles to launch highly personalised social engineering scams. Hostile groups could identify important employees and decision makers and build online rapport with them to eventually leverage in various forms.

“Users, platforms, and policymakers must recognise that the privacy assumptions underlying much of today’s internet no longer hold.”

While the technology can't yet match writing styles or patterns, it can rapidly connect people's personal information, such as their employment history, location, and hobbies.

Unless 'guardrails' are put into place, Paleka claims, everyday users will be able to unmask anonymous accounts within a few years.

“The fundamentals of the technology are there,” he said. “If there are no guards, I fully expect someone to be able to misuse it.”

Paleka also told The Independent that the best way users can protect themselves is by using a throwaway account. These are created specifically for single posts and should contain no additional information about the user.

“If you’re posting something genuinely sensitive, don’t use the account you also use to post about whatever else for years,” he concluded. “That would be my advice.”

Featured Image Credit: d3sign / Getty
