The 'dead internet' theory, explained: it's just as scary as it sounds

You can't trust that anything you're seeing is real anymore.

If you've ever looked at the comments section of an article that felt like it might have been auto-generated, and seen a whole bunch of replies that also look like they might be auto-generated, then you might have glimpsed the 'dead internet'.

This is a theory that's become a lot more popular in the last couple of years with the huge rise of artificial intelligence tools like ChatGPT.

With text generation now rampant, the dead internet theory essentially suggests that the majority of the internet now consists of fake content and manufactured interactions, produced by bots and exchanged between bots.

Maria Korneeva / Getty

That feels pretty credible, even if clear and provable data on the actual numbers and proportions is hard to come by, but there's another layer to the dead internet theory which is a little more far-fetched.

This basically argues that the dead internet was actually planned, by some combination of corporations and governments, as a method of controlling people.

According to this argument, the rise in auto-generated content and replies is a way to condition people and get them used to certain ideas or ideologies. Pretty ominous stuff.

So, to keep up with the times, those who think the theory has some weight to it now point to social media posts with clearly AI-generated content, whether it's imagery, video or text, and to the huge success those posts have been having.

One TikToker, @sidemoneytom, documents some of these posts, looking into where their viral success has come from.

When you drill down on some of these comment sections and posts, it's quite easy to establish that they've not only been posted by a fake account but also have countless replies from equally phony accounts.

This makes it much harder for real people, who aren't necessarily experienced in detecting AI content, to tell whether they're looking at something real or not.

Exactly what this theoretically accomplishes for the alleged cabal orchestrating it all isn't clear, beyond eroding our trust that anything we're looking at is remotely real.

Still, the broad sense that the internet has become less human and more robotic is quite a popular one - and it's echoed by complaints about services we used to be able to rely on.

Featured Image Credit: d3sign / Andriy Onufriyenko / Getty