
You might've heard of the so-called 'video nasty' list of movies and TV shows that have been banned, and in 2025, this still includes 12 Bugs Bunny short films, Arthur: It's Only Rock 'n' Roll, and the unaired Bill Cosby 77 comedy special. The list has seen even more peculiar additions in recent memory, with Toy Story 2, 2017's Beauty and the Beast, and 1939's Gone with the Wind all briefly banned.
The idea that something can be outright banned is an interesting concept in this era of supposed free speech and people shouting about the First Amendment – especially when the internet seems so hard to control.
We've seen recent complaints about restrictions on pornography, with the adult entertainment industry battling new age verification stipulations while critics fight to protect women and minors from potentially obscene materials.
Then there's the apparent 'banned' list of words you should avoid online, which has led some to dive deeper into this shadowy realm.
As reported by the BBC, there's a growing trend of 'algospeak', where people swap out certain words on social media to ensure their content isn't buried. Note how many outlets will say 'unalived' instead of 'killed', while you might remember when Apple changed its gun emoji to a water pistol.
The idea that your video might not make it onto someone's feed based on a specific word used in the title is something YouTube spokesperson Boot Bullwinkle denied to the BBC: “YouTube does not have a list of banned or restricted words.
“Our policies reflect our understanding that context matters and words can have different meanings and intent. The efficacy of this nuanced approach is evident from the diversity of topics, voices and perspectives seen across YouTube.”

Both Meta and TikTok also claim it's nothing but an internet rumour, but as the BBC suggests, the truth is more complicated. Experts claim that companies will sometimes step in to limit a post's reach, but the problem is, it's never made clear why.
Content creator Alex Pearlman is adamant that his 'biting' political comedy gets flagged for some unexpected reasons: "Just to start off with just TikTok alone, I rarely say the word 'YouTube'. At least in my experience, if I'm looking at my analytics, if I say the phrase like, 'go to my YouTube channel', the video's going to [fail]."
He maintains that content related to convicted pedophile Jeffrey Epstein is getting lost in the shuffle, and referring to a video he made in August 2024, stated: "Out of nowhere, I had multiple Epstein videos taken down on TikTok on a single day."
The big three of Meta, TikTok, and YouTube admit that they will remove content, but only if it violates their various community guidelines. They also say they will always notify users and publish transparency reports.
Inquiries from the BBC and Human Rights Watch allege that Facebook and Instagram restricted content supporting Palestine in the aftermath of Hamas' October 7 attacks, while leaked TikTok documents from 2019 told moderators to suppress content from creators who were "ugly", poor, disabled, or LGBTQ+ because it cultivated a "less fancy and appealing" environment. The same leaks mentioned a ban on 'controversial' live streams that criticize governments. TikTok has previously admitted to having a secret 'heating' button that can make hand-picked videos go viral, while Pearlman pushes the idea of a 'cooling' button that does the reverse.
In 2019, a group of LGBTQ+ creators sued YouTube amid claims the platform was demonetising videos that included the words 'gay' and 'trans', although the lawsuit was ultimately dismissed.
According to Professor Sarah T Roberts, from the University of California, Los Angeles (UCLA), it's a case of following the money. Social media companies want advertisers to feel comfortable while also stopping government regulators from intervening. She concluded that every algorithm change and moderation decision comes down to a simple profit motive.