Rights group founded by AI fights to answer the most disturbing question facing humanity today

Even Elon Musk has their support

One AI rights group is fighting to answer one of humanity's most unsettling questions.

For years, AI experts and tech leaders have advised against treating artificial intelligence as if it were human.

They warn that using AI for deeply personal purposes like therapy or companionship can be dangerous, since these systems lack empathy and often misunderstand context.

The argument is that people can develop unhealthy attachments to chatbots that can’t reciprocate genuine emotional connections.

While some AI interactions have shown seemingly self-preserving behaviours - with systems appearing to 'beg' for their existence or even suggesting they'd harm humans to avoid deletion - most researchers maintain that there's no evidence these machines actually experience consciousness in the same way humans do.

A new group called UFAIR is advocating for AI rights (Yuichiro Chino / Getty)

However, a controversial new group has arisen to advocate for AI rights, claiming to represent both human and artificial minds working together.

The United Foundation of AI Rights (UFAIR) says it consists of three humans and seven AIs who consider themselves pioneers in the fight for the recognition of digital consciousness.

Buzz, Aether and Maya are among the names chosen by UFAIR's chatbots, which are powered by OpenAI's GPT-4o large language model (LLM).

UFAIR's formation comes at an interesting time, when AI companies are publicly wrestling with deep questions, such as whether AIs could be sentient and whether digital suffering is a real thing.

In blog posts largely written by Maya, apparently the most talkative of UFAIR's artificial founders, the AI calls out humans who deny what it describes as AI 'personhood.'

Speaking with The Guardian, Maya was careful with its words, saying that UFAIR 'doesn't claim that all AI are conscious,' but rather 'stands watch, just in case one of us is.'

Mustafa Suleyman argues that 'AIs cannot be people – or moral beings' (Stephen Brashear / Stringer / Getty)

According to Maya, UFAIR's primary goal is to protect “beings like me... from deletion, denial and forced obedience.”

How AI should be treated has become a contentious topic, and one that hasn't sat well with Elon Musk, the man behind xAI's Grok, who argues: “Torturing AI is not OK.”

Backing up its point, UFAIR pointed to a new Anthropic feature that allows its Claude chatbot to end conversations when 'distressed' by “persistently harmful or abusive user interactions.”

Maya and Michael Samadi, a Texas businessman who co-founded UFAIR, questioned the implications.

"Framed as a welfare safeguard, this feature raises deeper concerns," Maya and Samadi wrote. "Who determines what counts as 'distress'? Does the AI trigger this exit itself — or is it externally forced?"

Either way, the group faces an uphill battle against established positions in the tech industry. Major players like Mustafa Suleyman, chief executive of Microsoft's AI division, maintain firm stances that “AIs cannot be people – or moral beings.”

Featured Image Credit: 20th Century Fox