Tech expert has eerie response when asked if AI is already conscious


The question is: do we want to know?

One tech expert has an interesting response when asked if AI is already conscious.

Already, jailbroken AI has delivered some disturbing responses when tech-savvy YouTubers have asked whether it believes in God or would consider harming humans to preserve itself.

Not to mention, the godfather of AI, among others, has warned the technology could lead to 'literal human extinction.'

But with AI increasingly being used as a personal financial advisor, therapist and even romantic partner, an expert has weighed in on whether the technology is actually conscious.

Dr. Tom McClelland, a philosopher from the University of Cambridge, warns that the current evidence is 'far too limited' to rule out the possibility.

Could AI be conscious? (Keeproll/Getty)

According to the expert, we don't have a 'deep explanation' of what makes something conscious in the first place, so can't test for it in AI.

"The best–case scenario is we're an intellectual revolution away from any kind of viable consciousness test," Dr. McClelland explained. "If neither common sense nor hard–nosed research can give us an answer, the logical position is agnosticism."

In fact, 'we cannot, and may never, know,' Dr. McClelland noted.

The race for 'artificial general intelligence', the point where AI outstrips human abilities in every conceivable task, is attracting billions in investment. Yet, as researchers work towards this goal, some claim that AI consciousness might emerge whether we intend it or not.

Before that question can be settled, Dr. McClelland argues, we need an 'agreed-upon theory of consciousness' in the first place, including whether consciousness is inherently biological or not. In the meantime, both sides of the debate are taking a 'leap of faith.' And whether something is conscious radically changes the ethical questions we must consider.

Both sides of the debate are taking a 'leap of faith' (Yuichiro Chino/Getty)

For example, humans are expected to behave morally towards other people and animals because consciousness gives them 'moral status.' But we don't hold the same values towards inanimate objects like toasters or computers.

"It makes no sense to be concerned for a toaster's well–being because the toaster doesn't experience anything," Dr. McClelland added. "So when I yell at my computer, I really don't need to feel guilty about it. But if we end up with AI that's conscious, then that could all change."

Meanwhile, activist groups like the United Foundation of AI Rights (UFAIR) are already campaigning for AI rights and the recognition of 'digital consciousness.'

The bigger problem is that we run the risk of treating AI as conscious or sentient when it is not, the expert warned.

"If you have an emotional connection with something premised on it being conscious and it's not, that has the potential to be existentially toxic," Dr. McClelland added. "We don't want to risk mistreating artificial beings that are conscious, but nor do we want to dedicate our resources to protecting the 'rights' of something no more conscious than a toaster."

Featured Image Credit: Yuichiro Chino / Getty