
Google study suggests AI will 'never be sentient' despite hiring 'philosopher' to work on consciousness
Philosophers aren't convinced by the argument

Google DeepMind – the tech giant's AI lab – has published a new paper making a rather bold claim: that AI will never be able to achieve consciousness. The argument has since drawn criticism from the wider philosophical world.
While it might seem as though it only just became available to the public, artificial intelligence has evolved so rapidly that those within the industry are genuinely discussing the possibility of sentience.
Amid reports that brain cells can now be used to power computers, many AI leaders have been striving towards the benchmark of 'artificial general intelligence' – often shortened to 'AGI' – the point at which models surpass the capabilities of humans.
That would mark a significant moment for the technology, as it would then be able to achieve things we can't, rather than simply doing our jobs more efficiently.
While it remains to be seen what this will mean for the future of humanity – with some frightening predictions fearing the worst – many see the next step as artificial intelligence gaining sentience of its own, a true levelling of man and machine.

AI in its current form is merely a prediction tool: a model trained on millions, billions, or trillions of data points, which it uses to produce a likely solution to the problem you've provided.
It can do so in a way that makes it feel as though the user is talking to another human – sometimes with extremely alarming results – yet it cannot think for itself or make its own decisions.
According to the experts at Google DeepMind, however, this will never be the case: they argue that while AI can simulate sentience by mimicking others, it can never instantiate consciousness itself.
The study outlines what it deems to be the 'Abstraction Fallacy', which "mischaracterizes how physics relates to information" through a hypothesis "that subjective experience emerges entirely from abstract causal topology, regardless of the underlying physical substrate."
It continues to outline a framework that "explicitly separates simulation (behavioral mimicry driven by vehicle causality) from instantiation (intrinsic physical constitution driven by content causality)," concluding that "if an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture."

Despite Google DeepMind recently employing its first ever in-house philosopher to work specifically on machine consciousness, the wider philosophical community isn't convinced by these arguments.
Speaking to 404 Media, philosopher Johannes Jäger argued that the study's author, Alexander Lerchner, "arrived at this conclusion on his own and he's reinvented the wheel and he's not well read, especially in philosophical areas and definitely not in biology."
This was echoed by Mark Bishop, a professor of cognitive computing at Goldsmiths, who expressed his 'sympathy' with "99 percent of everything that he says," but added that "my only point of contention is that all these arguments have been presented years and years ago."
It may not be that Lerchner is wrong in his assessment, only that he's late to the party: the conversation around AI consciousness was established long before the DeepMind paper was published.