
Over the last two years, artificial intelligence has reshaped our daily lives. With such rapid change happening around us, it's understandable that many people feel uncertain, or even anxious, about what the future might bring.
One of the most mainstream platforms is OpenAI's ChatGPT, which people are using for help with homework and assignments, recipes, and, alarmingly, therapy and advice. It runs on a language model, an AI system trained to recognize patterns in text so it can understand and generate human-like language. Chatbots like Anthropic's Claude and Google's Gemini use language models to predict which words should come next based on the context of your question, which could fool some people into thinking they're talking to a real human on the other side of the screen. Language models are trained on a vast collection of written material, allowing AI systems to respond in a way that almost feels natural.
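To make the idea of next-word prediction concrete, here is a minimal, purely illustrative sketch: a toy model that counts which word tends to follow which in a tiny sample of text, then predicts the most common follower. Real language models are enormous neural networks trained on far more data, but the underlying task, guessing the next word from context, is the same. The sample text and function name here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training text; real models learn from vast collections of writing.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat', because 'cat' follows 'the' most often
```

The point of the sketch is that the model never decides what it "means" to say; it only reproduces statistical patterns from its training text, which is exactly why fluent output can still be mistaken for understanding.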
These chatbots can answer questions about science, offer food tips, and give so-called advice. And according to experts, this is only the beginning. AI researcher Dario Amodei said powerful AI 'may come as soon as 2026 [and will be] smarter than a Nobel Prize winner across most relevant fields'.

Although chatbots are built to model human language, they cannot replicate how the human brain thinks. They predict patterns in text, and do not have internal concepts, emotions, or mental functions in the human sense. At least you can put your sci-fi AI apocalypse fears to rest for now.
Research suggests that improving language modeling does not automatically produce human-like intelligence, and even if chatbots get better at producing fluent, convincing text, that fluency does not guarantee genuine understanding or reasoning.
Experts in neuroscience argue that human intelligence and human language aren't the same thing, and that building AI which gets better at predicting language doesn't automatically mean we're creating machines that think like we do, or better than we do.
Still, some AI researchers push the idea that we're close to seeing AI as intelligent as humans, or that we're on the verge of creating 'superintelligence'.

A commentary challenging this idea was published in the journal Nature under the title 'Language is primarily a tool for communication rather than thought', co-authored by Evelina Fedorenko (MIT), Steven T. Piantadosi (UC Berkeley) and Edward A.F. Gibson (MIT).
The article sums up decades of research on how language and thought connect. It makes two key points: language does not create our ability to think and reason; instead, it developed as a cultural tool for sharing our thoughts with each other.
Language is just one part of human thinking, and much of our intelligence involves capacities outside of language. Even without the ability to speak, humans can still think, form beliefs, fall in love, and live full lives. But take away language from a large language model, and there's not much left.