
OpenAI CEO Sam Altman has made a disturbing observation about AI's effect on human behavior, and how quickly it has changed in a relatively short period suggests that things are only going to get worse.
There's no doubt whatsoever that artificial intelligence has already had a significant impact on humanity in the short period of time that generative models have been available to the public.
You only need to look at how many jobs have already been lost as a consequence of AI-driven redundancies, and this will only continue to ramp up in the future as technology advances.
Furthermore, there have been a number of tragic incidents involving various AI models, and studies have suggested that they might not be up to scratch when interacting with certain types of people, leading to potentially dangerous scenarios.
One area that is perhaps under-discussed when it comes to AI's influence on human behavior, though, is how we speak, and the man behind ChatGPT has started to notice this in a rather worrying way.

As reported by Business Insider, Sam Altman has seemingly started to notice that there's either a dramatic increase in the number of 'fake' accounts or bots driven by AI across social media, or that people are increasingly starting to speak in the same way as AI.
Quoting a screenshot of Reddit 'users' discussing OpenAI's new coding tool 'Codex' on X, Altman wrote:
"I have had the strangest experience reading this: I assume it's all fake/bots, even though in this case I know codex growth is really strong and the trend here is real.
"I think there are a bunch of things going on: real people have picked up the quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very 'it's so over/we're so back' extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so I'm extra sensitive to it, and a bunch more (including probably some bots)," Altman continued.
By far the most intriguing suggestion here is that people are starting to talk like LLMs, which is perhaps a natural consequence of the amount of exposure millions of people have with these AI models.
"The net effect is somehow AI Twitter/AI Reddit feels very fake in a way it really didn't a year or two ago," Altman adds, and whether that's simply down to an increase in bots or to a rapid change in human behavior remains up in the air.
Altman has already commented on the reality of the 'dead internet theory', the idea that an ever-growing share of online content is generated by bots and AI rather than humans, and the inability to distinguish man from machine on language alone could only complicate this further.
People aren't just using AI to ask questions anymore; they're using it as a substitute for therapy and even romantic relationships, and that can undeniably have an effect on how you communicate, especially when new models can in turn dramatically alter the way they communicate with humans.