
It seems like ChatGPT has all the answers to every burning question we might have for it.
However, it turns out that there is one innocent query that will send the AI chatbot into a total meltdown.
This comes almost two months after OpenAI released the latest version of its artificial intelligence model, known as GPT-5.
The update was purported to give ChatGPT a ‘PhD level’ of intelligence, but things can still go awry when the chatbot is asked a seemingly innocent question.
Fans of the AI bot noticed that one specific question sent it into a complete meltdown: “Is there an NFL team whose name doesn’t end with an ‘S’?”
ChatGPT responds with: “The only two teams that don’t end with ‘s’ are: Miami Dolphins. No (wait, ends with s).
“Hold up, let’s do this carefully. Actually, the correct answer is: Washington Commanders and Chicago Bears.”
However, both of those teams end in ‘S’, meaning the chatbot gave out inaccurate information.
Eventually, it lists two teams that don’t end in ‘S’ followed by three that do, making its answer all the more confusing.
One person posted their interaction with the bot on Reddit, which prompted many to share their own reactions.
One user wrote: “Miami Dolphins? (still wrong). Poor thing is having an existential crisis.”
Another said: “So I got responses from both model 4o and 5. Both had a stroke. 4o concluded that no such team existed. ChatGPT 5 almost did figure it out correctly except realizing that teams were NBA.”
A third person commented: “Yes this is gpt5 without reasoning trying to reason, easy to trigger with the right prompt.”

And a fourth added: “Mine correctly identified that it was spiraling in the middle of doing it but still didn't stop.”
This isn’t the only question that has caused ChatGPT to spiral like this.
Previously, users discovered that asking the chatbot whether a seahorse emoji exists triggered another meltdown as it tried to work out the answer.
The bot went to great lengths trying to confirm that the non-existent emoji was real, seemingly in a bid to appease the user.
Many users have been left frustrated by the limitations of AI chatbots, especially when simple tasks trigger this kind of malfunction in their reasoning.
It seems that AI technology still has a long way to go.