
AI researchers are working on creating what they call ‘humanity’s last exam’ to probe the true limits of machine intelligence.
The tech industry has exploded with AI advancements in recent years and it doesn’t appear to be slowing down anytime soon.
In fact, a team of researchers is now working on a test designed to benchmark the progress of AI bots.
The research team published a paper through the Association for Computing Machinery, in which they explained: “Participants in our experiment were no better than chance at identifying GPT-4 after a five minute conversation, suggesting that current AI systems are capable of deceiving people into believing that they are human.
“The results here likely set a lower bound on the potential for deception in more naturalistic contexts where, unlike the experimental setting, people may not be alert to the possibility of deception or exclusively focused on detecting it.”

The paper continued: “Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities.
“However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities.”
A study into the test for AI systems was published in Nature, which detailed why precise measurement of AI capabilities is essential as these systems ‘approach human expert performance in many domains’.
Such measurement helps inform research, governance and the broader public about the progress AI is making.
The study continued: “To establish a common reference point for assessing these capabilities, we publicly release a large number of 2,500 questions from HLE to enable this precise measurement, while maintaining a private test set to assess potential model overfitting.”

Many people have taken to social media to share their reactions to the research, with one user writing on Reddit: “Scientists created an exam so broad, challenging and deeply rooted in expert human knowledge that current AI systems consistently fail it. ‘Humanity’s Last Exam’ introduces 2,500 questions spanning mathematics, humanities, natural sciences, ancient languages and highly specialized subfields.”
This prompted many to reply, with another saying: “I fail that exam too. Most people do too since you can only be an expert in a few fields.”
A third commented: “I would contend that if these questions are all ever answered correctly, you know it is an AI, because no single human could have that broad of a knowledge base.”
And a fourth user added: “Well you can already see that the advanced AI versions made huge gains. Matter of time before they ace the test.”