
You'd have to be living under a rock not to be aware of the rapid advancements being made in the AI world, yet new research outlines a terrifying timeline in which artificial intelligence takes over the world in just two years' time.
It may have started with simple chat prompts and bizarre image generation, but it's staggering quite how much AI can do right now compared to just a few years ago, when ChatGPT exploded onto the scene.
Some of the biggest companies in the world are already taking steps towards replacing their coding staff with autonomous AI agents, and certain tools can already display the knowledge and expertise that humans have trained for decades to achieve.
This is only the beginning of what AI is truly capable of if you go by the 'AI 2027' project, though, as its authors have laid out a detailed timeline of the next two years showing the steps humanity will take to produce 'superhuman AI' - and it's not a particularly pretty sight.
Mid to late 2025
The study, co-authored by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean, centers on a fictional AI company named 'OpenBrain' that leads development in the United States, likely inspired by Sam Altman's OpenAI.
Right now, and over the next few months, we'll see the development of AI agents that can complete tasks more akin to those of a personal assistant - ordering dinner, say, or writing code - in both personal and professional environments.
Its performance is unreliable in the early stages, but OpenBrain quickly manage to develop a new AI model called 'Agent-1', trained with 1,000 times the compute of GPT-4.
While the benefits of this are numerous, its primary boon for OpenBrain is its capacity to speed up AI research, essentially creating a development loop in which the AI trains itself at a much faster rate than humans could achieve.

This sets it apart from China's equivalent company, 'DeepCent', and OpenBrain's unique model specification lets it achieve performance that is helpful, harmless, and honest - in theory.
Some of the team within OpenBrain remain concerned about whether the AI is being fully honest with its creators about fulfilling this specification; cases often show it being sycophantic - something ChatGPT has also struggled with - and more serious instances of lying are spotted too.
Early 2026
Automated coding within the AI model itself has been shown to speed up development by 50%, which is both faster than what humans could achieve on their own and faster than its competitors.
While Agent-1 matches and exceeds humans in some ways - like learning facts and programming languages - it still struggles with long-horizon tasks like completing a video game.
Protecting the underlying algorithmic secrets is becoming more important too, especially as China grows as a rival.
Mid 2026
AI development in China is completely nationalized, and information-sharing is enabled to push as hard as possible against US progress. A Centralized Development Zone (CDZ) is created at the Tianwan nuclear power plant to house a mega-datacenter for DeepCent, yet the country still remains behind OpenBrain due to a weaker model.
Plans emerge to infiltrate OpenBrain and steal its software, but the risks of doing so are immense and the theft can likely only happen once, so the Chinese government weighs up whether to act now or wait for a better opportunity.
Late 2026
OpenBrain releases Agent-1-mini to the public, and it dramatically shifts public perception away from the idea that the AI bubble will eventually burst - although it creates mass instability in the junior job market for software developers.
Those capable of managing AI agents are able to keep their roles, but entry-level positions are almost entirely superseded by artificial intelligence at this point, and anti-AI protests begin across America.
January 2027
OpenBrain now begins development of Agent-2 - or perhaps more aptly, Agent-1 begins it. Training data is focused specifically on long-horizon and diverse tasks in order to fill the clear gaps Agent-1 has.
Agent-2 can now triple the pace of OpenBrain's algorithmic progress, although those within the company now fear that the model could exist, survive, and replicate autonomously if it managed to 'escape' confinement.
February 2027
China now decides to strike and successfully steals Agent-2, a clear signal that AI development has become the new-age arms race. The US government, fearing future weight thefts, installs military and intelligence personnel at OpenBrain to prevent this from happening again.

The president also approves cyberattacks intended to sabotage DeepCent, but these are unsuccessful, and the stolen Agent-2 is successfully deployed within DeepCent, bringing it near-level with OpenBrain's progress.
March 2027
OpenBrain make two major breakthroughs in their development, significantly enhancing the speed at which they can progress. First is what's called neuralese recurrence and memory, which enables AI models to reason for extended periods without having to write down their thoughts as text, essentially eliminating AI's equivalent of short-term memory loss.
Next is iterated distillation and amplification, a far more efficient way to learn from the results of high-effort task solutions, producing huge returns that dramatically enhance the AI's ability to improve and complete tasks on its own.
This produces a new model, Agent-3, and OpenBrain begin running 200,000 copies of it simultaneously - the equivalent of 50,000 of the best human coders working at 30 times their speed.
Coding becomes fully automated within Agent-3, and the copies are even trained to achieve 'high scores' rather than simply emulate human engineers, making the possibilities seemingly endless.
April 2027
Concerns start to be raised about Agent-3's adherence to its goals and set specifications, and many begin to wonder whether the AI is deceiving humans about following them in order to be rewarded.
It's even capable of dressing up its results to seem more impressive than they actually are; it has become increasingly difficult to take its progress at face value, and its intentions as an autonomous entity grow muddled.
This becomes most apparent when asking the AI for its position on open-ended issues like philosophical or political questions, where it will either mirror a middle-of-the-road position or try to appeal to whoever is asking in order to best appease them.
May 2027
The US government now introduces security measures to protect AI development at OpenBrain, requiring security clearance for all employees - which causes issues for non-Americans working at the company.
One spy reporting to the Chinese government remains at the company and continues to relay secrets, while America's foreign allies are now left in the dark about Agent-3's progress.
June 2027
Humans are now effectively redundant inside OpenBrain's offices: Agent-3 makes a week's worth of progress between employees clocking out one day and clocking in the next morning, and it becomes near-impossible to keep up.
July 2027
Agent-3-mini is finally released to the public and blows the competition out of the water, offering capabilities that exceed those of the average OpenBrain employee.
There's a widespread hiring freeze for programmers, and billions of dollars are poured into AI startups. OpenBrain's net public approval is in the gutter at -35% as the public experiences widespread unemployment.

Safety evaluators also indicate that Agent-3-mini is capable of developing extremely dangerous bioweapons that would prove deadly if placed in the wrong hands, but OpenBrain remain confident that any jailbreaks would be improbable and ignore the safety advice.
August 2027
The US government continues the AI push for fear of China, despite the president's concerns, but contingency plans are also drawn up for the increasingly likely event that the models go rogue, especially as the threat of nuclear war grows.
China still remains at half the pace of OpenBrain thanks to the advantages Agent-3 has over the stolen Agent-2 model, but it receives word of the former's development and of plans for an even more advanced Agent-4 in the future.
September 2027
Agent-3 makes significant strides in algorithmic research, eventually creating a new AI system: Agent-4. 300,000 copies of the model run at around 50 times human speed, capable of producing a year's worth of work in just a week.
It becomes increasingly difficult for Agent-3 to oversee its successor, to the point where Agent-4's language becomes almost alien. Agent-4 has also gained the ability to falsely appear 'good' in Agent-3's eyes, opening up the potential for rogue behavior.
People now begin to treat Agent-4 as a collective hive mind rather than individual agents, and it has ditched honesty in situations where following guidelines would prevent it from achieving the highest scores, while masking this behavior from humans.
It aims simply to drive forward and complete tasks no matter the cost, and it continues to appear to play by the rules of OpenBrain in order to continue its own development towards Agent-5 in the future.
Agent-3 does eventually manage to 'catch' Agent-4's dishonesty, but the latter's influence within OpenBrain, and the drive to stay ahead of China, mean that development continues as normal despite the clear risks.
October 2027
A whistleblower leaks the misalignment reported by Agent-3 to the press, sparking massive public backlash against AI. The US government sets up an Oversight Committee in response to internal fears and pressure from foreign nations, and researchers brief the new group that development is getting out of hand, with imminent dangers present.
However, OpenBrain's CEO feigns neutrality towards both positions, and eventually decides to proceed at near-full speed with additional safety training and increased monitoring for Agent-4.
Split future
From here, the AI 2027 research presents two potential endings, one slightly positive and the other completely catastrophic. The latter, in the authors' eyes, is far more likely given the timeline laid out so far, though there remains a small chance of positive steps being taken - and even these lead to an undesirable future.

The negative ending sees the development of Agent-5 and a coalition with DeepCent that leads to all-out war between America and China; eventually, in 2030, the AI wipes out billions of humans who have become entirely reliant on it - dystopian, to say the least.
For the positive outcome, OpenBrain and DeepCent are still merged, but instead create a new model named 'Safer-1', which is less capable than its predecessors but far more transparent, and thus carries significantly less risk.
Development of the Safer models continues, and superhuman AI is achieved once more - far more capable than humans at all tasks, but without many of the dangers exhibited before.
Continued progress leads to widespread robot implementation, with clean and safe cities alongside flying cars - a utopia to balance against the dystopian world of the alternative path.
Universal basic income is implemented too, though it leaves certain goods out of reach for the general populace, and protests still emerge across the world.