


One of the most powerful figures in the world of AI has just told people to brace for a 'world-shaking' cyberattack this year, as Sam Altman outlines how AI could make these threats – alongside potential bioterrorism – possible for malicious actors.
Many of the concerns that people have with artificial intelligence relate to its impact on the job market and the long-term degradation of the environment, but there could be a potentially imminent danger that few have predicted.
While AI is becoming increasingly capable thanks to rapid development, and with fears that it’s edging closer to artificial general intelligence in the coming years, that proficiency could become a vulnerability if the technology gets into the wrong hands.
AI models are already impressive when it comes to cybersecurity, but open source models could prove to be the key that unlocks a catastrophic cyberattack, and OpenAI CEO Sam Altman has outlined what could be in store as early as this year.
In an interview with Axios, Altman outlined the potential for a cyberattack that could hit the world within the next 12 months, and what AI companies are already doing to prepare for the threat.
"In the next year, we will see significant threats we have to mitigate from cyber," Altman warned. "And these models are already quite capable and get much more capable."
Addressing the possibility of an attack this year, Altman argued that it's 'totally possible', noting that "to avoid that, it will require a tremendous amount of work — also in a sort of resilient style of approach.
"Again, it's not just make one AI model safe, it is defenders. We have this thing called a trusted access program, other companies have other things. Cybersecurity companies, major platforms, the governments, using this technology to try to rapidly secure their systems, the open source stack, all of that. That's quite important now."

It was far from the only threat that Altman raised, however, as he noted that similar advancements in the scientific capability of certain models could lead to an increase in bioterrorism attacks enabled by AI.
"The models are clearly going to get very good at helping people do biology at an advanced level. Wonderful things are going to happen there, we'll see a bunch of diseases get cured," he proposed.
"Someone's going to try and misuse those, and for now, when the frontier models are in the hands of pretty responsible companies, I think we can mitigate those by the companies aligning their models and having good classifiers and good safety stacks.
"But we're not that far away from a world where there are incredibly capable open source models that are very good at biology, and the need for society to be resilient to terrorist groups using models to try and create novel pathogens, that's no longer a theoretical thing or it's not going to be for much longer."
The solution to these issues appears to be far more complex than most people likely realise, and with no concrete answer in sight, the risk grows that these dangers will not only emerge but be allowed to wreak havoc across the world.