After more than 20 years in cybersecurity, helping scale security companies, I've watched attack methods evolve in creative ways. But Kevin Mandia's forecasts of AI-powered cyberattacks within the year are not just forward-looking; the data show we are already there.
The numbers do not lie
Last week, Kaspersky released statistics from 2024: over 3 billion malware attacks worldwide, with defenders detecting an average of 467,000 malicious files daily. Trojan detections increased 33% year over year, mobile financial threats doubled, and here's the kicker: 45% of passwords can be cracked in under a minute.
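To put that password statistic in perspective, here is a back-of-the-envelope sketch. This is my own illustration, not Kaspersky's methodology, and the guess rate is an assumed figure for a modern GPU cracking rig:

```python
# Back-of-the-envelope brute-force time estimate.
# The guesses-per-second figure is an assumption for illustration,
# not a measured benchmark.

def crack_time_seconds(charset_size: int, length: int,
                       guesses_per_second: float = 1e10) -> float:
    """Worst-case time to exhaust the full keyspace at a given guess rate."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second

# An 8-character, lowercase-only password (26^8 ≈ 2.1e11 guesses):
print(crack_time_seconds(26, 8))   # ≈ 21 seconds at 10 billion guesses/s

# Longer passwords with a full printable character set:
print(crack_time_seconds(94, 12))  # over a million years even at that rate
```

The point of the sketch is that "crackable in under a minute" is mostly a statement about short, low-variety passwords; keyspace grows exponentially with length and character variety.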
But that's not the whole story. The nature of the threat is fundamentally changing as AI becomes a weapon.
This is already happening. Here's the proof
Microsoft and OpenAI confirmed what many of us suspected: nation-state actors are already using AI for cyberattacks. We're talking about major players. Russia's Fancy Bear has used LLMs to gather intelligence on satellite communications and radar technologies. Chinese groups such as Charcoal Typhoon generate social-engineering content in multiple languages and carry out advanced post-compromise activity. Iran's Crimson Sandstorm crafts phishing emails, while North Korea's Emerald Sleet researches experts on its nuclear program.
What's more worrying? Kaspersky researchers are now finding malicious AI models hosted in public repositories. Cybercriminals use AI to create phishing content, develop malware, and run deepfake-based social-engineering attacks. Researchers are also seeing LLM vulnerabilities, AI supply-chain attacks, and what they call "shadow AI": employees' unauthorized use of AI tools that leaks sensitive data.
But this is just the beginning
What we see now helps attackers scale their operations and translate malicious code to new languages and architectures they had no prior expertise in. If a nation-state has developed a truly novel use case, we may not detect it until it's too late.
We are heading toward autonomous cyber weapons purpose-built to move undetected through target environments. These are not your typical script-kiddie attacks; we're talking about AI agents that can conduct reconnaissance, identify vulnerabilities, and execute attacks without a human in the loop.
The challenge goes beyond faster attacks. These autonomous systems cannot reliably distinguish between legitimate military infrastructure and civilian targets, violating what security researchers call the "principle of distinction." When an AI weapon targets a power grid, it cannot tell military communications from the hospital next door.
We need global governance now
This demands governance and global treaties akin to nuclear-arms treaties. Right now, there is essentially no international framework for AI weapons. We already have three tiers of autonomous weapon systems: human-supervised systems with operators monitoring, semi-autonomous systems that engage pre-selected targets, and fully autonomous systems that select and engage targets on their own.
The terrifying part? Many of these systems can be hijacked. There is no such thing as an unhackable autonomous system, and the risk of non-state actors seizing control through adversarial attacks is real.
Fighting fire with fire
Many cybersecurity companies are building new ways to defend against these attacks. Take the AI SOC analysts from companies such as Dropzone AI, which let teams investigate 100% of alerts, closing a huge gap in today's security operations. Or companies such as Natoma, which build solutions to identify, monitor, secure, and govern AI agents across the enterprise.
The key is to fight fire with fire, or in this case, AI with AI.
Next-generation SOCs (security operations centers) that combine AI automation with human expertise are needed to defend against both current and future cyberattacks. These systems can analyze attack patterns at machine speed, automatically correlate threats across multiple vectors, and respond to incidents faster than any human team could. They don't replace human analysts; they augment them with capabilities we desperately need.
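As a toy illustration of the cross-vector correlation step described above (the field names, vectors, and escalation rule are my own, not any vendor's product logic), a pipeline might group alerts that share an indicator and escalate only clusters spanning multiple detection vectors:

```python
# Toy alert-correlation sketch: group alerts by shared indicator
# (IP, hash, domain), then flag clusters that span more than one
# detection vector for human review.
from collections import defaultdict

def correlate(alerts: list[dict]) -> list[list[dict]]:
    """Group alerts by indicator; return only multi-vector clusters."""
    by_indicator = defaultdict(list)
    for alert in alerts:
        by_indicator[alert["indicator"]].append(alert)
    # Escalate only clusters observed across more than one vector.
    return [group for group in by_indicator.values()
            if len({a["vector"] for a in group}) > 1]

alerts = [
    {"vector": "email",    "indicator": "203.0.113.7", "rule": "phish-link"},
    {"vector": "endpoint", "indicator": "203.0.113.7", "rule": "beaconing"},
    {"vector": "endpoint", "indicator": "deadbeef",    "rule": "new-binary"},
]
print(correlate(alerts))  # one cluster: the two alerts sharing 203.0.113.7
```

Real SOC platforms do this with far richer entity resolution and ML scoring, but the design idea is the same: machine-speed grouping surfaces the cross-vector pattern, and the human analyst judges the escalated cluster.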
The stakes couldn't be higher
What sets this apart from previous evolutions in cyber threats is the potential for mass casualties. Autonomous cyber weapons aimed at critical infrastructure, including hospitals, energy grids, and transport systems, could cause physical damage on an unprecedented scale. We're no longer talking about data breaches; we're talking about AI systems that can literally endanger lives.
The window to prepare is closing fast. Mandia's one-year timeline looks optimistic when you consider that criminal organizations are already experimenting with AI-enabled attack tools built on less-controlled AI models rather than the safety-focused ones from OpenAI or Anthropic.
The bottom line
Augmenting security teams with AI agents isn't just the future; it's now. AI won't replace our nation's defenders; it will be their 24/7 partner in protecting our organizations and our nation. These systems can monitor threats around the clock, process huge volumes of threat intelligence, and respond to attacks in milliseconds.
But this partnership model only works if we start building it now. Every day we delay gives adversaries more time to develop autonomous offensive capabilities while our defenses remain largely human-dependent.
The question is no longer whether AI-powered cyberattacks will come; it's whether we will have AI-powered defenses ready. The race is on, and frankly, we are already behind.