Anyone who has watched cartoons like Tom and Jerry will recognize a common motif: an elusive target evades his formidable adversary. This game of "cat and mouse," whether literal or otherwise, involves pursuing something that ever so narrowly escapes you at every attempt.
In a similar way, evading persistent hackers is a continuous challenge for cybersecurity teams. Keeping them chasing what is just out of reach, MIT researchers are working on an AI approach called "artificial adversarial intelligence" that mimics attackers of a device or network in order to test a network's defenses before real attacks occur. Other AI-based defensive measures help engineers further fortify their systems to keep their data from being weaponized or stolen.
Here, Una-May O'Reilly, a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the AnyScale Learning For All (ALFA) group, discusses how artificial adversarial intelligence protects us from cyber criminals.
Q: In what ways can artificial adversarial intelligence play the role of a cyber attacker, and how does it portray a cyber defender?
A: Cyber attackers exist along a spectrum of competence. At the lowest end are the so-called script kiddies, threat actors who spray well-known exploits and malware in the hope of finding a network or device that hasn't practiced good cyber hygiene. In the middle are cyber mercenaries, who are better resourced and organized to prey upon enterprises with ransomware or extortion. And, at the high end, there are groups that are sometimes state-supported, which can launch the hardest-to-detect "advanced persistent threats" (or APTs).
Think of the specialized, nefarious intelligence that these attackers marshal: that's adversarial intelligence. The attackers build highly technical tools that let them break into code, they choose the right tool for their target, and their attacks have multiple steps. At each step, they learn something, integrate it into their situational awareness, and then decide what to do next. For the sophisticated APTs, they may strategically pick their target and devise a slow, low-visibility plan so subtle that its execution slips past our defensive shields. They can even plant deceptive evidence pointing to another hacker!
My research goal is to replicate this specific kind of offensive or attacking intelligence, an intelligence that is adversarially oriented (the kind of intelligence that human threat actors rely upon). I use AI and machine learning to design cyber agents and model the adversarial behavior of human attackers. I also model the learning and adaptation that characterize cyber arms races.
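As a rough illustration of what modeling an attacker as a decision-making agent can look like, here is a minimal Python sketch. The attack steps, the success probability, and the AttackerAgent class are hypothetical stand-ins invented for this example, not the group's actual agents.

```python
# A minimal, hypothetical attacker-as-agent sketch: at each stage the agent
# observes what it has learned so far, picks the next attack step, and
# integrates the outcome into its situational awareness.
import random

ATTACK_STEPS = ["scan", "exploit", "escalate", "exfiltrate"]

class AttackerAgent:
    def __init__(self):
        self.knowledge = set()            # what the campaign has learned so far

    def choose_action(self):
        # Simple policy: advance along the kill chain, moving on only once
        # the prerequisite step has succeeded.
        for step in ATTACK_STEPS:
            if step not in self.knowledge:
                return step
        return None                       # campaign complete

    def observe(self, step, succeeded):
        if succeeded:
            self.knowledge.add(step)      # integrate the new knowledge

agent = AttackerAgent()
while (action := agent.choose_action()) is not None:
    succeeded = random.random() < 0.7     # stand-in for the target's response
    agent.observe(action, succeeded)
    print(f"tried {action}: {'succeeded' if succeeded else 'blocked, retrying'}")
```

A real agent would replace the fixed step list and coin-flip outcomes with learned policies and observations from an emulated network.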
I should also note that cyber defenses are quite complicated. They have evolved their complexity in response to escalating attack capabilities. These defense systems involve designing detectors, processing system logs, triggering the appropriate alerts, and then triaging them into incident response systems. They have to be constantly alert to defend a very large attack surface that is hard to track and very dynamic. On this other side of the attacker-versus-defender competition, my team and I also invent AI in the service of these different defensive fronts.
Another thing stands out about adversarial intelligence: Tom and Jerry are both able to learn from competing with one another! Their skills sharpen and they lock into an arms race. One gets better, then the other, to save his skin, gets better too. This tit-for-tat improvement goes onwards and upwards! We work on replicating cyber versions of these arms races.
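A minimal sketch of such an arms race, assuming toy bit-vector "strategies" and an invented scoring rule rather than any real attack or defense technique: each side alternately mutates its strategy and keeps the change only if it holds up against the other.

```python
# A toy tit-for-tat arms race: attacker and defender strategies take turns
# mutating, and each keeps a change only if it performs at least as well
# against the current opponent. Bit vectors stand in for real strategies.
import random

N = 16  # strategy length

def attacker_score(attack, defense):
    # The attacker scores wherever its moves are not covered by the defense.
    return sum(a and not d for a, d in zip(attack, defense))

def mutate(strategy):
    flipped = strategy[:]
    i = random.randrange(len(flipped))
    flipped[i] = not flipped[i]
    return flipped

attack = [random.random() < 0.5 for _ in range(N)]
defense = [random.random() < 0.5 for _ in range(N)]

for rnd in range(20):
    candidate = mutate(attack)            # attacker adapts first...
    if attacker_score(candidate, defense) >= attacker_score(attack, defense):
        attack = candidate
    candidate = mutate(defense)           # ...then the defender responds
    if attacker_score(attack, candidate) <= attacker_score(attack, defense):
        defense = candidate
    print(f"round {rnd}: attacker score = {attacker_score(attack, defense)}")
```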
Q: What are some examples from our everyday lives where artificial adversarial intelligence has kept us safe? How can we use adversarial intelligence agents to stay ahead of threat actors?
A: Machine learning has been used in many ways to ensure cybersecurity. There are all kinds of detectors that filter out threats. They are tuned to anomalous behavior and to recognizable kinds of malware, for example. There are AI-enabled triage systems. Some of the spam protection tools on your cell phone are AI-enabled!
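For illustration, here is a minimal sketch of the kind of anomaly detector described, using scikit-learn's IsolationForest as a stand-in for production tooling; the two traffic features (bytes sent and connection duration) and the training data are invented for the example.

```python
# A minimal anomaly-detector sketch: train on "normal" traffic features,
# then flag outliers. IsolationForest and the two invented features
# (bytes sent, connection seconds) stand in for production tooling.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated baseline: ~500 bytes sent over ~2-second connections.
normal_traffic = rng.normal(loc=[500, 2.0], scale=[50, 0.5], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A huge, long-lived transfer (as in data theft) next to an ordinary one.
samples = np.array([[5000, 30.0], [480, 2.1]])
print(detector.predict(samples))  # -1 flags an anomaly, 1 means normal
```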
With my team, I design AI-enabled cyber attackers that can do what threat actors do. We invent AI to give our cyber agents expert computer skills and programming knowledge, making them capable of processing all sorts of cyber knowledge, planning attack steps, and making informed decisions within a campaign.
Adversarially intelligent agents (like our AI cyber attackers) can be used as practice partners when testing network defenses. A lot of effort goes into checking a network's robustness to attack, and AI is able to help with that. Additionally, when we add machine learning to our agents, and to our defenses, they play out an arms race that we can inspect, analyze, and use to anticipate what countermeasures may be needed when we take steps to defend ourselves.
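One way to picture such a sparring session, in deliberately toy form: an agent samples campaign speeds against a hypothetical rate-based detector and reports which "low and slow" variants slip through, telling defenders where the blind spot is. The alert threshold and the sampled rates below are made up for the sketch.

```python
# A toy sparring session: an adversarial agent probes a stand-in detector
# and records which attack variants evade it. The alert threshold and the
# sampled rates are hypothetical.
import random

def detector_alerts(events_per_min):
    # Stand-in defense: alert on anything faster than 10 events per minute.
    return events_per_min > 10

def adversarial_probe(trials=100):
    evasions = []
    for _ in range(trials):
        rate = random.uniform(1, 20)      # agent tries "low and slow" speeds
        if not detector_alerts(rate):
            evasions.append(rate)         # this variant slipped through
    return evasions

gaps = adversarial_probe()
if gaps:
    print(f"{len(gaps)} of 100 variants evaded detection "
          f"(rates {min(gaps):.1f}-{max(gaps):.1f} events/min)")
```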
Q: What new risks are they adapting to, and how do they do so?
A: There never seems to be an end to new software being released and new configurations of systems being engineered. With every release, there are vulnerabilities that an attacker can target. These may be examples of weaknesses in code that are already documented, or they may be novel.
New configurations pose the risk of errors or new ways to be attacked. We couldn't have imagined ransomware when we were dealing with denial-of-service attacks. Now we're juggling cyber espionage and ransomware with IP (intellectual property) theft. All our critical infrastructure, including telecom networks and financial, health care, municipal, energy, and water systems, are targets.
Fortunately, a lot of effort is being devoted to defending critical infrastructure. We will need to translate that effort into AI-based products and services that automate some of it. And, of course, we will have to keep designing smarter and smarter adversarial agents to keep us on our toes, or to help us practice defending our cyber assets.