AI isn’t a far-off technology that will arrive in the form of self-thinking “robot overlords.” It’s already here, in Alexa, Siri, and self-driving cars. AI will touch many areas of technology, but its presence will be felt most acutely in cybersecurity, on both the attack side and the defense side. Proponents argue the technology might have been able to predict and slow the spread of the infamous WannaCry malware, and they hope it can serve that purpose in the future. AI brings new opportunities to businesses and people, but the insights and power derived from it will not always be used by “good actors.”
AI is improving security tools
The various processes that spot attacks and respond to them can be improved with AI that predicts threats and covers a wide range of scenarios. For example, a new field of data-deception technology uses AI to spot activity that fits the pattern of an attack, since most intrusions behave abnormally. Automated deception tools can send out decoys to trap attackers, and are easy enough for lower-level IT staff to deploy. There are also “robo-hunters,” which proactively and intelligently search the environment for threats or exposed areas.
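The pattern-spotting described above often boils down to anomaly detection: flag activity that deviates from an established baseline. A minimal sketch, using made-up hourly login counts and a simple standard-deviation rule as a stand-in for the far richer models real deception platforms use:

```python
from statistics import mean, stdev

# Hypothetical hourly login counts for one service account (illustrative data).
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]

def is_anomalous(observation, history, threshold=3.0):
    """Flag an observation that deviates from the historical pattern
    by more than `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return abs(observation - mu) > threshold * sigma

print(is_anomalous(14, baseline))   # ordinary activity, within the baseline
print(is_anomalous(480, baseline))  # a burst typical of automated intrusion
```

A production system would learn from many correlated signals rather than one counter, but the core idea is the same: model “normal,” then alert on departures from it.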
Organizations are already beginning to use AI to bolster cybersecurity and offer more protection against sophisticated hackers. In the broadest sense, AI is needed to help manage the sheer volume of potential threats coming from IoT and other new technologies; the number and complexity of threat vectors quickly becomes unwieldy for non-AI systems to handle.
AI helps improve cybersecurity by automating complex processes for detecting attacks and reacting to breaches. This improves incident monitoring, which is crucial because the speed of detection and the subsequent response are essential in limiting damage. AI can also mean automated responses to certain attacks with no human hand involved. People will still want to review the response and make adjustments, but better AI will produce better responses and results. AI can also be used in human-behavior monitoring to detect how people enter their passwords, or how long those passwords are, in order to spot bad practices. Ideally, such human-centered tools can remove “user error,” a common entry point for attackers, from the equation.
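To make the password-practice monitoring concrete, here is a minimal sketch. The rules, thresholds, and the `COMMON_PASSWORDS` list are illustrative assumptions, not any real product’s policy, and real behavioral tools model much richer signals (typing cadence, device, location):

```python
# Illustrative deny-list; real tools check against large breach corpora.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def flag_bad_practices(password: str, previous: set) -> list:
    """Return a list of warnings for a submitted password."""
    warnings = []
    if len(password) < 12:
        warnings.append("too short")
    if password.lower() in COMMON_PASSWORDS:
        warnings.append("commonly used password")
    if password in previous:
        warnings.append("reused password")
    return warnings

print(flag_bad_practices("qwerty", {"hunter2"}))
print(flag_bad_practices("correct-horse-battery-staple", set()))
```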
Hackers can gain an edge with AI
Many of the defenses that AI will bring to the cybersecurity industry and its users can also be turned around to improve threats. For example, machine learning can produce attack scripts at a pace and level of complexity that humans simply cannot match.
AI within security attacks will also make it easier for low-level hackers to run sophisticated intrusions by simply launching an intelligent program at low cost. Hackers often succeed by scaling their operations: the more people they reach with phishing schemes, or the more networks they probe, the more likely they are to find a willing victim or gain entry. AI lets them scale to a much higher degree by automating target selection and sending out attacks in bulk. AI might also personalize each phishing attempt by cross-referencing the target’s social data and other online information, making every attack more likely to succeed.
Hackers will leverage AI to probe devices and find exploits, especially in products that are no longer updated. AI can also find potential human targets more quickly, for example by running searches that locate wealthy individuals who use unsecured devices and may be vulnerable to ransomware or personal blackmail. AI is especially useful for mining large amounts of data, a strength that will help attackers find potential targets from a global pool of internet-connected people.
The terror threat and AI
As the sophistication of AI and of hackers grows, so will the chances of the two coming together for real-world attacks. For example, hackers can tie drones together into a “swarm,” which might be rigged with explosives to carry out assassinations or terror attacks. Such an attack recently occurred in Syria, where militants attacked a Russian base with a collection of drones. AI lets hackers program these attacks more easily and link the drones together with rudimentary “intelligence.”
Other AI-based threats will involve advanced social-network mapping. For example, ISIS used monitoring systems to identify key people in various cities in an effort to target certain individuals during a fast power grab. AI-based tools that look deeper into social networks will enable terrorists to spot the right human and city targets and operate more effectively. Defense departments and cybersecurity experts will need to work hand in hand to spot such threats and develop countermeasures.
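The “key people” analysis at the heart of social-network mapping can be illustrated with basic graph centrality, the same textbook technique defenders use to understand their own exposure. The contact graph below is entirely made up:

```python
from collections import Counter

# Toy, fictional contact graph: undirected edges between people.
edges = [("ana", "bo"), ("ana", "cy"), ("ana", "dee"),
         ("bo", "cy"), ("dee", "ed")]

# Degree centrality: count each person's direct connections. This is the
# crude signal that richer social-network-mapping tools build on.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(degree.most_common(1))  # the most-connected person in the toy graph
```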
Defenders will use AI to improve monitoring with predictive analytics, while hackers use it to skirt detection tools and decoys and invent new ways to attack. Over time, the two sides will likely fight an arms race, with humans left waiting for the result as AI-powered tools react in real time and present new, intelligent ways to either intrude on or protect networks.