
How Will Machine Learning (ML) Tools Alter the Cyber Threat Landscape?

Reading time: 7 min

Artificial intelligence, or AI for short, is by no means a new phenomenon. The term has existed since the 1950s, but today's AI has gathered momentum and become far more capable than was once thought imaginable, to the point that fears of AI cyberwarfare are now a reality.

While “traditional” cyberwarfare refers to the use of the internet and technology to hurt or disrupt a particular country, be it through the use of malware, denial-of-service attacks or disinformation campaigns, AI cyberwarfare refers to the use of AI and machine learning (ML) tools to enhance these operations and make them even more powerful. 

“At the same time that the conditions for digital warfare emerged, automated information processing capabilities experienced a qualitative leap. AI can thus be found at the heart of cyberwarfare because of its perceived potential and inherently numerical nature,” wrote academic researchers Rudy Guyonneau and Arnaud Le Dez. “AI speaks the language of machines and will translate it for humans to conduct their fight within the machine space.”

Applying ML techniques for malicious purposes

Hackers are already taking advantage of machine learning to deploy malicious algorithms that can adapt, learn, and continuously improve in order to evade detection. A shift to fully AI-powered attacks therefore isn’t out of the realm of possibility — particularly when you consider that rapid advances in the technology could amplify the speed, power, and scale of future attacks.

For example, not only does AI open up more systems to cyberattacks, but it also opens up the possibility for new attack vectors such as audio or video manipulation, adding a new and potentially more sinister twist to the spread of misinformation that can have very real consequences in the physical world. 

Similarly, attacks that target AI systems could potentially offer attackers access to machine learning algorithms, along with vast amounts of data from facial recognition and intelligence collection and analysis systems. This data could be used, for example, to support surveillance missions.

While AI has yet to make it to the battlefield, there are numerous other ways in which the technology could be embraced by cybercriminals, be it automating attacks and significantly improving the targeting of victims, impersonating individuals more convincingly for more effective social engineering, or developing more virulent malware and viruses.

Exploiting vulnerabilities and abusing open-source AI tools

Worryingly, because so much AI research is open source, tools already exist that could enable attackers to start preparing for AI-powered attacks. Open-source toolsets and Linux distributions such as Kali Linux contain suites of white-hat tools that can just as easily be used maliciously. Similarly, tech-savvy individuals are increasingly learning from and developing open-source solutions to assist in hacking activities.

These open-source tools can be used for everything from exploiting websites and servers to injecting packets into wireless network traffic with the aim of intercepting and decrypting traffic. 

What’s more, machine learning systems themselves — the backbone of AI — are a growing attack surface, and it’s only a matter of time before exploit code targeting them appears. Exploiting these vulnerabilities could enable hackers to manipulate the machine learning systems’ integrity, confidentiality, and availability, all of which would be bad news for organizations of all sizes.
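To make the integrity risk concrete, here is a minimal sketch (hypothetical data, standard library only) of a data-poisoning attack: a single mislabelled record injected into the training set flips the decision of a toy 1-nearest-neighbour classifier. Real poisoning attacks are far subtler, but the principle is the same — an attacker who can tamper with training data can steer the model’s output.

```python
def dist2(a, b):
    # Squared Euclidean distance between two feature vectors
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def predict_1nn(training, x):
    # 1-nearest-neighbour: the label of the closest training sample wins
    return min(training, key=lambda sample: dist2(sample[0], x))[1]

# Hypothetical two-feature training data
clean = [((0.0, 0.0), "benign"), ((0.2, 0.1), "benign"),
         ((5.0, 5.0), "malicious"), ((4.8, 5.2), "malicious")]

# Attacker injects one mislabelled point inside the benign cluster
poisoned = clean + [((0.1, 0.1), "malicious")]

query = (0.1, 0.1)
print(predict_1nn(clean, query))     # -> benign
print(predict_1nn(poisoned, query))  # -> malicious
```

The classifier, distances, and data here are purely illustrative; the takeaway is that model behaviour is only as trustworthy as the data pipeline feeding it.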

Preventing cyberattacks and defensive AI

Thankfully, AI can also be used as a force for good. The technology presents opportunities for cybersecurity, including functionality such as dynamically and proactively adapting to sophisticated threats on a near-daily basis. It could also help cybersecurity professionals to recognize behavioral patterns in order to more quickly react to indicators of attack. 

Intelligence-driven automation can provide deeper visibility into endpoint behavior, further enhancing organizations’ ability to detect threats based on known threat signatures.
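The behavioral idea above can be sketched in a few lines. This toy example (hypothetical data, standard library only) flags endpoint activity that deviates sharply from its own baseline, using a simple z-score over hourly login-attempt counts — a crude stand-in for the far richer models production systems use.

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Return indices of observations more than `threshold` standard
    deviations away from the mean of the series."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat baseline, nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Typical hourly login attempts, then a sudden burst (possible brute force)
hourly_logins = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 90]
print(flag_anomalies(hourly_logins))  # -> [10]
```

Unlike a signature match, nothing here knows what the attack *is* — only that the observed behavior is statistically unlike the baseline, which is why behavioral detection can catch novel threats.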

Speaking at a recent GovCIO Media & Research event, Thomas Kenny, CDO of the United States Special Operations Command, said: “Some of the challenges we’re starting to see are the exponential increase in the capability of AI agents to conduct both offensive and defensive operations on top of cyber networks.

“The human solution doesn’t exist anymore. If we’re truly going to protect our cyber networks, the answer is not more bodies or a larger security operations center. The capabilities that we need to be able to employ have got to be intelligent systems that have the ability to learn, to recognize in milliseconds the potential for an attack, and to be able to make decisions.”

While so-called “offensive” AI will harness the technology’s ability to learn and adapt to create a new era of cyber-threats that are highly customizable and scalable, organizations will also have defensive AI to help them fight back — and it’s becoming increasingly clear that this will be a critical tool in the war against threat actors, both machine and human.

Learn how Bitdefender uses artificial intelligence, machine learning, and anomaly-based detection to provide real-time insights into the global threat landscape.
