Securing AI: The Next Gen of Enterprise Cybersecurity

Ericka Chickowski

March 05, 2020

Recently, a facial recognition vendor that consolidates billions of photos to fuel its artificial intelligence (AI) people-searching platform admitted to a major breach. On the surface, the incident is a fairly standard exposure of client list details. But scratch a little deeper and the problems inherent in the breach highlight some of the dangers and cyber risks lurking beneath the surface of the gigantic iceberg that is AI technology today.

Recently brought to public light by the New York Times for its secretive facial recognition app, Clearview AI provides law enforcement, government entities, banks, and other clients with the means to take a picture of an unknown person, upload it, and identify that person. That's done using an AI matching system powered by 3 billion pictures scraped from online sources. Clearview's business model already has critics complaining of dystopian practices.

The latest wrinkle in the Clearview saga is that beyond the ethical issues, the company may not be protecting its information assets very well, either. The Daily Beast reported that the firm "disclosed to its customers that an intruder 'gained unauthorized access' to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers have conducted."

While the company says that it didn't detect any compromise of its AI systems or network, the exposure at Clearview casts doubt on the robustness of the company's defenses.

“If you’re a law-enforcement agency, it’s a big deal, because you depend on Clearview as a service provider to have good security, and it seems like they don’t,” David Forscey, managing director of the public-private consortium Aspen Cybersecurity Group, told The Daily Beast.

This is a problem for a company whose entire business model is built on the scaffolding of implied informational and system integrity.

Not only does an AI system like the one Clearview runs handle extremely powerful aggregations of citizen data with wide-reaching privacy implications, but the queries made against it are also highly sensitive. More fundamentally, the decisions made from the output of those queries assume a level of infallibility in the data and the AI model used to process it. So what happens if that data or the model itself is compromised? Does the company have mechanisms in place to monitor whether they've been maliciously changed or manipulated?
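The article doesn't say what controls Clearview has in place, but a baseline tamper-detection mechanism for any AI shop is to hash training data and model artifacts after a vetted build and re-verify those hashes before the system is used. Below is a minimal sketch in Python; the file names and manifest are hypothetical, invented purely for illustration.

```python
import hashlib
import json
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large datasets and models never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_baseline(artifacts: list, manifest: Path) -> None:
    """Write a manifest of known-good hashes, e.g. right after a vetted training run."""
    manifest.write_text(json.dumps({str(p): sha256_of_file(p) for p in artifacts}, indent=2))

def verify_integrity(manifest: Path) -> list:
    """Return the artifacts whose current hash no longer matches the baseline."""
    baseline = json.loads(manifest.read_text())
    return [name for name, expected in baseline.items()
            if sha256_of_file(Path(name)) != expected]

if __name__ == "__main__":
    # Hypothetical artifact names -- substitute whatever the real pipeline produces.
    artifacts = [Path("training_data.parquet"), Path("face_matcher_model.bin")]
    # Tiny placeholder files so this sketch runs end to end on its own.
    for p in artifacts:
        p.write_bytes(b"placeholder contents")

    manifest = Path("integrity_manifest.json")
    record_baseline(artifacts, manifest)

    # Simulate tampering with the training data, then re-check.
    artifacts[0].write_bytes(b"poisoned contents")
    print("Artifacts failing integrity check:", verify_integrity(manifest) or "none")
```

Hashing only flags changes to stored artifacts; it says nothing about whether the data was poisoned before the baseline was taken, which is where the harder research problems begin.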

This idea of data and AI model poisoning is a thorny issue being tackled by a growing cohort of researchers in a field called adversarial machine learning. Most recently, FedScoop reported that the U.S. Army Research Office is pouring funds into this area. Its researchers are working to mature the kind of defensive software that can be wrapped around machine learning models and AI databases to protect them from sabotage through backdoor attacks that could mistrain and break AI systems.

"The fact that you are using a large database is a two-way street," MaryAnne Fields, program manager for intelligent systems at the Army Research Office told FedScoop. "It is an opportunity for the adversary to inject poison into the database."

The Army push is part of a growing body of research about the possibilities of adversarial machine learning. The implications are both broad and weighty.

For example, take the AI that runs autonomous vehicles. Attackers who can hack the AI behind self-driving cars can put lives in serious jeopardy. Research released several weeks ago found that adversarial machine learning techniques could fool Tesla cars with driver-assistance features into misreading traffic signs, potentially causing dangerous collisions.
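The sign-misreading research targeted camera inputs, but the underlying idea, nudging each input feature a small amount in the direction that most increases the model's error, can be sketched in a few lines. The example below is a self-contained illustration against a plain linear classifier on synthetic data, assuming numpy and scikit-learn, not a reproduction of the Tesla attack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, d=100):
    """Two classes separated by a small shift in every feature (a crude stand-in for pixels)."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, d)) + np.where(y[:, None] == 1, 0.2, -0.2)
    return X, y

X_train, y_train = make_data(4000)
X_test, y_test = make_data(1000)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Fast-gradient-sign step: for a linear model the gradient of the loss with respect to the
# input is proportional to the weight vector, so each feature is nudged by +/- epsilon.
epsilon = 0.3
w = model.coef_[0]
# Push class-1 samples against the weights and class-0 samples along them.
direction = np.where(y_test[:, None] == 1, -np.sign(w), np.sign(w))
X_adv = X_test + epsilon * direction

print(f"accuracy on clean inputs:      {model.score(X_test, y_test):.1%}")
print(f"accuracy on perturbed inputs:  {model.score(X_adv, y_test):.1%}")
```

The perturbation is small relative to the natural noise in each feature, yet accuracy collapses; in image models the same effect can be achieved with pixel or sticker changes subtle enough that a human driver would never notice.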

In other instances, MIT Technology Review reported that researchers were able to attack machine-learning algorithms that generate automated responses to email messages, using malicious text to train the models to send back sensitive data such as credit card numbers.

And it's not just direct attacks against the automated machinery of AI-backed systems that organizations will need to worry about, warns David Danks, chief ethicist at Carnegie Mellon University's Block Center for Technology & Society.

"Instead, we should worry about the corruption of human situational awareness through adversarial AI, which can be equally effective in undermining the safety, stability, and trust in the AI and robotic technologies," he wrote in a recent piece for IEEE Spectrum, warning that just providing faulty intelligence to human decision-makers could cause huge problems.

In enterprises, this could mean distorted and poisoned results or intelligence about supply chains, sales forecasting, or other business-critical matters. In public safety or defense use cases, it could be even more extreme. For example, Danks touched on the intersection of nuclear weapons and AI, which could play a role in the intelligence, surveillance, and reconnaissance (ISR) systems that inform decision makers about whether or not to use these weapons:

"The worldwide sensor and data input streams almost certainly cannot be processed entirely by human beings. We will need to use (or perhaps already do use) AI technologies without a human in the loop to help us understand our world, and so there may not always be a human to intercept adversarial attacks against those systems.

"Our situational awareness can therefore be affected or degraded due to deliberately distorted “perceptions” coming from the AI analyses."

As enterprises increasingly deploy AI-backed systems across their technology stacks, security and risk professionals are going to need a greater understanding of the risks that adversarial machine learning poses to their operations. The attack surface grows as AI swiftly moves from experimental to business-critical deployments. As a result, expect CISOs and their reports to add securing AI to their growing list of responsibilities in the coming years.

Forward-looking security leaders who want to get ahead of this problem can start by following the work of adversarial machine learning researchers. One great resource to jump-start the learning process is a recent threat modeling document written by some big thinkers at Microsoft and the Berkman Klein Center for Internet and Society at Harvard University.

Released in November 2019, Failure Modes in Machine Learning compiles information from hundreds of research efforts on adversarial machine learning published over the past couple of years. It offers a fairly comprehensive list of the types of attacks and unintentional corruptions that could cause AI systems to malfunction or otherwise compromise their confidentiality, integrity, or availability. Microsoft has used this taxonomy to change its own security development lifecycle and to get its data scientists and security engineers thinking about how to model threats against their ML systems before deploying them to production.

Sharing it was meant to get software developers, security incident responders, lawyers, and policy makers on the same page about the problem so they can start making headway on the risks to AI.

Author


Ericka Chickowski

An award-winning writer, Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. Chickowski’s perspectives on business and technology have also appeared in dozens of trade and consumer magazines, including Consumers Digest, Entrepreneur, Network Computing and InformationWeek.