
Deepfakes can compromise AI-driven industrial systems

From a cybersecurity standpoint, the use of AI and machine learning on the factory floor cuts both ways. These technologies can help improve monitoring, detection and prevention of threats and attacks, especially for Industry 4.0 endpoints. But smart manufacturing systems that rely on them can also be probed and manipulated by bad actors.

A well-known example of the vulnerability of AI-driven systems is deepfakes: faked images, videos and text created with deep learning techniques. To the human eye they can look identical to the originals; often only AI-based tools can detect the differences.

Threat actors have used this technology in attempts to manipulate public opinion, but facial recognition security systems are also vulnerable, McAfee Labs noted in a blog discussing its 2020 Threats Predictions Report. Faked images could fool these AI-driven systems into unlocking smartphones or into admitting intruders carrying fake IDs into a building.


When a machine learning model is compromised, it can misclassify examples that are only the tiniest bit different from images normally classified correctly, with differences invisible to the human eye. (Source: IBM)

So-called “adversarial machine learning,” or AML, is often perpetrated by bad actors, but cybersecurity researchers and providers also use it as a tool against them. In attackers’ hands, AML can include poisoning the data used for model training; both image recognition and natural language processing (NLP) systems are vulnerable. It can also be used to extract training data, divining industrial or company secrets.

AML can also be used to mimic valid user profiles by multiple methods, including fooling automatic speech recognition systems with audio waveforms that are 99 percent identical to an existing sound clip but that carry falsified phrases.
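To make that “99 percent identical” figure concrete, the snippet below is a minimal sketch using a synthetic sine-wave “clip” and plain bounded noise. It is not an actual attack (real adversarial audio is optimized against a specific speech recognition model); it only shows how little a waveform has to change to stay within such a similarity budget.

```python
# Minimal illustration (not an actual attack): how small a perturbation can be
# while keeping an audio waveform ~99% identical to the original. Real
# adversarial-audio attacks optimize the perturbation against a speech
# recognition model; here the "attack" is just random noise used to show the
# similarity budget.
import numpy as np

rng = np.random.default_rng(0)

sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate          # one second of audio
original = 0.5 * np.sin(2 * np.pi * 440 * t)      # stand-in for a real clip

# Perturbation bounded to 1% of the clip's peak amplitude.
epsilon = 0.01 * np.abs(original).max()
perturbation = rng.uniform(-epsilon, epsilon, size=original.shape)
adversarial = original + perturbation

# Cosine similarity between the two waveforms stays well above 99%.
cosine = np.dot(original, adversarial) / (
    np.linalg.norm(original) * np.linalg.norm(adversarial)
)
print(f"cosine similarity: {cosine:.4f}")            # ~0.9999
print(f"max sample change: {np.abs(perturbation).max():.4f}")
```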

White hat hackers and researchers can use AML to fight adversaries, and to improve AI-based technology by making models more robust, Pin-Yu Chen, chief scientist with the Rensselaer-IBM AI Research Collaboration, told EE Times. “For example, in computer vision it can help improve deep learning models based on neural networks, to generate better data and get very high-quality images,” he said.

Vulnerabilities of smart manufacturing systems

The cybersecurity challenges of smart manufacturing are many.

In Industry 4.0, also known as digital transformation, “Everyone wants access to everything: devices and data stores and applications in the cloud,” Sid Snitkin, vice president of cybersecurity services for ARC Advisory Group, told EE Times. “The whole idea is to leverage this connectivity of devices to do new things you haven’t even thought of yet. But all these connections are opening up new security holes, which can mean potentially compromised operations, because from a security perspective you don’t know where data is coming from or where it’s going to on the other end.”

Visibility is the biggest cybersecurity challenge for both smart manufacturing and AI/ML on the factory floor since it’s impossible to protect what you can’t see, according to Justin Fier, director of cyber intelligence and analytics at Darktrace. “Before implementing Industry 4.0 technologies you need to know what the security ramifications are. But we tend to deploy Industry 4.0 technologies first, and then security as an afterthought.”

Lack of visibility is especially critical to links in the supply chain. Companies such as Intel Corp. are building security into their hardware modules, said Snitkin. “But the biggest issue with devices is the software supply chain, a very non-trivial problem. The software package you’re developing uses software from other sources, but you only get alerts when the main package needs a patch.”


Federico Maggi of Trend Micro

Because industrial manufacturing systems are still designed as closed systems, they receive different types of protection than high-value enterprise targets. “Designers assume that attackers will never be able to directly connect to or directly breach those systems,” noted Federico Maggi, senior threat researcher at Trend Micro. “That may be true, but there are indirect ways an attacker can find their way through and get to the target system.”

A report released by Trend Micro in May showed that even an isolated smart manufacturing system probably includes industrial IoT devices custom-designed by external consultants as well as employees. These, in turn, contain custom-designed software that includes third-party components. “The chain of relationships from the person who designs and programs IIoT devices to the machine that ends up containing that piece is very long, and it’s easy to lose control of what’s going on in all the links of the chain,” said Maggi. “An attacker can easily inject malicious components and cause machines to malfunction by leveraging the weakest links.”

The report, Attacks on Smart Manufacturing Systems, is a security analysis, including threats and defenses, of simulated goods production in the Industry 4.0 Lab in Italy. The laboratory manufactures toy cell phones with the same basic principles used on a full-fledged smart manufacturing floor. These supply chain weaknesses were one of the report’s major findings.

AML on the factory floor

AML either targets the AI used in manufacturing and other systems, or mimics the actions of human operators and then attacks at scale, said Darktrace’s Fier. “For example, spear phishing campaigns may use NLP to emulate and falsify emails so they appear to be sent by real people.”

In smart manufacturing, machine learning is used in several areas, including anomaly detection, said Rainer Vosseler, manager of threat research at Trend Micro. “Even if you operate under an AML assumption, your data has to be good enough and trusted enough that at some point you give it to the model. Since data flowing into the system can be manipulated, an attacker can also manipulate the model.”
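As a minimal sketch of that risk, assuming made-up temperature readings and scikit-learn’s IsolationForest rather than any real ICS data or product, the snippet below trains the same anomaly detector twice: once on clean readings and once on readings an attacker has salted with values near the level they later want to hide. In this toy setup, the poisoned detector typically waves the malicious reading through.

```python
# Toy sketch of training-data poisoning against an anomaly detector.
# Assumed scenario and synthetic data, not a real ICS dataset or product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Clean training data: temperatures around 70 degrees.
clean = rng.normal(loc=70.0, scale=1.0, size=(500, 1))

# Poisoned training data: the attacker mixes in a batch of manipulated
# readings near the value they later want to go unnoticed (95 degrees).
poison = rng.normal(loc=95.0, scale=1.0, size=(100, 1))
poisoned = np.vstack([clean, poison])

suspicious_reading = np.array([[95.0]])

for name, training_data in [("clean", clean), ("poisoned", poisoned)]:
    detector = IsolationForest(contamination=0.02, random_state=0).fit(training_data)
    verdict = detector.predict(suspicious_reading)   # +1 = normal, -1 = anomaly
    print(f"{name} model: 95-degree reading ->",
          "anomaly" if verdict[0] == -1 else "normal")
```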

Several machine learning models are vulnerable to AML, even state-of-the-art neural networks, according to an IBM blog. The compromised models misclassify examples that are only the tiniest bit different from images they would normally classify correctly.
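One canonical way such barely-different examples are produced is the fast gradient sign method (FGSM). The sketch below, a toy illustration rather than a description of any system mentioned in this article, applies FGSM to a small untrained PyTorch model and a random “image”: each pixel changes by at most a small epsilon, yet that is often enough to flip the predicted class.

```python
# Minimal FGSM sketch: a tiny, bounded perturbation that pushes a model's
# prediction away from its original answer. The model and "image" are toys;
# in practice this would be a trained vision model (e.g. one behind a
# facial-recognition gate).
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28)          # toy "image" with pixels in [0, 1]
label = model(image).argmax(dim=1)        # whatever the model currently predicts

# FGSM: nudge each pixel by epsilon in the direction that increases the loss.
epsilon = 0.03
image.requires_grad_(True)
loss = loss_fn(model(image), label)
loss.backward()
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", label.item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
print("max pixel change:      ", (adversarial - image).abs().max().item())
```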

Especially in operational technology (OT), ML is very specific to the task assigned, explained Derek Manky, chief of security insights and global threat alliances at Fortinet’s FortiGuard Labs. For example, a mix of OT-specific threats still preys on Windows/x86/PC-based interfaces, alongside many ARM-based threats. “So ML models must learn and understand everything from Linux code to ARM code to RISC code, and many others,” Manky said. “An inherent problem now is, How do we connect these different models based on different OT protocols and systems or environments? This is the next generation: federated machine learning, a system analyzing all these protocols and systems or environments.”
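Federated learning itself is a well-established pattern: each site trains a model on its own local data, and only the model parameters, never the raw data, are shared and averaged into a global model. The sketch below shows the idea in its simplest FedAvg-style form with three hypothetical sites; it is a generic textbook illustration, not a description of Fortinet’s system.

```python
# Generic FedAvg-style sketch: several sites (e.g. different OT environments)
# each train a small model on local data; a server averages only the model
# parameters. Not a description of any vendor's actual system.
import numpy as np

rng = np.random.default_rng(0)

def train_local_model(features, labels, epochs=200, lr=0.1):
    """Logistic regression via plain gradient descent; returns (weights, bias)."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        logits = features @ w + b
        preds = 1.0 / (1.0 + np.exp(-logits))
        error = preds - labels
        w -= lr * features.T @ error / len(labels)
        b -= lr * error.mean()
    return w, b

# Three "sites" with differently distributed local data (never pooled).
sites = []
for shift in (-1.0, 0.0, 1.0):
    x = rng.normal(shift, 1.0, size=(200, 2))
    y = (x[:, 0] + x[:, 1] > 2 * shift).astype(float)   # site-specific boundary
    sites.append((x, y))

# Each site trains locally; the "server" only averages the parameters.
local_params = [train_local_model(x, y) for x, y in sites]
global_w = np.mean([w for w, _ in local_params], axis=0)
global_b = np.mean([b for _, b in local_params])

print("global weights:", np.round(global_w, 3), "bias:", round(global_b, 3))
```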


IBM’s Pin-Yu Chen

Some real-world damage from adversarial AI has already occurred, said IBM’s Chen. “A typical example is autonomous driving, where it’s easy to modify a stop sign and trick the system so an autonomous car doesn’t stop where it needs to be stopped.”

Because AI is being developed and implemented so quickly, users can’t stay current with what’s been developed, and what can and can’t be done, he said. “Our job is to determine this, so users can have realistic expectations of the technology, and be more cautious of the impact of deployments.” Since users can be overly optimistic about implementing AI, IBM has created new Fact Sheets that tell them what the risks are in deploying it.

Using AI to fight AI

The primary reason for using machine learning in cybersecurity is simple: it can process data far faster than any human. It’s also dynamic rather than rules-based like more traditional cybersecurity methods, so algorithms can be automated more easily and retrained much faster. Cloud service providers, for example, are incorporating ML techniques into their own cybersecurity defenses.

Some companies are partnering to produce AI-driven cybersecurity solutions tailored to specific industrial sectors. For example, Siemens said last year it is combining its OT security expertise with SparkCognition’s AI expertise in DeepArmor Industrial. The cybersecurity tool delivers antivirus, threat detection, application control, and zero-day attack prevention to remote energy endpoints in power generation, oil and gas, and transmission and distribution.

Much of the work to combat AML is being done by cybersecurity firms whose products use AI and machine learning to help improve monitoring, detection and prevention of threats and attacks, especially for endpoints such as IoT and IIoT devices. For example, Darktrace’s protocol-agnostic Industrial Immune System learns what “normal” looks like across OT, IT and IIoT environments. Its ML-powered Antigena Network “can interrupt attacks at machine speed and with surgical precision, even if the threat is targeted or entirely unknown,” according to the company’s website.

Since adversaries are definitely doing their own AML research, companies must invest in AI defenses, said Fier. “It’s no longer bleeding edge—it’s a must-have in the stack. Time to detection and mitigation used to be 200 days, but not anymore.” Because of AI’s very high processing speeds, “If an AI is working against you, chances are you’ll never see it or you’ll be so late to the game you’ll never recover,” he said. “That’s why I think AI fighting AI is the best matchup.”

Fortinet’s cybersecurity is also AI-driven. Three things are needed to guard against AML attacks and attackers, said Manky. “First, you need processing power, which isn’t much of a challenge anymore. Next, you need data—and fresh reliable data, lots of data from different sources, including the data we get from our almost six million security devices deployed worldwide. The third element is time. You really need to get ahead of the curve, especially when dealing with emerging or already-here verticals, like OT.”


Fortinet’s Derek Manky

Companies like IBM are developing better AI technologies to understand what’s causing vulnerabilities based on data collection flaws, said Chen. “We play a similar role to white hat hackers: We identify the vulnerabilities and understand the ethical impacts on the market before products are introduced.”

An adversarial attack can occur in any of the three phases of model development: collecting data, training the model or deploying it in the field, and there are different countermeasures and technologies for addressing each. IBM’s model sanitization service, for example, returns a cleaned version of a submitted model; another service provides benchmarks for model robustness.

Coming soon: AI-driven malware?

Unfortunately, making a model more robust to attacks generally means trading off performance, since more robust models are also less agile. Deep learning models are also complex and difficult to interpret. “The fact that we don’t know how a model solves a task makes it more difficult to know whether it’s secure,” said Chen. “How do we know it really learns how to solve the problem?”

Another obstacle is keeping up with the sheer volume of attacks. As with security research, “how can patches be made reliable enough, and secure enough, for a future attack?” Chen asked. One answer may be a certification process, such as the one IBM is developing. It could categorize safe regions or operating zones for an AI system, which is especially important for AI used in critical jobs.

AI-based malware may be coming soon, warned Darktrace’s Fier. “Although AI-driven malware isn’t here in full capacity yet, we’re starting to see it emerge—it’s on the near horizon,” he said. “Adversarial AI or ML is not in the wild just yet for the [industrial control system] space, as far as we know. But I envision a piece of malware that sits in your ICS environment, learning from it before making its next move. What will probably have the most impact on the industrial space is scaled-up damage.”

But so far, most attacks are using automation, not machine learning, said Fortinet’s Manky. “That’s the good news, since automation is much easier to defeat than ML. We see two million viruses a day coming into our lab, and most of them are trivial automation. But we are starting to see indicators of some ML and AI to evade security, so it’s certainly coming.”

>> This article was originally published on our sister site, EE Times.

 

