Protecting Deep Learning Systems from Adversarial Attacks

Figure 1. Using the ShapeShifter attack, researchers at Georgia Tech and Intel have shown how vulnerable self-driving cars’ computer vision systems are to adversarial attacks.
Figure 2. UnMask combats adversarial attacks (in red) by extracting robust features from an image (“Bicycle” at top) and comparing them to the features expected for the class predicted by the unprotected model (“Bird” at bottom). Low feature overlap signals an attack.

Defending Deep Learning Systems using UnMask
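To make the feature-overlap idea from Figure 2 concrete, here is a minimal, hypothetical sketch in Python. It assumes a separate model has already extracted a set of part labels from the image; the `EXPECTED_FEATURES` table, the `unmask_check` function, the 0.5 threshold, and the use of Jaccard similarity as the overlap score are illustrative assumptions, not the paper’s actual code or parameter values.

```python
# Hypothetical sketch of UnMask-style detection via feature overlap.
# Assumption: a feature extractor has already produced a set of part
# labels for the image (e.g., {"wheel", "frame", "handlebar"}).

EXPECTED_FEATURES = {
    "bicycle": {"wheel", "frame", "handlebar", "seat", "pedal"},
    "bird":    {"beak", "wing", "tail", "leg", "eye"},
}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |a ∩ b| / |a ∪ b| between two feature sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def unmask_check(extracted: set, predicted_class: str, threshold: float = 0.5):
    """Flag a prediction as adversarial when the robust features extracted
    from the image overlap poorly with the features expected for the
    model's predicted class; otherwise keep the prediction."""
    overlap = jaccard(extracted, EXPECTED_FEATURES[predicted_class])
    is_attack = overlap < threshold
    if is_attack:
        # Rectify: re-classify as the class whose expected features
        # best match what was actually extracted from the image.
        best = max(EXPECTED_FEATURES,
                   key=lambda c: jaccard(extracted, EXPECTED_FEATURES[c]))
        return is_attack, best
    return is_attack, predicted_class

# The Figure 2 scenario: bicycle parts extracted, model predicts "bird".
attack, corrected = unmask_check({"wheel", "frame", "handlebar"}, "bird")
print(attack, corrected)  # True "bicycle" — low overlap signals an attack
```

The design intuition is that an adversarial perturbation can flip a classifier’s output label, but it is much harder to also fabricate the robust building-block features (wheels, frames, handlebars) that the true object still exhibits, so a mismatch between extracted and expected features exposes the attack.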

Figure 3. Across multiple experiments, UnMask (UM) can protect deep learning systems 31.18% better than adversarial training (AT) and 74.44% better than no defense (None).
