Adversarial Objects Against LiDAR-Based Autonomous Driving Systems:
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples: carefully crafted inputs with small perturbations that aim to induce arbitrarily incorrect predictions.
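As a concrete illustration of this general notion (not the attack studied in this work), a one-step FGSM-style perturbation against an image classifier can be sketched in PyTorch as follows; `model`, `x`, `y`, and `epsilon` are hypothetical placeholders for a classifier, an input batch, its labels, and the perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """One-step FGSM: move the input along the sign of the loss
    gradient to increase the classification loss, bounded by epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Keep pixel values in a valid image range.
    return x_adv.clamp(0.0, 1.0).detach()
```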
Recent studies show that adversarial examples can pose a threat to real-world, security-critical applications: a “physically adversarial” stop sign can be synthesized such that autonomous driving cars misrecognize it as a different sign (e.g., a speed limit sign). However, these image-based adversarial examples cannot easily alter the 3D scans produced by the LiDAR or radar sensors widely equipped on autonomous vehicles.
In this paper, we reveal potential vulnerabilities of LiDAR-based autonomous driving detection systems by proposing LiDAR-Adv, an optimization-based approach that generates real-world adversarial objects capable of evading LiDAR-based detection under various conditions.
We first explore the vulnerabilities of LiDAR using an evolution-based blackbox attack algorithm, and then propose a stronger attack strategy using our gradient-based approach, LiDAR-Adv.
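To make the two strategies concrete, the sketches below convey their general flavor under stated assumptions; neither is the paper's actual implementation. The first is a minimal evolution-style blackbox loop in which `score_fn` is a hypothetical interface that renders candidate mesh vertices into a LiDAR scan and returns the detector's confidence.

```python
import numpy as np

def evolve_adversarial_mesh(score_fn, base_vertices, pop_size=50,
                            iters=200, sigma=0.01, elite_frac=0.2):
    """Blackbox attack sketch: mutate mesh vertices and keep the
    candidates that most reduce the detector's confidence."""
    n_elite = max(1, int(pop_size * elite_frac))
    center = base_vertices.copy()
    for _ in range(iters):
        # Sample a population of small random vertex perturbations.
        population = [center + sigma * np.random.randn(*center.shape)
                      for _ in range(pop_size)]
        # Query the blackbox detector; a lower score means the object
        # is closer to evading detection.
        scores = np.array([score_fn(v) for v in population])
        elite = [population[i] for i in np.argsort(scores)[:n_elite]]
        # Recombine the elite candidates into the next search center.
        center = np.mean(elite, axis=0)
    return center
```

A gradient-based counterpart additionally assumes a differentiable surrogate, `diff_score_fn`, for the same render-and-detect pipeline, so that vertex displacements can be optimized directly; the displacement bound `tau` is likewise an illustrative assumption.

```python
import torch

def optimize_adversarial_mesh(diff_score_fn, base_vertices,
                              steps=500, lr=1e-3, tau=0.05):
    """Gradient-based attack sketch: descend on a differentiable
    surrogate of the LiDAR scan-and-detect pipeline."""
    delta = torch.zeros_like(base_vertices, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Detection confidence of the perturbed object; lower is better.
        loss = diff_score_fn(base_vertices + delta)
        loss.backward()
        optimizer.step()
        # Bound vertex displacements so the object stays 3D-printable.
        with torch.no_grad():
            delta.clamp_(-tau, tau)
    return (base_vertices + delta).detach()
```

In both sketches, the search operates on object geometry rather than image pixels, which is what separates this setting from the stop-sign attacks discussed above.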
We test the generated adversarial objects on the Baidu Apollo autonomous driving platform and show that such physical systems are indeed vulnerable to the proposed attacks.
We 3D-print our adversarial objects and perform physical experiments with LiDAR-equipped cars to illustrate the effectiveness of LiDAR-Adv.