Researchers Cite New Danger With Autonomous Automobiles

A new report from researchers at the University of California, Irvine warns that the first- and second-generation LiDAR systems autonomous vehicles use for roadway navigation are vulnerable to hackers. By fooling those systems, attackers could cause autonomous vehicles to perceive objects that are not on the roadway, or to miss objects that are.

What Is a LiDAR System and How Can It Be Fooled?

According to UCI News, computer scientists and electrical engineers at UCI and Japan’s Keio University have demonstrated how lasers can fool LiDAR (Light Detection and Ranging), the system many autonomous vehicles, such as those from Waymo and Cruise, use to navigate roadways.

According to UCI News, the researchers’ findings show that lasers can “fool” LiDAR systems into perceiving “objects that are not present and missing those that are,” potentially leading to undue and unsafe braking or collisions. Presenting at the Network and Distributed System Security (NDSS) Symposium in San Diego last week, the study’s lead author, Takami Sato, a UCI Ph.D. candidate in computer science, described the potential dangers of the “spoofing attacks” he and his colleagues found across nine LiDAR systems.

Their findings indicate that later-generation systems are as vulnerable to these attacks as the older ones. “To date, this is the most extensive investigation of LiDAR vulnerabilities ever conducted. Through a combination of real-world testing and computer modeling, we were able to come up with 15 new findings to inform the design and manufacture of future autonomous vehicle systems,” Sato said at the symposium.

Fake Object Injection

To test first-generation LiDAR systems, the researchers used a “fake object injection” attack, which fools LiDAR into sensing a pedestrian or another vehicle when nothing is there, signaling to the self-driving car’s onboard computer that a hazard lies in its path. The danger is that this false hazard can trigger unsafe driving decisions, such as emergency braking when the road is actually clear.
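As a rough illustration of the mechanism (a minimal Python sketch, not the researchers’ attack code: the point cloud, the corridor geometry, and the detection threshold are all invented for this example), a handful of injected points is enough to make a naive detector report a phantom pedestrian dead ahead:

```python
import numpy as np

def detect_obstacles(points, max_range=30.0, min_points=20):
    """Naive detector: flag a hazard if enough returns fall inside
    the vehicle's forward corridor within braking range."""
    in_corridor = (
        (points[:, 0] > 0) & (points[:, 0] < max_range)  # ahead of the car
        & (np.abs(points[:, 1]) < 1.5)                   # within the lane
    )
    return in_corridor.sum() >= min_points

rng = np.random.default_rng(0)

# Genuine scan of an empty road: returns scattered off to the side.
road = np.column_stack([
    rng.uniform(0, 50, 500),    # x: forward distance (m)
    rng.uniform(3, 10, 500),    # y: lateral offset (m), outside the lane
    rng.uniform(-1, 1, 500),    # z: height (m)
])
print("empty road, hazard detected:", detect_obstacles(road))        # False

# Spoofed pulses: a tight, pedestrian-sized cluster injected 10 m ahead.
phantom = np.column_stack([
    rng.normal(10.0, 0.1, 40),  # x
    rng.normal(0.0, 0.2, 40),   # y
    rng.uniform(0.0, 1.8, 40),  # z
])
attacked = np.vstack([road, phantom])
print("spoofed scan, hazard detected:", detect_obstacles(attacked))  # True -> emergency brake
```

Forty well-placed fake returns among more than five hundred genuine ones are enough to flip the decision, which is why a perception stack that trusts every return is so exposed.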

Sato said that first-generation LiDAR systems are particularly vulnerable to these “fake object injection” attacks: “This chosen-pattern injection scenario works only on first-generation LiDAR systems; newer-generation versions employ timing randomization and pulse fingerprinting to combat this line of attack.”
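Those two countermeasures are easy to picture in code. In the hedged sketch below (all timing constants are invented, not taken from any real LiDAR unit), the sensor fires pulses on a secretly jittered schedule and only trusts echoes that land in a listening window tied to one of its own pulses, so an attacker firing blind on a fixed cadence is mostly ignored:

```python
import random

random.seed(1)
LISTEN_WINDOW_NS = (100, 400)  # accept echoes 100-400 ns after one of our
                               # pulses, i.e. objects roughly 15-60 m away

def fire_pulses(n, base_interval_ns=2_000, jitter_ns=1_000):
    """Emit pulses on a randomized, secret schedule instead of a fixed cadence."""
    t, times = 0, []
    for _ in range(n):
        t += base_interval_ns + random.randint(0, jitter_ns)
        times.append(t)
    return times

def accepted(return_time_ns, fired_times):
    """Trust a return only if it falls in the listening window of some pulse."""
    return any(LISTEN_WINDOW_NS[0] <= return_time_ns - t <= LISTEN_WINDOW_NS[1]
               for t in fired_times)

fired = fire_pulses(10)

# A genuine echo 200 ns after our fourth pulse (an object ~30 m out): accepted.
print(accepted(fired[3] + 200, fired))  # True

# An attacker firing blind every 2 microseconds cannot predict the jitter,
# so most of its pulses miss every listening window.
spoofed = [i * 2_000 + 200 for i in range(1, 11)]
hits = sum(accepted(s, fired) for s in spoofed)
print(f"{hits}/10 spoofed pulses landed in a valid window")
```

Pulse fingerprinting pushes the same idea further by stamping each outgoing pulse with a signature the receiver can verify, rather than relying on timing alone.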

However, the UCI and Keio researchers discovered that next-generation LiDAR systems are not immune to other attacks. Using a custom laser-and-lens device, they were able to hide up to five cars from a LiDAR system’s sensors. Qi Alfred Chen, a senior co-author of the study and an assistant professor of computer science at UCI, said at the symposium, “The findings in this paper unveil the unprecedentedly strong attack capabilities on LiDAR sensors, which can allow direct spoofing of fake cars and pedestrians and the vanishing of real cars in the AV’s eye. These can be used to directly trigger various unsafe AV driving behaviors such as emergency brakes and front collisions.”
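The object-removal side of the attack can be sketched just as simply (again, the detector and all numbers are invented for illustration): if the attacker’s laser overwhelms the receiver across an angular sector, returns in that sector are lost and a real car ahead vanishes from the scan:

```python
import numpy as np

def detect_car_ahead(points, min_points=30):
    """Same naive idea: enough returns in the forward corridor means 'car ahead'."""
    ahead = ((points[:, 0] > 0) & (points[:, 0] < 40)
             & (np.abs(points[:, 1]) < 1.5))
    return ahead.sum() >= min_points

rng = np.random.default_rng(2)

# A real car 20 m ahead produces a dense cluster of returns.
car = np.column_stack([rng.normal(20.0, 0.5, 200),   # x: forward (m)
                       rng.normal(0.0, 0.8, 200)])   # y: lateral (m)
print("clean scan, car detected:", detect_car_ahead(car))      # True

# Removal attack (simplified): the attacker's laser saturates the receiver
# over a sector, so every return within +/-5 degrees of azimuth is dropped.
azimuth = np.degrees(np.arctan2(car[:, 1], car[:, 0]))
jammed = car[np.abs(azimuth) > 5.0]
print("jammed scan, car detected:", detect_car_ahead(jammed))  # False -> collision risk
```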

It’s an eye-opening study that may shed some light on why robotaxis have consistently struggled to detect objects and vehicles on the road.
