
EU Agency for Cybersecurity Says Autonomous Vehicles Highly Vulnerable to Various Cybersecurity Challenges

The EU Agency for Cybersecurity (ENISA) and Joint Research Centre (JRC) released a report warning that autonomous vehicles carry serious cybersecurity risks.

The report noted that the use of artificial intelligence (AI) in autonomous vehicles could endanger road users’ and pedestrians’ lives.

The “Cybersecurity Challenges in the Uptake of Artificial Intelligence in Autonomous Driving” report analyzes the cybersecurity risks connected to artificial intelligence (AI) in autonomous vehicles and provides recommendations for mitigating them.

ENISA describes itself as the “center of network and information security expertise for the EU, its member states, the private sector, and Europe’s citizens.”

Cybersecurity challenges facing autonomous vehicles

The European agencies’ threat model categorized cybersecurity risks connected to artificial intelligence into unintentional and intentional software and hardware vulnerabilities.

“The increased uptake of AI technologies has further amplified this issue with the addition of complex and opaque ML algorithms, dedicated AI modules, and third-party pre-trained models that now become part of the supply chain,” the report notes.

Intentional threats originate from the malevolent exploitation of AI and ML vulnerabilities and limitations to cause harm. Threat actors could also introduce new vulnerabilities to expand the attack landscape for maximum impact.

Unintentional harm stems from the limitations, malfunctioning, and poor design of AI models. In either case, ENISA and the JRC say that cybersecurity challenges in autonomous vehicles pose serious risks.

“Cybersecurity risks in autonomous driving vehicles can have a direct impact on the safety of passengers, pedestrians, other vehicles, and related infrastructures. It is therefore essential to investigate potential vulnerabilities introduced by the usage of AI,” the report states.

To this end, the report notes that autonomous vehicles are susceptible to adversarial machine learning techniques such as evasion and poisoning attacks, which can be used to spoof pattern and facial recognition systems.

Evasion attacks manipulate the data fed into a system at inference time to alter its output to the attacker’s benefit. Poisoning attacks, by contrast, corrupt the training process so that the resulting model malfunctions in ways that benefit the attacker.
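To make the evasion case concrete, the sketch below applies a fast-gradient-sign-style perturbation to a toy linear classifier. The model, the flattened 8x8 “image,” and the class names are hypothetical stand-ins invented for illustration, not anything from the report; real attacks target deep networks, but the mechanics of nudging each input feature along the gradient’s sign are the same.

```python
# Minimal evasion-attack sketch in the spirit of the fast gradient sign method
# (FGSM). The linear "model", its weights, and the flattened 8x8 "image" are
# hypothetical stand-ins for a real traffic-sign recognizer.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class model: class 0 = "stop", class 1 = "speed limit".
W = rng.normal(size=(2, 64))   # weights over a flattened 8x8 input
x = rng.normal(size=64)        # a benign input

def predict(v):
    return int(np.argmax(W @ v))

clean_class = predict(x)
target = 1 - clean_class  # the class the attacker wants

# Gradient of the margin (target logit minus current logit) w.r.t. the input;
# for a linear model this is just the difference of the two weight rows.
grad = W[target] - W[clean_class]

# FGSM step: move every pixel by epsilon in the direction of the gradient's
# sign. Small per-pixel changes accumulate into a large shift in the logits.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad)

print("clean prediction:      ", clean_class)
print("adversarial prediction:", predict(x_adv))
print("max per-pixel change:  ", np.max(np.abs(x_adv - x)))
```

The point of the example is that each individual pixel changes by at most epsilon, yet the classifier’s decision flips; a perturbation that small can be visually imperceptible on a real sign.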

“The growing use of AI to automate decision-making in a diversity of sectors exposes digital systems to cyberattacks that can take advantage of the flaws and vulnerabilities of AI and ML methods,” the report authors say. “Since AI systems tend to be involved in high-stake decisions, successful cyberattacks against them can have serious impacts. AI can also act as an enabler for cybercriminals.”

Autonomous vehicles are also vulnerable to cybersecurity challenges affecting physical sensors, controls, and their connection mechanisms, according to the joint report. The most notable cybersecurity challenges associated with physical components include:

  • Sensor jamming, blinding, spoofing, or saturation: Attackers could blind, jam, or spoof sensors to feed the AV’s artificial intelligence models wrong or incomplete data, undermining the models’ training and decisions (a defensive cross-check is sketched after this list).
  • DDoS attacks: Hackers could execute distributed denial-of-service attacks that blind the vehicle to the outside world, interfering with autonomous driving and causing the vehicle to stall or malfunction.
  • Manipulation of the autonomous vehicle’s communication equipment: Attackers could hijack communication channels and manipulate sensor readings, causing the vehicle to misinterpret road messages and signs.
  • Information disclosure: Autonomous vehicles store large amounts of sensitive personal and AI-related data. Attackers could breach an AV to access this information.
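One common mitigation for the sensor attacks above is redundancy: cross-checking independent sensors and flagging implausible disagreement. The sketch below is a minimal illustration of that idea; the sensor fields, thresholds, and readings are assumptions made up for the example, not values from the ENISA-JRC report.

```python
# Hedged sketch of a common mitigation for sensor jamming/spoofing: cross-check
# redundant sensors and flag implausible disagreement. Field names, thresholds,
# and readings are illustrative assumptions, not values from the report.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    gps_speed_mps: float    # vehicle speed derived from GPS
    wheel_speed_mps: float  # vehicle speed from wheel odometry
    lidar_range_m: float    # distance to nearest obstacle per lidar
    radar_range_m: float    # distance to nearest obstacle per radar

def plausibility_flags(f: SensorFrame,
                       speed_tol_mps: float = 3.0,
                       range_tol_m: float = 5.0) -> list:
    """Return human-readable anomaly flags for one sensor frame."""
    flags = []
    if abs(f.gps_speed_mps - f.wheel_speed_mps) > speed_tol_mps:
        flags.append("speed sources disagree (possible GPS spoofing)")
    if abs(f.lidar_range_m - f.radar_range_m) > range_tol_m:
        flags.append("lidar/radar disagree (possible blinding or spoofing)")
    return flags

# A frame in which a spoofed GPS feed disagrees with wheel odometry.
frame = SensorFrame(gps_speed_mps=41.0, wheel_speed_mps=14.0,
                    lidar_range_m=80.0, radar_range_m=78.5)
for flag in plausibility_flags(frame):
    print("ALERT:", flag)
```

A production system would fuse many more signals and reason about time series rather than single frames, but the design principle is the same: no single sensor should be trusted to drive a safety-critical decision on its own.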

“Resilient and safety-critical systems nowadays must be designed with a potential attacker’s perspective in mind,” says Ilya Khivrich, Chief Scientist at Vdoo. “This problem is especially complicated for systems reliant on machine learning (ML) algorithms, which are trained to behave properly under normal circumstances, and may respond in unexpected ways to engineered manipulation or spoofing of data they receive from the sensors. This is a challenging gap to bridge, and we believe that new tooling will be required to cope with these issues.”

Recommendations for mitigating autonomous vehicle cybersecurity risks

The joint report on AVs’ cybersecurity challenges provides recommendations for the safe use of AI in autonomous vehicles. The ENISA-JRC report recommends that manufacturers adopt security-by-design approaches to guarantee AI security on the roads. Its key recommendations include:

  • Systematic security validation of AI models and data by collecting large amounts of data and conducting risk assessments on AI models and algorithms (a toy validation harness is sketched after this list).
  • Addressing AI supply chain cybersecurity challenges through compliance with AI security regulations and by sharing responsibility across the supply chain, from developers and manufacturers to end users and third-party service providers.
  • Incident handling, AI-related vulnerability discovery, and lessons learned: stakeholders are advised to simulate various attack scenarios, conduct drills, and establish cybersecurity incident handling and response teams.
  • Addressing the automotive industry’s limited capacity and expertise in AI cybersecurity by building teams that span the cybersecurity and ML fields. Dedicated courses would also help bridge the AI skills gap in the automotive sector.
  • An end-to-end, holistic approach to integrating AI cybersecurity with traditional cybersecurity principles, including investment in R&D, proper governance of AI cybersecurity policy, and the adoption of an AI cybersecurity culture across the automotive sector.
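As a rough illustration of the first recommendation, the sketch below re-scores a model on perturbed copies of its validation data and tracks how far accuracy degrades. The model and data are synthetic placeholders, and random sign-noise is only a smoke test, not a full adversarial evaluation; a real validation pipeline would use dedicated attack tooling and curated test suites.

```python
# Rough sketch of systematic robustness validation: re-score the model on
# perturbed validation data and track accuracy drift. The "model" and data are
# synthetic placeholders; random sign-noise is a smoke test, not a real attack.
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(3, 32))        # a trained 3-class linear "model"
X = rng.normal(size=(500, 32))      # validation inputs
y = np.argmax(X @ W.T, axis=1)      # labels the model gets right on clean data

def accuracy(inputs):
    return float(np.mean(np.argmax(inputs @ W.T, axis=1) == y))

print(f"clean accuracy: {accuracy(X):.2%}")
for eps in (0.1, 0.5, 1.0):
    noisy = X + eps * np.sign(rng.normal(size=X.shape))
    print(f"accuracy under perturbation {eps}: {accuracy(noisy):.2%}")
```

Tracking accuracy as a function of perturbation size, rather than a single clean-data score, is what turns ordinary model evaluation into a security-oriented risk assessment.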

Successful autonomous vehicle AI exploits in the past

The agencies noted that various techniques have been used in the past to exploit the AI systems in driverless cars.

Deceiving Autonomous caRs with Toxic Signs (DARTS) techniques deceive an autonomous vehicle’s traffic sign recognition system.

For example, Tesla cars were tricked into accelerating past the speed limit when researchers extended the middle line of the “3” on a 35 mph road sign so that it read 85 mph.

Israeli researchers from Ben-Gurion University of the Negev demonstrated how to trick Tesla’s autopilot system using split-second images.

Similarly, the Tesla Model 3 was affected by a cross-site scripting (XSS) vulnerability that leaked the car’s vital information.

Tencent researchers also tricked Tesla’s autopilot into swerving into the wrong lane using stickers.

Researchers also used spray paint to trick an autonomous car into misreading a stop sign as a speed limit sign.

“The ENISA report specifically discusses how AI-driven autonomous driving systems can be tricked into not recognizing or misrecognizing traffic, road conditions, or signs,” says Paul Bischoff, privacy advocate and research lead at Comparitech. “Autonomous vehicles use painted lines on roads to stay in lane, for example. An attacker could paint false lines on a road or vandalize traffic signs to interfere with the AI.”

However, Bischoff says that the lack of a financial motive to hack autonomous vehicles would discourage such attacks.

However, threat actors could blackmail vehicle occupants into paying a ransom by taking over the vehicle, rendering it inoperable, and threatening to cause an accident. It’s almost certain that threat actors will invent tricks to monetize cyberattacks on autonomous vehicles.