Waymo has announced plans to test self-driving cars in 10 more U.S. cities, including San Diego and Las Vegas, in 2025. After years of slow progress and technical roadblocks, this expansion signals that autonomous vehicle (AV) technology is finally gaining ground.
But not everyone’s on board. A recent AAA survey revealed that 6 in 10 U.S. drivers are scared to ride in a self-driving car. This fear is holding back adoption, even as companies like Waymo, Tesla, and Cruise continue testing and expanding AV programs across the country.
With this in mind, The General decided to look into the safety of autonomous vehicles using real-world data from the National Highway Traffic Safety Administration (NHTSA), the U.S. Department of Transportation, and The Robot Report. This article breaks down how the technology works, crash rates, disengagement factors, and what problems still need solving.
How Autonomous Vehicle Technology Works
Autonomous vehicles use a mix of sensors, machine learning, and real-time decision-making to drive without human input, but not all systems are created equal. The NHTSA makes an important distinction between Automated Driving Systems (ADS) and Advanced Driver Assistance Systems (ADAS). ADS controls the car entirely in specific conditions, while ADAS supports human drivers with features like lane-keeping or automatic braking.
Self-driving cars use radar, lidar, cameras, and onboard computers to monitor their surroundings and respond quickly to what’s happening on the road. These tools help the vehicle track traffic, road conditions, and obstacles. Artificial intelligence takes in all that information and makes fast decisions, similar to how a human driver would react. Even with all that tech, companies like Waymo still rely on human drivers during testing as part of their safety practices. These drivers are trained to jump in if something goes wrong.
Waymo’s autonomous vehicles have driven over 56 million miles in the U.S. so far. Data from these experiences helps refine the technology, improve safety protocols, and build trust with the public and regulators.
Safety Performance: What the Data Shows
According to the NHTSA Standing General Order Crash Reporting Database, AVs were involved in over 3,900 crashes from 2019 through mid-2024. Of those, 496 resulted in injuries or fatalities. Here’s how those numbers have grown since 2019:
- 2019: 4 crashes
- 2020: 25 crashes
- 2021: 641 crashes
- 2022: 1,450 crashes
- 2023: 1,353 crashes
- 2024 (Jan–June): 473 crashes
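The yearly figures above can be tallied to check the cumulative total — a quick sketch using only the counts from the list:

```python
# Yearly AV crash counts reported to NHTSA (from the list above).
crashes_by_year = {
    2019: 4,
    2020: 25,
    2021: 641,
    2022: 1450,
    2023: 1353,
    2024: 473,  # January–June only
}

total = sum(crashes_by_year.values())
print(total)  # 3946 — consistent with the "over 3,900 crashes" figure
```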
As self-driving cars log more miles, the big question isn’t how many crashes they’re involved in, but how often. Raw numbers don’t tell the whole story, especially as more AVs hit the road.
From 2021 to 2023, self-driving cars had a much higher crash rate per 1,000 vehicles than human-driven ones. The gap has narrowed, dropping from 85.5 crashes per 1,000 vehicles in 2022 to an estimated 35.6 in 2024, but that’s still almost double the human driver rate, which hovered near 20 per 1,000 vehicles.
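The per-1,000-vehicle figures cited here come from a simple normalization of crash counts by fleet size. A minimal sketch of that calculation, using a hypothetical fleet size purely for illustration (the underlying vehicle counts are not published in this article):

```python
def crash_rate_per_1000(crashes: int, vehicles: int) -> float:
    """Crashes per 1,000 vehicles on the road."""
    return crashes / vehicles * 1000

# The 2022 AV crash count (1,450) is from the NHTSA data above; the
# fleet size of 17,000 is a hypothetical placeholder, not a reported number.
rate = crash_rate_per_1000(1450, 17_000)
print(round(rate, 1))  # 85.3 — near the reported 85.5 per 1,000 for 2022
```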
AV tech still has room to grow, but it’s improving. According to Waymo’s safety impact report, their self-driving cars have significantly reduced the rates of crashes involving injuries, airbag deployments, and police reports compared to human drivers in Phoenix and San Francisco.
Disengagements: When Humans Take Over
Even the most advanced self-driving cars aren’t fully self-sufficient yet. A key measure of progress for AVs is the disengagement rate: how often the vehicle hands control back to a human, or a human safety driver decides to take over. These moments are important for spotting weak points in the technology.
A recent U.S. Department of Transportation report breaks down the most common reasons AVs disengage. The most common causes included:
- Control errors. The car failed to execute the driving plan correctly, such as turning, stopping, or steering.
- Planning failures. The system couldn’t figure out a safe, legal path to keep driving.
- System issues. The car didn’t perform the way it was supposed to during regular driving conditions.
- Perception gaps. The AV struggled to detect nearby objects or traffic accurately.
- Prediction errors. The system couldn’t correctly forecast how other drivers or pedestrians would move.
Ongoing Issues and Challenges
One of the biggest challenges for self-driving cars is handling complex, unpredictable situations. Things like human behavior, construction zones, and bad weather can throw them off. They also struggle with subtler moments, like reading signals from a traffic cop or reacting to drivers who break the rules. These situations can be difficult to account for in advance, which makes it hard to guarantee safety in every scenario.
Today’s AI isn’t ready for everything the road can throw at it. That’s why companies like Waymo are putting so much time and energy into making it smarter. Waymo is building AI models that don’t just follow the rules. They’ll also be designed to see and react to the world around them more like a human would. These models aim to combine driving experience with broader reasoning skills, so the system recognizes patterns, predicts how others might behave, and makes smarter decisions in real time.
But there’s still the problem of AV data collection and reporting. Safety and disengagement reports vary from one manufacturer, state, and testing program to the next. A lack of standardization or clear benchmarks makes it difficult to compare performance results and get a clear picture of what’s working, what’s not, and how safe these vehicles actually are.
Moving Forward: Trust, Testing, and Transparency
No longer a futuristic concept, self-driving cars are here, and they’re improving. AVs have logged millions of miles and learned from thousands of incidents. They’ve also started to narrow the safety gap with human drivers, but they’re not there yet when it comes to fully replacing people behind the wheel.
Disengagements, perception errors, and planning failures are still common. To move forward, the industry needs transparent safety data and standardized reporting. Without that, it’s hard to know which systems are truly getting better and which just look good on paper.
Public trust also matters. Most drivers still feel uneasy about self-driving cars. Building confidence will take more real-world testing, smarter regulations, and continual tech improvements, not just promises from AV companies.
Autonomous driving isn’t perfect, but it’s evolving fast. If the industry can balance innovation with responsibility, the next few years will be a turning point.
Methodology
This study leveraged the NHTSA CrashStats, NHTSA ADS-Equipped Vehicle Crashes, and The Robot Report to compare the accident rate of human driving vs. automated vehicles. Additionally, it referenced the 2023 U.S. Department of Transportation Highly Automated Systems Safety Center of Excellence report to understand the most common ADS disengagement causal factors.
This story was produced by The General and reviewed and distributed by Stacker.