Myth-Busting Self-Driving Cars
Separating Facts from Fear in the Conversation About Safer Roads
When we first mentioned self-driving cars, some readers expressed concern. That’s understandable — headlines about autonomous vehicle crashes tend to grab attention, even though human drivers cause the vast majority of accidents.
So let’s take a closer look.
What Is a “Self-Driving” Car, Really?
Most new cars already include driver-assist features, such as:
Lane-keeping alerts
Emergency braking
Adaptive cruise control
These tools are steps toward autonomy but still require human oversight.
The U.S. Department of Transportation uses the SAE scale, which defines six levels of driving automation (0 through 5):
Levels 1–2: Driver assistance and partial automation (most vehicles on the road today)
Levels 3–4: Conditional and high automation (used in testing fleets and limited robotaxi services)
Level 5: Full autonomy, with no driver needed at all (not yet commercially available)
So... Are They Safer?
In many cases, yes — especially in reducing crashes caused by human error.
Here’s a comparison:
Waymo's autonomous vehicles have logged well over 10 million miles on public roads, and the company reports that most of the incidents they were involved in were caused by other drivers.
Tesla reports roughly one crash for every 4.85 million miles driven with Autopilot engaged (a system that still requires driver attention), which the company says is nearly 10 times better than the national average.
Important context:
Tesla’s crash data is self-reported.
Waymo’s data is independently verified.
Tesla Autopilot is not full self-driving — it assists, but doesn’t replace, a human driver.
The Bottom Line
The real danger isn’t machines — it’s distraction, speeding, and human mistakes.
If smart technology can help prevent even a fraction of those, it’s worth serious attention.
At Hunter’s Fund, we support any tool or idea that saves lives — especially those of young people like Hunter.
Stay tuned for the next issue in this series, where we’ll explore how young drivers and new technologies can partner to make the road ahead a safer place for everyone.