Views expressed in opinion columns are the author’s own.
In 2018, a Tesla driver was killed after the vehicle’s Autopilot mode failed to detect a concrete barrier. That same year, a self-driving Uber test car struck and killed a pedestrian after failing to recognize her as a person crossing outside a crosswalk.
These accidents raise important concerns about the safety of autonomous vehicles. Currently, regulating these vehicles falls on the states, and rules for self-driving cars vary widely. Without a rigorous set of federal guidelines, there’s little to stop companies from sacrificing human lives to advance their technology.
Despite the horrific nature of these accidents, it’s important to note that self-driving cars — for the most part — are already relatively safe. Automation eliminates many of the human tendencies that lead to accidents; human error ultimately caused more than 90% of traffic accidents in 2018. Since self-driving vehicles aren’t prone to distraction and are constantly aware of their surroundings, they are likely significantly safer than a car with a drunk or fatigued driver behind the wheel.
However, the lack of federal regulation is still troubling because it leaves consumers and pedestrians at the mercy of these companies. Beyond anecdotal evidence, no one knows exactly how “safe” these vehicles are. Since the only federal guideline — a request for detailed safety information from manufacturers — is voluntary, efforts to better understand and regulate these self-driving vehicles have been unsuccessful. Only 16 companies have submitted a report, and the quality of the information provided has been inconsistent.
What’s worse is that these companies seem reluctant to release this pertinent information. In 2019, the policy think tank RAND Corporation collaborated with Uber to release an extensive report on the safety of autonomous vehicles. Throughout the process, however, RAND researchers remarked on the resistance they faced as they attempted to collect information; apparently, manufacturers were more concerned with guarding proprietary details than with developing a universal safety framework for these vehicles.
Without this information, there’s nothing stopping these companies from neglecting basic safety concerns in favor of technological advancement. A self-driving car should be, at the bare minimum, safer than one driven by a human; the entire purpose of automated vehicles is to make driving safer. If manufacturers cannot, or refuse to, verify this, it would be egregiously unethical to allow these cars onto the market at all, let alone with autopilot features enabled.
Further complicating matters are the six different levels of driving automation. Contrary to what the name “autopilot mode” implies, Tesla’s system currently operates at only Level 2, partial automation: the driver must remain behind the wheel, attentive and ready to take control at any time. Federal guidelines should not only increase transparency around safety measures but also implement specific regulations for each level of automation, including measures to reduce consumer confusion about what each level actually does.
The commercial hype surrounding self-driving cars — alongside the internet’s obsession with prominent advocate Elon Musk and his eccentricity — has overshadowed any reasonable skepticism about the safety and reliability of these cars. Internet clout appears to absolve these companies of blame and has lulled consumers into a false sense of security. Until the federal government decides to thoroughly investigate and regulate these vehicles, please don’t use autopilot mode as an excuse to fall asleep on the highway.