The autonomous vehicle industry has a data problem, and it is not the kind that engineers can patch with a software update. Across the sector, a pattern has taken hold: companies developing self-driving technology are collecting enormous volumes of safety-relevant information while releasing only what regulation strictly compels them to share. The phrase "a stunning lack of transparency" did not emerge from a consumer advocacy group or a congressional hearing. It came from observers embedded in the mobility beat itself, people who watch this industry daily and have grown visibly frustrated with what they cannot see.

This matters because the public is being asked to share roads with vehicles whose safety records are, in meaningful ways, opaque. California's Department of Motor Vehicles requires AV companies to report disengagements, those moments when a human must take over from an automated system, as well as collisions. But the reporting frameworks are inconsistent enough that comparisons between companies are nearly meaningless. A disengagement on a quiet suburban street and one on a rain-slicked freeway interchange carry vastly different implications, yet they can appear as equivalent data points in a public report. The numbers exist. The context often does not.
To understand why transparency is so thin, it helps to follow the incentives. AV companies are simultaneously technology firms, regulatory subjects, and competitors in a race where perceived safety leadership is a market asset. Releasing granular safety data creates multiple risks from a corporate strategy standpoint: it can hand competitors insight into where a system struggles, it can attract regulatory scrutiny, and it can generate negative press coverage that erodes public trust before a product is commercially ready. The rational move, absent a legal mandate to do otherwise, is to say as little as possible while maintaining a posture of openness.
This is not unique to autonomous vehicles. Pharmaceutical companies, aviation manufacturers, and financial institutions have all navigated versions of the same tension between proprietary information and public safety interest. What makes AV transparency particularly consequential is the speed of deployment. Waymo is operating commercial robotaxi services in San Francisco, Phoenix, and Austin. Tesla's Full Self-Driving system is active on public roads across the country, driven by millions of consumers who paid for access to a product that federal regulators are still actively investigating. The gap between deployment scale and disclosure depth is widening, not narrowing.
The National Highway Traffic Safety Administration has taken some steps, most notably its Standing General Order, which requires manufacturers to report crashes involving automated driving systems. But the order has faced criticism for definitional ambiguities and enforcement limitations. A 2023 analysis found that Tesla accounted for the vast majority of reported incidents. That figure reflects both the company's enormous fleet size and the breadth of scenarios in which its driver-assistance system is used, which makes raw comparisons to smaller robotaxi fleets almost analytically useless without normalization.
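The normalization point is easy to see with a toy calculation. The sketch below uses entirely hypothetical fleet names, incident counts, and mileage figures, not real NHTSA data; it only illustrates why raw incident counts invert once you divide by exposure.

```python
# Illustrative sketch only: the fleets, incident counts, and mileage below
# are hypothetical placeholders, not real NHTSA figures. The point is that
# raw incident counts mean little without exposure normalization.

def incidents_per_million_miles(incident_count: int, fleet_miles: float) -> float:
    """Normalize a raw incident count by miles of driving exposure."""
    return incident_count / (fleet_miles / 1_000_000)

# Hypothetical fleets: (reported incidents, total miles driven)
fleets = {
    "large_consumer_fleet": (900, 3_000_000_000),  # many incidents, huge exposure
    "small_robotaxi_fleet": (30, 20_000_000),      # few incidents, tiny exposure
}

for name, (incidents, miles) in fleets.items():
    rate = incidents_per_million_miles(incidents, miles)
    print(f"{name}: {incidents} raw incidents, {rate:.2f} per million miles")
```

Under these made-up numbers the large fleet reports thirty times more incidents yet has one fifth the per-mile rate, which is exactly why headline counts from the Standing General Order cannot be read as a safety ranking.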
The deeper systems-level risk here is not any single accident or any single company's reporting gap. It is what happens to public trust if a serious, high-profile AV incident occurs and the subsequent investigation reveals that warning signs existed in data that was never disclosed. The aviation industry learned this lesson across decades of painful crashes and subsequent reforms. The result was a safety culture built on mandatory, standardized, and often anonymized data sharing through bodies like NASA's Aviation Safety Reporting System, a structure that allowed the industry to identify systemic risks before they became catastrophic patterns.
Autonomous vehicles have no equivalent institution. What exists instead is a patchwork of state DMV reports, NHTSA filings of variable quality, and voluntary transparency reports that companies publish on their own schedules in their own formats. If the AV industry continues to scale under these conditions, the feedback loop that would normally catch and correct systemic safety problems is running with incomplete inputs. That is not a technology failure. It is a governance failure, and it is one that the industry's own momentum is making harder to address with each passing quarter.
The irony is that genuine, standardized transparency might actually serve the long-term commercial interests of companies with strong safety records. If Waymo's data is as good as the company suggests, a rigorous independent disclosure framework would demonstrate that advantage clearly. The industry-wide reluctance to embrace such a framework suggests either that the data is more complicated than the press releases imply or that competitive anxiety is overriding strategic self-interest. Neither possibility is particularly reassuring for the millions of people sharing roads with these systems right now.
What the AV industry builds next, in terms of disclosure norms and accountability structures, will shape not just its own trajectory but the template for how society governs the next generation of AI-driven physical systems. That is a much larger design choice than it currently appears.