The California Department of Motor Vehicles released its annual cache of autonomous vehicle testing and disengagement data that, depending on how one chooses to interpret it, shows stunning progress or stagnation.
The data, which every company testing autonomous vehicles on public roads in California must submit, tells a winding and sometimes contradictory tale of growth, consolidation and priorities. The total number of autonomous miles driven in 2019 rose 40% to more than 2.87 million, thanks largely to a notable uptick in public on-road testing by Baidu, Cruise, Pony.ai, Waymo and Zoox, as well as newcomer Lyft.
And yet, the rise in total autonomous miles and permitted companies doesn't tell the whole story. While the number of companies with testing permits grew to 60 in 2019, the percentage of companies actually testing on public roads fell to about 58%. In 2018, about 62% of the 48 companies that held permits tested on public roads.
Some companies scaled back public testing in California, either to move operations out of state or to prioritize simulation. Aurora, for instance, saw its total on-road autonomous testing drop 59% to 13,429 miles. Meanwhile, Aurora ramped up its simulation efforts, conducting more than 735,000 tests per day, an increase of over 100 times from 2018.
“While on-road testing is useful for collecting targeted data and performing late stage validation of self driving systems, we find that large-scale, on-road autonomous testing is a slow, and inefficient approach to development relative to more sophisticated, virtual techniques,” Aurora co-founder and CEO Chris Urmson wrote to the DMV.
Others, like Drive.ai, no longer exist. Two companies, Roadstar.ai and Ximotors.ai, failed to submit a disengagement report and have had their testing permits revoked.
The upshot: It’s not the who-is-winning-the-race narrative many might expect or try to tell. Those kinds of rankings and comparisons are nearly impossible for a number of reasons, including that testing on public roads is conducted in areas with varying degrees of complexity. Companies also aren’t required to report testing on private roads or tracks, out of state or in simulation, all of which would provide a better assessment of an AV developer’s technology.
But the biggest issue is how companies interpret “disengagements,” a term that describes each time a self-driving vehicle exits autonomous mode because the technology failed, or each time a human safety driver takes manual control for safety reasons. Companies not only have different views of what qualifies as a disengagement, but that interpretation can change over time.
The DMV contends these reports are not intended to compare one company with another or reach broad conclusions on technological capabilities. Instead, the DMV told TechCrunch that it uses the reports for public awareness.
“From the reports we can see that as a whole, autonomous miles driven continue to increase annually, as do the number of permit holders, test vehicles and safety drivers,” a DMV spokesman wrote in an email.
Now industry grumbling over these disengagement reports is moving from behind-closed-doors lobbying to public commentary on social media and other forums.
This year, a growing number of companies, including Aurora, Cruise and Waymo, issued public statements that DMV disengagement reports don’t provide relevant insights into performance and are a poor way of measuring progress or competency.
Moments after the DMV released the disengagement reports, Waymo took to Twitter to log its concerns, noting that the report doesn’t “provide relevant insights into the capabilities of the Waymo Driver or distinguish its performance from others in the self-driving space.” Waymo also noted that most of its public road testing is outside of California in markets like Detroit and Phoenix. The “real-world driving” that Waymo does conduct in California is “predominately engineering development, and not production releases.”
Waymo’s public criticism marks a shift within the company. In previous years, Waymo celebrated its progress in glossy reports. This year, the company has become a vocal critic, even as this latest report shows a year-over-year improvement in its disengagement rate as it increased its total number of miles. Waymo drove 1.45 million miles in autonomous mode in 2019, a 200,000-mile increase from the previous year, while its disengagement rate dropped from 0.09 to 0.076 per 1,000 self-driven miles.
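For readers who want to sanity-check those figures, the per-1,000-mile rate implies a rough disengagement count. This is a back-of-the-envelope sketch using only the numbers Waymo reported; the DMV report itself lists exact counts, which this does not reproduce.

```python
# Back-of-the-envelope check: Waymo reported 1.45 million autonomous miles
# in 2019 at a rate of 0.076 disengagements per 1,000 self-driven miles.
miles = 1_450_000
rate_per_1k_miles = 0.076

# rate = disengagements / (miles / 1000), so solve for disengagements.
implied_disengagements = rate_per_1k_miles * miles / 1000
print(round(implied_disengagements))  # roughly 110 disengagements
```

The same arithmetic works in reverse: dividing a company's reported disengagement count by thousands of miles driven yields the rate the DMV reports are built on.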
Other companies as well as analysts and industry watchers echoed Waymo’s sentiments. Several weeks ago, Cruise co-founder and CTO Kyle Vogt published a blog post that argued these disengagement reports should not be a proxy for the commercial readiness or safety of self-driving cars.
This airing of grievances did not produce an alternative metric that would accurately measure competency, readiness and progress. Waymo did say in its series of tweets that it is preparing to share more on a safety framework it has developed. Vogt’s post suggested that Cruise is also working on a more comprehensive metric.
The reports have their shortfalls. However, they’re often the only window into a company’s autonomous vehicle program. Comparisons between companies might be ineffective, but examining multiple years of data from one AV developer can be helpful in connecting the dots on a business strategy or an imminent demise.
Take Cruise as an example. The company has amassed a $7.25 billion war chest, and a chunk of that capital is being poured into putting more vehicles on the road for longer periods of time. Cruise reported 228 registered autonomous vehicles in 2019, a 40% increase over the previous year. Over that same time period, Cruise’s total mileage increased by more than 85%.
Or take a look at Pony.ai. The company, which announced earlier this week that it has raised $400 million from Toyota Motor Corporation, reported 22 registered autonomous vehicles in 2019, three times more than the previous year. The startup reported 16,356 total AV miles in 2018. That figure skyrocketed to nearly 175,000 miles in 2019.
Despite all of the data, flaws and all, that these reports provide, they get no closer to revealing what metric companies use internally to determine progress and competency, and to answer the critical question: How safe is safe enough?