Over the weekend, Tesla rolled out a new feature called Safety Score that grades drivers on their driving behavior. The company will use the score to determine which customers get access to its "Full Self-Driving" beta program, which used to be invite-only but can now be requested by anyone paying $200 per month for FSD.
The concept of a driver safety score based on the actual driving habits of the car's owner is neither controversial nor particularly novel. It is the same idea behind Progressive's Snapshot tool, which has been used to offer discounts to safer drivers since 2008. But, in a tremendous irony considering how often Tesla refuses to disclose basic safety-related information, Tesla has decided to be remarkably transparent about the Safety Score, explaining exactly how it is calculated and letting owners see their score in real time.
This has had the entirely predictable effect of gamifying driving, which would be all well and good if the incentives were aligned with actually driving safely. Unfortunately, that isn't turning out to be the case. One driver said he's improving his score by "not braking for the cyclist who crossed against the red in an intersection" and "going around the block a few times before going home."
The five factors Tesla is using for the Safety Score are: forward collision warnings per 1,000 miles, hard braking, aggressive turning, unsafe following, and forced Autopilot disengagement (speeding is conspicuously absent from the list). And, while insurance companies typically reward driving less—since the less often one is on the road the less likely a crash is to occur—the Safety Score is incentivizing some drivers to spend even more time on the road.
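The mechanics of this kind of score can be sketched in a few lines. To be clear, this is a hypothetical illustration: the five factor names come from Tesla's published list, but the weights and the 100-point scoring curve below are invented for demonstration and are not Tesla's actual formula.

```python
# Hypothetical sketch of a Safety Score-style rating. Factor names follow
# Tesla's published list; the weights and scoring curve are made up here.

def safety_score(fcw_per_1k_miles: float,
                 hard_braking_pct: float,
                 aggressive_turning_pct: float,
                 unsafe_following_pct: float,
                 forced_disengagements: float,
                 weights=(1.0, 2.0, 2.0, 2.0, 5.0)) -> float:
    """Map five penalty factors to a 0-100 score (higher is 'safer')."""
    factors = (fcw_per_1k_miles, hard_braking_pct, aggressive_turning_pct,
               unsafe_following_pct, forced_disengagements)
    penalty = sum(w * f for w, f in zip(weights, factors))
    return max(0.0, 100.0 - penalty)

# Note what is *not* an input: speed. Under any scheme built only on these
# five factors, a speeding driver takes no hit to the score.
print(safety_score(1.2, 0.5, 0.8, 2.0, 0))
```

The structural point holds regardless of the exact weights: whatever isn't a factor, like speeding or total time on the road, is invisible to the score.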
On top of that, the Tesla score, at least for now, will be a rolling average over 30-day usage windows. That is a very short period during which a driver has to stay on their best behavior. By comparison, Progressive requires the use of a tracking device or app for at least six months.
To be sure, Tesla's Safety Score is based on more technologically advanced tracking than a basic insurance tool. But technologically advanced doesn't always equate to better. For example, Amazon recently installed cameras in delivery vehicles to encourage safe driving habits, but they've proven buggy and punish drivers for "mistakes" they don't actually make. If those drivers were to respond to their incentives the way some Tesla drivers now are, they would stop checking their mirrors and stop slowing down at blind intersections.
Both the Amazon cameras and Tesla's safety score are prime examples of Goodhart's Law, originally an obscure theory on monetary policy from the 1970s that was generalized in 1997 by anthropologist Marilyn Strathern to observe: "When a measure becomes a target, it ceases to be a good measure." Tesla has effectively turned safety measures into targets, which is how someone driving a Tesla can conclude not braking for a cyclist is actually the better choice.
Like all things related to Tesla's self-driving ambitions, the claim is not that the Safety Score works well now, but that it is still in beta and will one day be great. "Very much a beta calculation," Musk tweeted upon its release. "It will evolve over time to more accurately predict crash probability." The only thing more suspect than turning a measure into a target is turning it into a moving target.