In a February 4, 2016 letter, the National Highway Traffic Safety Administration gave Google an important break in its quest to bring self-driving cars to the marketplace.
This project had been stymied by legal concerns about just who should be regarded as the "driver" of such vehicles, when they aren't being directly piloted by any of their passengers. The answer, according to the NHTSA, is that the driver is the car itself, and that Google's cars "will not have a 'driver' in the traditional sense that vehicles have had drivers during the last more than one hundred years."
The Google car, we are told, will revolutionize transportation and improve all our lives (while also, presumably, making a lot of money for Google). But as is often the case with new technologies, the quest for a self-driving car demonstrates the ways in which individualized, free market solutions aren't enough. It takes collective investments in infrastructure to enable us to live better through technology.
There remain many unanswered questions about this new technology, and the NHTSA itself insists that it remains an open question whether self-driving cars can be made compatible with federal safety standards. One sticking point is that Google's current design doesn't give the rider any ability to take control of the car, even in an emergency, since it has no steering wheel and no brake pedals.
But there are problems that go beyond the caution of government bureaucrats. The first has to do with the piecemeal, individualized way in which Google wants to introduce this technology. The second concerns a common misapprehension about the introduction of labor-saving technologies, and the myths of perfect automation that accompany them.
Robot cars have become remarkably good at navigating the physical environment and maneuvering around other cars, provided that those cars are self-driving as well. But a study by two University of Michigan researchers found that when mixed with regular traffic, self-driving vehicles got into crashes at higher rates than traditional cars.
The California Department of Motor Vehicles has reported 12 accidents involving self-driving vehicles through February 2016. In all but the most recent case, the accident was ascribed to human error, not to the car software. In nine cases, another human-piloted vehicle caused an accident (by rear-ending a car at an intersection, for example), while in another two cases a human in a non-Google self-driving car assumed manual control and then got in an accident.
Boosters of automatic driving systems point to this as a "win" for companies like Google. Don't blame the technology, blame the human drivers! But others might question the logic of telling humans to subordinate ourselves to the logic of the machine, when the machine was originally supposed to be making our lives easier.
The 12th and most recent accident, in which a Google car crashed into a bus, demonstrates the basic problem that arises when driverless vehicles are introduced into regular traffic. It is not that the car's software is faulty, exactly. Rather, the problem, as Donald Norman of the University of California, San Diego told the New York Times, is that "the car is too safe." Because it is programmed to follow the rules of the road perfectly, it doesn't bend the law and react to conditions the way a human driver would, which can confuse drivers accustomed to their local human driving culture. As Samuel English Anthony puts it in his write-up of the most recent accident at Slate, Google's cars lack the "intuitive fluency" of a human driver.
These facts don't mean that the self-driving car is a hopeless project. But our experience with these cars so far suggests that it might be better and safer if the nation's transportation system could be converted to driverless cars all at once, rather than having them leak onto the roads bit by bit. That, of course, is contrary to the market-centered, individual approach that Google is taking, which sees individual consumer decisions as the way to better transit. The accident findings suggest that the issue is more systemic than that, and probably requires some kind of state-directed intervention. But once you strip out the market logic, it is no longer clear that a self-driving car is always superior to other forms of transit, such as high-speed rail, that have largely been stymied for political rather than practical reasons.
But even if this obstacle can be overcome, future cars will probably not take the purist form of Google's no-steering-wheels-or-brakes design. The company's propaganda evinces total confidence in the perfectibility of its software, and in the desirability of a black-box design that shuts out the human operator entirely. Anyone who has experienced a crashed web browser or a malfunctioning smartphone will perhaps be less willing to put such trust in our Silicon Valley overlords.
When it comes to something like operating a vehicle, automation is more about changing the relationship between human and machine, rather than removing the human entirely. Cruise control and anti-lock brakes are, after all, small steps on the way to the automated vehicle. You have to learn how to interact with and monitor such systems, as well as things like GPS navigation devices, lest you end up like the people profiled in a recent New York Times article, who mindlessly followed the computer's instructions until they found themselves plunging off a bridge or going hundreds of miles off course.
This is an old issue in the world of commercial aviation. The pilot and writer Patrick Smith has often complained about a media narrative that modern planes mostly "fly themselves," which leads to crashes being blamed on deskilled pilots who don't know what to do in an emergency. The reality, Smith says, is that monitoring and interacting with autopilots and other advanced control systems is simply a different set of skills, and one that still requires the pilot's close attention.
And while it might be possible to move toward pilotless planes, this would pose infrastructural challenges just as in the case of cars. Smith notes that this ranges "from the designing and testing [of] a whole new generation of aircraft, to an overhaul of the entire air traffic control system."
Perfect automation makes for a vivid horizon, but an unreachable one, unless we want our machines to simply feed off our sleeping bodies as in The Matrix. Our destiny is to become cyborgs—or rather, we have always been cyborgs, unique among animals due to the centrality of tool use throughout human history.
But as the self-driving car shows us, we can't expect to just become better cyborgs by ourselves, trusting the market and individual decisions to get us there. Public investment in infrastructure is part of a New Deal for the cyborg citizen, one that helps put technology in the service of improving our lives and reducing the burden of work, rather than simply enriching a few tech titans. Instead of a privatized Google car, perhaps what we need is a publicly run car-sharing service: think of Zipcar, but where the car comes to you. A car that drives itself could be in use much more often than private vehicles, which spend the vast majority of their time idle. In conjunction with things like high-speed rail, the car of the future could become part of a comprehensive and ecologically sensitive transportation system, rather than perpetuating the fantasy of fully individualized and privatized travel.