Self-driving cars: the first potentially deadly robots?

One of Google's self-driving cars. (Courtesy: Google)

Ever since Google introduced its self-driving Prius in 2010, there’s been a lot of excitement over driverless cars. Automakers from Ford to Audi to Mercedes are developing their own autonomous vehicles. But while there seems to be a lot of promise here, we need more than the manufacturers’ word before we start deploying 3,000-pound robots on public streets.

Anyone who’s ever written more than a dozen lines of code knows that you can’t test software just by using it. As the U.S. Food and Drug Administration has said of life-critical systems such as pacemakers, “testing alone cannot fully verify that software is complete and correct. … Except for the simplest of programs, software cannot be exhaustively tested.” You have to read and analyze the code. Even seemingly simple software can be deceptively complex, and buried deep in all the if-then-else statements and self-referential subroutines may be dormant errors: division by zero, stack overflows, corrupted variables and other programming nightmares that can produce catastrophic failures.
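To make that concrete, here is a deliberately simplified, hypothetical sketch (not code from any real vehicle) of how a dormant error can sail through ordinary testing:

```python
def braking_distance_meters(speed_mps, deceleration_mps2):
    """Estimate stopping distance from current speed and available deceleration.

    Hypothetical illustration only. The physics formula is correct and every
    routine test below passes, yet a dormant divide-by-zero is waiting for
    the one input that ordinary driving never produces.
    """
    return (speed_mps ** 2) / (2 * deceleration_mps2)

# Routine tests: all pass.
assert round(braking_distance_meters(13.4, 6.0), 1) == 15.0   # ~30 mph on dry pavement
assert round(braking_distance_meters(26.8, 6.0), 1) == 59.9   # ~60 mph on dry pavement

# The dormant failure: a sensor reporting zero available deceleration
# (sheer ice, or a fault) crashes the routine at the worst possible moment.
# braking_distance_meters(26.8, 0.0)  ->  ZeroDivisionError
```

Reading and analyzing the code, not just driving the car around a test track, is what catches that kind of flaw.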

And a self-driving car is more than a dozen lines of code; it’s more like tens of millions. These cars live at the intersection of two wildly complex types of software. On the one hand, they’re “expert systems” or “artificial intelligence” – software and hardware meant to exercise human-like capabilities and judgment. On the other, they’re “hard real-time systems” – software that interacts with the physical world as events unfold, where a delay can result in injury or death. Yet Google (GOOG) and the auto manufacturers want to write regulators out of the picture. The ethical dimensions are just as complex. Despite the incredible feat of engineering that driverless cars represent, teaching them to do the best thing when there are no good choices – when an accident is unavoidable – may prove especially challenging.

For humans, the rules are pretty basic: be attentive, stay sober, stay awake, don’t speed, don’t text, and don’t leave the scene of an accident. But we can and should expect more from a computer, which can react faster than a person.

Suppose, for instance, that a child darts in front of a driverless car on a one-lane road, with the car boxed in by opposing traffic on its left. Should the car swerve to its right onto an apparently clear sidewalk to avoid the child? That’s not legal, presumably, but maybe it ought to be. “Stay in your lane and hit the brakes” isn’t necessarily the best answer if the car knows that death will be the result. And unlike humans, who are forced to react on instinct, a driverless car can assess the situation quickly enough that it actually gets to make a choice.

Indeed, the car will have to make a choice – or rather, the decision will have been programmed well in advance by its engineers. Each driverless car that pulls off the lot will have already made Sophie’s choice, or a multitude of them. And as cars get better at knowing their environment, the hard decisions will multiply.
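What does “programmed well in advance” look like? Here is a deliberately toy sketch, with hypothetical names and made-up risk numbers rather than any manufacturer’s actual logic, of how the ethical choice ends up as an ordinary comparison in code:

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    kind: str            # e.g., "pedestrian", "oncoming_traffic", "clear_sidewalk"
    injury_risk: float   # estimated chance of serious injury (invented scale, 0.0 to 1.0)

def choose_maneuver(ahead: Hazard, left: Hazard, right: Hazard) -> str:
    """Pick the path with the lowest estimated injury risk.

    Hypothetical sketch. Whoever writes this comparison -- and whoever decides
    how injury_risk gets scored for a child, a pedestrian on the sidewalk, or
    the car's own occupants -- has made the ethical choice long before the
    car ever meets the child.
    """
    options = {"brake_in_lane": ahead, "swerve_left": left, "swerve_right": right}
    return min(options, key=lambda name: options[name].injury_risk)

# The child-in-the-road scenario above, with invented numbers:
print(choose_maneuver(
    ahead=Hazard("pedestrian", 0.9),        # child darting into the lane
    left=Hazard("oncoming_traffic", 0.6),   # boxed in by opposing traffic
    right=Hazard("clear_sidewalk", 0.1),    # apparently clear sidewalk
))  # prints "swerve_right"
```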

But who is making those choices, and how?

Today, the answer is unclear. It could be a lone programmer on a tight deadline, working late to finish an overdue piece of code. Or it could be a lawyer seeking to limit company liability. Yet there is a public interest at stake: these are societal decisions, played out on public roadways. And there’s already evidence that individuals don’t want companies making these decisions unilaterally.

Driverless cars are code with consequences, and the public interest needs to be represented. We believe there should be Institutional Review Boards (IRBs) set up as adjuncts to the companies. Analogous to such boards in medicine, self-driving car IRBs should be legally required to meet standards in both their composition and their operation, drawing in the relevant expertise and the major stakeholders: the companies, the public, lawyers, government and ethicists. That would bring a range of nuance and expertise to bear on these decisions.

When it comes to safety, privacy, ethics and a host of other issues, what we do today matters – and not just for cars. Self-driving cars are the first potentially deadly robots the public will meet, but they won’t be the last. Indeed, Google itself recently bought a half-dozen robotics companies – including one that builds a 6-foot-tall humanoid that looks like the Terminator – as well as Nest, a company that wants to put sensors in your home. At some point, we may need a broader solution, like a federal robotics commission.

Jonathan Handel is an entertainment and technology attorney in Los Angeles and former computer scientist. Grady Johnson is an Associate Technologist at New America’s Open Technology Institute.