There has been much debate about how to regulate safety and the initial operation of self-driving cars, and even about how to tell how safe they are. Our current “rules of the road” exist to promote safety and traffic flow. They have been built by observing all the ways in which human drivers can’t be trusted to be safe and cooperative on the road, then passing laws forbidding those behaviors, and sending out police to catch and punish offenders.
There are a billion drivers, each a different entity, so the use of the law makes sense. But there will never be more than a handful of robocar driving systems in any given area, and probably not more than a hundred or so world-wide. Unlike the human drivers, it will be possible to get representatives from each robocar system in a town or nation in one room at the same time. There, it will be possible for them to discuss, among themselves and with regulators, what the right rules are. Once the rules are agreed upon, they can also be enforced directly with those same entities.
Most of the rules of the road break down into these two goals:
- Be safe
- Share the road (i.e., do not unfairly impede others)
There are a variety of local regulations for specific streets, such as declaring one-way streets and parking zones, though usually these are created in order to support the two goals in special ways on certain streets.
This creates the potential for a vastly simpler vehicle code. One could declare such principles, and then have a sort of court or ruling body that can determine if a particular practice violates them. With a ruling given, all would implement it. If anybody didn’t, it would quickly become apparent, and enforcement could be applied as needed directly with the developer.
Game theory teaches a lesson
There is the potential for even more though, something which can happen largely in the absence of regulators. Today, we must expect that any given individual driving the roads will be selfish. We can even expect that any given brand of robocar might drive selfishly as well. But for a group, there is a way to stop that, and it comes to us from the field of game theory, its most famous problem, “The Prisoner’s Dilemma,” and that problem’s top solution, known as “tit for tat.”
We can effectively strengthen the 2nd principle to include a new concept of “Give, and you will receive more in return.”
If we suppose that we have the developers of all the robocars in an area in the room, they can discuss how they can cooperate. In the Prisoner’s Dilemma, the problem is that while cooperation is a win for both sides, if one side “defects” they get a bigger win in that particular circumstance, which makes defecting “smart.” If we take turns on the road all the time, everybody wins, but if one person cuts everybody off, they “win” but everybody else loses more. Everybody is better off if you can all agree to cooperate.
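The single-encounter logic can be written down with a standard Prisoner’s Dilemma payoff table. The specific numbers below are conventional illustrative values, not figures from this article; what matters is the ordering they encode:

```python
# Illustrative one-shot Prisoner's Dilemma payoffs (conventional values).
# Each entry maps (my move, their move) -> (my payoff, their payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # everybody takes turns: both win
    ("cooperate", "defect"):    (0, 5),  # I yield, they cut me off: they "win" big
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # nobody yields: everybody loses more
}

# In a single encounter, defecting pays more no matter what the other side does,
# which is what makes defecting look "smart"...
assert PAYOFFS[("defect", "cooperate")][0] > PAYOFFS[("cooperate", "cooperate")][0]
assert PAYOFFS[("defect", "defect")][0] > PAYOFFS[("cooperate", "defect")][0]

# ...yet mutual cooperation leaves both sides better off than mutual defection.
assert sum(PAYOFFS[("cooperate", "cooperate")]) > sum(PAYOFFS[("defect", "defect")])
```

The assertions capture the dilemma exactly: each player individually prefers to defect, but both players together prefer mutual cooperation.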
Researchers discovered that when you have multiple encounters with the chance to cooperate or not, the overall winning strategy is one named “tit for tat.” It means you cooperate by default, and presume others will, but as soon as somebody doesn’t cooperate, you remember that, and you (and those allied with you) don’t cooperate with the defector again, at least until they learn the lesson.
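Tit for tat is simple enough to sketch in a few lines. The following is a minimal simulation of the iterated game, using conventional illustrative payoff values (the strategy names and numbers are assumptions for the sketch, not from the article):

```python
# Minimal iterated Prisoner's Dilemma sketch with illustrative payoffs.
# (my move, their move) -> (my payoff, their payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def tit_for_tat(their_history):
    # Cooperate by default; after that, simply copy the opponent's last move.
    return their_history[-1] if their_history else "cooperate"

def always_defect(their_history):
    # The habitual road hog: never cooperates.
    return "defect"

def play(strategy_a, strategy_b, rounds=10):
    """Play the two strategies against each other and total their scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's history
        move_b = strategy_b(history_a)
        gain_a, gain_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + gain_a, score_b + gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation every round
print(play(tit_for_tat, always_defect))  # (9, 14): one big win, then punishment
```

Over ten rounds, two tit-for-tat players cooperate throughout and each score 30, while the always-defector gets its one big win in round one and then averages barely over a point per round thereafter, far below what steady cooperation would have earned it.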
To do this, the developers examine the possible solutions to any problem of how to share the road and find the one that is the best win for everybody. Everybody implements it, but if, while driving, they notice another car that won’t cooperate, they can note what type of car that is and share that record. After that, nobody in the “club” will cooperate any more with the defecting type of car until it cleans up its act. That means that if Tesla, Cruise, Zoox, Waymo and EvilCar are all driving a city, and an EvilCar cuts off a Cruise in violation of the cooperation agreement, then not just all Cruises but all the other cars as well will no longer play nice with EvilCars. That one brief victory for that one EvilCar would be followed by a lasting nightmare of an unfriendly road for all EvilCars, which means that EvilCar would be crazy to do it and never would.
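The shared record that makes this work is little more than a common blacklist of defecting brands. Here is one minimal sketch of such a “club” ledger; the class name, method names, and brand names are invented for illustration:

```python
# Hypothetical sketch of a shared cooperation "club" record. Any member can
# report a brand that broke the agreement; from then on every member withholds
# cooperation from that brand until it cleans up its act and is cleared.

class CooperationClub:
    def __init__(self):
        self.defectors = set()  # brands currently flagged by any member

    def report_defection(self, brand):
        # One member's bad encounter is shared with the whole club.
        self.defectors.add(brand)

    def clear(self, brand):
        # The brand "learns the lesson" and is restored to good standing.
        self.defectors.discard(brand)

    def should_cooperate_with(self, brand):
        return brand not in self.defectors

club = CooperationClub()
club.report_defection("EvilCar")  # one EvilCar cuts off one Cruise...
print(club.should_cooperate_with("EvilCar"))  # False: no member plays nice now
print(club.should_cooperate_with("Cruise"))   # True: everyone else is unaffected
```

The point of the sketch is that the punishment is collective and brand-wide: a single reported defection changes how every member treats every car of that type, which is what makes the one-time gain from defecting so expensive.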