Roadblocks to robocars
I was promised flying cars!

Predictions of the future, notably in transportation, don't have a great track record. (Though as you can see, our metaphors are full of car references.) While this article is not about the technological issues in building robocars, it would be silly to pretend they aren't there. Today's prototypes are exciting, but many problems remain, some of which are hard-to-quantify research problems, and others which feel more like general engineering problems that people predict can be solved just by pouring in money.

Unlike flying cars, which require both advances in computer piloting and serious challenges at the level of basic physics, the physics and engineering of cars are well understood. The problems are almost entirely related to computing, sensing and machine vision. (The navigation problems are not trivial, but still much simpler.)

Thanks to Moore's "law," which should be with us for at least another decade, we will continue to get access to more and more computing power. Many believe that this power may only become available in the form of arrays of thousands of individual processors, but I think this is a good thing. My intuition is that many of the problems involved in sensing and mapping the environment are the sort that will parallelize (which is to say, adapt to clusters of computers) well.

To present a transportation analogy: if you can build one mile of subway in one year, you can probably do 5 in 5 years and 10 in 10 years. With information technology, you go 1 mile in 1 year, 32 miles in 5 years and 1000 miles in 10 years -- the last 500 in the final year.
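The subway analogy is just compound doubling; a quick back-of-the-envelope sketch (the doubling-per-year rate is the illustrative assumption, not a measured figure) shows where the numbers come from:

```python
# Back-of-the-envelope comparison from the subway analogy above:
# subways grow linearly, information technology roughly doubles each year.

def subway_miles(years, miles_per_year=1):
    """Linear growth: the same number of miles built every year."""
    return miles_per_year * years

def tech_miles(years):
    """Exponential growth: cumulative output when capacity doubles
    annually, starting from 1 "mile" in year one."""
    return sum(2 ** y for y in range(years))

for y in (1, 5, 10):
    print(f"year {y}: subway {subway_miles(y)}, tech {tech_miles(y)}")
# Roughly 32 miles by year 5 and 1000 by year 10, with about 500
# (2**9 = 512) of those produced in the final year alone.
```

The punchline of the analogy is visible in the last line: with exponential growth, about half of everything ever built arrives in the most recent doubling period.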
Thanks to Moore's law, and the general laws of scale in electronics, anything that we want in quantities of many millions quickly becomes quite inexpensive. The fact that today's sensors are expensive is no barrier to believing the technology will become affordable.

Humans don't scan with lasers. We use binocular vision and parallax, focal distance, motion detection and most of all our big pattern-matching brains to map what we see into a 3-D model of the road. Though, if pressed, we can drive sufficiently well with just one eye, at night, and without sound. Pretty impressive. We're even known to do it while otherwise unconscious -- many people report occasional episodes of arriving at a destination with no memory of the trip there. A horse can do this too. Lots of animals can move easily in swarms without hitting anything. Even tiny-brained bugs can do it.

But in the end, driving is not that hard on the grand scale of A.I. problems. I feel this is a question of how long it will take rather than whether it is possible, and many agree. The bar of human driving is higher than the 43,000 deaths/year statistic may suggest. That's one death per 70 million vehicle miles. That's a lot of miles. As a side note, it is not out of the question that robocars could help deliver on a version of the flying car vision.

Legality

Of course, today a motor vehicle has to be operated by a licensed driver, so empty robocars are not street legal, and it will be some time until the vehicle codes are amended to change this. Fortunately vehicle codes are somewhat local laws, which allows for jurisdictional competition and innovation. That is to say, cities may change their laws to encourage innovation in their area. Update: Nevada has passed a law which asks its DoT to draft regulations for robocar operation on Nevada roads, so this is happening faster than expected.

Liability

The U.S. system of liability presents a major obstacle to companies selling robocars.
It would be foolish to expect robocars to be perfect before they go out. They will have accidents. They will hurt people. They probably will kill people. That's not something most software people are used to dealing with.

In articles about robocars, I frequently see people argue that the liability question is the most essential one. But the question they ask is the wrong one -- they wonder who will be liable in a robocar crash. It turns out that question is largely uninteresting. The real question is how much liability will be assigned. In an ordinary car crash, liability almost always falls on a private individual -- a driver. Sometimes a component of a car is blamed, but that's rare. So insurance companies pay up, but damages tend to be tied to how much insurance the at-fault driver has. Some areas use no-fault systems. But whether the insurance company pays or the car manufacturer pays, in the end the owner/driver of the car is actually the one paying, either in premiums or in a higher price for the car. So the question of "who" just shifts some money around. I can assure you that in the first crashes, the vendor will be sued no matter what the circumstances of the crash are. Much more interesting is the question of whether robocar crashes will cause damage awards much greater than crashes caused by negligent human drivers or even drunks.

The general aviation (small plane) industry has a different history. For many years leading up to the 80s, almost every single plane crash resulted in a lawsuit. And many of those lawsuits found something wrong with the plane, even if they gave it only 5% responsibility for the crash. Juries like to blame machines over people in lawsuits by the families of dead pilots. The deep pockets of the airplane companies, like Cessna and Piper, were a great target for lawsuits. Over time, the insurance cost started to exceed the cost of the aircraft, or any reasonable profit. Cessna stopped making small aircraft in the 80s.
Only after lobbying to get the liability rules tweaked (to limit the duration of liability so people could not sue over 20-year-old planes) did things start up again, but at high prices and low volumes. We could see this with robocars.

Imagine we get to the point where robocars kill about 1/100th of the number of people killed by human driving. In other words, fully deployed, about 500 people per year instead of 45,000. Remarkably, we might be more frightened of that, and find ourselves, as a society, choosing to take the 45,000 human-caused deaths over the 500 computer-caused deaths. We have certain irrational responses to risk that make us favour, even in the legal system, high risks over which we feel some sense of control and individual responsibility compared to low risks beyond our control. We're particularly scared of being killed by computers. Even the most rational of us understand that fear. It is something real we must strive to find a way around. (I sometimes wryly remark that if aliens came to Earth and offered to cure cancer, so long as we would give them 100 of the otherwise doomed cancer victims every year to eat, we would refuse the offer. Of course computers are not deliberately trying to eat us yet, but we sometimes treat it that way.)

We will have to decide at what number this makes sense to society. How good do the safety numbers need to get -- probably demonstrated in other countries -- before we will establish the legal regime under which robocars can be manufactured by companies that stay in business? This will be complicated by the gradual change in where deaths come from. As accident-avoiding cars become more common (initially among the rich) the deaths from HDVs (human-driven vehicles) will decline. They will also decline because the increasing number of robocars on the road will work hard not to allow HDVs to hit them. As such, the deaths and injuries from robocar mistakes will stand out more, and people won't always realize why this is.
The answer to this lies in the powerful lobbies that will stand to gain from robocars. Once again the mantra -- that every year we delay this, another 33,000 Americans will die, mostly in the prime of life -- may swing the day. If it doesn't swing it in the USA, it can more easily swing it in a number of other countries, such as China, Singapore, India or Japan. These high-tech countries have very different liability systems, and indeed very different forms of lawmaking that can simply wipe away the liability problem with the wave of a pen.

Dramatic legal change to the liability system is not without precedent, and not just for aircraft. Twelve U.S. states have "no-fault" insurance systems that modify liability rules as much as or more than would be required to keep robocar vendors in business. Once a robocar vendor knows they can't be sued out of existence, due to their deep pockets, by any single accident or small number of accidents, insurance is a simple matter. The robocars must, of course, develop an accident rate per mile that is much lower than that for humans. Spread over the entire fleet, the cost of accidents will be much less than the current cost of insurance for human drivers. (This is not a minor cost, but it's obviously affordable.) In fact, insurance will almost certainly come bundled with the car.

Initial robocar technologies are already being sold as "safer car" technologies, and this will continue until there are cars that can drive themselves but which come with big warnings, for liability reasons, that "you should never do this except in an emergency." People will ignore the warnings and let go of the wheel. Once the safety record is well demonstrated, perhaps in other countries, it will be easier for the lobbies to push for better liability rules.

Fear

Our fear of death by computer will extend beyond how we construct the legal system. Many people will refuse to ride in a robocar, or refuse to go on streets with them.
Even if other powerful forces make them legal, some will protest robocars, or demand they suffer a wide set of restrictions. And sometimes these cautions will be right, as there are legitimate safety concerns. This fear will fade with time, but that may take a whole generation. Younger generations have different attitudes towards robots, computers and technology as a whole. The case for fear will be strong. I can already predict the jokes about the "blue screen of death" and, more mundanely, computer "crashes." If you want to make people afraid of computers, it's not hard, and in many cases for good reason.

Bugs

Like all software systems, robocars will have bugs. Sometimes dangerous ones. Getting them reliable enough to deploy will be harder than getting them working in the first place. This is a major software engineering challenge. Fortunately, in spite of what you might think as a user of a typical PC operating system, technology for reliable software is getting better, and reliability is possible with the right effort, and the right application of money. It is for this reason that I have dubbed robocars a potential "Apollo project." Indeed, it was for space flight that one of the best reliable-systems technologies was developed. Spacecraft will often have 3 different computers, all programmed to do the same task by 3 completely different teams of programmers. The computers get together and vote on what to do. Usually all 3 agree. Sometimes one will disagree and 2 will agree, in which case the majority wins, but a problem is also flagged. Principles like these will guide robocar development. In some cases, even a single vote of difference will be enough to pay attention to. For example, if 1 system says there is a pedestrian in the road, and 2 do not, it will make sense to act as though there is a pedestrian, or at least look harder -- unless the false alarms are too many, in which case that one system needs redesign.
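The 2-of-3 voting scheme described above can be sketched in a few lines. This is an illustrative toy, not code from any real spacecraft or robocar stack; the function name and the boolean "pedestrian ahead" example are invented for the sketch:

```python
from collections import Counter

def vote(readings):
    """Majority vote among redundant, independently programmed subsystems.

    readings: one answer per subsystem (e.g. True = "pedestrian ahead").
    Returns (decision, flagged): the majority decision, plus a flag
    raised whenever any subsystem dissented, so the disagreement can
    be logged and investigated even though the majority wins.
    """
    counts = Counter(readings)
    decision, agreeing = counts.most_common(1)[0]
    return decision, agreeing < len(readings)

# All three computers agree: act, nothing to flag.
print(vote([True, True, True]))    # (True, False)
# One dissents: the 2-of-3 majority wins, but a problem is flagged.
print(vote([False, True, False]))  # (False, True)
```

For a safety-critical signal like "pedestrian in the road," the text suggests the vote should be asymmetric: a single subsystem reporting danger may justify the cautious action even when outvoted, with persistent false alarms instead triggering a redesign of that subsystem.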
However, this is not to downplay the fact that reliability, safety and security are among the biggest challenges in the design of robocar software.

Full shutdown on bug discovery

Today, when a product has a major safety flaw, a recall is issued. With robocars, vendors will have to decide what to do if they discover a safety or security flaw in their code. They will discover minor ones quite regularly, and more serious ones less often. Updating software is fortunately easy and can be done remotely. The "recall" can take place in moments without the vehicle coming home. (If a physical recall is needed, the vehicle can probably bring itself in for service when the owner is not using it.) However, there will be hard questions about what to do between the time a problem is discovered and a fix is ready and sufficiently tested. If the vendor feels pressure to issue a "Do not operate this vehicle until we have the fix" notice, or worse, has a command it can issue to disable all vehicles until a fix is available, this has dramatic consequences. With most regular recalls, people take the risk and keep driving. If customers find their valued and needed transportation keeps declaring itself unsafe to use, they will be very unhappy campers. Even being warned every time they turn it on that there's an unlikely but non-zero chance of a safety problem will drive them away, even if that chance is far lower than the chance of human-caused accident we experience every day in 2008. If software monoculture makes a problem show up in a large fraction of machines, the problem could be very serious. Even though we discover safety problems in our auto fleet all the time today, pulling every affected car off the road is not even remotely possible, and is not discussed.

Murder and terrorism

Robocars, and drive-by-wire cars in general, could make a nasty terrorist or assassin's weapon. Load the car or truck with explosives, and send it to the target. No suicide bomber required.
Even without the robocar, terrorists could put a toy airplane's remote control onto a typical drive-by-wire car and operate it from 1,000 feet away. (And let's face it, many wonder if some elements of the military also hope that killbots arise out of this technology.) I personally think that you can't really defend every location against truly determined terrorists, but this won't stop people from calling for serious regulation of robocars, or even their prohibition in many areas, should this happen. There are technologies that can detect whether a human is in the car (though these can no doubt be fooled). Certainly the vehicle will know its weight. But even so, attempts to do this require a very high level of centralized control over robocars, control which will crush their pace of innovation. I think this is like the unsolvable DRM problem, but nastier. The way to combat terrorism is not to lock down our society and technology, but to go after the root causes. However, if robocars start being used routinely for crimes of this nature, there will be forces that try to ban them or heavily regulate them, even if the crimes kill far fewer people than human drivers were killing.

Computer Intrusion

Robocars don't strictly need to be network connected, but it's hard to imagine they won't be. At the very least the passengers will want it, but the car will also want to get live updates related to roads and traffic, and will want to communicate with other cars about certain items. With networked communication comes the risk of deliberate intrusion. A computer virus hitting your car is a scary thought, and fear of this will, with considerable justification, scare people away from robocars. We will work to design our systems with layers, including fail-safe layers that are not connected to the outside world at all, but this is a hard problem. I have more details on intrusion in the downsides article.
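One common shape for the layering described above is a small, non-networked safety layer that independently checks every command from the connected planning systems against hard physical limits. This sketch is purely illustrative: the limit values, field names and function name are all invented, and a real fail-safe layer would live on isolated hardware, not in Python:

```python
# Sketch of a fail-safe layer: the networked planner proposes commands,
# and an isolated layer with no network connection of its own clamps
# anything outside hard limits. All limits here are invented examples.

MAX_SPEED_MPS = 35.0    # absolute speed ceiling
MAX_BRAKE = 1.0         # full braking effort
MAX_STEER_RAD = 0.6     # steering angle limit

def safety_filter(cmd):
    """Return a command guaranteed to lie within the hard limits.

    cmd: dict with 'speed', 'brake', 'steer' proposed by the networked
    (and therefore potentially compromised) planning layer.
    """
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))
    return {
        "speed": clamp(cmd.get("speed", 0.0), 0.0, MAX_SPEED_MPS),
        "brake": clamp(cmd.get("brake", 0.0), 0.0, MAX_BRAKE),
        "steer": clamp(cmd.get("steer", 0.0), -MAX_STEER_RAD, MAX_STEER_RAD),
    }

# A compromised planner demanding 200 m/s is clamped to the ceiling.
print(safety_filter({"speed": 200.0, "brake": 0.0, "steer": 2.0}))
```

The design point is that a virus in the connected layer can degrade the ride but cannot command anything outside the envelope the isolated layer enforces; the hard problem the text alludes to is deciding where that envelope can safely be drawn.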
Legal opposition

While robocars are not currently legal or illegal, there will be efforts by their opponents to make them illegal. This will vary from location to location. (Famously, Dean Kamen's somewhat pushy plan to get cities to endorse the Segway scooter on city sidewalks led some cities, like San Francisco, to vote instead to ban them from sidewalks.) There are people who stand to lose from robocars: the Teamsters and other professional drivers; the oil companies, once robocars start seriously cutting gasoline consumption; transit operators who don't adapt. They will all find ways to try to push their opposition into law. Of course they will not claim selfish motives. They'll say they are doing it for safety or other reasons. But those who stand to lose will still try to get the best law money can buy.

Legal support

As a side note, it's also the case that at some point we will see things swing the other way, and laws promoting and supporting robocars will arise. We already see many laws supporting green cars, and if robocars (or whistlecars) truly are seen as so much greener, they could start on this path soon.

You may now wish to consider the ways that robocars change the design of cars to understand their advantages. There is also a sidebar on common objections and misconceptions which discusses the problems that I think can be reasonably surmounted.