originally posted in: Secular Sevens
Edited by Ryan: 10/9/2013 8:49:40 PM
The Ethics of Autonomous Cars

[quote]Should we trust robotic cars to share our road, just because they are programmed to obey the law and avoid crashes? Our laws are ill-equipped to deal with the rise of these vehicles (sometimes called “automated”, “self-driving”, “driverless”, and “robot” cars—I will use these interchangeably).

For example, is it enough for a robot car to pass a human driving test? In licensing automated cars as street-legal, some commentators believe that it’d be unfair to hold manufacturers to a higher standard than humans, that is, to make an automated car undergo a much more rigorous test than a new teenage driver. But there are important differences between humans and machines that could warrant a stricter test. For one thing, we’re reasonably confident that human drivers can exercise judgment in a wide range of dynamic situations that don’t appear in a standard 40-minute driving test; we presume they can act ethically and wisely. Autonomous cars are new technologies and won’t have that track record for quite some time.

Moreover, as we all know, ethics and law often diverge, and good judgment could compel us to act illegally. For example, sometimes drivers might legitimately want to, say, go faster than the speed limit in an emergency. Should robot cars never break the law in autonomous mode? If robot cars faithfully follow laws and regulations, then they might refuse to drive in auto-mode if a tire is under-inflated or a headlight is broken, even in the daytime when it’s not needed.

Autonomous cars may face no-win scenarios of their own, akin to the classic Trolley problem, and we would hope their operating programs would choose the lesser evil. But it would be an unreasonable act of faith to think that programming issues will sort themselves out without a deliberate discussion about ethics, such as which choices are better or worse than others. Is it better to save an adult or a child? What about saving two (or three or ten) adults versus one child? We don’t like thinking about these uncomfortable and difficult choices, but programmers may have to do exactly that. Again, ethics by numbers alone seems naïve and incomplete; rights, duties, conflicting values, and other factors often come into play.

If you complain here that robot cars would probably never be in the Trolley scenario—that the odds of having to make such a decision are minuscule and not worth discussing—then you’re missing the point. Programmers will still need to instruct an automated car on how to act for the entire range of foreseeable scenarios, as well as lay down guiding principles for unforeseen scenarios. So programmers will need to confront this decision, even if we human drivers never have to in the real world. And it matters to the issue of responsibility and ethics whether an act was premeditated (as in the case of programming a robot car) or done reflexively without any deliberation (as may be the case with human drivers in sudden crashes).

Programming is only one of many areas to reflect upon as society begins to widely adopt autonomous driving technology. Here are a few others—and surely there are many, many more:

1. The car itself

Does it matter to ethics if a car is publicly owned, for instance, a city bus or fire truck? The owner of a robot car may reasonably expect that its property “owes allegiance” to the owner and should value his or her life more than unknown pedestrians and drivers. But a publicly owned automated vehicle might not have that obligation, and this can change moral calculations. Just as the virtues and duties of a police officer are different from those of a professor or secretary, the duties of automated cars may also vary. Even among public vehicles, the assigned roles and responsibilities are different between, say, a police car and a shuttle bus. Some robo-cars may be obligated to sacrifice themselves and their occupants in certain conditions, while others are not.

2. Insurance

How should we think about risks arising from robot cars? The insurance industry is the last line of defense for common sense about risk. It’s where you put your money where your mouth is. And as school districts that want to arm their employees have discovered, just because something is legal doesn’t mean you can do it, if insurance companies aren’t comfortable with the risk. This is to say that, even if we can sort out law and ethics with automated cars, insurers still need to make confident judgments about risk, and this will be very difficult.

Do robot cars present an existential threat to the insurance industry? Some believe that ultra-safe cars that can avoid most or all accidents will mean that many insurance companies will go belly-up, since there would be no or very little risk to insure against. But things could go the other way too: we could see mega-accidents as cars are networked together and vulnerable to wireless hacking—something like the stock market’s “flash crash” in 2010. What can the insurance industry do to protect itself while not getting in the way of the technology, which holds immense benefits?

3. Abuse and misuse

How susceptible would robot cars be to hacking? So far, just about every computing device we’ve created has been hacked. If authorities and owners (e.g., a rental car company) are able to remotely take control of a car, this offers an easy path for cyber-carjackers. If under attack, whether a hijacking or ordinary break-in, what should the car do: speed away, alert the police, remain at the crime scene to preserve evidence…or maybe defend itself? For a future suite of in-car apps, as well as sensors and persistent GPS/tracking, can we safeguard personal information, or do we resign ourselves to a world with disappearing privacy rights? [/quote]
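The article's point about premeditation gets concrete the moment you try to write the behavior down. Here is a minimal, purely hypothetical sketch of what an explicitly programmed "lesser evil" policy could look like; the scenario fields, weights, and the `choose_maneuver` helper are all my own assumptions for illustration, not anything a real manufacturer uses. The thing to notice is that every constant in it is an ethical decision made at design time, not a reflex made at crash time.

[code]
from dataclasses import dataclass
from typing import List

# Hypothetical illustration only: a toy "lesser evil" crash policy.
# Every constant below encodes a moral judgment someone had to make
# in advance, which is the article's point about premeditated choices.

@dataclass
class Maneuver:
    name: str
    occupant_risk: float       # estimated chance of harming occupants (0..1)
    pedestrian_risk: float     # estimated chance of harming pedestrians (0..1)
    breaks_traffic_law: bool   # e.g. crossing a solid line to avoid impact

# Design-time ethical choices expressed as numbers ("ethics by numbers"):
OCCUPANT_WEIGHT = 1.0        # how much the car "owes allegiance" to its owner
PEDESTRIAN_WEIGHT = 1.0      # weighting strangers equally is itself a moral stance
LAW_BREAKING_PENALTY = 0.1   # how reluctant the car is to break the law

def expected_cost(m: Maneuver) -> float:
    """Score a maneuver; lower means the 'lesser evil' under these weights."""
    cost = OCCUPANT_WEIGHT * m.occupant_risk + PEDESTRIAN_WEIGHT * m.pedestrian_risk
    if m.breaks_traffic_law:
        cost += LAW_BREAKING_PENALTY
    return cost

def choose_maneuver(options: List[Maneuver]) -> Maneuver:
    """Pick the lowest-cost option among the foreseeable maneuvers."""
    return min(options, key=expected_cost)

if __name__ == "__main__":
    options = [
        Maneuver("brake in lane", occupant_risk=0.3, pedestrian_risk=0.2,
                 breaks_traffic_law=False),
        Maneuver("swerve onto shoulder", occupant_risk=0.1, pedestrian_risk=0.4,
                 breaks_traffic_law=True),
    ]
    print(choose_maneuver(options).name)
[/code]

Change any one weight and the car becomes willing to endanger a different road user, which is exactly why the author argues these choices need open discussion rather than being settled silently in code.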
