The moral “crumple zone”

I’ve been listening to Episode 1 of “Bellwether”, a podcast by Sam Greenspan (https://www.kickstarter.com/projects/bellwether/b-e-l-l-w-e-t-h-e-r-a-podcast-of-speculative-journalism) investigating the first pedestrian death from a collision with a self-driving car. It looks as though the car, having determined that a collision was imminent within the next few seconds, relinquished control to the driver without alerting her, expecting her to realize what was happening, figure out what to do, and apply emergency braking or swerve, all within four seconds or so.

He introduces the idea of a “moral crumple zone”. A traditional crumple zone, of course, is the part of the vehicle designed to deform in a collision, absorbing the impact to protect the occupants. The analogy is that here the designated part to fail is the human driver, and the entity being protected is the company that built the car. Handing control to the human at the last moment relieves the company of the moral responsibility for the crash.

I’m realizing that this pattern of deflecting responsibility away from the company has become pervasive in systems design. There’s a lot of engineering devoted to making sure the company can’t be held responsible for any malfunction, at the expense of the end user; DRM comes to mind.

This reminds me of Asimov’s Three Laws of Robotics from the 1950s, the first of which states:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

I’m coming to think we are at the point in systems design where the Three Laws need to be designed in as system requirements, and the fact that they’re optional is itself a flaw.
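To make that concrete, here is a minimal sketch, in Python, of what “designed in as a requirement” might look like. The names, thresholds, and actions are invented for illustration and aren’t drawn from any real vehicle system: the idea is simply that the software is not permitted to hand an impossible task to the human, and must keep responsibility and brake itself when the reaction window is too short or no alert has been given.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the names, thresholds, and actions below
# are invented for this sketch, not taken from any real vehicle API.

MIN_REACTION_WINDOW_S = 10.0  # assumed minimum time a human needs to take over safely


@dataclass
class Situation:
    time_to_collision_s: float  # estimated seconds until impact
    driver_alerted: bool        # has the driver actually been warned?


def choose_action(s: Situation) -> str:
    """Treat 'do not delegate an impossible task to the human' as a hard
    requirement of the control logic, not an optional behavior."""
    if s.time_to_collision_s < MIN_REACTION_WINDOW_S or not s.driver_alerted:
        # The system keeps responsibility and brakes itself
        # rather than silently handing off.
        return "AUTOMATED_EMERGENCY_BRAKE"
    return "ALERT_AND_HAND_OFF"


if __name__ == "__main__":
    # The scenario described in the podcast: roughly four seconds, no alert.
    print(choose_action(Situation(time_to_collision_s=4.0, driver_alerted=False)))
    # -> AUTOMATED_EMERGENCY_BRAKE
```

The specific numbers don’t matter; what matters is that the constraint lives in the control logic as a hard requirement rather than a policy the manufacturer can opt out of.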
