Asimov's Three Robot Laws

I just watched I, Robot, which was pretty good despite Will Smith's "I'm an American black so I can get away with being an arrogant ass" approach to comedy. It reminded me, though, that I've always thought Isaac Asimov's three laws for robots were dumb. To refresh your memory, they are:

(1) Never harm a human being, or, through inaction, allow a human being to come to harm.
(2) Do whatever a human being says, so long as it doesn't violate rule (1).
(3) Protect yourself, so long as it doesn't violate rules (1) or (2).

I have always thought that rules (2) and (3) should be reversed; in other words, a robot should place its own safety above the commands of a human. Otherwise, why couldn't a 12-year-old punk kid tell my $100,000 robot to snap itself in half?
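Just to make the point concrete, here's a toy sketch (in Python, with made-up Action fields and rule checks that are my own invention, not anything Asimov specified) of how the two orderings come apart:

```python
# A toy sketch of the laws as an ordered list of vetoes. The Action fields
# and rule functions are invented stand-ins purely for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would carrying this out injure a person?
    ordered_by_human: bool = False  # is this a command from a person?
    harms_self: bool = False        # would it damage the robot?

def permitted_asimov(action: Action) -> bool:
    """Asimov's ordering: humans first, obedience second, self-preservation last."""
    if action.harms_human:
        return False                # Law 1 vetoes everything
    if action.ordered_by_human:
        return True                 # Law 2: obey, even a self-destructive order
    return not action.harms_self    # Law 3 only matters when no order is in play

def permitted_swapped(action: Action) -> bool:
    """The reordering argued for above: self-preservation outranks obedience."""
    if action.harms_human:
        return False
    if action.harms_self:
        return False                # refuse the punk kid's "snap yourself in half"
    return True

snap_in_half = Action(ordered_by_human=True, harms_self=True)
print(permitted_asimov(snap_in_half))   # True  -- the $100,000 robot complies
print(permitted_swapped(snap_in_half))  # False -- the robot refuses
```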

And jeez, now that I think of it, why couldn't someone tell a robot to go smash a bus (so long as it was empty at the time)? There's nothing in the rules against destroying property.

I think robots in the future should be programmed (1)' to respect property rights (since every person owns his own body, this satisfies Asimov's first rule), and (2)' to obey every human command consistent with rule (1)'. Should we start translating The Ethics of Liberty into machine language?
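Here's the same sort of toy sketch for my version, again with invented names, treating a person's body (and my robot) as somebody's property:

```python
# A sketch of the property-rights version. Everything here is a made-up
# stand-in: an action either damages someone's property (a person's body
# counts as that person's property) or it doesn't, and commands come from
# some identified human.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    issuer: str                                # who gave the order
    damages_property_of: Optional[str] = None  # owner of whatever gets damaged

def permitted(cmd: Command) -> bool:
    """Rule (1)': never violate property rights; Rule (2)': otherwise obey."""
    victim = cmd.damages_property_of
    if victim is not None and victim != cmd.issuer:
        return False   # (1)': no aggressing against someone else's property
    return True        # (2)': any command consistent with (1)' is obeyed

# The punk kid can't order the robot to break itself (the robot is my property) ...
print(permitted(Command(issuer="punk kid", damages_property_of="me")))        # False
# ... and nobody can order it to smash a bus that belongs to the transit company.
print(permitted(Command(issuer="anyone", damages_property_of="bus company"))) # False
# But the owner is free to have it repainted, melted down, or snapped in half.
print(permitted(Command(issuer="me", damages_property_of="me")))              # True
```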
