Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen

Tony Beavers considers a timely understanding of machine ethics.

Can a machine be a genuine cause of harm? The obvious answer is ‘affirmative’. The toaster that flames up and burns down a house is said to be the cause of the fire, and in some weak sense we might even say that the toaster was responsible for it. But the toaster is broken or defective, not immoral and irresponsible – although possibly the engineer who designed it is. But what about machines that decide things before they act, that determine their own course of action? Currently somewhere between digital thermostats and the murderous HAL 9000 computer in 2001: A Space Odyssey, autonomous machines are quickly gaining in complexity, and most certainly a day is coming when we will want to blame them for deliberately causing harm, even if philosophical issues concerning their moral status have not been fully settled. When will that day be?

Without lapsing into futurology or science fiction, Wallach and Allen predict that within the next few years, “there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight” (p.