Future Shocks

Pascal’s Artificial Intelligence Wager

Derek Leben computes the risks of general AI.

In 2008, European physicists at CERN were on the verge of activating the Large Hadron Collider to great acclaim. The LHC held the promise of testing precise predictions of the most important current theories in physics, including finding the elusive Higgs boson (which was indeed successfully confirmed in 2012). However, some opponents of the LHC's activation raised an almost laughable objection, encapsulated in a lawsuit against CERN by the German biochemist Otto Rössler: that switching the LHC on might create a miniature black hole and destroy the Earth. In response, most physicists dismissed the chances of such a catastrophe as extremely unlikely, but none of them declared it utterly impossible. This raises an important practical and philosophical problem: how large must the probability of an activity destroying humanity be before it outweighs any potential benefits of that activity? How do we even begin to weigh a plausible risk of destroying all of humanity against other benefits?
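One way to make the question precise is the standard expected-value framing from decision theory (a sketch of the background, not a calculation from the article itself; the symbols $B$, $L$, and $p$ are illustrative). Suppose an activity yields a benefit $B$ with probability $1 - p$, but destroys humanity, a loss of $L$, with probability $p$. The expected value of proceeding is

$$E[\text{proceed}] = (1 - p)\,B - p\,L,$$

so proceeding beats abstaining (whose expected value is zero) just in case $p < B/(B + L)$. If the loss $L$ is treated as effectively unbounded, that threshold shrinks toward zero, and any nonzero probability of catastrophe tells against acting. This is the structure of Pascal's Wager, and it is what makes even "extremely unlikely" doomsday scenarios so hard to dismiss.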

Recently, prominent figures such as Sam Harris and Elon Musk have expressed similar concerns about the existential risks to humanity posed by the creation of artificial intelligence.