Tech is benign, right?
As an article from The New York Times put it: "The medical profession has an ethic: First, do no harm. Silicon Valley has an ethos: Build it first and ask for forgiveness later."
As a result, Harvard University and M.I.T. are offering a new course on the ethics and regulation of artificial intelligence (AI).
It's about time.
As I wrote in an earlier blog this year, when it comes to AI, almost all agree that the goal should not be undirected intelligence, but beneficial intelligence. The main concern isn't with robots, but with intelligence itself — intelligence whose goals are destructive. As Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence, notes: "we might build technology powerful enough to permanently end [social] scourges – or to end humanity itself. We might create societies that flourish like never before, on Earth and perhaps beyond, or a Kafkaesque global surveillance state so powerful that it could never be toppled."
Inherent within this is the outsourcing of morality. Here's a simple example: a self-driving car faces a life-and-death situation. Should it swerve to avoid hitting a pedestrian, or protect the lives of the occupants in the car? It can and will decide, but on what basis? As we grow in our dependence on AI, we will increasingly allow it to make our decisions for us, including ethical ones. And the more AI is able to think independently, the more we will have to decide where to limit its autonomy.
If we are even able to.
The progression is frightening:
Step 1: Build human-level AGI (artificial general intelligence).
Step 2: Use this AGI to create superintelligence.
Step 3: Use or unleash this superintelligence to take over the world.
Again, Tegmark: "Since we humans have managed to dominate Earth's other life forms by outsmarting them, it's plausible that we could be similarly outsmarted and dominated by superintelligence."
Tesla and SpaceX CEO Elon Musk told the National Governors Association last summer that his exposure to AI technology suggests it poses "a fundamental risk to the existence of human civilization." Cosmologist Stephen Hawking agreed, saying that AI could prove to be "the worst event in the history of civilization." Facebook founder Mark Zuckerberg, however, calls such talk "irresponsible."
No wonder it has been called the most important conversation of our time. Whether it proves to be or not, it is certainly a conversation that should have Christian minds informed and engaged.
Let's welcome Harvard and M.I.T. to the party.
James Emery White
Natasha Singer, "Tech's Ethical 'Dark Side': Harvard, Stanford and Others Want to Address It," The New York Times, February 12, 2018.
Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (Knopf, 2017).
Marco della Cava, "Elon Musk Says AI Could Doom Human Civilization. Zuckerberg Disagrees. Who's Right?" USA Today, January 2, 2018.