Robots may be taking our jobs, but they can’t take over the most basic human responsibility: thinking. People alone make the decisions that matter. They can surrender this duty to robots no more today than they could have yielded it to the primitive slide-rule calculator in the seventeenth century.
In the term “artificial intelligence,” the crucial word is “artificial.” Computers do not have independent cognition. Humans are getting better at programming computers to process and evaluate huge volumes of data: The New Yorker reported recently that a computer system “outperformed expert dermatologists” in recognizing cancerous skin lesions. This language is sloppy: the same author would be unlikely to write that a hammer “outperforms expert home-improvement specialists” in driving a nail into a wall.
The computer system that detects lesions is a sophisticated version of a keyword search—a tool and nothing more. It may, indeed, take away doctors’ jobs over time—just as an efficient combine harvester took away farmworkers’ jobs. But it doesn’t replace human cognition, empathy, and action. Only humans can recognize that skin cancer is bad and not good, that a patient with the condition will likely be afraid, and that it’s necessary for someone to pay for her treatment if she can’t do so herself. Robots can aid doctors in these tasks, but they don’t care whether someone lives or dies.
Computers do what their programmers program them to do—a fact on gruesome display in several workplace fatalities, including that of a Michigan factory worker crushed to death in 2015 when a robot moved heavy equipment onto her as if she weren’t there. Language describing the accident was imprecise, again: the robot “went rogue,” reports said, and “managed to load a . . . part on top of her.” The robot did not go rogue and did not “manage” to do anything by itself. People programmed it poorly for the real-world environment in which it was operating.
Facebook executives understand that, in some areas, human decency supersedes a pure marketplace for words and pictures. The company has long policed child pornography, for example (albeit not perfectly). But after two people committed live-on-Facebook murders this spring, the social media giant said that it would hire 3,000 more people to review video content. This solution falls short: the firm’s extra employees can respond more quickly to user complaints, but they won’t be able to prevent people from posting disturbing content, because Facebook’s supposedly intelligent algorithms don’t care if you use the service to post a live stream of your child’s birthday party or your own suicide. Only people can render the judgment that one is good and the other is not.
Driverless cars are coming soon, we keep hearing—Google and Lyft are hard at work on models. But automated cars, like regular cars, will be servants, not masters. If making it easier and cheaper to ride in a car results in more demand for car travel, people in cities, in particular, will have to decide whether to restrict access to their roads. If driverless cars allow for safer highway trips at speeds well above 100 mph, suburban residents will have to decide whether they want to use this increased mobility to live farther apart from one another, and thus become even more dependent on motor transportation.
Silicon Valley cheerleaders have distorted the meaning of many words, from “sharing” to “disrupt.” It’s important to keep in mind that “artificial” means “made or produced by human beings rather than occurring naturally, typically as a copy of something natural.” People make robots, and people make robots do what people want—good, bad, and ugly.