César A. Hidalgo, a scholar known for his work on economic complexity, data visualization, and applied artificial intelligence, is head of the Center for Collective Learning at the Artificial and Natural Intelligence Institute of the University of Toulouse, as well as an honorary professor at the University of Manchester and a visiting professor at Harvard. He recently spoke with City Journal associate editor Daniel Kennelly about his new book, How Humans Judge Machines (co-authored with Diana Orghian, Jordi Albo-Canals, Filipa de Almeida, and Natalia Martin), which explores how our moral intuitions differ depending on whether a human or a machine actor is involved.

What are the main ways we judge humans and machines differently?

First, people tend to make much more consequentialist judgments of machines than of humans. In fact, in the book we conclude that there is a principle at play whereby “people judge humans by their intentions and machines by their outcomes.” We also find that people judge human intentions bimodally. That is, we either think people did something intentionally, or we excuse them completely. When it comes to machines, we don’t assign as much intention to them, but we also don’t fully excuse them in accidental scenarios. A consequence of these two principles is that we can be very harsh when judging machines in accidental scenarios while being forgiving of humans in the same situations.

We also look at scenarios where machines or humans did something positive (for example, when they prevented a disaster or corrected an unfair situation). In those cases, we find a tendency for people to take machine improvements for granted.

Finally, we find that people’s judgment of machines compared with humans is particularly harsh in scenarios involving physical harm, but that this effect can sometimes reverse in scenarios involving fairness (such as those dealing with automation, algorithmic bias, plagiarism, and so forth).

Could you describe some of the scenarios used in your research?

We had more than 80 scenarios. Some were organized around topics such as invasion of privacy, loss of jobs due to automation, and algorithmic bias (for instance, in university admissions or policing). We also had scenarios focused on accidents, such as those involving self-driving cars; on natural disasters, such as failed rescue operations during hurricanes or tsunamis; and on lewd behavior and plagiarism in the creative industries.

The scenarios were presented to groups of people as either the action of a machine or of a human. For example, in one scenario, an excavator digging up a site for a building unearthed a grave. This was presented to some people as the action of an autonomous excavator and to others as the action of an excavator operated by a human being.

What about algorithmic bias? Are we more or less liable to see a machine as “fair” in situations involving decisions about, say, employment or school admissions?

For the most part, people judge biased humans and biased machines quite similarly. This is consistent with U.S. law, under which discriminatory actions are illegal regardless of intent. But we do find a small tendency for people to judge biased humans slightly more harshly than biased machines.

What did your research reveal about how humans judge machines in labor-displacement scenarios? Do people judge them more harshly than when humans caused the displacement (through outsourcing, for example)?

In chapter five of the book, we compared people’s reactions to labor displacement due to automation with their reactions to displacement due to foreign workers with temporary visas, outsourcing, and offshoring. For the most part, we found that people tend to be more accepting of labor displacement due to automation; they are more likely to want to ban foreign workers with temporary visas, outsourcing, and offshoring. Yet we find this effect reduced slightly in more knowledge-intensive industries. (For example, people are more accepting of foreign nuclear technicians than of foreign truck drivers.)

Are your findings cross-cultural, or specific to the U.S.?

Our results are based on a sample of about 6,000 people in the United States, so they cannot be generalized to other cultures or geographies.

Why do these differences in judgment matter? What are some of the implications for the rising use of algorithmic decision-making in business and everyday life?

Algorithms are becoming part of our society. But we need to know when to accept them, when not to, and when to give them another chance. A lot of the discussion about algorithms on the Web is being dominated by reactionary positions. The purpose of our book was to help create a new body of data that we could use to develop a more nuanced understanding of when and why we judge machines differently from humans. I believe this understanding is key for our own self-reflection. Without it, we are judging machines, and their adoption, without the necessary self-criticism. This is not to say that people should judge humans and machines equally. But we need to understand these differences and the reasons behind them.

The book draws insights from works of fiction, ranging from Mary Shelley’s Frankenstein to Isaac Asimov’s Robot series. What role have stories played in shaping our moral intuitions about machines?

Works of fiction often help us explore moral problems with a nuance that can transcend science. In fiction, actions are not limited to generalized principles; they can be the result of the personal history and emotions of characters. This gives fiction a marvelous and profound range of creative expression, even if those expressions are hard to generalize. In stories like Asimov’s, characters can react differently to the same situation, but they also often encounter the same problems. Well-constructed fiction can help us separate those common threads from idiosyncratic behaviors. I see this tension between the premise of a story and the journey of the character as a place of constant inspiration.
