More than 20 percent of Americans express little or no confidence in scientists to “act in the best interests of the public,” according to Pew’s latest polling. Just 13 percent gave that answer in 2019, before the Covid-19 pandemic. The figure varies sharply by political ideology, rising to a third among Republicans and falling to a tenth among Democrats.
It’s easy to see where this polarization and declining public trust come from. The ongoing “reproducibility crisis” in the social and behavioral sciences has revealed that a lot of high-profile research is flawed; some is even fraudulent. Other research is widely perceived as ideologically skewed.
Our new study helps explain some of the public’s skepticism toward scientists. It might also point the way toward stronger science in the future.
The story begins with a large-scale experiment published in 2022. One of us (Breznau), working with several collaborators, recruited 73 research teams and gave them all the same task: using the same data, study whether immigration reduces political support for social programs.
This is an important question that bears on whether immigration affects social cohesion more broadly. The data included public-opinion polling from the International Social Survey Program, as well as basic immigration statistics.
Though they worked on the same question with the same data, the research teams didn’t all reach the same results.
The teams estimated more than 1,200 statistical models. More than half of the models found no clear effect in either direction. A quarter suggested that immigration reduces support for social programs. The remaining 17 percent found that immigration strengthens support.
What role did the researchers’ political leanings play in these disparate outcomes? To gauge those leanings, the researchers were asked about immigration at the start of the study: Did they think that immigration laws should be stricter, or more lenient?
It’s well understood that methodological choices about how to examine data can push results in one direction or another. The original study found that, after accounting for those choices, the researchers’ opinions about immigration didn’t seem to matter to their results. In other words, if two teams had different opinions about immigration but used similar research methods, they reached similar conclusions.
The other of us (Borjas) was intrigued by the paper and dug into the data, which the original team had posted publicly. He suspected that ideology might be more of a factor than the initial results implied. Sure, researchers with different ideological views reached similar results if they used the same research methods. But what if those methods differed in the first place because of ideology? In other words, what if more immigration-friendly researchers gravitated toward methods that made immigration look better for social cohesion, and vice versa?
Our new paper reanalyzes the data and finds support for this hypothesis: the teams indeed gravitated toward methods that pushed the results in the direction of their ideology. A handful of technical decisions were especially important in determining the direction of the results. These included whether the model measured immigration levels as a snapshot at a particular moment or a rate of flow over a period of time, and how the analysts manipulated the original polling data on public support for social programs.
While the average impact of this skew was small, the extremes stood out: the combinations of decisions that produced the most pro- or anti-immigration results were used only by teams whose own views leaned the same way.
One can read our study as a reason to be more skeptical about scientific findings. Our results also lend some credence to conservatives’ complaints of bias. The participating researchers leaned heavily in the direction of thinking immigration laws should be looser. Fewer than one in ten wanted stricter laws; half wanted laws relaxed.
But our study also offers useful lessons for both scientists and the public. When we combine the efforts of multiple teams, rather than focusing on a single team’s findings, we see the broader universe of possible results. Consumers of social science should read broadly, look at the same question from different angles, and avoid putting too much stock in one study. And the scientific “consensus” itself can be biased, because the bulk of scientists might feel strongly—and mostly in the same direction—about a specific policy issue.
Our study also suggests that, by posting data and code publicly whenever possible, researchers make it easier for others with different perspectives to weigh in. The two of us are living proof that this need not be a hostile process: because the original study followed a high standard of transparency and reproducibility, a fresh perspective led to a new collaboration and an updated analysis.
The rise of artificial intelligence is also worth considering in the context of our results. AI agents are trained to do what a human user wants. The risk is that they thus become tools for the ideologically motivated, making it much easier to find research methods that produce results supporting their prior position.
At the same time, generative AIs can be trained to identify biases in existing research and to conduct research without those specific biases. Steering the technology toward these beneficial uses should be a high priority.
Done correctly and transparently, science is invaluable. It offers us the best way to understand both the physical world and the behavior of people. But it’s also a gradual process, carried out by fallible human beings capable of losing public trust. Acknowledging and addressing these shortcomings, not ignoring them or using them as an excuse for cynicism, can show us the way forward.