The exponential growth in scientific knowledge, and the myriad technological innovations it has spawned over the past two centuries, has given rise to the expectation that scientific progress will continue to accelerate. Superficially, this remains the case—there have never been more journals, papers, and scientists. But a deeper analysis reveals that science, like technology and economic growth, exhibits symptoms of stagnation.
Scientific productivity has declined significantly over recent decades, and today’s scientific advances pale beside past scientific revolutions. Even in fields where there’s still robust progress, current advances require far more research effort and funding than their predecessors did. Drug discovery in biotech, for example, is becoming slower and more expensive over time: the inflation-adjusted cost of developing a novel drug doubles roughly every nine years, an observation referred to as Eroom’s law (“Moore” spelled backward). In essence, the forces that govern scientific progress invert the dynamics that gave rise to Moore’s law in the semiconductor industry.
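To see what nine-year doubling implies, here is a minimal sketch in Python; the baseline cost of 1.0 and the 36-year horizon are illustrative placeholders, not industry figures.

```python
# Eroom's law: inflation-adjusted cost per approved drug doubles roughly
# every nine years. The baseline unit and horizon are illustrative only.
DOUBLING_PERIOD_YEARS = 9

def eroom_cost(base_cost: float, years_elapsed: float) -> float:
    """Projected development cost after `years_elapsed` years."""
    return base_cost * 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

for year in (0, 9, 18, 27, 36):
    print(f"year {year:2d}: {eroom_cost(1.0, year):5.1f}x baseline cost")
```

After 36 years, the same output costs 16 times as much: the mirror image of Moore’s law, under which a fixed computation gets exponentially cheaper.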
What explains this decline in scientific innovation? We identify three trends. First is the tendency toward scientific risk aversion and conformity: the current institutional system that organizes scientific research is structured in a way that rewards and instills orthodoxy. Second is the ever-expanding bureaucratization of science, which has produced the disturbing finding that researchers spend more than 40 percent of their research time on administrative tasks such as compiling and submitting grant proposals. These two trends are accompanied by an increasing drive toward the third: hyper-specialization. Researchers and academics must become ever more specialized to make progress in ever-narrowing fields of study. Hyper-specialization is, to some extent, an inevitable consequence of the success of scientific progress: because scientific knowledge has accumulated exponentially over the past three centuries, specialization has become a practical necessity, reducing the cognitive load that researchers in any given scientific field face.
The key question that emerges from considering these three trends is whether scientific stagnation results from cultural and institutional factors, such as the current incentive structure of science, or whether science itself is approaching an intrinsic epistemic limit. Is the epistemic closure of the scientific frontier unavoidable? This is a matter of debate, but in our view, what prevents us from making more scientific progress is not some fixed natural limit to knowledge but rather the institutional structure of science itself. Put another way, we are seeing a lot of what the philosopher of science Thomas Kuhn called “normal science”—science without any revolutions.
Kuhn posits that scientific innovation progresses through revolutions, or “paradigm shifts.” In this framework, the steady and continuous process of scientific development—that is, normal science—is sporadically disrupted by scientific revolutions. These are catalyzed by the emergence of “anomalies” incompatible with the existing paradigm that dominates normal science. A scientific revolution follows a crisis when a novel paradigm, which is “incommensurable” with the practice of normal science, supersedes the previous paradigm. For Kuhn, it is entirely possible that science could enter a state of eternal normalcy, in which the decoding of the “book of nature” has exhausted itself. In this scenario, there will be no further revolutions or revelations. Instead, all that remains is permanent “puzzle solving,” or the fine-tuning of existing theories and models.
We are highly skeptical that the “book of nature” has been exhausted, and that no secrets are left to unlock. But within scientific institutions seized by the forces of stagnation, that certainly appears to be the case. The projects that receive funding, get published, and attract citations are incremental “one to many” improvements rather than radically novel “zero to one” innovations, to borrow investor Peter Thiel’s formulation. Why? Again, we can trace this slowdown in scientific progress to a system that instills and rewards risk aversion.
A core driver of the rise of scientific risk aversion is the dominance of citation-driven metrics to evaluate research. This trend is inextricably linked with the continued bureaucratization of science, which demands the total quantification of scientific production. Scholarly journal publications and citation measures, which Google Scholar has made easier than ever to track, have become the dominant factors in publication, grant-making, tenure, and promotion decisions. An inevitable consequence is a bias toward incrementalism, as crowded scientific fields attract the most citations. High-risk, exploratory science gets less attention and less funding because it is less certain to lead to publishable results. Reducing science to a popularity contest is a good way to ensure that breakthroughs rarely happen; when they do, it is often in defiance of risk-averse, citation-counting bureaucrats. For example, the discovery of clustered regularly interspaced short palindromic repeats, better known as CRISPR, began as an area of basic research, only later becoming the basis of a technology that can be used to edit genes. It took more than 20 years for the world to recognize CRISPR’s promise. For a long time, research on the subject didn’t attract many citations. As recently as 10 years ago, leading scientific journals rejected papers on CRISPR that would ultimately help win its discoverers the 2020 Nobel Prize in Chemistry. A major breakthrough essentially occurred while no one was looking.
The case of CRISPR demonstrates how important it is to give scientists time to develop novel ideas, even as novel research struggles to gain acceptance by the scientific community. Investment in the exploration of radical ideas, which are often high-risk and without apparent and immediate application, is critical for enabling future breakthroughs. But it’s precisely the high-risk and exploratory stage in the scientific process that doesn’t get rewarded or funded by the citation-driven institutional system of contemporary science. In the current regime, “progress” requires quantifiable criteria to rationalize and justify funding.
As citation counts emerged in the 1970s as the dominant measure of scientific productivity, researchers became more conservative—a shift that coincided with the larger techno-economic slowdown. A large-scale analysis of millions of papers and patents published over 30 years in biomedical chemistry revealed that most published research on chemical relationships “connect[s] chemicals that have already been connected . . . Chemicals in disconnected components are next most frequent. All other distances are extremely rare.” The pattern suggests that “scientists pursue progressively less risk,” the authors wrote.
But it’s not just the researchers—the policymakers and grant makers responsible for funding are highly risk averse, too. In fact, it’s these administrators, policymakers, and bureaucrats who instill risk aversion in the scientists. Collectively, funders ignore high-risk research projects and proposals, a major reason why breakthrough science is becoming rarer. What gets funded are conservative research projects that primarily build on established science. The obvious rejoinder is that a highly innovative or risky idea, if it pans out, will attract far more citations than incremental results. Yet pursuing that path is dangerous for scientists, because high-risk research that fails means not winning grants and not getting published—which can end careers.
In sum, systematic evidence showing the decline of scientific progress doesn’t imply that we have unlocked all of nature’s mysteries. Rather, it shows that the causes of stagnation are institutional. Science’s bureaucratic and administrative systems, optimized for incremental research, don’t incentivize high-risk research. These systems don’t allow scientists to try, fail, and try again.
One imaginable solution to this problem is to do more science. If lots of research is happening, important discoveries will occur, right? Not necessarily. A recent large-scale study that examined 1.8 billion citations in 90 million papers across 241 scientific subjects found that scientific progress slowed as scientific fields grew. As the number of scientific publications explodes, cognitively overloaded researchers and reviewers need to resort to citations to assess the constant flow of information and data. The result is that the papers that get published are those that cite more existing research, particularly research that forms a canonical point of view that a reader can easily recognize. Novel ideas, which don’t fit within a well-established canon, are significantly less likely to be produced, published, and widely read. This self-reinforcing dynamic fuels the logic of preferential attachment that controls research, as each newly published paper disproportionately adds citations to papers that are already well cited. And as the arrival rates of papers and ideas increase, it becomes harder for new ideas to penetrate the canon. Consequently, truly disruptive research remains on the fringes of scientific paradigms, which ossify. Science starts out as a discovery problem, but as fields mature it turns into an information-organization problem.
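This preferential-attachment dynamic can be made concrete with a toy simulation, sketched below in Python; the paper count, the number of citations per paper, and the seeding are arbitrary illustrative choices, not values drawn from the study.

```python
import random

# Toy model of citation preferential attachment: each new paper cites
# K existing papers, chosen with probability proportional to one plus
# the citations each candidate has already received. (Duplicate picks
# are possible; that is acceptable for a toy model.)
random.seed(42)
N_PAPERS = 10_000
K = 5  # citations made by each new paper

citations = [0] * K  # a handful of uncited seed papers
for _ in range(N_PAPERS):
    weights = [c + 1 for c in citations]  # "+1" lets uncited work be found
    for idx in random.choices(range(len(citations)), weights=weights, k=K):
        citations[idx] += 1
    citations.append(0)  # the new paper enters the literature uncited

citations.sort(reverse=True)
top_1pct = sum(citations[: len(citations) // 100])
print(f"top 1% of papers hold {top_1pct / sum(citations):.0%} of all citations")
```

Runs of this toy model concentrate a disproportionate share of all citations in a small elite of early, already-cited papers, which is precisely the canon-formation dynamic described above.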
The obsession with citations and other quantitative metrics for evaluating scientific productivity reinforces the trend toward hyper-specialization, which also promotes risk aversion. Specialization may be effective for reducing cognitive overload, but it also causes expertise to become myopic. If you’re mastering a domain within a subfield of your discipline, how can you keep up with what’s going on outside your subfield, let alone outside your discipline? The consequent narrowing of each researcher’s frontier amounts to a kind of epistemic nihilism: the abandonment of any attempt to construct a universal and synoptic knowledge of nature. After all, a project like that would never result in publication, citations, funding, or a tenure-track position.
Another source of increasing scientific risk aversion is the aging of researchers. As knowledge accumulates exponentially, it takes more time to get up to speed in a scientific field. Since the 1970s, the amount of time the average bioscience Ph.D. spends in graduate school has increased from five to eight years. As researchers get older, however, their productivity declines, a finding replicated by many studies. A large study analyzing more than 244 million scholars who contributed to 241 million articles over the last two centuries found that as scientists age, they are less likely to disrupt the state of science. Instead, they become more resistant to novel ideas and more likely to criticize emerging research. Increasing age also correlates with decreasing risk tolerance, which reinforces scientists’ tendency to work within a well-defined paradigm and contribute to an established canon. In sum, the shifting age dynamics of researchers contribute significantly to scientific stagnation. Compounding the problem, older, more established scientists tend to control not only more resources, such as lab equipment and funds, but also access to prestigious academic positions—for example, by serving on the editorial boards of major journals and on university tenure-review committees. Since they tend to be more invested in the paradigm they operate within, they are less likely to be able to divorce themselves from it.
Another obstacle to scientific progress is the formalization and over-regulation of the funding process. University faculty members spend about 40 percent of their research time writing grant proposals. To stand out from the competition and get funded, they must also submit ever more proposals. The National Science Foundation (NSF) found that between 1997 and 2006, the average applicant had to submit 30 percent more proposals to receive the same number of awards. This competition suppresses risk-taking even further. A 2014 NSF report found that concerns about the overregulation of science and the increasing administrative workload associated with federal regulations had been documented in surveys and reports for more than a decade. Researchers commonly cited financial management, the grant-proposal process, and time-and-effort reporting as sources of administrative burden.
Other assessments replicate these findings. The Faculty Workload Survey, for example, estimated in 2012 that researchers spend 42.3 percent of their time on “tasks related to research requirements (rather than actively conducting research).” The latest survey, from 2018, reports that the average amount of time researchers spend on bureaucratic compliance has increased to 44.3 percent. How many more discoveries or breakthroughs would have been made if researchers hadn’t allocated nearly half of their time to administration?
These mounting regulatory and administrative demands also make research harder for smaller and less established institutions, which don’t always command the resources needed to comply. This dynamic further reinforces scientific risk aversion, since larger institutions and teams are less likely to be truly disruptive. An analysis of more than 65 million papers, patents, and pieces of software demonstrates that between 1954 and 2014, smaller teams were more likely to generate novel ideas, while larger teams engaged in incremental science. One possible explanation for this phenomenon is that because larger teams require more funding, they become more sensitive to reputational risks and their research choices become more conservative. Ironically, then, more research funding leads to less disruptive science. This helps explain why the return on science is diminishing and the rate of breakthroughs is declining, even though more money and time are being invested and many projects are becoming cheaper as the costs of computing power, gene sequencing, and lab equipment have all fallen exponentially.
The rigid, hyper-regulated, and highly formalized bureaucracy that controls funding not only degrades scientific performance but also reveals a deep bias against creativity, novelty, and risk-taking. For example, when He Jiankui, a Chinese biophysics researcher, announced in 2018 that he had created genetically edited babies intended to be resistant to HIV, the global scientific community reacted with moral outrage. The eminent geneticist George Church, one of the few prominent scientists to defend He Jiankui, stated that the “most serious thing I’ve heard is that he didn’t do the paperwork right. He wouldn’t be the first person who got the paperwork wrong.” While this particular example is polarizing, ethically charged, and deeply ambiguous, the extreme reaction can be understood as a symptom of the dominance of the “global bureaucratic empire of science,” which seems to privilege regulatory compliance over scientific novelty.
Radical scientific novelty—which is uncertain and probabilistic, and as such lacks validation—is incompatible with bureaucratic rationality, which is geared toward repeatability, control, and procedure. Apparent novelty needs to be integrated into a bureaucratic or regulatory machine that is not optimized for overturning paradigms. As the highly politicized debates around climate change or the Covid-19 pandemic demonstrate, scientific novelty does not lend itself to the simplicity and unambiguity that policymakers, regulators, and administrators demand. The bureaucratic machine of science is therefore an important cause of stagnation.
But another fundamental source of stasis might be the scientific method itself. Over the past decade and a half, we have witnessed the eruption of a reproducibility crisis, starting with the 2005 publication of John Ioannidis’s landmark paper “Why Most Published Research Findings Are False.” The paper argued that, when the design and publication of a study are biased toward positive results—which is almost invariably the case—most of the results that get published are false. The culprit? “P-hacking,” whereby data are manipulated until patterns appear statistically significant. The reproducibility crisis identified in Ioannidis’s paper has since been confirmed by myriad empirical studies. Almost every scientific field has been affected, from clinical trials in medicine to research in bioinformatics, neuroimaging, cognitive science, epidemiology, economics, political science, psychiatry, education, sociology, computer science, machine learning, and AI. But it’s not just the social sciences that are affected by the reproducibility crisis—even the so-called hard sciences are infected by it. Two of the most hyped results in physics, the supposed discoveries of primordial gravitational waves and superluminal neutrinos, were quietly walked back in the 2010s.
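The mechanics of p-hacking are easy to demonstrate with a short simulation; the sketch below, in Python, assumes a hypothetical researcher who tries 20 analyses per study, a number chosen purely for illustration.

```python
import random

# Under a true null hypothesis, a p-value is uniform on [0, 1], so a
# single test crosses p < 0.05 only 5% of the time. A researcher who
# tries many analyses (subgroups, covariates, alternative outcomes)
# and reports only the most flattering one "finds" significance far
# more often. The 20 analyses per study is a hypothetical figure.
random.seed(0)
TRIALS = 100_000
ANALYSES_PER_STUDY = 20

false_positives = sum(
    min(random.random() for _ in range(ANALYSES_PER_STUDY)) < 0.05
    for _ in range(TRIALS)
)
print(f"studies reporting 'significance': {false_positives / TRIALS:.0%}")
# Expected: about 1 - 0.95**20, or roughly 64%, with no real effect anywhere.
```

The point is not that any single test is invalid but that selective reporting turns a 5 percent error rate into something close to a coin flip.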
Most studies in these fields cannot be reproduced or replicated. In 2015, an attempt to reproduce 100 psychology studies could replicate only 39 of them; a large 2018 study that aimed to reproduce prominent psychology findings could replicate only half of the 28 it examined. An attempt to reproduce peer-reviewed and widely cited research published in the leading journals Nature and Science concluded that only 13 of the 21 results could be reproduced. Meanwhile, in-house replication efforts at pharmaceutical companies such as Bayer failed to reproduce more than 80 percent of selected experiments published in prestigious journals. While scientific journals now retract about 1,500 articles each year—up almost 40-fold since 2000—the number of replicable papers has not substantially increased.
The reproducibility crisis again has its roots in bureaucracy. The obsession with citation-based metrics to measure productivity has spawned a plethora of peer-reviewed journals, some of which are of low quality. The scientific method itself has become a metaphysical abstraction that figures as an almost mystical source of epistemic authority—a process believed automatically to generate truth, understanding, and control.
The eminent physicist Richard Feynman compared the blind invocation of the scientific method with “cargo cult science”—a ritual masquerading as science. Many papers submitted to journals ritualistically invoke elements of the scientific method, such as the arbitrary p<0.05 significance threshold that is dogmatically applied in most scientific papers. The ritualistic quality of method is reflected in the cultural hegemony science has acquired over the past decade. We are told to “trust the science,” “believe the science,” “follow the science.” While it makes sense to trust science and engineering at a micro level—indeed, most of us do so daily—when “follow the science” becomes a hardened theology, we no longer leave room for heterodox theories to advance or for novel discoveries to emerge.
Undoubtedly, advances such as the rapid development of the Covid-19 mRNA vaccine, for which Moderna developed a design in just two days, are awe-inspiring. But this is quite separate from the politically charged cultural perception that science has a monopoly on truth—one either believes reflexively or is accused of being a “denier.” The cult of the scientific method demands the suspension of critical thought and the suppression of non-consensus ideas. At its core, this quasi-religious faith in science assumes that nature can be reduced and subjected to standardized procedures defined by the scientific method, a process that results in permanent progress. This dogmatic view of method, which has been referred to as “scientism,” is at odds with a conception of science as an activity geared toward the radical unknown and the truly novel, which cannot be routinized, rationalized, or ritualized.
There are multiple potential remedies that could reverse the decline of scientific progress. But whatever the changes to the existing system, reengineering the institutional machine of science can only take us so far. Another possibility is to revolutionize the very epistemic core of science.
One of the most radical proposals is philosopher of science Paul Feyerabend’s appeal to “epistemic anarchy.” As Feyerabend argued in his 1975 book Against Method, “the events, procedures and results that constitute the sciences have no common structure.” For him, “the success of ‘science’ cannot be used as an argument for treating as yet unsolved problems in a standardized way.” Feyerabend argues that the history of science is so complex that it cannot be reduced to a general methodology; asserting a general method will inevitably inhibit scientific progress, since any unifying, static method would impose restrictive conditions on new theories. Epistemic anarchy would therefore represent a radical alternative that might liberate us from the tyranny of the scientific method. As Feyerabend shows, a careful study of the history of science reveals that science, by rejecting non-standard modes of knowledge as heretical, has become uncritical and mired in orthodoxy. Rather than systematically studying the occult or the obscure, scientists’ standard reflex is simply to “curse them, insinuating that their curses are based on strong and straightforward arguments.”
By fundamentally questioning the methodological hubris of science and challenging a quasi-superstitious belief in its epistemology, Feyerabend is not undermining the foundations of science but defending the epistemic integrity of a form of scientific knowledge that has not yet been sanctioned by peer-reviewed journals and funding agencies. While it’s beyond our scope here to envision how such an epistemic anarchism could be implemented in practice, it’s worth noting that taking a closer look at heterodox areas that diverge from the mainstream of science—besides physics and astronomy, Feyerabend often refers to voodoo and astrology—could unlock new discoveries and knowledge. As history indicates, from geocentrism and the ether to phlogiston and global cooling, the scientific consensus often turns out to be flawed or rapidly becomes outdated. If the present is similar to the past, the possibility—at once frightening and liberating—remains that the epistemological and methodological foundations of what we call “science” might be less stable than they appear.