The press is full of stories criticizing the pharmaceutical industry for marketing “blockbuster” drugs that are big on sales but low on innovation. “Pharmaceutical executives, like movie moguls, have focused on creating blockbusters,” wrote Melody Petersen in a Los Angeles Times op-ed last January. “They introduce products that they hope will appeal to the masses, and then they promote them like mad. . . . The strategy had a flaw that executives have long ignored: It required extraordinary amounts of promotion at the expense of scientific creativity.” Similarly, Marcia Angell, former editor-in-chief of the New England Journal of Medicine, told PBS’s Frontline: “The pharmaceutical industry likes to depict itself as a research-based industry, as the source of innovative drugs. Nothing could be further from the truth.”

Angell and her fellow travelers may concede that the early versions of some blockbuster drugs are useful. But then, they claim, the industry gins up unconscionable profits from marginally useful “copycat” drugs that are barely distinguishable from the original blockbusters, which by then are often available as cheaper generics. If only companies would skip the mass marketing and focus on research, the argument goes, the pharmaceutical industry would enter a golden age of innovation.

The critique is popular, but it’s more myth than reality. For one thing, blockbuster drugs are the fruit of decades of painstaking research and investment. After the Second World War, pharmaceutical companies like Eli Lilly and Merck began trying to develop a system for the deliberate design of new drugs, based on the known structure and activity of chemical compounds in the human body. This new approach would eventually supplement the older model, which consisted of the mass screening of natural products (looking for the next penicillin in soil samples, for example), and allow the industry to produce safer and more effective medicines.

The R&D programs implemented in the fifties, sixties, and seventies set the stage for the blockbuster era (typically dated from the eighties through the present). Take Tagamet, the granddaddy of all blockbusters. Until the 1970s, peptic ulcers could cause severe stomach lesions, bleeding, and even death; the only available drug treatment was antacids, which brought merely temporary relief, and stomach surgery was sometimes a last resort. But even in the early 1960s, researchers at Smith, Kline & French, an American company with research laboratories in Britain, knew that histamine triggers stomach-acid secretion when it binds to a histamine receptor in the stomach lining. After identifying the receptor, Sir James Black and his Smith, Kline & French colleagues tested hundreds of compounds in search of one that would block it; some worked but had serious side effects. After years of testing, the company finally launched Tagamet in 1976. Tagamet was not only a new drug; it was developed in a new way—“designed logically from first principles,” according to the American Chemical Society.

By the mid-1980s, Tagamet had become the world’s leading prescription medicine. Follow-on drugs, such as Zantac and Prilosec, exploited its lucrative niche and addressed the original drug’s shortcomings, offering fewer side effects and better control of stomach acid. In 1988, Black received the Nobel Prize in medicine, in part for his breakthrough work on Tagamet and histamine receptors.

The blockbuster era that Tagamet ushered in produced enormous health benefits for a variety of serious and life-threatening diseases. In 2008, the Centers for Disease Control and Prevention announced that “deaths in the United States from heart disease and stroke are down 25 percent since 1999,” amounting to “a staggering 160,000 lives [saved] in just six years.” Many of those lives were saved by heavily marketed statin drugs—including the first statin, Mevacor, and then its successors—which have helped reduce the number of people with high cholesterol by nearly 20 percent. AIDS mortality in the U.S. peaked in 1994 and 1995 but has declined 70 percent since then, thanks to the multidrug cocktails, introduced in 1996, known as highly active antiretroviral therapy. In 1987, the FDA approved Prozac, invented by scientists at Eli Lilly and the first selective serotonin reuptake inhibitor; SSRIs are effective antidepressants with few side effects, unlike earlier antidepressants, which had serious side effects, were poorly tolerated by patients, and were often used in suicide attempts. And according to the National Cancer Institute, the cancer death rate fell by about 1 percent per year between 1993 and 2002 and by about 2 percent per year from 2002 to 2004. Improvements in treatment for some cancers have been nothing short of astonishing: partly because of therapies like Taxol, Herceptin, and Arimidex, the five-year survival rate for early-stage breast cancer is now over 95 percent.

It’s worth noting that the drug industry’s spending on research and development rose from $3.3 billion in 1985 to over $22 billion in 2006—a nearly sevenfold increase, and an effective response to critics who say that Big Pharma doesn’t spend enough on R&D. But the critics are correct when they say that many drugs treat the same condition using similar mechanisms. Do we really need all of these copycats or “me-too” drugs? Couldn’t we get by with just a few drugs in each class?

At root, the criticism of me-too drugs, or follow-on innovations, stems from the widespread belief that health-care products are somehow different from other products, like computers and televisions. But in health care, as in other economic sectors, breakthrough technologies are followed by incremental innovations that compete by making the original product more user-friendly, and that competition continues fiercely until another breakthrough comes along and makes the original discovery obsolete. “Me-too” technologies, in other words, are good for consumers and good for the economy.

Consider the birth control pill—the original lifestyle drug. (Pregnancy, after all, isn’t a disease.) The FDA approved the first oral contraceptive in 1960, yet the industry continues to roll out new contraceptives with essentially the same efficacy but with other, more marketable characteristics. Products sold today typically offer lower (and safer) hormone doses; more convenient delivery systems (implants that release hormones under the skin, for example); and different side-effect profiles (some boast the ability to combat acne, reduce the frequency of menstrual periods, or moderate mood swings). They are marketed in slick TV advertising campaigns, many featuring pretty, hip young women. And yet no one complains about the proliferation of birth-control options, because a powerful consumer demographic demands them.

Critics also ignore the need for therapeutic diversity. Patients who don’t respond well to one drug may respond to another drug in the same class; many drugs, like statins, vary in strength and side-effect profiles. Patients with serious mental illness, like depression, typically try several therapies before they find one that works. Mixing and matching different drug cocktails for cancer and AIDS is essential to keeping those diseases at bay. When it comes to medicine, one size doesn’t fit all.

As for all that marketing, its power is exaggerated, argues John Swen, vice president for science policy and public affairs at Pfizer. Many heavily marketed drugs fail to gain a marketplace foothold. “The fact is that it is difficult to predict what the sales of a drug are going to be before they actually come to market,” Swen says. “Many drugs, like Lipitor, treat many more patients than anticipated, often by an order of magnitude. Others, like Exubera [for diabetes], perform much worse [than expected].”

Ironically, the very science that sustained mass-market medicines may eventually bring about their demise—and dissipate the criticism of drug companies. Since the deciphering of the human genome in 2000, companies have sunk billions of dollars into new technologies ranging from gene mapping to bioinformatics. The quest is to treat disease at its genetic roots, with medicines tailored to each person’s biology. Some of these investments have been overhyped, but others will eventually produce breakthrough innovations, just as the investments of the sixties and seventies did. And when they do emerge, new technologies (including much more sophisticated diagnostics) will allow doctors to choose drugs for patients most likely to benefit from them. The advent of personalized medicine will also give companies powerful new marketing and pricing leverage. The size of the market for particular drugs may shrink—and drug companies may become smaller and more nimble to exploit fast-moving scientific discoveries—but insurers and governments will find it much more difficult to ration access to targeted therapies.

In short, the industry has market demand on its side, and like it or not, at the end of the day it’s the market—millions of patients and their physicians—that gets the final vote on what constitutes a valuable new treatment. Affluent retirees in the U.S. and Europe are unlikely to accept the ravages of aging—arthritis, cancer, and Alzheimer’s—quietly. They will turn to the pharmaceutical industry for help, and the industry will receive a premium for bringing new therapies to market. This will usher in a new era of personalized blockbuster drugs—one that Americans will celebrate, not condemn.
