After suggesting in March that President Ronald Reagan had taken action to help the gay community during the AIDS crisis, a campaigning Hillary Clinton found herself pilloried by gay activists and others certain that he had done nothing of the sort. They were mistaken. In dealing with AIDS, Reagan did what he so often did well—he appointed people who shared his political convictions but could be relied on to make sound decisions based on apolitical facts and solid science. These appointees framed and announced such decisions in ways that would not result in politically polarizing efforts—in this case, efforts to fight a disease that disproportionately afflicted the gay community.
To begin with, Reagan appointed Dr. C. Everett Koop as surgeon general. When Koop addressed the public about AIDS, he declared: “This is a battle against the disease, not our fellow Americans.” And as the Washington Post noted shortly after his death in 2013, Koop was an “unsung hero” and “a pivotal figure” who saved many lives by persuading key members of Congress to set aside their hostility to the gay community and focus on the broader threat that the contagious disease presented.
More important, both of the Food and Drug Administration (FDA) commissioners Reagan appointed during his presidency were doctors who made the right calls in leading the assault on AIDS. As the policies they implemented would demonstrate, both understood that doctors could play an invaluable role in getting the right drugs into patients to beat this dreadful new disease. This marked the beginning of an important learning process that has recently resurfaced. The future of molecular medicine now depends largely on our willingness to give today’s doctors as much flexibility and responsibility as was given to doctors engaged in the early battle against AIDS.
The first report of patients suffering from what would be called AIDS appeared in June 1981, when the Centers for Disease Control (CDC) reported five cases, two fatal, of a rare form of fungal pneumonia that had struck young men living in Los Angeles. A month later, a second CDC report described clusters of Kaposi’s sarcoma—a rare, aggressive form of skin cancer caused by a strain of herpes virus. At the time, no FDA-approved treatments existed for the viral infection itself, or for fungal pneumonia, Kaposi’s sarcoma, and many other obscure diseases on the long list of afflictions that assailed patients when their immune systems collapsed and bacteria, protozoa, and viruses invaded to feast on their brains, lungs, blood, liver, heart, bone marrow, guts, skin, and eyes. And it was unlikely that any would be approved soon.
The key clauses of the federal drug law had been put into place after the world learned a brutal lesson about drugs. A drug called thalidomide had been marketed in other countries as a gentle, effective sedative recommended for use by pregnant women suffering from morning sickness. But thalidomide, it turned out, had a diabolical power to stop a fetus from growing ears, arms, legs, and other body parts. The thousands of thalidomide babies born in Western Europe and elsewhere helped move major amendments to the federal drug law through Washington in late 1962.
One amendment specified that the FDA could approve a new drug only if “substantial evidence” of its efficacy had been obtained in “adequate and well-controlled” clinical trials. The trial protocols that the FDA subsequently developed required that a new drug demonstrate positive clinical effects, which meant that trials couldn’t be completed any faster than the targeted disease typically progressed to the point of causing clinical symptoms—about five years in the case of HIV infection. The delays associated with conducting such large, long trials would have been a death sentence for many AIDS patients.
As the gravity of the AIDS threat became clear, the Reagan FDA began writing new rules that spelled out when significant parts of the old rules wouldn’t be fully or rigorously enforced. By doing so, the agency accelerated patient access to desperately needed drugs. Pharmaceutical companies quickly began coming on board once new policies were in place that would speed up the approval of their drugs. In short order, the firms delivered a slew of powerful new drugs, using the new tools for designing precisely targeted drugs that were coming of age at that time. As the National Academy of Sciences later noted, the extraordinarily fast development of drugs that ended up in the cocktails now used to control HIV had a “revolutionary effect on modern drug design.”
Alongside the drug companies, the most important, if least celebrated, players in the AIDS saga were the front-line doctors, who—endangering their own lives as they handled sharp instruments while treating their infectious patients—were determined to save lives, one life at a time, using any drug they could obtain that they believed might help, regardless of what, if anything, had yet been done to get the drug approved by the FDA.
Some doctors had started down that road even before Washington first recognized that a new epidemic was under way. The CDC’s 1981 cluster report, pointing a first official finger at the as-yet unnamed epidemic, described the treatment of five patients who had contracted the rare form of fungal pneumonia that infects about three out of every five AIDS patients. The first of the five had been treated with pentamidine, a drug developed 40 years earlier to treat Gambian sleeping sickness but never approved by the FDA. In the early 1970s, the CDC, which sometimes dispenses non-FDA-sanctioned drugs approved in other countries when a tropical disease lands in a U.S. hospital, had started supplying pentamidine to treat fungal pneumonia as well, apparently on the strength of a few reports from doctors using it to treat organ-transplant patients with immune systems deliberately suppressed by drugs.
Then there was the leprosy drug. The FDA hadn’t approved it, either, but at some point in the mid-1970s, the Public Health Service (PHS) began making it available to U.S. doctors. Soon after AIDS surfaced, some patients concluded that the drug might help them, too, and, unable to get it from the PHS, they began organizing buyers’ clubs to smuggle it in from Brazil, where it had been approved. The drug was then used to treat oral and genital canker sores, wasting syndrome, and Kaposi’s sarcoma, and it had positive effects on all three of these AIDS-related disorders. When Washington threatened to prosecute, the clubs brazenly declared that they would keep doing business with Brazil until the federal government offered them a better deal. The FDA relented, quietly reaffirming its existing policy of not prosecuting patients who smuggled in foreign drugs for personal use. Not long after, the PHS began making the leprosy drug available to U.S. doctors to treat AIDS-related conditions.
The FDA transformed an ad-hoc process of dodging its own rules, one drug at a time, into a diverse set of modest, uncontroversial policies to bypass older rules. Under normal FDA procedures, the sponsor of a drug that is ready to undergo clinical trials is granted an “investigational” license that authorizes use of the drug under FDA-approved protocols in the tightly controlled treatment of a limited number of patients, with half the patients typically receiving a placebo. But the FDA has broad discretion, for “compassionate” or other reasons, not to enforce its rules or to authorize other federal agencies, hospitals, and even individual doctors to “investigate” unlicensed drugs far outside the bounds set by the agency’s standard trial protocols.
The treatment-investigation policy was applied almost immediately in the fight against AIDS. Soon after AIDS surfaced, Washington began begging drug companies and researchers with virus-killing expertise to send in whatever they might have on the shelf for testing in the government’s secure HIV labs. A biochemist at Burroughs Wellcome (today part of GlaxoSmithKline) sent a drug called zidovudine (AZT) to scientists at the National Cancer Institute and Duke University. In lab tests, the drug looked promising. A clinical trial of AZT was then launched but had to be terminated prematurely when 19 patients in the placebo arm had died, against just one patient receiving the drug. Doctors can’t ethically keep prescribing a placebo just to run up the score once it becomes clear—to them, at least—that the drug being tested works. The FDA immediately authorized a treatment protocol for broader use of AZT. More than 4,000 AIDS patients were treated with AZT before the FDA finalized its approval as the first AIDS drug, now sold under the brand name Retrovir, in 1987.
Collectively, the various regulatory loopholes that the FDA then went on to create or broaden allowed doctors to start treating patients with some drugs (typically ones licensed in other countries) before they had even entered the FDA licensing process, and other drugs before FDA-approved trials had assessed anything much beyond short-term safety issues (and sometimes not even that), and still others before the drug sponsor had completed the last phase of the FDA’s standard, three-phase testing script. The FDA would start accepting off-script investigation “just as soon as we have the information to make a reasonable judgment.” For patients with “immediately life-threatening conditions,” that would typically mean as soon as the initial, short-term safety testing had been completed and “some evidence of therapeutic benefit” was obtained. Patients in merely “serious” trouble would have to wait longer, but not for final approval. The overarching objective was to let doctors treat “as many patients as possible,” as early as possible, with every “promising” drug available. The FDA conceded that the treatment-investigation approach was “not primarily to gain information . . . but to treat certain seriously ill patients.”
Alongside the CDC, the National Institute of Allergy and Infectious Diseases (NIAID) emerged as the main designated dodger of the usual drug-approval process. In the late 1980s, NIAID began funding “community-based AIDS research”—studies of not-yet-approved drugs in doctors’ offices, clinics, community hospitals, drug-addiction treatment centers, and other primary-care settings. One objective was to offer “greater treatment access for groups of AIDS patients who had not always had full opportunity to participate in existing studies: intravenous drug users, blacks, Hispanics, women (including pregnant women, whose babies are at risk of being born with AIDS), and patients not living near major research facilities.” A second objective was to catch up with medical scofflaws. Treat-and-learn programs could involve “drugs or therapies currently in wide use—whether they’ve been formally studied or not.”
The first drug that NIAID began distributing was Neutrexin, for use in treating fungal pneumonia in AIDS patients who couldn’t tolerate pentamidine; the FDA would license Neutrexin five years later. A NIAID-sponsored consortium of 300 San Francisco doctors then established that pentamidine also worked prophylactically when a nebulizer was used to administer a monthly puff straight to the lungs, and the FDA expanded pentamidine’s license to cover that mode of delivery, too. The FDA acknowledged that the long-term risks of inhaling the drug were unknown and probably would never be investigated in the usual way: the San Francisco crowd had learned too much, the word had spread, and neither HIV-positive patients nor their doctors would be willing to participate in standard, placebo-controlled trials.
By 1995, the FDA had granted treat-and-learn licenses for 29 drugs, 24 of which eventually completed the standard FDA licensing process successfully.
The process of learning how or when to use the leprosy drug followed a different trajectory. It would end up bridging the old world of pharmacology—heavily reliant on guesswork—and the new world of precision medicine, firmly anchored in molecular biological science.
The drug’s utility in treating leprosy had been discovered in 1964 by Jacob Sheskin, an Israeli physician, after he admitted to his ward a frantic woman suffering from the excruciatingly painful eruptions that often develop in the later stages of the disease. In an attempt to calm her, he prescribed a leftover sedative that he found on his shelf. Overnight, to his astonishment, her skin lesions and mouth ulcers were dramatically reduced. Brazil began using the drug widely in 1965, and Sheskin conducted successful clinical trials in Venezuela, where leprosy was also common. But no one yet knew how the drug worked.
The drug, it turned out, didn’t attack the leprosy bacterium; it alleviated symptoms that develop when the infection sends a patient’s immune system into overdrive. Gilla Kaplan, an immunologist at the Rockefeller University in New York, tracked the drug’s mechanism of action to tumor necrosis factor (TNF), one of three intercellular signaling proteins that the drug suppresses. TNF plays important roles in the communication system that the body uses to fight germs as well as cancerous human cells. But when engaged in a losing battle, the body sometimes produces too much TNF, which can then cause painful lumps and lesions on the skin. TNF overloads can also cause wasting syndrome, a common condition in the late stages of AIDS.
Other indications that the leprosy drug was calming down patient immune systems were emerging at the same time, and AIDS doctors, knowing that their patients often developed autoimmune diseases, began prescribing the drug to treat other AIDS-related disorders. In 1990, French doctors reported that the drug had proved effective in treating painful oral and genital canker sores in 73 AIDS patients. In 1994, several teams reported that it had reversed wasting syndrome. Other doctors began looking for additional TNF-related problems and were soon investigating the drug’s effects on various skin disorders and other inflammatory conditions, as well as autoimmune diseases such as lupus and rheumatoid arthritis.
The PHS, which had begun making the drug available to U.S. doctors not long after other countries started using it to treat leprosy, extended that policy to AIDS-related conditions shortly after doctors first reported success in prescribing it to treat them. NIAID sponsored studies and trials to explore the drug’s molecular and clinical effects, among them some conventional, placebo-controlled trials. In 1995, the FDA, in an action that set the stage for the most brazen dodging of its own rules yet, asked several companies to consider cashing in on a leprosy epidemic that wasn’t sweeping across America. Celgene responded, and conducted a small study in the Philippines; the drug was quickly approved for sale in the United States in 1998—but only to treat leprosy. As everyone knew would happen, sales boomed anyway, overwhelmingly to AIDS patients whose doctors prescribed it off-label. Thus, a drug that no one had ever expected to see again reappeared on the market.
The sedative that Sheskin had plucked off his shelf 40 years earlier was thalidomide.
By that point, AIDS patients had come to trust the judgments of the doctors treating them more than the FDA’s. They had seen how doctors on the front lines had been willing to cautiously explore new uses of old drugs in treating AIDS-related disorders and had discovered effective new treatment options that saved many lives—which may explain why some patients were dismayed when Washington legalized the use of thalidomide and, in an attempt to prevent its use by pregnant women, began closely tracking who was using it and for what purposes. Once the FDA approves a drug, the agency often exercises its authority to limit or forbid the promotion of off-label uses. “We are concerned that removal of thalidomide from the buyers’ clubs will make it more difficult to explore new possible uses for the drug,” one observer wrote in 1995. “Official clinical research is usually years behind the leading front-line scientists, physicians, and patients.”
Meanwhile, other doctors had discovered that thalidomide inhibits the development of healthy new blood vessels. To starve cancerous tumors of their blood supply, oncologists began adding the drug to some of their multidrug treatment therapies; the first reports of good results were published in 1999. A medical report describing the successful use of thalidomide in treating Kaposi’s sarcoma came out not long after Celgene secured the leprosy license, and, until overtaken by other drugs, thalidomide would play an important role in fighting that affliction in AIDS patients. In 2005, the FDA licensed thalidomide to treat a bone-marrow disorder that often causes severe anemia and, in about a third of patients, leads to a form of leukemia. A 2006 license covers use in treating another blood and bone-marrow cancer that kills more than 10,000 Americans a year. Even before that license was issued, thalidomide’s annual sales—still, officially, exclusively as a leprosy drug—had risen to $300 million.
Doctors today often do what other doctors did in the early battle against AIDS—but they do it much better, without resorting to trial and error. They search for an already-approved drug that targets a molecular pathway that drives one disease and, if precise modern diagnostic tools reveal that the same pathway is involved, prescribe it to treat a new patient’s different disease. The FDA has acknowledged that “off-label uses or treatment regimens may be important and may even constitute a medically recognized standard of care.”
Unfortunately, Reagan’s successors didn’t see to it that FDA commissioners would continue to build on the treatment-investigation policies and start developing a new process in which the trials required to win approval of a drug are integrated into the process of treating patients. For its part, the Reagan FDA never seized the opportunity to have doctors systematically gather information on why the as-yet unapproved drugs that they were prescribing to AIDS patients performed well or badly.
The precision in “precision medicine” is obtained by working out how complex arrays of molecular factors, which often vary widely across patients, can drive a disease and affect its response to drugs designed to target one of the driver molecules. That usually means developing and analyzing very large databases that include a broad range of molecular and clinical information collected from large and diverse groups of patients. Over time, the patient-by-patient collection and analysis of both molecular and clinical effects enable medicine to zero in on the molecular factors (biomarkers) that define each biologically distinct form of a disease, along with other factors that can influence a drug’s performance, such as those that affect how a drug is metabolized or interacts with the patient’s body to cause unwanted side effects. And because biology is endlessly variable and keeps changing, the study of these effects can never be declared complete: it must continue for as long as the drug is prescribed.
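The core move in that patient-by-patient analysis can be sketched in a few lines of code. Everything below is invented for illustration: the marker, the response rates, and the simulated records stand in for the real molecular and clinical data that such databases accumulate.

```python
import random

random.seed(7)

# Hypothetical patient records: each carries a molecular marker status and a
# recorded response to the same drug. Marker name and rates are invented.
def simulate_patient():
    marker = random.random() < 0.4            # 40% of patients carry the marker
    respond_rate = 0.8 if marker else 0.2     # the marker drives drug response
    return {"marker": marker, "responded": random.random() < respond_rate}

records = [simulate_patient() for _ in range(2000)]

# Stratify recorded responses by marker status -- the simplest form of
# biomarker analysis: one clinical disease, two biologically distinct groups.
def response_rate(group):
    return sum(r["responded"] for r in group) / len(group)

carriers = [r for r in records if r["marker"]]
noncarriers = [r for r in records if not r["marker"]]

print(f"overall:      {response_rate(records):.2f}")
print(f"marker+ only: {response_rate(carriers):.2f}")
print(f"marker- only: {response_rate(noncarriers):.2f}")
```

Real biomarker discovery screens thousands of candidate markers with corrections for multiple testing, but the stratified comparison is the basic step: the same drug can look mediocre overall yet be highly effective in a biologically distinct subgroup.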
Clinical trial protocols that include procedures to start developing these databases, as well as the complementary analytical tools that identify biomarkers that affect the drug’s safety and efficacy, take medicine’s ability to predict how a drug will perform in a patient far beyond what is learned from one-dimensional clinical trials and statistical correlations normally relied on by the FDA in the drug-approval process. The federal drug law directs the FDA to certify a drug’s future safety and efficacy only insofar as the drug is used “under the conditions of use prescribed, recommended, or suggested in the labeling thereof.” The FDA-approved labels that currently accompany most drugs provide almost no scientific information to address “conditions of use.”
Because HIV mutates so fast, the importance of studying the disease itself, as well as patient responses to drugs prescribed to treat it—and then monitoring and adjusting treatment protocols—was recognized early on. As of late 2011, the largest implementation of a precision HIV treatment strategy—Europe’s EuResist network—was using data from 49,000 patients involving 130,000 treatment regimens associated with 1.2 million records of viral genetic sequences, viral loads, and white blood-cell counts. As described by its manager, the network is “continuously updated with new data in order to improve the accuracy of the prediction system.” In a study dubbed “Engine Versus Experts,” the IBM computer that manages the network and analyzes the data was presented with 25 case histories not already in its database; EuResist beat nine out of ten international experts in predicting how well the treatments had performed.
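EuResist’s actual engine is proprietary and trained on those 1.2 million records; purely as a sketch of the idea, a new case can be scored against similar past cases. The mutation names below are real HIV resistance mutations, but the handful of records and the similarity scheme are invented for illustration.

```python
def jaccard(a, b):
    """Similarity between two sets of viral mutations."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Historical records: (viral mutation set, regimen, did treatment succeed?).
# Five invented cases; a real engine draws on hundreds of thousands.
history = [
    ({"M184V", "K103N"}, "AZT+3TC+EFV", False),
    ({"M184V"},          "TDF+3TC+EFV", True),
    ({"K103N"},          "AZT+3TC+LPV", True),
    ({"M184V", "K103N"}, "TDF+AZT+LPV", True),
    ({"K65R"},           "AZT+3TC+EFV", True),
]

def predict_success(mutations, regimen, k=3):
    """Score a candidate regimen for a new patient: a similarity-weighted
    vote over the k most similar past cases (mutation overlap, plus a
    bonus when the past case used the identical regimen)."""
    scored = []
    for past_mut, past_reg, success in history:
        sim = jaccard(mutations, past_mut) + (0.5 if past_reg == regimen else 0.0)
        scored.append((sim, success))
    scored.sort(key=lambda t: t[0], reverse=True)
    top = scored[:k]
    weight = sum(sim for sim, _ in top)
    if weight == 0:
        return 0.5  # no comparable history: no evidence either way
    return sum(sim for sim, ok in top if ok) / weight
```

A doctor-facing system would then rank every feasible regimen by its predicted success for this patient’s viral sequence, and each treatment outcome, once recorded, becomes another row of history that sharpens the next prediction.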
The first opportunity to begin systematically investigating how patient chemistry will affect a drug’s performance comes in the clinical trials required to bring a new drug to market. The data-collection procedures needed to develop biomarker science, however, are not included in the FDA’s standard trial protocols. Well-formulated “adaptive” trials, by contrast, conduct on-the-fly studies of molecular biomarkers that account for different patient responses and then develop and analyze databases. By doing so, these trials come up with precise scientific criteria for identifying patients who will respond well in the future. So far, the FDA has expressed little willingness to accept trials of this kind.
Experts in the field have suggested that, because of the limits that the standard trial protocols place on what the doctors involved in those trials may do, better information could be obtained by integrating the drug-approval process with the treatment of patients. “On Propelling Innovation in Drug Discovery, Development, and Evaluation,” a 2012 report by the President’s Council of Advisors on Science and Technology, stated: “Most trials . . . imperfectly represent and capture . . . the full diversity of patients with a disease or the full diversity of treatment results. Integrating clinical trial research into clinical care through innovative trial designs may provide important information about how specific drugs work in specific patients.” The British government recently announced plans to integrate clinical treatment into drug-development efforts on a national scale. As described by life-sciences minister George Freeman: “From being the adopters, purchasers and users of late-stage drugs, our hospital we see as being a fundamental part of the development process.” The U.S. National Center for Biotechnology Information has concluded that “cancer research is . . . poorly served because of the many existing clinical trials from which we currently learn almost nothing. . . . It is transformative to consider the possibility of linking the efforts of physicians, researchers, and patients in advancing cancer research. . . . Increasingly, randomized trials will be forced to share the stage with innovative trials that deeply investigate cancer within individuals.”
Here again, doctors and other experts have taken the initiative by embracing “rapid learning health care,” a term coined in 2007 by a group of health-care experts convened by the Institute of Medicine. In brief, the workshop participants proposed a process for continuously improving drug science using data that doctors collect in the course of treating patients, with a particular focus on groups of patients not usually included in drug-approval clinical trials. By 2008, as discussed in a recently published paper authored by two experts in the field, several major cancer centers had established networks for pooling and analyzing data collected by doctors in their regions. These systems are being used to identify new biomarkers, analyze multidrug therapies, conduct comparative effectiveness studies, recruit patients for clinical trials, and guide treatments. Several commercial vendors now offer precision oncology services.
As noted in the same paper, the powerful analytical tools and protocols now available, or under development, can use data networks to recommend treatments—both on-label and off—that would “avoid unnecessary replication of either positive or negative experiments . . . [and] maximize the amount of information obtained from every encounter”—and thus allow every treatment to become “a probe that simultaneously treats the patient and provides an opportunity to validate and refine the models on which the treatment decisions are based.”
Oncologists often prescribe cancer drugs off-label, guided by matching what the drug was designed to target with what an analysis of the tumor reveals to be active in the patient. In such cases, the treating doctors are prescribing drugs without relying on the FDA’s approval process, other than what was learned about the drug’s safety and side effects in the original trials. The success that oncologists have had with this approach suggests a fundamental change in the drug-approval process that would, as in the battle against AIDS, accelerate the delivery of new drugs to treat patients, while establishing much stronger foundations for prescribing the drug safely and effectively to future patients.
Oncologists are leading the way because they must. Cancers are extremely complex and endlessly variable diseases, the treatment of which is the epitome of “personal medicine.” But numerous studies have revealed that at the molecular level, many common disorders—common as conventionally defined by their clinical symptoms—are, in fact, clusters of biochemically distinct disorders. So the rapid learning process need not be confined to cancer patients. Other diseases can be equally lethal, and many other patients would benefit from drug-approval protocols centered on allowing skilled doctors to use the new drug to treat them and collect data to develop the foundations of high-precision prescription protocols. Ideally, the comprehensive analysis of how the patient-side chemistry can affect a drug’s performance will lead to a large database that grows as more patients are treated with the drug and as doctors keep gathering molecular and clinical data from each patient.
By failing to develop the best molecular science for identifying the conditions in which a new drug performs well, existing clinical trial protocols increase the likelihood that a drug that helps some patient cohorts involved in the trial will be rejected because it doesn’t perform well in others whose chemistry interacts badly with the drug. In well-formulated adaptive trials, the trial converges on patients who respond well and develops scientific criteria for determining which patients will respond well in the future—before the drug is prescribed. The new drug that would otherwise have been rejected gets approved because criteria for prescribing it to patients who will respond well are progressively developed during the trial.
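One well-known mechanism for the convergence just described is response-adaptive randomization. The sketch below uses Thompson sampling over two hypothetical marker-defined subgroups; the labels and response rates are invented, and this is an illustrative design, not a protocol any regulator has endorsed.

```python
import random

random.seed(42)

# Two hypothetical subgroups of one clinical disease, defined by a molecular
# marker. The true response rates are invented and unknown to the trial.
TRUE_RATE = {"marker+": 0.70, "marker-": 0.15}

# Beta-posterior counts of [successes, failures] per subgroup, starting flat.
stats = {g: [1, 1] for g in TRUE_RATE}

def choose_group():
    """Thompson sampling: enroll the next patient from the subgroup whose
    sampled response probability is currently highest."""
    draws = {g: random.betavariate(s, f) for g, (s, f) in stats.items()}
    return max(draws, key=draws.get)

enrolled = {g: 0 for g in TRUE_RATE}
for _ in range(500):
    g = choose_group()
    enrolled[g] += 1
    if random.random() < TRUE_RATE[g]:   # simulate the patient's response
        stats[g][0] += 1
    else:
        stats[g][1] += 1

# Enrollment concentrates where the drug works; the per-group counts double
# as the beginnings of prescribing criteria for future patients.
for g in TRUE_RATE:
    s, f = stats[g]
    print(g, "enrolled:", enrolled[g], "estimated rate:", round(s / (s + f), 2))
```

As outcomes accumulate, allocation shifts toward the responding subgroup, so the trial both treats more patients effectively and emerges with a marker-based rule for who should get the drug.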
Adaptive trials are much more efficient—they can obtain statistically robust results even when they involve fewer patients. They cost less and are more likely to culminate in the approval of the drug. Smaller adaptive trials can be shorter than conventional trials—many years shorter, by at least one estimate. These differences almost certainly mean that lower prices will follow because current clinical trials account for an estimated 50 percent or more of drug-development costs.
Faster trials, when successful, mean earlier patient access to new, potentially lifesaving drugs. They can thus address the increasingly vocal “right to try” demands from patients suffering from serious diseases who desperately want immediate access to any drug that might help. With serious diseases, earlier access can have an enormous effect. A recent study by a team of U.S. and Canadian researchers, which examined clinical trials of drugs targeting incurable cancers published between 2000 and 2015, found that the median time elapsed between drug discovery and approval was 12 years and calculated that, in North America, the approval delays cost more than 240,000 life-years. Taking into account lives saved by the additional safety-related information obtained in longer trials reduced the losses by only about 0.001–0.002 percent.
Critics of proposals that drugs should be prescribed to patients without first convincing the FDA that they are safe and effective often insist that this means using patients as guinea pigs. But conventional clinical trial protocols already do so. Doctors do so again when they prescribe drugs off-label. The debate can’t be about whether patients should be involved, because they must be. It’s about whether we learn as much as possible when they are involved. The unethical option is to cling to outdated drug trial protocols.
The Reagan FDA demonstrated that it is possible to find substantial numbers of doctors with the skills to prescribe drugs effectively, even when those drugs haven’t first been evaluated in lengthy FDA-choreographed trials. The tools of precision medicine greatly extend what doctors can do and learn in the course of treating patients. Given the authority to collect data from consenting patients, doctors participating in today’s rapid-learning implementation of treatment-investigation can do much more than treat and save lives sooner. The databases that they build and the analytic engines that get developed to pluck information from the databases will guide the precise prescription of the drugs to future patients, helping to save lives for decades to come.