How a Drug Is Approved in the U.S.
When the U.S. Food and Drug Administration (FDA) approved the first AIDS medication in March 1987, the agency did so in the wake of mounting public pressure.
In the early 1980s, budgetary constraints kept the agency from hiring the requisite personnel to process the massive amounts of data in new drug applications. As a result, FDA review times for new drug approvals increased to more than two and a half years, according to a special communication authored by Harvard researchers Jonathan Darrow, Jerry Avorn and Aaron Kesselheim, and published in 2020 in the Journal of the American Medical Association (JAMA).
The delay in treatment approval prompted fervent activism, the ripple effects of which are still felt in the policies of the FDA, the government entity tasked with regulating substances and approving or disapproving new drugs through the Center for Drug Evaluation and Research (CDER).
"AIDS activism does seem to have prompted changes in FDA policy," noted Darrow, S.J.D., an assistant professor of medicine at Harvard Medical School in Boston.
Several FDA programs were rolled out as a result. As Darrow and his colleagues detailed in their JAMA article, the Orphan Drug Act of 1983 was implemented to help expedite drug development and approval by distinguishing testing requirements for rare-disease drugs from those for common-disease drugs. This was followed by the Hatch-Waxman Act of 1984, which spurred the generic drug industry by authorizing an abbreviated new drug application process for drugs approved after 1962 while also extending patent terms for brand-name drugs.
"It made clear that the clinical trial requirements were for a product to come to market as a generic, and the result was a vast influx of generic drugs," said Peter Lurie, M.D., M.P.H., president and executive director of the Center for Science in the Public Interest, based in Washington, D.C.
According to the JAMA article, if testing showed a generic drug achieved bioequivalence (that is, comparable blood concentrations) with the reference product, the generic could be approved without independent demonstration of clinical outcomes, provided it was identical in strength, dosage form and mode of administration, regardless of differences in appearance and inactive ingredients. The paper also noted that the proportion of prescriptions filled with generic drugs rose from 9 percent in 1970 to 43 percent in 1996 and about 90 percent as of 2017.
New business paradigms
Lurie explained that in the 1980s, '90s and early 2000s, drug companies often focused on making money with "me-too drugs" or "follow-on drugs." Since every chemical is considered unique, manufacturers presumed that minor molecular changes could constitute a new drug in the FDA's eyes. The easiest way to turn a profit, then, was not to develop an entirely new drug from scratch but to make small alterations at relatively low cost to the chemical structure of already approved treatments and sell the results to a large market with high demand, usually for common and sometimes chronic conditions.
"That's what happened with Viagra, right?" Lurie said. "Viagra was first on the market, and then boom, boom, boom, boom, five others, right?"
The similar erectile dysfunction (ED) medications offered no demonstrable advantage over one another, but they represented a profitable business model.
"[Because a company's] probability of success was so much higher with a copycat drug than starting from a completely fresh, risky compound, people made a lot of money that way and that's the way it was done," Lurie said. "But, of course, it was at the expense of true innovation, right? And so the products that came to market tended to be similar. So, yes, thank you, it's nice to have Cialis. But already, you had Viagra. So what is that population for whom Viagra doesn't work and for whom Cialis does? Is there actually even that population? How big is it? Most of the problem was taken care of with the first drug."
Over-the-counter (OTC) anti-inflammatory drugs (for example, NSAIDs) have seen comparable "me-too" or "copycat" proliferation, and generic OTC and prescription drugs are as available as ever.
"The problem is the other 10 percent [brand-name drugs], because that's where the money is really to be made," Lurie said.
"So now what we have is people making money based on vast charges [and] huge costs to sometimes small numbers of patients," Lurie said.
The new business paradigm focuses on less common diseases and niche markets, he said. These treatments often, though not always, are "biologics," products derived from living organisms rather than chemical synthesis.
Darrow and his co-authors attributed the high costs of prescription drugs, in part, to patent protections. Patents, which the U.S. Patent and Trademark Office can grant at any point during a drug's development, provide property rights; FDA approval automatically confers separate exclusivity rights.
"Exclusivity refers to certain delays and prohibitions on approval of competitor drugs available under the statute that attach upon approval of a drug or of certain supplements," according to the FDA website. There are different kinds of exclusivity, and the protection can last from 180 days for a competitive generic therapy approved through an abbreviated new drug application to a full seven years for "orphan drug" exclusivity. (Orphan drugs are intended to treat diseases so rare that producing a treatment would not otherwise be profitable.)
"The most important factor that allows manufacturers to set high drug prices is market exclusivity, protected by monopoly rights awarded upon Food and Drug Administration approval and by patents," wrote Kesselheim, Avorn and Ameet Sarpatwari in their 2016 article for JAMA about the high cost of prescription drugs in the United States. They proposed stricter requirements for awarding and extending exclusivity to firms to address the affordability issue in the short term.
ACT UP spurs faster drug approval
Prior to the advent of the new money-making model for pharmaceuticals, HIV/AIDS activists helped popularize criticism of government inaction during a growing epidemic disproportionately killing gay men. The drug industry later leveraged this criticism of the FDA to make it easier to get products approved for sale.
Activism continued apace into the late '80s as people continued to die from HIV/AIDS following the delayed response during the Reagan administration. In March 1987, the AIDS Coalition to Unleash Power (ACT UP) was created. In the same month, the FDA approved the first drug to combat AIDS: azidothymidine (AZT) or zidovudine, perhaps better known now by its brand name, Retrovir.
That same month, ACT UP staged a demonstration in New York City that resulted in 17 arrests. Rebel artists plastered the city with posters reading "SILENCE = DEATH" beneath a pink triangle, as documented in an essay by Deborah Gould included in the edited book "Passionate Politics: Emotions and Social Movements."
"In 1987, as momentum intensified to expedite the access, development and approval of drugs during the AIDS epidemic, 'expanded access' regulations formalized existing policies allowing patients with serious or life-threatening illnesses to receive experimental substances before FDA approval, provided certain conditions were met," Darrow, Avorn and Kesselheim wrote in JAMA. "These policies are sometimes referred to as 'compassionate use' programs, but this term is problematic because it may not necessarily be compassionate to provide a patient with access to an untested substance that could confer risks but no benefit."
In 1988, hundreds of ACT UP activists surrounded the FDA's Parklawn Building headquarters in Rockville, Maryland, to protest an approval process they believed was delaying access to potential therapies and costing lives.
This "pushed [the] FDA to promulgate new accelerated approval regulations to accompany new treatment regulations for investigational new drugs implemented in 1987, both of which enabled desperately ill patients access to promising new therapies," the FDA website states.
The "Fast-Track" program, introduced in 1987 and 1988, declared that two phases of trials could be considered sufficient for approval, instead of the previously required three, in hopes of expediting new therapies for life-threatening conditions.
Steps in the evolving process of drug approval
As part of the drug development and review process, studies of humans aren't typically conducted until after the FDA has reviewed an investigational new drug application, which usually—though not necessarily—relies on preclinical testing often involving animals.
"The FDA statute does not require animal studies for a drug approval," clarified Chanapa Tantibanchachai, M.S., a press officer with the administration. "The role of the FDA in the early stages of drug research is small. So the FDA physicians, scientists and other staff review test results submitted by drug developers. The FDA determines whether the drug is safe enough to test in humans and, if so, after all human testing is completed, decides whether the drug can be sold to the public and what its label should say about directions for use, side effects, warnings and the like."
Under normal circumstances with would-be prescription drugs, once the investigational new drug application is filed with the FDA and allowed to proceed, and an institutional review board approves the study, a company is permitted to run experimental tests involving humans, which are completed in three phases.
"Phase one is usually a fairly small study, a few dozen people, perhaps," Lurie said. These studies typically enroll healthy volunteers rather than patients with the targeted disease.
He said the second phase, which may or may not include people who have the disease, involves anywhere from a few dozen participants to a couple hundred, depending on the disease the drug is intended to treat. This phase is geared toward establishing more information on the drug's safety, figuring out appropriate doses and making initial assessments about efficacy.
"Once you've got that far, you now decide to do a phase three study, and these are the ones that really define most of what we know about these drugs," Lurie explained. Under normal circumstances, these studies involve people afflicted by the condition the new drug is supposed to treat.
"So, usually, it's a randomized trial but not always," he added. "Often, there's a placebo group, not always, and the investigators are 'blind.'"
Lurie said these studies tend to be randomized, controlled trials that follow patients for different lengths of time based on the illness doctors are trying to treat. This research produces data comparing what happened in the treated group to what happened in the control group that didn't receive the treatment.
"If you're the drug company and you're lucky, you can show a difference," Lurie said.
Accelerated approval, user fees and related concerns
The accelerated approval program established by the FDA in 1992 allowed drug approval to be based on a surrogate endpoint or marker (a lab result, radiographic image or physiological indicator) that is not itself a measure of clinical benefit but is believed to predict it. The use of surrogate endpoints enabled faster approvals, according to the FDA.
"There is a lot of attention these days on improving accelerated approval," said Joshua Sharfstein, M.D., vice dean for public health practice and community engagement at the Johns Hopkins Bloomberg School of Public Health in Baltimore. "It's in everyone's interest to know whether the medications have clinical benefits as quickly as possible."
Drug approval policy changes have often come about due to grassroots advocacy, especially in light of the massive suffering caused by neglecting a crisis such as the HIV/AIDS epidemic. But the pharmaceutical industry, too, has lobbied for expedited approval measures, which some researchers suggest could cause more harm than good.
With the Prescription Drug User Fee Act (PDUFA), enacted in 1992, the FDA committed to review deadlines of six months for priority applications and a year for standard ones. The agency was also authorized to collect fees from drug manufacturers.
Darrow, Avorn and Kesselheim explained in their 2020 article that to pay for the extra resources and labor required for priority review en route to hastened development and approval of new drugs, the PDUFA authorized "industry-paid user fees" as a source of funding from pharmaceutical companies submitting new drug applications. Review times sharply declined as the collected fees increased from less than half a billion dollars between 1993 and 1997 to $4.1 billion between 2013 and 2017.
"User fees create a conflict of interest and are inefficient and wasteful," Darrow explained. "The FDA resources needed to administer the fee collection program could be better spent on drug evaluation, review, surveillance, communication, etcetera. Congressional budget appropriations are the better way."
In a 2013 article for the Journal of Law, Medicine & Ethics titled "Institutional Corruption of Pharmaceuticals and the Myth of Safe and Effective Drugs," Darrow, Donald Light and Joel Lexchin noted that shorter "review times led to substantial increases in serious harms. An in-depth analysis found that each 10-month reduction in review time—which could take up to 30 months—resulted in an 18.1 percent increase in serious adverse reactions, a 10.9 percent increase in hospitalizations and a 7.2 percent increase in deaths."
As Lurie explained, in exchange for the pharmaceutical industry covering costs with fees, which are negotiated every five years, the FDA agreed to move faster with approvals, sacrificing a little rigor for more speed.
"Back in '92, when it first began in the drug center, even progressive people felt if the industry wants to pay for it, that's their business," Lurie said. "I'm not sure that people quite understood what the implications would be."
Within a few years of user fees coming into existence, a large cohort of backlogged drugs hit the market, but many were later withdrawn because they offered little or no significant therapeutic benefit.
"It's by a considerable distance a second-best choice," Lurie said of the pharma-funded user fees. "The best choice is to have adequate congressional appropriations for the agency."
Publicly financed studies and renewed activism
Increased regulation and oversight of pharmaceutical company-sponsored drug testing could help decrease incidences of adverse effects, but as Marc Rodwin argued in a Saint Louis University Journal of Health Law & Policy paper published in 2012, this approach is limited because it permits manufacturer-sponsored testing as part of the approval process.
"Having the federal government contract with organizations to test drugs would be a significant change but constitutes a more modest alteration than another reform proposal: public financing of clinical trials to test drugs," wrote Rodwin, J.D., Ph.D., a professor at Suffolk University Law School in Boston.
There are arguments for publicly funded clinical trials, which could be conducted either by the FDA's CDER or by an independent agency. Ideally, such trials would reduce bias in assessments; lower the drug development costs currently shouldered by individual firms; eliminate the perceived need for exclusivity rights or patents; lower drug prices; expand treatment access for low-income populations; and improve global healthcare overall.
Granted, as Darrow and Light pointed out in a February 2021 commentary for Health Affairs journal, there's a tremendous amount of under-recognized direct and indirect public pharmaceutical funding. The funding includes government underwriting of manufacturer costs with deductions and tax credits (for example, the $1.24 billion the industry earned in credits from 1981 to 1990); federal agencies using taxpayer money to support 776 interventional drug trials started in 2019; the FDA awarding more than $320 million in research grants for orphan drugs for rare diseases; orphan drug designation exempting companies from the almost $3 million review fee for each new drug application; and federal and state subsidies that cover education and training for scientific researchers.
A 21st-century social movement in the same tradition as ACT UP might help create the political conditions for publicly funded and democratically controlled research and development, as well as independent third-party phase testing of drugs.
Indignant direct action in the '80s played a role in reshaping how drugs are developed and approved, and today, as Lurie noted, about two-thirds of all drugs on the market anywhere are first available in the U.S. He called that a remarkable fact, though he has argued the FDA should improve transparency and release to the public the letters it sends to companies explaining why drugs are not approved.
"The FDA's requirements for approval of new and generic drugs and biologics are among the highest standards across the globe," Tantibanchachai said. "Prior to FDA approval, the manufacturer must prove the product is safe, effective and of high quality."
While HIV/AIDS activism catalyzed a movement that led to speedier development and approval of AZT in 1987, the original synthesis of zidovudine was carried out by Jerome Horwitz and associates in 1964 at the Michigan Cancer Foundation in an unsuccessful attempt to cure cancer.
Many unanswered questions remained following AZT's approval, and its efficacy was soon in question as more and more side effects were noted, according to a 2017 Time magazine article. Despite those concerns and the development of 40 additional drugs to treat HIV/AIDS, most patients seeking treatment today are still prescribed AZT as part of their therapy.