Whenever threads about libertarianism pop up, liberals usually cite the FDA as an example of why government is needed and why the market fails. The claim is that if the FDA didn't exist, drugs would be much less safe, and on average more people per year would die from drug-related problems than die now under the FDA's regulatory regime.
I happen to think this is an example of exactly the kind of wrong-headed thinking that pervades most arguments about the value of government programs. Namely, once a program is enacted, it's no longer possible to see what would have happened if it didn't exist. Successes of the program are trumpeted by politicians and government supporters, and failures are ignored. The opportunity costs of regulation are never discussed, nor are the economic costs of the regulation itself - because those costs are absorbed by the market, they never appear on government balance sheets and are thus ignored.
So, given all that, I think it's worthwhile to look a little closer at the FDA and try to determine whether it really deserves its status as poster child for the goodness of government.
I’ll start with some history of the various regulations that created the modern FDA, then talk about what the FDA actually does today, then level some criticisms against it.
History of the FDA
The first regulatory act controlling food and drugs was the Pure Food and Drug Act of 1906. This act came about because of public hysteria over food quality following the publication of Upton Sinclair's "The Jungle". Before it passed, you could buy and sell any drug on the free market, and you didn't need a doctor's prescription; there was no broad public outcry for regulation until that alarmist book was published. This first regulation did little except provide criminal penalties for mislabeling foods and for adulterating the content of foods and drugs. Note that there was no government testing for safety or efficacy - just tests to make sure that the food or drug was what you said it was.
In 1912, the act was amended by the Sherley Amendment, which added criminal penalties for false claims of efficacy.
The act remained largely unchanged until 1937, when a toxic formulation of a sulfa drug (Elixir Sulfanilamide) was put on the market, resulting in 107 deaths, many of them children. The fact that children died amplified the public outcry, and the Roosevelt Administration was at the height of its regulatory expansion, so the Food, Drug, and Cosmetic Act of 1938 was passed. This was the first time new drugs actually had to receive regulatory approval before they could be sold. But the regulatory process was very light - producers had to submit a form with testimonials from doctors and documentation of tests that were done to prove the product safe. Approval was automatic after 60 days unless the FDA spotted a problem with the documentation.
There was also an 'exemption' clause for labeling requirements, ostensibly to allow manufacturers some flexibility in labeling. The FDA used it to create a new category of drugs that could be labeled as safe only if prescribed by a doctor - in effect, the power to prohibit certain drugs from being sold at all except by prescription, a power that was not the intent of the legislation.
By the start of the 1960s, the average time for regulatory approval of a new drug was still only about half a year. But this process did produce the FDA's big success, still touted today: delaying Thalidomide's entry into the U.S. market kept it out of widespread use before its link to infant birth defects was discovered. In that instance, the FDA really did prevent thousands of birth defects.
As a result of this success, the government pushed for a massive expansion of the FDA's powers. The Kefauver-Harris Amendments of 1962 gave the FDA power to regulate manufacturing processes, to require pre-clearance of every human trial, to pre-approve all advertising and labels for drugs, and, in the biggest change, to require testing and certification for efficacy, not just safety. This last change is primarily responsible for the huge increase in regulatory costs and delays. A big spike in overall development times for new drugs occurred after 1962 - before 1962, the average development time of a drug was four to six years. By 1990, it had increased to 16 years, most of that time spent in FDA-controlled certification trials and testing.
After 1962, the number of new drugs entering the market began to decrease, as economic theory would predict. The radical increase in the cost of drug certification made many forms of drug research unprofitable, and manufacturers shifted their efforts toward drugs that could be marketed to a mass audience, since the regulatory burden was a fixed cost that didn't change whether you sold a thousand pills or a hundred billion.
The key thing to note here is how these regulations came to be. It wasn't a slow, incremental process of fine-tuning - adding and removing rules as market conditions changed or gaps in existing regulation were found. The regulations were fairly static throughout most of the 20th century, punctuated by major regulatory 'bursts' after high-profile events temporarily gave government the political capital to expand its power. The odds that regulations crafted in this kind of environment are anywhere close to optimal are vanishingly small, as are the odds that an act designed to regulate a highly technical field in 1962 is in any way appropriate to the R&D and markets in play in 2008. There have been minor changes in FDA regulations over the years, but the essential form and function of the agency has been defined by three incidents spread over roughly 60 years.
A cite for this information: George Mason Law Review.