You are arbitrarily limiting this statement to just illness. Can you recognise that at some point you are actually causing harm by having the government divert money away from better causes?
Profit might not be a perfect way to judge where the money should be spent, but it’s a lot better than just writing a blank cheque with other people’s money.
But there’s a difference between “having money as the sole driving force behind a lot of research” and “having money as the sole driving force behind all research”. Private companies don’t fund research into public health because–get this–we pay for public health via public funds.
But note that there are plenty of NGOs that fund basic research. If you want to contribute to curing problem X, make a donation.
That’s funny, because the FDA started regulating drugs in 1938, and didn’t require efficacy certification until 1962. It seems they didn’t have a problem separating the two then.
There’s a big difference between an anti-nausea drug that deforms your baby, and one that is perfectly safe but doesn’t seem to prevent nausea.
I know what you’re saying - an anti-cancer drug that doesn’t slow or stop the spread of cancer is dangerous if it prevents someone from using a treatment that is known to work. But there’s still a fundamental difference between that and an anti-cancer drug that, say, attacks your liver and kills you.
But life-saving drugs are an extreme case. There’s a whole range of drugs which improve lifestyle, and which really do little to no harm if they don’t work. Pharmacies are already full of such ‘drugs’. Wander around and read the labels of OTC medications, and look to see how many are ‘homeopathic’. They’re nothing but distilled water. But the FDA doesn’t stop people from selling those, because they know that water is safe. So buyer beware.
Why should a drug that needs to be tested for safety suddenly also require testing for efficacy, when a drug that doesn’t need safety testing also doesn’t need to show efficacy?
And you ignored my point about off-label use. Off-label prescriptions are extremely common, and yet they are completely untested for efficacy against the condition for which they are prescribed. If doctors have the freedom to do that, why should the drug be held up in certification for years because it has to have at least one certified on-label use?
As an example, adults have been prescribed various drugs for attention deficit disorder, when only one (Strattera) was actually certified for such use. And because efficacy trials often do not include children, pediatricians prescribe as many as half of all drugs for off-label conditions for which they were never tested in children. In psychiatry, more than a third of all prescriptions are off-label. Methotrexate is commonly prescribed for immune system disorders and rheumatoid arthritis, but was never tested for efficacy for those uses - it was certified as an anti-cancer drug. About 30% of all cancer drug prescriptions are off-label.
What this shows is that the health care community is very good at self-regulating in this regard. No one doubts the value of Methotrexate for its most common off-label uses.
The FDA will not allow drug manufacturers to disseminate information about off-label uses of prescription drugs. This is crazy, because the drug manufacturers often have huge reams of data on various off-label uses that came out of drug trials, but they are not allowed to share them with doctors. So doctors have to rely on field trials and medical journals and conferences and word of mouth to spread information about off-label uses of drugs. Despite this limitation, there seems to be a thriving, safe, and useful sub-industry of drug prescriptions for uses which were never tested for efficacy.
Given that this is already so widespread, and apparently well regulated by the doctors themselves, why not just get rid of the efficacy requirement altogether? Because there’s one way in which a drug is guaranteed not to work - if it sits on a shelf because no one can afford to buy it, or it remains in the head of a researcher because no one can afford to certify it.
The orphan drug tax credit is a tax credit designed to promote R&D into orphan diseases (a collection of 5,000 rare diseases which affect 20 million Americans total).
There isn’t enough incentive in the private market to research these diseases, so the public sector steps in to manipulate the market.
Hence, the public sector does a better job than the private sector at times because the profit motive is not always the same thing as research to promote health. There isn’t enough money to be made in orphan diseases so the public sector steps in and creates those incentives.
We currently have a situation where the incentive to create a 10th statin is high (low risk, high reward) but the incentive to combat diseases of poverty occurring in the 40% of the world population living on less than $2/day (tropical diseases, an HIV vaccine, etc) or orphan diseases which only affect a small number of people is low (high risk, low reward). So the public sector steps in and manipulates the market by skewing incentives. Fine by me.
On another note, bringing a new drug to market does not require it be better than old drugs, just better than placebo. Which is why about 70-80% of drugs that hit the market are ‘me too’ drugs, which have benefits (different side effect profiles, different effects) but fundamentally are not new classes of drugs or fighting new diseases.
smiling bandit, you are free to challenge ideas to your heart’s content, but you are way too close to simply calling another poster names.
Knock it off.
[ /Moderating ]
I would defend myself against your rudeness with rudeness of my own but I’ll probably get another official warning for doing that. Damn vagaries.
What is your evidence that public R&D is only about academic testing? I am including grants and tax credits in public R&D. I am not just talking about lab work in public universities I am also talking about grants, large rewards, tax credits, etc. given to private industry to create new drugs.
Where is your evidence? I’m only seeing opinion and rudeness.
Nobody said 40% went strictly to theoretical grant-based research. I am including tax credits, grants, rewards, etc. given to companies to bring a drug to market.
The orphan drug tax credit is a good example of public funding being used to bring drugs to market.
The OOPD administers the major provisions of the Orphan Drug Act (ODA) which provide incentives for sponsors to develop products for rare diseases. The ODA has been very successful - more than 200 drugs and biological products for rare diseases have been brought to market since 1983. In contrast, the decade prior to 1983 saw fewer than ten such products come to market.
That isn’t true. A drug only needs to be better than placebo, not better than the other 11 drugs of the same class.
About 70-80% of drugs that hit the market are either knockoff drugs which are slightly different, or they are old drugs combined with other old drugs (taking 2 drugs that are already on the market, combining them and calling it a new drug), or they are old drugs repackaged for something else (using wellbutrin to fight depression, then repackaging it as zyban to fight smoking addiction).
There is good in creating knockoffs. There are more drugs to pick from, and the side effect and benefit profiles improve, but there are tradeoffs.
As an example, with schizophrenia there are over a dozen drugs that work through the dopamine receptor. They have different side effect and benefit profiles (abilify is different from clozapine, for example, in benefits and side effects). But the NMDA receptor also plays a role in schizophrenia. Right now there are 0 drugs on the market for schizophrenia that work via the NMDA receptor. Luckily both private and public industry are working on fixing that, but schizophrenics and other mentally ill people would be better off if R&D was devoted to creating 1 drug for mental illness that worked via the NMDA receptor instead of a 14th drug that worked via the dopamine receptor.
Hard to grasp? I’ll explain it like I’m talking to a child.
Some diseases affect large numbers of wealthy people (wealthy in global terms).
Other diseases affect a small number of wealthy people (ALS, MS, cystic fibrosis, etc.).
Still others affect poor people (people living on $2/day).
At the same time, creating a new class of compounds has a high risk and low reward profile. You could end up spending $500 million on a new class of drug, only to have it fail clinical trials. So there is a barrier to creating new compounds since you don’t know the side effect profile or if it’ll even get through clinical trials.
At the same time, drugs that are already on the market have proven their safety and effectiveness. So you just make minor alterations to a compound and sell it as a 10th statin, a 14th antipsychotic that works via the dopamine receptor, a 9th ACE inhibitor, etc. You already know it is effective, that it will pass clinicals, and that the side effect profile is mild.
The incentive system is to take pre-existing compounds and tweak them rather than to create new compounds. It is also to focus on diseases that large numbers of wealthy people have, and to ignore diseases small numbers of wealthy people have or diseases that poor people have.
Ergo you need the public sector to manipulate the market to ensure drug companies are focusing on fighting serious diseases with new compounds rather than making a 10th statin that doesn’t work better than the other 9.
Pharma companies have done amazing things in the last 60 years. Endless advances in the treatment of endocrine disorders, microbial infections, cardiovascular diseases, cancers, mental illnesses, etc. I think having private drug companies is great. However, their incentive system is not to create new drugs to cure diseases, which is why 70-80% of drugs that come to market are not new classes of drugs.
Slight nitpick: Methotrexate is actually FDA approved for RA. You are right that it was originally certified for treating certain types of cancer, but many drugs get approval for additional indications even after release.
True, the manufacturer is not allowed to advertise for non-approved indications, but there are plenty of ways they can get the information out there. If they have studies done, they can get them printed in a peer-reviewed journal like the NEJM. There are plenty of databases that put the information together and make it available to health care providers; one of the popular ones (at least for pharmacists) is Clinical Pharmacology, which lists all the off-label indications and dosages for medications. The only problem is that these databases normally require a subscription to view; you won’t find off-label uses in something like a PDR.
If tweaking existing compounds was truly the dominant incentive, then we’d have 10,000 variations of Aspirin instead of all the other drugs that have been introduced on the market since 1899.
Are you saying we’ve had 100+ years of drug research dominated by Aspirin knockoffs? History does not match up with your assertion that companies are overly preoccupied with trying to come up with the 10,001st variation of Aspirin.
I’ve said it many times because it is true. However that doesn’t mean 100% of drugs are knockoffs. About 70-80% of drugs that come to market are knockoffs, the other 20-30% are novel.
As for aspirin, it is the original COX enzyme inhibitor. And yes, we have seen dozens of COX inhibitors since (roughly 20), as well as various formulations and combinations. There are benefits and drawbacks (some are more selective; some, like APAP, do not cause bleeding), but there have been about 20 different COX enzyme inhibitors since aspirin. So your argument isn’t true: we have been creating ‘the next aspirin’ and still do. Celebrex is roughly the 21st variation of aspirin.
However 70-80% does not equal 100%. New drugs still come to market.
Why? How could that be if there’s a bigger incentive (as you say) to make another variant of aspirin?
You’re focusing on the “words” that make up a particular drug and since the word is the same, it must be a very minor variation. That’s wrong.
You can’t say there’s been little technological advance in petroleum formulations. If you’re biased toward focusing on a particular word, such as “carbon-based” fossil fuel, then it looks like we have the same fuel as before. If you use other words, such as kerosene, jet fuel, diesel, etc., then the progress and differentiation are more apparent. If you keep saying they are all just “variants on carbon molecules” since 1865, then it looks like the industry has been standing still for 150 years.
BETHESDA, MD, 18 January 2007—The number of new drugs entering the U.S. market has declined sharply, while spending by the pharmaceutical industry on research and development has steadily increased, congressional investigators said in a recent report.
From 1993 through 2004, pharmaceutical research and development expenses increased by 147%, the Government Accountability Office (GAO)—the investigative arm of Congress—reported.
However, investigators said, the number of new drug applications (NDAs) submitted to FDA during that same period increased by only 38%.
From 1993 through 1995, GAO reported, the number of NDAs submitted for new molecular entities (NMEs) increased, but declined by 40% between 1995 and 2004.
Of the 1264 NDAs submitted to FDA from 1993 through 2004, only 32% were for NMEs, congressional investigators said. The remaining 68% were applications for variations of medications already on the U.S. market, often referred to as “me-too” drugs.
Me-too drugs are less risky to develop, GAO stated, “because the safety and efficacy of the drugs on which they are based have already been studied.”
But, investigators reported, me-too drugs “do not offer any significant therapeutic benefits over products already being sold.”
In addition, GAO said, when companies concentrate resources on developing me-too drugs, innovation in new medications is reduced.
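To put the GAO figures quoted above in concrete terms, here is a quick back-of-the-envelope calculation using only the numbers in the report:

```python
# Back-of-the-envelope arithmetic on the GAO figures quoted above.
total_ndas = 1264   # NDAs submitted to FDA, 1993 through 2004
nme_share = 0.32    # fraction that were new molecular entities (NMEs)

nmes = round(total_ndas * nme_share)   # genuinely new compounds
me_too = total_ndas - nmes             # "me-too" applications

print(f"NMEs: {nmes}, me-too drugs: {me_too}")  # NMEs: 404, me-too drugs: 860
```

So roughly 400 of the 1264 applications were for genuinely new compounds and roughly 860 were variations on drugs already on the market, consistent with the 70-80% me-too figure cited elsewhere in this thread.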
I’m not sure what is wrong with my conclusion. Pain is a complex mechanism, and one of the factors in it is the COX enzyme creates prostaglandins in order to send signals of pain and inflammation. We have about 20 drugs that block the COX enzyme as well as various doses and formulations.
However, inhibiting the COX enzyme is just one of many potential targets for reducing pain and inflammation. But because we are researching a 21st, 22nd, and 23rd COX enzyme inhibitor, we are not devoting as much R&D to new targets for pain management.
As for the example of carbon and fuel, I don’t believe that is a good comparison. Coal is a lot more multifunctional: you combust it and you get energy. Human biology is not that simple. You can take any fossil fuel and burn it, but in the human body there are endless proteins to block, hormones to mimic, and reactions to trigger or inhibit. It’s not a simple combustion reaction.
Not every disease can be cured by blocking the COX enzyme like the 20 variations of aspirin do. Aspirin is meaningless for osteoporosis, depression, fungal infections, etc. Human biology and fossil fuels are not the same thing. You can combust any fossil fuel and get energy which can be used to do nearly anything; it’s not the same as the endless proteins and signaling chemicals in the human body.
The point is that the more classes of drugs we have, the better. New classes that work via different mechanisms will probably be found to be multifunctional. It has recently been found that NSAIDs like ibuprofen might reduce the risk of developing Alzheimer’s. New classes of drugs are going to have tons of benefits. But right now R&D is devoted to creating me-too drugs, not new classes of drugs.
One definite effect of only government-run pharmaceuticals would be no accountability when people die. When corporations mess up, there’s accountability. The CEO gets publicly raked over the coals in the press and the company has to pony up money. When the government messes up, everyone points fingers at everyone else. Well-meaning people who see government pharmaceuticals as the only way keep their criticism muted and insist these problems are not routine and that the system works well. Mistakes happen. In the end, some low-level flunkie takes a fall while the higher ups avoid any accountability whatsoever.
We need greedy CEOs simply so we can have someone to kick around if nothing else. When government screws up, we tend to resort to handwringing rather than real anger.
Well, while I’m sure the tax credit is nice, a far better incentive for a company to work on an orphan drug is the new ability to fast-track a different drug after submitting an application for an orphan drug. Being able to fast-track an eventual blockbuster could be worth billions. Sorry, I don’t have a link, as I don’t have access to Organic Process Research & Development where I’m posting from at the moment. I’m pretty sure it was in one of the latest regulatory highlights.
I haven’t read most of this thread, but I did want to address the issue of academic research and drug discovery. The reason that academic research does not produce new drugs is simply that it does not have the funds or resources to do so. A very large academic chemistry group at a university like Harvard has about forty members. Each member is really only doing productive research for about three years before a new “trainee” comes in and has to learn the ropes again. And that is just the chemistry side of the coin. In order to produce drugs, you need biochemists, analytical chemists, medical doctors, high-throughput synthesis machines, a large facility to store chemicals, combinatorial testing equipment, and people to maintain all of that equipment.
On the other hand, saying that academic chemists do not provide crucial research to the pharmaceutical industry isn’t right either. Instead of deliberately synthesizing drugs for a purpose, they tend to focus on long term natural product synthesis. While natural product synthesis is a complete loss for industry, it plays a crucial role in developing methodology for creating new scaffolds and arrangements of future drugs. The publicly published research will never be credited with the discovery of a new drug, but without it, we would be stuck with different arrangements of old drugs.
And of course, the ultimate synthesis of Taxol was credited to a university professor. There are actually professors credited with new drugs. Off the top of your head, can you honestly name a single person behind a new drug from the past twenty years?
This doesn’t really disagree with your assessment of fully government-controlled pharmaceutical companies. If companies were forced to research only drugs in the public interest, then enzymes and active sites unrelated to the target would never be investigated. As on the academic side of things, these apparently dead-end research avenues also produce unexpected results in unrelated areas. IMO a fully regulated pharmaceutical industry would be much slower at producing new treatments.
OTOH, fully deregulated and privately funded pharmaceuticals would be equally a disaster. There are many public threats, that are not profitable to pharmaceutical companies. IMO, the only way to ensure decent progress is made, is to encourage this research with public funds.
Government funding of private research should not be underestimated. I have yet to meet an industrial research group that was not receiving a government grant. Modern research is incredibly expensive, and those who think private companies alone are capable of producing useful research always underestimate the real cost. I briefly looked into doing some contract work, and just renting a hood and desk space was $120,000 a year in what is probably the cheapest area of the country.
Government funding of private research should not be underestimated. I have yet to meet an industrial research group that was not receiving a government grant. Modern research is incredibly expensive, and those who think private companies alone are capable of producing useful research always underestimate the real cost.
I think it’s also important to remember, however, that having a finger in every pie does not make the government a piemaker. The government does not produce anything, nor is it particularly good at picking winners and losers. It’s hard to give the government credit for contributing to science when all it does is throw money around randomly and never produce a finished, practical product out of that research.
The government role in financing basic research is crucial, no doubt, but I think supporters of activist government overstate the case and underestimate the fact that in the end, no matter how big the government role is, it is the private company that is accountable for the product, not the government. It makes no sense to give the government credit for creating useful drug A and then simultaneously slam a private company for creating unsafe drug B, when both products were subsidized by the government. Either the government is the most important part of the chain or it isn’t.
I just discovered that dichloroacetic acid (DCA) might be an effective anti-cancer compound. The reason is that it alters cellular metabolism and as a result can shrink tumors.
But DCA is a cheap chemical and can’t be patented. So private industry has no interest in conducting or funding clinical trials to bring it to market, since they would be funding a product that not only can they not make money off of, but that might cost them market share from their own cancer drugs - drugs which are more expensive and more toxic.
Some non-profits, public institutions and charities are funding trials on DCA as a treatment for cancer though.
Does this preliminary data prove DCA is a guaranteed miracle drug that can cure cancer for a few dollars per person? No.
Does it prove that private industry has no interest in a compound that could save consumers and the government billions of dollars in lower health spending, dramatically improve human health and prevent tons of suffering if they will lose money in the process? Yes. Private industry pursues profits, which are morally neutral. That is a good motive for some situations, and bad for others. In this case, it is bad.
But all of you can keep condescending to me about the miracles of the free market.
I think I understand your arguments why it might be better in theory, but I’m curious about whether or not you have forgotten that the experiment has been done in practice. Much of the 20th century saw whole nations, filled with folks smart enough for rocket science and nuclear weaponry, try state-planned and state-run pharmaceuticals with a stated goal to care for the People.
Can you give us some examples (pick any Communist system you like–USSR, e.g.) of how their centrally planned and funded pharmaceutical industry out-performed private industry for innovation and quality, effective pharmaceuticals?
One, I’m not a communist. The fact that I do not fawn over the free market as if it is the second coming (just let it do its thing and everything will work out for the best) doesn’t make me a communist. It makes me rational.
But an example you are looking for would be the US.
“It’s a tidy story,” writes Merrill Goozner in The American Prospect in an article from the early 2000s, “but it falls apart under scrutiny. Every independent study that’s ever looked at the sources of medical innovation has concluded that research funded by the public sector - not the private sector - is chiefly responsible for a majority of the medically significant advances that have led to new treatments of disease.”
The article cites a National Bureau of Economic Research study showing that public research was behind 15 of the 21 drugs regarded as having the highest therapeutic value between 1965 and 1992. Another study found that government labs and noncommercial institutions were involved in 60 percent of 32 innovative drugs. The National Cancer Institute, for instance, spent $32 million to develop the anticancer drug, Taxol, which generates $1.7 billion in sales for Bristol-Myers Squibb and which Bristol-Myers sells to patients for more than 20 times its manufacturing cost.
Private industry wants to maximize profits and minimize risk. Which is fine. But it doesn’t work too well in health care R&D. The profit motive is great in some areas (consumer electronics as an example), but it is extremely flawed in health care R&D.
You maximize profit and minimize risk by letting the public sector do all the grunt work of drug discovery, then you take their products to market. Or you spend all your time inventing knock-off drugs. Or you try to block generics from hitting the market. Or you ignore potential treatments for cancer that cost $20/month because they can’t be patented. Or you focus on diseases of the wealthy leisure class (baldness, acne, impotence) rather than the life threatening diseases facing humanity (malaria, HIV, TB) since the former have more money to spend on medications.
Profits are fine and good, but they don’t seem to work very well as an incentive for pharmaceutical R&D.
I’m uninterested in whether or not you are a Communist. I don’t particularly care. Most governments just sort of bumble along.
I suggest that your specific proposal–that private pharmaceutical companies should be banned and that the substitution of government ones would be an improvement–has already been carried out by communist countries over a period of many decades.
And I ask you again: do you have some examples of where your model has succeeded?
I am not asking about whether or not innovation also occurs in a university setting–and neither was your OP. It’s about whether or not state-run pharmaceutical companies do a better job for society overall, than privately-run ones. The experiment has already occurred. How did they do?
I don’t know of any developed nations with purely public pharma companies, and that is a very black/white proposition. Nations’ medical infrastructures are an amalgam of public and private enterprise to varying degrees. However, I do know the public sector plays a big role in bringing out groundbreaking ideas in medical R&D. And the goals of the public system seem more in line with the goals of medicine than the goals of the private system. I can’t answer your question because, as far as I know, there are no situations like the one you describe. Private industry in the US is not 100% private (it is funded in various ways by the public sector), and I don’t know anything about Soviet drug manufacturing and research.
But as I posted earlier, because the public system manipulated the private system with the orphan drug act, a variety of orphan drugs hit the market. However you can’t say the system was 100% private before that, and 100% public afterwards. I can give you examples of where the public sector compensated for the failings of the private sector, but I don’t know of any 100% pure public pharma companies. I don’t know of any 100% pure private pharma companies either.
Centrally planned economies tend to tank. But the private sector (when it comes to health care R&D) is extremely inefficient.