Episode 27: The First Opiate Epidemic


The United States is in the midst of an epidemic of addiction and overdose deaths due to opiate painkillers. Its causes are varied, but there’s no question that physicians share a large part of the blame. Little discussed is that this is actually the second time this has happened. Almost a century ago, a remarkably similar epidemic struck the country. In this episode, called “The First Opiate Epidemic,” I discuss what happened, the parallels to today, and the lessons we can learn from our forebears. Learn about all this and a new #AdamAnswers in this month’s Bedside Rounds, a tiny podcast about fascinating stories in clinical medicine!

Sources:

  • Courtwright DT. Dark Paradise: A History of Opiate Addiction in America. Harvard University Press, 2001.
  • Meldrum ML, “The ongoing opioid prescription epidemic: historical context,” Am J Public Health. 2016 August; 106(8): 1365–1366.
  • Courtwright DT, “Preventing and treating narcotic addiction — a century of federal drug control,” N Engl J Med 2015; 373:2095-2097.
  • Adams JFA, “Substitutes for opium in chronic diseases,” Boston Med Surg J 1889; 121:351-356.
  • Macht DI, “The history of opium and some of its preparations and alkaloids,” JAMA. 1915;LXIV(6):477-481.
  • Hamilton GR and Baskett TF, “In the arms of Morpheus: the development of morphine for postoperative pain relief,” Can J Anesth. 2000;47:4, 367-374.
  • Weiner JP, “A shortage of physicians or a surplus of assumptions?” Health Aff. 2002;21(1):160-162.
  • Gudbranson BA et al, “Reassessing the data on whether a physician shortage exists,” JAMA. 2017;317(19):1945-1946.
  • Kirch DG and Petelle K, “Addressing the physician shortage: the peril of ignoring demography,” JAMA. 2017;317(19):1947-1948.

Transcript

This is Adam Rodman, and you’re listening to Bedside Rounds, a tiny podcast about fascinating stories in clinical medicine. This episode is called “The First Opiate Epidemic.” It’s not news that America is currently experiencing an epidemic of overdoses and addiction from opiates. Last year, almost 60,000 people died as a result of overdoses, and overdose deaths are now the most common killer of Americans under the age of 50. The causes of the epidemic are diverse and could — and will — take up an entire textbook, but it’s not very controversial that doctors have been a major contributor. What we don’t really talk about is that this is not the first opiate epidemic to strike the United States. Almost 100 years ago, America was struck by an opiate epidemic of a similar scope and scale. In this episode, we’re going to talk about that first epidemic, the parallels to today, and the lessons that we all can learn. After all, everything old is new again.

 

Opium, of course, is not new — it’s as old as civilization itself. It’s harvested from Papaver somniferum — the sleeping poppy. The first written references to poppy cultivation come from ancient Sumeria, where the poppy was called “hul gil,” or the plant of joy. This was recorded on a white clay tablet dated to approximately 3400 BCE — to give some context, that’s older than the Epic of Gilgamesh. Over the next several millennia its use spread throughout the Mediterranean world. An Egyptian papyrus from roughly 1500 BCE recommends smashing poppy seeds to a pulp to treat a fussy child. Which probably worked quite well, assuming they didn’t stop breathing. By the time of Hippocrates, in the fifth century BCE, we have unequivocal evidence that opium was being used as a narcotic. Hippocrates himself dismisses the apparently common belief that opium is magical, but presents a list of chronic diseases whose suffering it can ease, some of which we still treat with opiates today. The various Islamic empires continued the development of opium as a medication, and by the time of Avicenna we basically have our modern understanding of the drug, including its pain-relieving and hypnotic properties, its effects on cough and diarrhea, its tendency to cause respiratory depression, and even sexual dysfunction.

 

From Avicenna, the use of opium in medicine spread to Europe during the Renaissance, and then to America. Instead of brewing a tea or making a salve, the preferred preparation was laudanum, a mixture of opium with alcohol and other spices. Thomas Sydenham, the English physician who had his fingers in just about everything, invented probably the most famous laudanum recipe, one that persisted until the early 20th century. For those wondering, other than opium and alcohol, it contained saffron, cloves, and cinnamon, kind of like mulled wine or a hot toddy.

 

This laudanum was an “all natural” product, and I don’t mean that in a positive way. Like much of the supplement industry today, it was unregulated, and different batches had dramatically different potencies, or might not even contain opium at all. Improved understanding of the new field of chemistry, now definitively separate from alchemy, led a new generation of young pharmacists to try to isolate the active ingredient from opium and provide pain relief more scientifically. The pharmacist who was ultimately successful — or at least would get the credit — was named Friedrich Sertürner. He managed to extract crystals from opium that he called “Morpheus,” after the Greek god of dreams, since he noted that the principal effect was to cause a deep slumber. The scientific community was initially very skeptical, but he persisted with his experiments, including giving the drug to the neighborhood children, since, you know, medical research is a lot easier without those pesky IRBs. Finally, the scientific community came around. Given the naming conventions of the day, his “Morpheus” was converted to “morphine,” and in 1817 the new drug was born.

 

So as the nineteenth century started, we know opiates — at this point, only opium and morphine — were popular medications, both in Europe and the United States. Really, the most popular medications. By 1834, opium had become the most commonly prescribed drug in the US. And I can’t really blame either doctors or patients. Until surprisingly recently, medicine really couldn’t do much, and as anyone who listens to my show knows, it was often horrific and hastened death (bloodletting probably being one of the more dramatic examples). For a medical system that often inflicted torture in the name of healing, opiates actually relieved pain, comforted the suffering and the dying, stopped coughs, and slowed chronic diarrhea. They worked! There’s a great quote by Oliver Wendell Holmes that I think sums up the contemporary view of the “materia medica,” or what we would call medicine:

 

Throw out opium, which the Creator himself seems to prescribe, for we often see the scarlet poppy growing in the cornfields, as if it were foreseen that wherever there is hunger to be fed there must also be a pain to be soothed, and I firmly believe that if the whole materia medica, as now used, could be sunk to the bottom of the sea, it would be all the better for mankind, and all the worse for the fishes.

 

Ouch. That’s brutal.

 

One thing you’ll notice is missing from this entire discussion: the topic of addiction. And honestly, prior to the nineteenth century, there appears to be little indication of an awareness or acknowledgement of the potential for opium to be addictive. There were individual cases, to be sure, but they were usually in the setting of life-limiting diseases. Benjamin Franklin, for example, took to opium to combat the pain of a bladder stone that had grown so large that he could feel it roll inside of him. It’s debatable how big a public health issue opiate addiction was prior to 1840 or so, but there’s no question that America was then hit with a public health crisis fueled by its own doctors. Exact numbers are hard to come by, but by 1842 there were an estimated 0.72 opiate addicts per 1000 people — and by the 1890s there were 4.6 per thousand, or one in every 220 people. Those numbers might seem rather abstract, so as a basis of comparison, the CDC currently estimates about 6 addicts per 1000 people.

 

So what happened? This is a matter of debate, and there are a few different explanations, but they all sort of boil down to “doctors exposed a lot of people to opiates.” Initially, doctors’ motivations were rather understandable. The 1840s saw the United States struck by a series of deadly outbreaks. The cholera epidemic that I talked about back in Episode 25 — the one about O’Shaughnessy, Latta, and the invention of IV fluids — jumped the pond and killed tens of thousands, including President Zachary Taylor. A dysentery outbreak followed, and then another cholera outbreak. The treatment of choice for all these conditions was opium, and some of the survivors likely became addicted.

 

The second reason is the Civil War. The human cost of the Civil War was massive: 750,000 people died, most of them from disease rather than war injuries. To ease their suffering, doctors prescribed opium, and lots of it — requisitions show 10 million opium pills and 3 million ounces of opium powders given to Union soldiers alone. Many of the conditions that soldiers were treated for became chronic, and presumably some of these people became addicted. Civil War veterans would later unfairly become the scapegoat of the first opiate epidemic, with addiction popularly called the “soldier’s disease.”

 

And finally, a new technological innovation made opiate administration far more effective — and dramatically increased its addictive potential. The hypodermic syringe — designed to place a substance under the skin rather than remove blood — was invented in 1856. It was initially a controversial invention, but one substance guaranteed that it quickly caught on. Remember that morphine was invented in 1817, but it was initially not very popular compared to opium — despite its predictable effects, apparently it tasted disgusting. But doctors quickly discovered that these foul-tasting crystals could be dissolved and injected underneath the skin, with a much more reliable, rapid, and intense onset. A syringe of morphine was, as one author called it, “a magic wand.” If you’ll remember, back in Episode 22 I talked about the assassination of James Garfield. The first thing the suffering man was offered, which blissfully took away his pain, was a vial of morphine. In the latter half of the nineteenth century, the doctor’s visit still happened at home, but for a patient with a chronic condition, the physician couldn’t be expected to drop by every day and provide a shot. This was before automobiles, of course, and outside of a few metropolitan areas there was little public transportation. So the syringe and a vial of morphine were usually left with the patient. The popularity of morphine surged, and by the late 1860s, morphine had supplanted opium as the opiate addict’s drug of choice.

 

There’s another innovation that had little to do with doctors, and while it probably didn’t have nearly as big an impact on the epidemic, it certainly got a large part of the blame. This would be “patent medicines,” sold by charlatans and heavily advertised under colorful names like The Elixir of Life, Kickapoo Indian Sagwa, and Swamp-Root. Or Coca-Cola, or Angostura bitters, but that’s a story for another podcast. They often contained opium or cocaine, and were very popular in the second half of the nineteenth century, especially with the lower and lower-middle classes. In retrospect, they didn’t seem to cause much addiction, probably for several reasons. One, they weren’t labeled as containing opium (or really anything at all), so patients wouldn’t associate the euphoria or the withdrawal with the drug. Many were also marketed towards babies to help them sleep — and as you’d expect, that led to quite a number of dead babies, but not many baby addicts. And finally, by the 1890s and early 1900s, patent medicines had drawn the ire of the government as a convenient scapegoat for the opiate epidemic, and most removed any narcotics, which of course is why Coca-Cola has neither coca nor cola.

 

David Courtwright, a historian who has studied this period of American history, wrote:

 

“Over and over again the epidemiologic data affirm a simple truth: those groups who, for whatever reason, have had the greatest exposure to opiates have had the highest rates of opiate addiction.”

 

And by the closing decades of the nineteenth century, there were plenty of opiates to go around, and some groups in particular were practically swimming in them. Morphine started to be prescribed for pretty much anything, though some diseases stood out. I mentioned how the phrase “soldier’s disease” was unfair, because the typical opiate addict of this period was not a Civil War veteran, but far more likely a middle- or upper-middle-class white woman. Morphine was a potent treatment for the “women’s diseases” that struck the Victorian upper classes. The most common diagnoses were neurasthenia — a catch-all term for a variety of somatic symptoms — and dysmenorrhea, or period pain, as well as joint pain, fatigue, and overwork. But it was also a cure for “masturbation, nymphomania, and violent hiccough.” The other group “swimming in opiates” was the doctors themselves. While obviously hard to measure, estimates suggest that between 6 and 10 percent of doctors were themselves addicted to opiates during this period. Less-than-scrupulous physicians also opened “quack cure joints” where they would supply addicts — and give referring doctors a healthy kickback.

 

So this combination of factors is what led to 1 in every 220 Americans being addicted to opiates by the 1890s. But within just a couple of decades, the epidemic had largely ended. So what happened? The traditional story is that the federal government intervened, passing the Harrison Narcotics Tax Act in 1914, which required sellers of opiates to register and pay a tax, and largely put a chill on prescribing. But the reality is far more complicated. After peaking around 1895, the number of addicts rapidly dropped, and had pretty much reached bottom by the time the Harrison Act was passed. And just as the medical profession had largely created this epidemic, it took the lead in fighting it.

 

First of all, our understanding of chronic diseases — especially chronic infectious diseases — changed. By the late 1890s, germ theory had become more or less universally accepted, and the new field of public health was dramatically cutting down infectious diseases like cholera and TB — diseases that previously would have meant potentially lifelong opiate prescriptions. And as we better understood these diseases, we developed treatments that were far better than opium, like intravenous fluids.

 

Doctors also became increasingly aware of the risk of addiction. By the 1870s, opiate addiction had been well described, and a number of theories were being bandied about. While many of these theories are naturally colored by the prejudices of the day, including eugenics, they also represent some of the first attempts to understand addiction beyond the simple moralistic idea of “vice”.

 

And then doctors investigated and developed alternatives. Antipyretics were invented in the 1880s, initially to treat fevers, but they were also found to be effective at treating pain. There’s a wonderful article, which I posted on my Twitter, by a Dr. Adams in the Boston Medical and Surgical Journal — the precursor to NEJM — in 1889, called “Substitutes for Opium in Chronic Diseases.” It reads like it could have been published in 2017. He starts by pointing out the three major disadvantages of opiates: first, that in a high dose they are a poison; second, that they have significant side effects; and finally, the risk of addiction. His alternatives include these new antipyretics, most notably acetanilide, a precursor to acetaminophen (more commonly known as Tylenol or paracetamol), as well as salicylic acid. Just a few years later, acetylsalicylic acid would be synthesized by Bayer and called aspirin, giving the world a safe and nonaddictive pain reliever.

 

The young group of physicians who started practicing as the nineteenth century ended took this information and set out to reform opiate prescribing. Medical curricula were professionalizing — this was the era of the Flexner Report — and reformers pushed to include the risks of and alternatives to opiates in new classes and textbooks. They also stressed the idea that opiates should not just mask chronic conditions that could be otherwise treated — “opiates are the lazy physician’s remedy,” the saying went. And on a local level, doctors lobbied to restrict the sale of opiates to only those with a valid prescription. The turnabout was so rapid that by 1910, there were doctors publicly worrying that opiate prescribing had been so restricted among the new generation that patients dying in pain might not be able to get the drugs.

 

All of this is to say that by 1914, the Harrison Act was largely reacting to a problem that had already been contained. My intent is to talk about the medical establishment’s contributions to the epidemic, so I don’t want to talk too much about legislative efforts. But one of the long-term effects of the Harrison Act was to close down the nascent drug treatment clinics that were popping up around the country. Let’s also briefly talk about heroin. Heroin was introduced by Bayer in 1898 — it is actually one of the oldest drug trade names, since the chemical itself is called diacetylmorphine. It was marketed as a cough suppressant rather than a painkiller, but the medical establishment was immediately suspicious — warnings about it popped up within a couple of years, and by 1920 the AMA had called for a total ban. It was never frequently prescribed in the United States, and apparently not much of a problem during this first opiate epidemic; rather, in the 1930s it became increasingly popular on the black market, since it could be shipped — and then cut and adulterated — far more easily than morphine. In any event, by World War II, opiate addiction was very rare in the United States, whether caused by doctors or fed by the black market.

 

So how did this happen all over again? I’m going to include some review articles in the shownotes, but briefly: in the early 1980s, several pain specialists started to advocate for greater use of opiates in chronic pain, noting the large number of people with chronic pain, studies showing that opiates treated pain better than NSAIDs or Tylenol, and the lack of randomized controlled trials showing that opiates carried high rates of addiction. They found eager supporters in drug companies, especially Purdue Pharma, which financed presentations and publications from sympathetic doctors, and was eager to push the idea that its products were “non-addictive.” Meanwhile, advocacy resulted in pain being labeled the “fifth vital sign,” to be aggressively treated — with opiates, of course — both in the hospital and in the clinic. Physicians were to increase the dose as necessary to control pain, just as you’d increase the dose of blood pressure or diabetes medications. A shift towards patient satisfaction in medicine gave yet another reason to aggressively treat the fifth vital sign. In the late 1990s, Purdue released the medication OxyContin. OxyContin poured into the streets — $31 billion in profits’ worth to Purdue — supplied by overtaxed primary care providers, and less scrupulously by cash-only pill mills. It was marketed as “safe” and “nonaddictive” since it was released over 12 hours, but it could be easily crushed and injected. And when this new generation of addicts — largely young lower- and middle-class men — found the pills too expensive, they readily turned to heroin, which the black market was more than willing to supply.

 

So you can already see the similarities. In both cases, well-meaning doctors who wanted to treat their patients’ pain turned to an admittedly effective painkilling medication — a gift from God, per Oliver Wendell Holmes. And for a variety of reasons that the original doctors didn’t foresee, it got out of hand — wars, disease, and the morphine syringe in the nineteenth century; quality metrics and skeezy drug companies in the twentieth. The pill mills churning out OxyContin were basically an exact copy of the “quack cure joints.” Bayer tried to push its brand-name opiate Heroin without too much success — but by the end of the 20th century, drug companies like Purdue had pill-pushing down to an art, to the point that 80% of the world’s opiate supply is consumed in the United States alone. Combine a horribly addictive substance with the modern marketing machine, and you have a recipe for disaster.

 

History has a way of repeating itself, and medical history is no different. We’re no different and no smarter than our forebears. And in my opinion, and the opinion of many, much of the fault of the opiate epidemic does fall on our shoulders. Physicians should have been more aware that barely a century earlier, basically the same thing had happened. I see the effects of the opiate epidemic every day, and I know a lot of you guys do too. It’s not a good feeling, to see what my profession has done.

 

But in the first opiate epidemic, we can see parallels for how we can end the second — and what we can do better. The pendulum has swung, and new generations of doctors are again learning about the dangers of opiates. This is shockingly new. I finished medical school in 2013, and this was not something we focused on. In many states, including my state of Massachusetts, medical schools are now required to teach about safe opiate prescribing. Doctors have lobbied for laws restricting prescribing — in this case, the launch of prescription monitoring programs that help identify opiate abuse early, mandatory drug monitoring to try to detect abuse and diversion, and laws limiting the number of pills in new prescriptions. There’s an increased emphasis on a multidisciplinary approach to treating pain, including new medications that would make Dr. Adams from 1889 proud. The meds are new — neuroleptics and novel antidepressants, targeted injections of steroids and lidocaine, and even experiments with drugs like ketamine — but the idea is old-fashioned.

 

Also, unlike in the first epidemic, doctors and lawmakers in the second have realized the importance of properly funding treatment centers. Despite attempts in some localities to restrict their prescribing, buprenorphine and methadone clinics continue to spread as evidence-based methods to treat opiate addiction.

 

Will all this be enough to end the second opiate epidemic? There are some promising signs, with opiate prescriptions going down, but there also seems to be an increase in illicit heroin use, and in even scarier new drugs like carfentanil. Another sobering lesson of the first opiate epidemic is that it might take a while — it was roughly 50 years from when doctors became aware of the risks of opiates to when the first epidemic ended. And while doctors did a good job of preventing new cases of addiction, unfortunately a large part of the decline likely came from older patients with addiction dying.

 

So we’ll continue to try and muddle our way through the second opiate epidemic. And regardless of the lessons from the first, physicians will have to do our best for our patients and the rest of society. All of this has happened before — we got ourselves into this, and just like last time, physicians have no choice but to help get us out.

 

Well, that’s it for the show. But wait — it’s time for a #AdamAnswers! This question comes from Dr. Doug Challener, who asks, “I hear conflicting stories regarding the so-called American ‘physician shortage.’ Any truth to this?”

 

Great question! Let’s get a little historical perspective, which is, after all, the purpose of the show. In the 1950s, the American Medical Association and the Association of American Medical Colleges, or AAMC, started publicly warning about a looming “physician shortage,” based on demographic changes in the US and the increasing complexity of medicine. This is where the idea first entered the lexicon. States and the federal government listened, and by the ’70s had doubled the number of training spots. This actually led to fears in the 1980s about physician oversupply. By the year 2000, the AAMC was again warning about a looming physician shortage over the next thirty years — a remarkably similar argument to the one bandied about in the 1970s. But this time, there was an equally loud argument from doctors that there was no such thing. Both arguments depend on a number of assumptions and considerable nuance, but fortunately for us, JAMA made them go head to head with opposing viewpoints in its May 16th issue this year. If you want to read them both, I’ve placed them in the shownotes. So, the AAMC’s argument first: each year it releases an update of its physician workforce projections, and 2017’s update suggested that the US would face a shortage of between 40,000 and 104,900 physicians, especially in the surgical subspecialties. Its argument is simple — demographic change will drive increased demand for physicians. The US population is growing and getting older, and will require more medical care.

 

The opposing piece, penned by Ezekiel Emanuel’s team, does not inherently disagree with the projections, but argues that we currently have more than enough physicians on both the supply and demand ends. His research has shown that neither RomneyCare in Massachusetts nor the coverage expansions of the Affordable Care Act increased wait times, the most common measure of a shortage. Emanuel argues that physicians are just maldistributed, with urban areas having too many and rural areas having far too few, and that the primary problem is an inefficient system that leads to a large amount of wasted time and resources. Adding more doctors will just add to the inefficiencies. As for demographic changes, Emanuel agrees that changes will have to be made — but feels that technological changes to how follow-ups are done, along with increased use of physician extenders like PAs and NPs, will be able to fill the gap. This view largely echoes the 2014 IOM report, which reached much the same conclusions.

 

So like I said, a lot of unstated assumptions are made in both arguments. The US does actually fall towards the lower end among OECD countries in number of physicians — the U.S. has 2.6 doctors per 1000 people, lower than many European nations like Austria, which has 5 per 1000, and Norway, which has 4.3 per 1000. And while it has an average number of specialists, it has a far-below-average number of primary care doctors, which would suggest we need more PCPs. But it’s also not clear that adding more doctors makes a healthier population. It costs U.S. taxpayers an average of half a million dollars a year per training position added, so the cost of adding the number of physicians the AAMC asks for would be huge. Is that really the most cost-efficient use of taxpayer money in an already colossally inefficient health care system?

 

So Dr. Challener, I’m not sure there’s an easy answer, though as a skeptical doctor who thinks our existing system is incredibly wasteful and unsustainable, my sympathies lie with the view that we don’t have a looming physician shortage, with the exception of a few surgical subspecialties (most notably orthopedics). But you guys are free to disagree. So let me know; you can tweet at me @AdamRodmanMD. And if you have any other burning questions for the next #AdamAnswers, be like Dr. Challener and send them my way on Twitter. Thanks again, my friend!

 

Okay, that’s it for the show for real. I ended up putting a lot more reading and research than usual into this one, so let me know what you thought! I’m on Twitter @AdamRodmanMD, and on Facebook at /BedsideRounds. All of our previous episodes are available on the website at www.bedside-rounds.org, or on Apple Podcasts, Stitcher, or your preferred podcast retrieval method.

 

As I mentioned earlier, I’m greatly indebted to the work of the historian David Courtwright, who has done excellent, in-depth research on this period of American history, especially in his book Dark Paradise. All of those sources, plus the rest, are in the shownotes and on the website.

 

And of course, as always: while I am actually a doctor and don’t just play one on the internet, this podcast is intended purely for entertainment and informational purposes, and should not be construed as medical advice. If you have any medical concerns, please see your primary care provider.