Episode 48: Micrographia


Germs are regarded today with a combination of fear and disgust. But mankind’s first introduction to the microbial world started off on a very different foot. In this episode, as part of a larger series contextualizing germ theory, we’ll talk about the discovery of animalcules and how they forever changed our conception of the natural world — and what causes disease. Plus, a new #AdamAnswers about the influence of Bayes’ Theorem on medicine!

 

Sources:

  • Albury WR, “Marie-François-Xavier Bichat,” Encyclopedia of Life Sciences, 2001.
  • Ball CS, “The Early History of the Compound Microscope,” Bios, Vol 37, No 2, May 1966.
  • Findlen P, Athanasius Kircher: The Last Man Who Knew Everything.
  • Feinstein AR, “An Analysis of Diagnostic Reasoning,” Yale Journal of Biology and Medicine, 1973.
  • Forsberg L, “Nature’s Invisibilia: The Victorian Microscope and the Miniature Fairy,” Victorian Studies, 2015.
  • Gest H, “The discovery of microorganisms by Robert Hooke and Antoni van Leeuwenhoek, Fellows of The Royal Society,” Notes and Records of the Royal Society of London, 2004.
  • Hall GH, “The Clinical Application of Bayes’ Theorem,” The Lancet, September 9, 1967.
  • Howard-Jones N, “Fracastoro and Henle: A Re-Appraisal of their Contribution to the Concept of Communicable Diseases,” Medical History, 1977, 21: 61-68.
  • Lane N, “The unseen world: reflections on Leeuwenhoek (1677) ‘Concerning little animals’,” Philosophical Transactions of the Royal Society, 19 April 2015.
  • Lawson I, “Crafting the microworld: how Robert Hooke constructed knowledge about small things,” Notes and Records of the Royal Society of London, 2015.
  • McLemee S, “Athanasius Kircher, Dude of Wonders,” The Chronicle of Higher Education, May 28, 2002.
  • Van Leeuwenhoek A, “Observations, communicated to the publisher by Mr. Antony van Leewenhoeck, in a dutch letter of the 9th Octob. 1676. here English’d: concerning little animals by him observed in rain- well- sea- and snow water; as also in water wherein pepper had lain infused” (https://royalsocietypublishing.org/doi/10.1098/rstl.1677.0003).
  • “Little worms which propagate plague,” J R Coll Physicians Edinb, 2008.
  • Van Zuylen J, “The microscopes of Antoni van Leeuwenhoek,” Journal of Microscopy, 1981.

Music from https://filmmusic.io, “Wholesome,” “Pookatori and Friends,” by Kevin MacLeod (https://incompetech.com). License: CC BY

 

Transcript

This is Adam Rodman, and you’re listening to Bedside Rounds, a monthly podcast on the weird, wonderful, and intensely human stories that have shaped modern medicine, brought to you in partnership with the American College of Physicians. This episode is called Micrographia. This year I’m going to do a loosely connected series on germ theory. No, it won’t be a back-to-back three-parter like I did on smoking, lung cancer, and causality, and I’m not going to do them all at once. But I’ve always been simultaneously fascinated and intimidated by germ theory. Germ theory is one of the foundational myths of modern medicine — the ability to identify a major cause of human disease for essentially the first time, and over a period of a century, the development of the tools to more or less eliminate these diseases, in a way that’s dramatically improved the quality of the lives of basically every human being on the planet. I don’t think it’s too much of a stretch to draw a line from Pasteur’s rabies vaccine, Koch’s discovery of TB, Ehrlich’s salvarsan, and the discovery of penicillin to the scientific esteem that medicine is held in today. In 2019, germ theory is presented as an epistemic whole — infectious microorganisms cause a whole host of diseases, which we can treat with a variety of anti-infective medications generally called antibiotics. But wherever there’s an enduring myth, I think we owe it to ourselves to look closer.

 

We currently have a very, shall we say, complicated relationship with microbes — a combination of disgust and fear. Germs are an ever-present threat, hence the popularity of a variety of cleaning supplies that kill “99% of germs.” But things looked much different when humanity first met germs. In this episode, I’m going to take us back to mankind’s first introduction to microorganisms and germs — or “animalcules,” Latin for little animals — to a time when the microscopic world was regarded with a sense of wonder. And I should note, for a podcast, this is going to be quite image heavy, so I’m releasing a companion thread on the Bedside Rounds twitter @BedsideRounds (and I’ll have it on my own as well).

 

Before we launch into it, I have to mention that the tidy story you’ve probably heard before about the discovery of germ theory is largely an early 20th century fabrication. Modern scholarship has revealed a world far more complicated, and it wasn’t until the middle of the 19th century that something approaching germ theory was even seriously entertained — and even then it was more appropriate to talk about germ “theories,” since even if you accepted that microorganisms could cause disease, it wasn’t quite clear to anyone just HOW they might do this. Even the word germ is problematic. It’s an odd word, if you think about it. It’s not even consistently used today — it can strictly mean microorganisms that cause disease, or microorganisms that could potentially cause disease, and it’s sometimes used to refer to all microorganisms. Really, it’s a botanical term — plants “germinate,” forming new life, and for almost a thousand years this would have been the primary definition of the term. The metaphor of disease “sprouting” in a human is very old indeed.

 

Epidemic disease has always caused a bit of an etiological headache for physicians. In a world where all health was due to imbalances in the humors, how could everyone have the same imbalance at the same time, causing, say, the Black Death? The answer, of course, was miasma, or toxic airs, that would imbalance everyone similarly. As far back as the Hippocratics, it was noted that quotidian fever arose in areas with swampy airs and decaying material. We now know that was likely malaria, and it’s really quite an astute observation from the Hippocratics. But how did the air imbalance the humors? The Greeks, Romans, and Arabs speculated that there might be “seeds” (semina) that were spread through the air and took root to cause disease, though there wasn’t really an overarching mechanism until the Renaissance physician Girolamo Fracastoro. I’ve spoken about Fracastoro before in the live podcast I did with Tony about syphilis. So Fracastoro observed that there were some diseases — syphilis, yes, but others as well — that were clearly spread by touch. There clearly must be some disease-causing agent that traveled from one person to another. He called this mode of disease transmission “contagion,” from the Latin for “to touch,” and it’s the source of our word “contagious” today. Now, it’s important to point out that while contagion superficially appears similar to germ theory, there’s not a clear epistemic connection. Fracastoro was inspired by the proto-science of alchemy; he felt that the infectious agent was a chemical substance, which he called “seminaria contagionis,” or the seedlets of disease, to draw a distinction from the semina of Galen and Avicenna. And these seedlets, or germs as they were later translated in English, behaved very differently from how we imagine infectious diseases working today — they could be spread from person to person, sure, or even via intermediary objects, which he called “fomites,” a word we still use today. But they could also be spread over very long distances through the air, influenced by the configuration of the stars — an idea much more akin to miasma — and kill at the “distance of a mile,” like the basilisk. And while Fracastoro’s contagion has often been positioned as a competitor to miasma, Vivian Nutton has done some excellent work showing that contagion was actually not terribly controversial for doctors in the 16th century. In fact, the idea that Fracastoro was widely rejected comes from the late 18th and 19th centuries, when the contagionists and miasmatists would do battle — and, as we tend to forget today, the contagionists largely lost. But I’m getting off topic here, and that’s for a future episode in this series.

 

My point here is that while historians have often looked to theories about disease transmission to understand germ theory, it’s an anachronism to project those beliefs onto the ancients, or onto Fracastoro, or even onto the contagionists and anticontagionists. Because really, the first inkling of the wonders of the microscopic world came, unsurprisingly, when we first started looking for it — with microscopes.

 

One of the traditional narratives for the discovery of microorganisms is one of technological progress — the invention of the compound microscope. And there’s a certain amount of truth to this narrative. The ancients had been aware of optics, and by the 13th century, glass grinders were making eyeglasses that contained lenses capable of magnifying several times — essentially modern reading glasses. The Dutch in particular became very skilled at grinding and shaping lenses. It should not be surprising, then, that the most commonly accepted inventor of the compound microscope is Hans Janssen, from Middelburg, Holland. This small microscope had three tubes that telescoped inside each other, with a biconvex lens as the eyepiece and a convex lens as the objective. You could focus on an object by sliding the telescoping tubes in and out, giving the viewer a maximum magnification of ten times. Throughout the seventeenth century, compound microscopes followed basically this same model, though “flea glasses” were incredibly popular as well — essentially a magnifying glass attached to a tube where you could trap a flea or a mite and study it.

 

Now, the reason I say that the narrative is “mostly” true is that early compound microscopes didn’t offer much more magnification than a well-made single lens — and as we’ll see, in some cases single lenses were an order of magnitude better. What changed, then, was the very idea that the microscope could be used to extend a person’s senses and better understand the natural world. This, of course, was at the beginning of the Enlightenment, and the birth of what would later be called “science.” The word microscope itself suggests this new vision — coined almost a half century after the compound microscope was invented, by one of Galileo’s friends, explicitly to name a device for studying the microscopic world. Notably, it also referred to single lenses used for the same purpose — the distinction was its use, rather than its construction.

 

From the beginning, the microscope was turned on the natural world. Take the Apiarium, published in 1625 by two Italian mathematicians, which focused on detailed observations of bees and contains the first drawings of the microscopic world. The authors delight in pointing out previously unseen details of bees, and in undoing a persistent myth: “You say that the opening, which they are accustomed to call the place of the hearing organ, look very little like ears. But not so according to our microscope.” So apparently prior to the Apiarium, it was assumed that bees had tiny little ears. Soon the microscope was turned to examine humans. Borel examined a patient with conjunctivitis, and noted a previously unseen ingrown eyelash, which he removed, resolving the conjunctivitis. He studied the parenchyma of animal organs and noticed what he called “sieves, of which nature arranges the various substances according to the shape of the holes. Passage is thus given only to atoms of a certain shape.”

 

By 1658, the German Jesuit priest Athanasius Kircher had turned the microscope to try and understand the cause of disease. A brief aside, because Kircher is a fascinating guy, and he was mostly ignored by modern history of science until the past couple of decades. He was probably the first “scientific” celebrity — Kircher’s ideas were well-known across Europe, and he was a true polymath: he taught math, attempted to translate Egyptian hieroglyphics and Etruscan, wrote early “science fiction” that advocated Copernican views, described the mundus subterraneus — the subterranean world, complete with theories of a water cycle dominated by massive underground rivers that all joined together in a maelstrom near Norway — wrote the first serious treatise on bioluminescence, explained the natural source of volcanoes, worked on the first “magic lanterns,” essentially slide projectors if you remember those, and then invented the “miraculous book” — the first pop-up book in history, which should be enough to secure his fame. There’s a reason a recent biography of him was called “The Last Man Who Knew Everything.” And for our purposes, he was the first to study the microscopic world as a cause of human disease. In 1656, Rome was hit with the plague, and Kircher, who seemed to legitimately think that he was impossible to kill, set about ministering to the dying, even as the pope begged doctors not to abandon the city. Kircher, like Fracastoro and many Italian physicians, was an advocate of contagion, and he thought that he might use a microscope to finally see what constituted the seedlets of disease. He examined the blood of the sick under a microscope — it’s not entirely clear HOW he made his preparation (in his Scrutinium, his publication two years later, he doesn’t say) — but in their blood he saw “little worms which propagate plague” — vermiculi, the diminutive of worms. As we know now, there’s absolutely no way he saw Yersinia pestis, the organism that causes plague — his microscope was far too weak, with 33 times magnification at most. At best, he likely saw what we call “rouleaux” of red blood cells — stacked red blood cells, like poker chips, that can form in very inflammatory states — or merely an artifact of preparing the blood. And he didn’t exactly make good use of this theory. Kircher makes some, let’s just say, questionable recommendations for how to treat the plague — which include toad amulets, tarantulas, garlic, and viper venom. Despite being essentially a celebrity in his prime, by the end of his life Kircher was largely ignored, and he was banished to a footnote by the next century. “Science” — and I’m still using air quotes here — was starting to come into its Cartesian own. Detailed observation started to take hold, and speculation about a secret world under our feet, or giants still living in the northern climes, increasingly had no cachet in scientific discourse, even when there were important insights into the cause of disease. And this new detailed discourse would be exemplified by the two men who would bring the microscopic world to the masses — Robert Hooke and Antoni van Leeuwenhoek.

 

Robert Hooke was only a generation younger than Kircher, but those thirty years still make him seem far closer to us than the Jesuit priest. A young man in England, he became obsessed with precision and accurate measurement in mechanics, and was a profound theorizer and inventor — the “Newton of mechanics.” He invented the air pump, worked on clock-making, invented the cross-hair for use with the telescope, invented scientific meteorology, including the wheel barometer, developed a method to calibrate thermometers (and came up with the idea that “zero” should be the freezing point of water), and invented the universal joint. He described “Hooke’s Law” — extension is proportional to force — and made major contributions to the theory of universal gravitation. He was also one of the founding members of the Royal Society, and drew up the plans for the rebuilt city of London after the Great Fire of 1666. And for our purposes, he was one of the first, and certainly the most popular, to try and describe the microscopic world.

 

Hooke’s interest in microscopy can’t be separated from his underlying belief that tools could extend the human senses and help us better understand the world — just like the meteorological and horological innovations he worked on throughout his life. Hooke would not be surprised to see electron microscopes today.

 

“Tis not unlikely, but that there may be yet invented several other helps for the eye, as much exceeding those already found, as those do the bare eye, such as by which we may perhaps be able to discover living Creatures in the Moon, or other Planets, the figures of the compounding Particles of matter, and the particular Schematisms and Textures of Bodies.”

 

As befitted this Enlightenment tinkerer, when he set himself to study the microscopic world, he tried to perfect his own microscope — single lens, double lenses, wax seals, tubes filled with water, lenses made of “waters, gums, resins, salts, arsenick, oyls, and with diverse other mixtures of water and oily liquors.” In the end, though, he actually settled for a store-bought tool — which cost him 10 GBP, and take inflation calculators from the 17th century with a grain of salt, but that comes out to almost 3,500 2019 US dollars. If he couldn’t tinker with the instrument — which he technically did, since he added a swiveling lamp — he would tinker with the preparation of his samples. The most famous example is his attempts to study the ant. These early microscopes were finicky beasts at best, and Hooke would spend days trying to get just partial looks at small parts of creatures, sketching them out as he went. He initially tried gluing an ant down to a glass plate, but the creature would “so twist and wind its body that I could not any wayes get a good view.” He finally settled on dunking the ants in brandy — an hour swimming in booze would immobilize the presumably drunk ant for the same amount of time. Every time the ant would come to and try to scurry off the glass plate, Hooke would dunk him in again. So that print of an ant that you can still see today — he’s drunk, splayed out in Hooke’s living room.

 

One point to make is that nothing Hooke looked at was truly “microscopic” — ants, mites, beetles — even the texture of cork, which he noted was full of small holes, like “cells” in a monastery, and yes, that’s where we get the word cell — all of these were visible to the human eye; we could just see them in more detail. There’s one exception, though — and that’s his observation in 1663 of a “bluish mould upon a piece of leather.” Where the naked eye had only seen mold, with his microscope he described long, transparent stalks with round knobs at the top, many of them smooth, but others broken, growing together in a cluster. They looked, in fact, much like very macroscopic mushrooms. Modern microbiologists have since shown that Hooke was describing a Mucor species — a common environmental mold, and the cause of the horrific invasive disease mucormycosis. Take a look at the pictures on the thread — you can see how accurate his description was. This was, in retrospect, the first accurate description of a microorganism, a fungus, and Hooke seems to have recognized its significance.

 

In 1665, the Royal Society published a collection of Hooke’s drawings entitled Micrographia: or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses. With Observations and Inquiries Thereupon. It’s quite a title, and it made an equally big splash. Samuel Pepys, the famous diarist, recalled first opening his copy of Micrographia: “Before I went to bed, I sat up till 2 a-clock in my chamber, reading of Mr. Hookes’ Microscopical Observations, the most ingenious book that I ever read in my life.” The book also had a profound effect on a draper-cum-scientist across the English Channel, who would soon open a life-long correspondence with Hooke that would forever change mankind’s relationship with the microscopic world — Antoni van Leeuwenhoek. About the time that Hooke started his foray into microscopy, Antoni van Leeuwenhoek opened a textile shop in the city of Delft in Holland. At some point, like many drapers of the day, he began to use a magnifying glass to inspect his cloth. While we have over 190 letters from Leeuwenhoek today discussing his discoveries, we have no idea what his inspiration was. But at some point, he read Hooke’s Micrographia and started crafting his own microscopes to investigate the natural world. And from the beginning, Leeuwenhoek showed an intense interest in a world that was hidden from plain sight. His first letter, sent to Hooke and the Royal Society on April 28, 1673, was another description of a mold — again, a Mucor species. Leeuwenhoek would continue to make observations and write letters to the Royal Society throughout his lifetime — he would later be elected a Fellow, and Hooke actually learned to read Dutch for the express purpose of communicating with Leeuwenhoek.

 

Leeuwenhoek’s most fateful contribution would be described in a letter he sent on October 19, 1674, where you can also see his insatiable curiosity at work. Leeuwenhoek had fallen sick the previous winter, and had largely lost his ability to taste food. This led him to examine his diseased tongue in a mirror, noting thick furrows. He procured an ox tongue and saw that, in fact, those furrows were “very fine pointed projections” composed of “very small globules” — the taste buds. This naturally led him to wonder why certain spices, like pepper, had such strong tastes, so he decided to make “infusions” and study them under the microscope. He had left some pepper-water sitting on his windowsill for three weeks, and on April 24, 1676 he examined it under his microscope. He was shocked to see numerous little organisms floating through the water, which he called animalcules — little animals — and of many different sorts. For whatever reason, animalcule was initially translated into English as “little eel.” He even placed a grain of sand under the microscope to try and judge their size — not even 100 of his animalcules laid out side by side would equal a single grain of sand.

 

Hooke excitedly repeated this experiment and confirmed Leeuwenhoek’s findings in front of the entire Royal Society. He would write, “I was very much surprized at this so wonderful a spectacle, having never seen any living creature comparable to these for smallness: nor could I indeed imagine that nature had afforded instances of so exceedingly minute animal productions. But nature is not to be limited by our narrow apprehensions; future improvements of glasses may yet further enlighten our understanding, and ocular inspection may demonstrate that which as yet we may think too extravagant either to feign or suppose.”

 

Leeuwenhoek would go on to describe red blood cells, protozoa, sperm, dental plaque (in which he described another species of bacteria, Selenomonas), muscle fibers, and possibly even anaerobic bacteria, which would not be rediscovered until Louis Pasteur 200 years later. Now I should mention that Leeuwenhoek was incredibly secretive about his microscopes — even to Tsar Peter the Great, he gave only an average instrument. The reason is likely that Leeuwenhoek came from the tradition of a craftsman, where innovations were kept secret, lest your competitors get a hold of them. When Leeuwenhoek died — at the age of 90, still writing letters to the Royal Society describing his own illness — he sent 25 of his microscopes to the Royal Society. Shockingly, they were all single lens microscopes. In 1981, nine of them were found in the Royal Society’s collection and tested by Van Zuylen against modern standards. These surviving microscopes were still of incredible quality — he found that the best one could magnify up to 266 times. In 1985, the photographer Robert Ford used this microscope to take pictures of his own blood, giardia, and spiral bacteria. You can even make out a granulocyte with a lobed nucleus — just two micrometers in diameter! And the crazy thing — based on his observations, he clearly possessed a microscope capable of magnifying up to 500 times, which has been lost to time.

 

So by the end of the 17th century, you theoretically have many of the pieces necessary for germ theory — a proto-scientific theory of contagion, with a “seedlet” capable of transferring from person to person; Kircher’s idea, even if based on faulty observations, that a living creature could be that seedlet; and most importantly, proof of a world of near-invisible microorganisms living everywhere — including in the human body. So why did it take almost 200 more years for germ theory to really gain traction?

 

I have a few theories. First, Leeuwenhoek’s advances really were far ahead of his time; his technique for a single lens microscope wouldn’t be replicated until the 19th century, and compound microscopes wouldn’t catch up with his level of magnification until the late 18th century. His secrecy and fierce protection of his methods may have set back the field he ushered in. In fact, until relatively recently, Leeuwenhoek had been seen as an amateur, not particularly scientifically interested in the discoveries he was making, though modern historiography shows that he was actually deeply interested in experimenting and making accurate descriptions. Though really, you can pick that up just by reading his letters and hearing the many ways he prepared his specimens — you can find them in the show notes.

 

A related technological reason — bacteria are actually quite hard to see under a microscope, and even harder to categorize. One of the biggest lessons of Micrographia and Hooke’s drunk ants is that preparation is everything, and bacteria need stains to be properly seen. Leeuwenhoek actually invented the first bacterial stain — saffron water. But true classification of bacteria would have to wait for the invention of aniline dyes in the second half of the 19th century, when European society went crazy for the bright colors that science could now provide. A classic example — the brilliant purple color Violet de Paris was first invented in 1861 as a textile dye. But in 1882, it was repurposed to dye bacteria as the Gram stain, which still forms the basis of bacterial identification in 2019. If bacteria have thick cell walls that take up the stain, like a fashionable Parisian dress circa 1865, we say they’re Gram-positive. If they don’t have a thick cell wall, they take up the pink counterstain; there are a couple of preparations, but they’re both derivatives of a pink aniline dye called “mauveine.” And yes, they’re Gram-negative. And as any practicing clinician knows, the Gram-positive/negative distinction is still incredibly important for actual clinical care — and it’s kind of crazy that our bacterial identification is still based on textile dyes from the 1860s. This is an aside, because at some point I’ll actually do an episode on aniline dyes, which have had an outsized influence on the history of medicine, but I think it’s quite interesting how the clothing industry had such a big influence on microbiology — through Leeuwenhoek, the draper, and later the dyes used for cloth.

 

Second, our conception of disease had to change. Kircher may have been a contagionist, but he still had no idea where disease existed, or how those little worms he thought he saw in the blood of plague victims might have done their damage. The idea that “animalcules” might cause disease was always going to be purely theoretical until a mechanism was developed — and that would be pathological anatomy, which came to prominence in the Paris Clinical School of the early 19th century. I talk about this a lot, so for a one-sentence explanation, pathological anatomy is the idea that disease exists in a specific place in the human body — the tissues in particular. The word “lesion” essentially defines this view. On the face of it, you’d think pathological anatomists would love microscopy. And while some of the Paris Clinical School, Laennec in particular, owned microscopes and used them to identify parasites, the overall view was skepticism. Bichat in particular pointed out that the microscope was seemingly subjective — different viewers couldn’t always agree on what they saw when they looked at the human body. Hadn’t some seen a homunculus in sperm — a miniature man, ready to sprout forth a new human being?

 

But beyond the technological and epistemic constraints, I think there’s a third reason — the wider cultural response to the microscopic world, and microbiology in particular. Because Hooke’s Micrographia — much to his consternation — set off a centuries-long public fascination with the microscopic world, and microscopic life in particular, that he felt rendered the subject “unserious”. 

 

“Much the same has been the Fate of microscopes,” he would write, “as to their Inventions, Improvements, Use, Neglect and Slighting, which are now reduced almost to a single votary, which is Mr. Leeuwenhoek; besides whom, I hear of none that make any other Use of that Instrument, but for Diversion and Pastime, and that by reason it is become a portable Instrument, and easy to be carried in one’s Pocket.”

 

To put it simply, the educated public went microscope crazy, and flea glasses and simple compound microscopes sold like hotcakes. The aforementioned diarist Samuel Pepys, who stayed up until 2 AM reading Micrographia, went out the next day and bought a microscope and a scotoscope (the light source) for 5 pounds 10 shillings, though he was disappointed with how difficult the tool was to use. As microscopes became cheaper and mass-produced in the 19th century, this fascination only grew. Forsberg in particular wrote a fascinating paper examining the Victorian middle classes’ fascination with the microscopic world. Microbes were seen as inhabiting an almost magical world, “Nature’s invisibilia, a stratum of natural life lying beyond the veil of human perception but present everywhere in the natural world,” as the president of the Royal Microscopical Society would write. In particular, she draws a comparison to fairy literature — belief in fairies had been widespread in pre-industrial Britain, but had all but gone extinct in a scientific, empirical age. But the language of the popularizers of microscopy echoed this extinct belief in fairies — a magical world, just beyond human recognition. She opens the piece with an explicit comparison: the story “The Diamond Lens,” in which a scientist invents a powerful new microscope that allows him to see a beautiful female fairy he calls Animula — an intentional nod to animalcules. He becomes more and more obsessed with her, eschewing the macroscopic world to gaze at her through his scope — until she dies a horrific death as the water on his slide dries up. The microscopic world is beautiful, mysterious, and above all fragile — not exactly something that causes epidemic disease.

 

By the early 19th century, the idea that the microscopic world might be behind disease was starting to catch on — but that will have to be a topic for the next episode in this series! That’s it for the episode — thanks for listening!

 

But first, it’s time for #AdamAnswers. #AdamAnswers is the segment on the show where I tackle whatever medical questions you might have, no matter what they are. And I have a great question, but one that I’m really not equipped to address, which is to say — I’m gonna try it anyway! So Vivian Imbriotis asks, “Did the formalisation of probability, including Bayes Theorem, percolate through and affect medical thought? If so, how?”

 

So here’s my caveat, Vivian — I am not a math person. And in fact, I’m going to talk about the actual STUFF of Bayes’ theorem with only the broadest brush strokes, and I am completely avoiding the whole Bayesian vs frequentist debate.

 

Okay, let’s talk Bayes’ theorem! It’s named after an English reverend who described an algorithm to calculate an unknown parameter, though most of the actual work on it was done by Pierre-Simon Laplace, who didn’t get naming rights despite independently devising, developing, and popularizing it (though there are plenty of other things named after him, so don’t feel bad). Probably most simply stated, Bayes’ theorem is a way to calculate conditional probabilities. What is the probability of A, given B? There’s an obvious parallel to medicine here — diagnosis. And this is where, Vivian, you first start to see the impact of Bayes’ theorem in medicine, starting in the 1950s. Just how do doctors make a diagnosis? Though our entire nosology has changed over a thousand years or so, the internal logic really hasn’t. Physicians use and interpret signs to fit a patient’s presentation to some category of disease. For the Hippocratics, that might have been quotidian fever; to the nosologists, an ague; and to us, malaria — but the reasoning process is similar — and equally obscure. So think of a tricky diagnosis you’ve made. How did you get there?
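
To make that concrete, here’s the theorem written out in diagnostic terms — my own plain-text notation for the show notes, not anything Bayes or Laplace actually wrote down:

  P(disease | positive test) = P(positive test | disease) × P(disease) / P(positive test)

The P(disease) on the right is the pre-test probability, and the left-hand side is the post-test probability — which is exactly the number a diagnostician wants.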

 

In the 50s and 60s, you have medical educators starting to try and break apart clinical reasoning for the purpose of understanding and teaching it. At the same time, you have a parallel movement among statisticians and computer scientists to build a new model of diagnostic reasoning using probabilistic and statistical models. And this is the point where Bayes’ theorem first meets clinical medicine.

 

Take, for example, an article by GH Hall from September 9, 1967, which I have in the show notes. Hall recognized that the new NHS in Britain was collecting fantastic statistical data that could help calculate pre-test probabilities for a variety of diseases. But doctors were in no state to actually use this data — and I love this quote because it could pretty much also be used today: “The training of the present generation of clinicians has not included instruction in the principles of symbolic logic, Boolean algebra, and probability theory, yet knowledge of the fundamentals of these subjects throws much light on the nature of medical reasoning … A new Rosetta stone must be found which will enable doctors to understand what mathematicians are talking about.”

 

And why did Hall find this so important? Computers. “It will be necessary to accept the fact that from the point of view of memory and experience, a properly programmed computer will be more wise than the most knowledgeable clinician.”

 

And this is what I find so fascinating — many of these studies promoting probabilistic diagnosis were done with the intent of making a computer that thinks like a doctor, or could even replace a doctor. In fact, the 70s saw a number of these systems (called expert systems) actually developed, most prominently MYCIN and INTERNIST-I. As you may have noticed, our jobs were not all eliminated in the late 1970s. But that’s a story for another day.

 

Did Bayes’ theorem have a huge impact on medical practice at this time? I’d have to say not really. And the major reason is probably that Bayes’ theorem often seems to run counter to our clinical experience. The most famous example is from Meehl and Rosen’s 1955 paper, which I’ve used on the show all the way back in Bedside Rounds’ infancy in episode 15. Imagine that a disease has a prevalence of 1 in 1,000 people — that is, the pre-test probability. You have a test with a false positive rate of 5% and a false negative rate of 0% — so five percent of people without the disease will still test positive, but if the test is negative, they definitely don’t have it. What is the chance that a random person with a positive test has the disease? Most people say 95%. The actual answer is 2%. I’ll have the article in the show notes if you want to see the math, which is admittedly somewhat convoluted (to solve this problem, I would personally use natural frequencies — see the sketch below).
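
If you’re reading along in the transcript, here’s that natural frequencies arithmetic written out as a little back-of-the-envelope script — my own illustration of the idea, not Meehl and Rosen’s actual notation:

  # Imagine screening 100,000 people with the numbers from the example above.
  population = 100_000
  with_disease = population // 1000              # 1 in 1,000 have the disease -> 100 people
  without_disease = population - with_disease    # 99,900 people

  true_positives = with_disease                  # 0% false negative rate: all 100 test positive
  false_positives = without_disease * 0.05       # 5% false positive rate: 4,995 healthy positives

  # Of everyone who tests positive, what fraction actually has the disease?
  post_test_probability = true_positives / (true_positives + false_positives)
  print(f"{post_test_probability:.1%}")          # about 2.0%, not 95%

The counterintuitive part is sitting right there in the denominator — the 4,995 false positives from the healthy majority completely swamp the 100 true positives.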

 

I think you really start to see Bayes’ theorem affect medical care as we enter the 1980s and the period of evidence-based medicine, especially with decision tools. The earliest example I’ve found was Goldman’s cardiac pre-op risk score, from 1977. That example was given to me, by the way, by Bob Centor, who developed the Centor Criteria in 1979. Since then, they’ve become ubiquitous. Take a classic one — the Wells score, developed by Dr. Phil Wells to identify a PE in the emergency room in the early 2000s. Wells had a data set where he knew the pre-test probability of PE in a patient presenting with symptoms was high — about 30%. He then evaluated a number of different signs and symptoms — including tachycardia, hemoptysis, previous DVT, and so on — and developed a model that would combine them and calculate a post-test probability of PE. In the lowest risk group, this could put your risk of a PE as low as 3%; the highest score pushed you up to 40.6%. And this has real and immediate treatment implications — first, whether you cross the threshold to image with a CT angiogram, but honestly, for very high probabilities, maybe even whether to empirically treat with a blood thinner, depending on stability.
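
To give a flavor of what a decision tool like this is doing under the hood, here’s a minimal sketch. The criteria and point values are written from memory and simplified, so treat them as illustrative rather than canonical, and the probability bands just echo the rough numbers quoted above — please don’t use this clinically:

  def pe_score(findings):
      # A weighted checklist in the spirit of the Wells score: each finding
      # that is present adds its points to the total. Weights are illustrative.
      weights = {
          "signs_of_dvt": 3.0,
          "pe_most_likely_diagnosis": 3.0,
          "heart_rate_over_100": 1.5,
          "recent_immobilization_or_surgery": 1.5,
          "previous_dvt_or_pe": 1.5,
          "hemoptysis": 1.0,
          "active_malignancy": 1.0,
      }
      return sum(points for finding, points in weights.items() if findings.get(finding))

  def risk_band(score):
      # Map the total score to a rough post-test probability band.
      if score < 2:
          return "low risk (on the order of 3%)"
      if score <= 6:
          return "moderate risk"
      return "high risk (on the order of 40%)"

  print(risk_band(pe_score({"heart_rate_over_100": True})))   # low risk (on the order of 3%)

The real work, of course, was in the data set — the weights and the probability attached to each band come from the patients Wells actually studied, not from the arithmetic.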

 

Decision tools have become ubiquitous in medicine, and I’d say that’s the most visible influence of Bayes. But there’s also been a push to use probabilistic thinking at the bedside. I’ve talked about this on the show before. Think likelihood ratios, Fagan’s nomograms, test characteristics, the evidence-based physical exam. Some even advocate using a bedside calculator that can pull up pre-test probabilities and test characteristics on the fly, spitting out a post-test probability. 
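
For anyone curious, the math behind a Fagan nomogram is just the odds form of Bayes’ theorem, and it fits in a few lines — a minimal sketch, with made-up numbers for illustration:

  def apply_likelihood_ratio(pre_test_probability, likelihood_ratio):
      # Convert probability to odds, multiply by the likelihood ratio, convert back.
      pre_test_odds = pre_test_probability / (1 - pre_test_probability)
      post_test_odds = pre_test_odds * likelihood_ratio
      return post_test_odds / (1 + post_test_odds)

  # Hypothetical example: a 30% pre-test probability and a test with a positive LR of 5
  print(round(apply_likelihood_ratio(0.30, 5), 2))   # 0.68

This is all a Fagan nomogram does graphically — draw a line from the pre-test probability through the likelihood ratio and read off the post-test probability.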

 

The only problem? Much to the chagrin of EBM enthusiasts and Bayesian doctors, it turns out that we really don’t like to use them, and a lot of the discussion around diagnosis continues in a very, well, Hippocratic vein. After all, what is an “illness script” but a physician’s phronesis? And I’m starting to get a little off topic here. Okay Vivian, I hope that sufficiently answers your question! A long time ago, I started working on an episode on expert systems in medicine, and I think it might be time to finally finish it (and get some expert help myself). So thank you not only for your great question, but for inspiring me!

 

Do you, dear listeners, have a question you want answered? Tweet at me @AdamRodmanMD, and I will do the best I can, even if it’s way over my head!

 

That’s it for the episode! I hope you enjoyed it, because this is a bit of a milestone for me — it’s now been one year since I partnered with the ACP! And what a year it’s been! I’ve talked bacteriophages with Andy Hale, Pierre Louis’ numerical method with Shani Herzig, taken some time travel through the radio to Franz Mesmer’s salon, gone crazy on syphilis with Tony Breu at the national ACP, and uncovered the dirty roots of causality, again with Shani. So thank you so much to the ACP for taking this chance on me — but also to all of you guys, who somehow keep tuning in to a show that is the very definition of obscure.

 

You want to listen to all those episodes? The website is www.bedsiderounds.org, with or without the dash, since I got it back from those domain parkers! I’m also on facebook at /BedsideRounds. You can find all the episodes on Apple Podcasts, Stitcher, Spotify, or wherever fine podcasts are found. I’m on Twitter @AdamRodmanMD, where a thread on this episode will be published, and the show is @BedsideRounds, where an incredibly talented group of medical students helps come up with awesome histmed Tweetorials. 

 

Sources are in the shownotes. And finally, while I am actually a doctor and I don’t just play one on the internet, this podcast is intended to be purely for entertainment and informational purposes, and should not be construed as medical advice. If you have any medical concerns, please see your primary care provider.