Why George Washington’s Doctors Prescribed Bloodletting During His Final Hours
Understanding the rationale for therapeutics in the past
One of the most important concepts one learns during training as a professional historian is the “Whig interpretation of history,” which refers generally to “oversimplified narratives that achieve drama and apparent moral clarity by interpreting past events in light of [the] present...” Historians call such narratives teleological: that is, narratives that judge the past with reference to values in the present.
Applied to the history of medicine, the Whig interpretation often manifests in the form of narratives where the methods of therapeutics used by humans in the past are dismissively described as “primitive” and “barbaric” procedures which were — and this is where the teleology comes in — dramatically and triumphantly conquered by modern medical ideas and procedures beginning in the mid-1800s. While that is a fun story to tell if one wants a convenient, simplified narrative, it leaves out much of the richness of the human aspects of this history. It ignores the historical fact that those “primitive” procedures were part of mainstream medical ideas for hundreds of years across cultures and societies. In its fixation on the present world, it completely misses the point that for people in the past, most of those treatments made sense and “worked.”
For historians, it is more interesting to learn why certain therapeutics were championed and thought to “work” by people in the past than to explain why they “do not work” by contemporary standards. That is one reason why there is copious historical literature on the procedure that perhaps best defines medical treatment in the past — bloodletting. During medical training one might read at most a sentence or two about it: it is something long abandoned as a discredited medical idea. But during history-of-medicine training one reads many articles and monographs about it: it was a common and trusted medical procedure for more than two thousand years across several human societies. Fortunately, I have been both a medical student and a history student, and that has helped me learn from both of these wonderful worlds.
George Washington’s death:
In early 2019, as a teaching fellow for an exciting course titled “(How) Does Medicine Work?” taught by Professor David S. Jones at Harvard University, I taught college students some fascinating aspects of the history of medical ideas. It was during one of the classes in this course that we learned the story of George Washington’s death. A quick Google search reveals that “gruesome” things happened during Washington’s final hours. But in the course, we took a less dramatic approach and looked at Washington’s death in its proper historical context: firmly as part of the social, cultural, and medical milieu of the late 1700s (he died in 1799).
Some basic facts first. As epidemiologist David Morens describes in a 1999 article:
“On Friday, December 13, Washington had “taken a cold,” with mild hoarseness. At 2 the next morning, he awoke and had difficulty breathing. By 6 a.m., he was febrile, with throat pain and respiratory distress. Unable to swallow, he spoke with difficulty. His aide, Colonel Tobias Lear, sent for Craik and bloodletter George Rawlins. At about 7:30 a.m., Rawlins removed 12 to 14 oz (355 to 414 ml) of blood, with Washington requesting additional bloodletting.”
While many popular accounts give the impression that Washington’s “overenthusiastic” physicians almost “bled” him to death, or made the “wrong” decisions, we can see from the above excerpt that he himself had complete faith in the procedure and even requested that more blood be drawn.
“[Later in the day] After the fourth bloodletting, Washington’s condition improved, and he was able to swallow. He examined his will… Around 5 p.m., Washington again sat up in a chair but soon returned to bed and was helped into an upright position. He continued to struggle for air, and his condition began to deteriorate. At 8 p.m., the physicians applied blisters of cantharides to his feet, arms, and legs and then applied wheat-bran cataplasms (poultices) to his throat. His condition deteriorated further. At around 10 p.m., Washington whispered burial instructions to Lear.”
All in all, the bedridden George Washington, first President of the United States, had about 2.5 quarts of blood drawn out of his body in the span of a dozen hours. That’s nearly half of an average adult human’s entire blood volume. Despite these “heroic” efforts, though, Washington died that night around 10:30 p.m.
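A rough back-of-the-envelope check supports that “nearly half” figure, assuming a US quart of about 0.95 liters and a typical adult blood volume of roughly 5 liters:

$$
2.5 \ \text{quarts} \times 0.946 \ \tfrac{\text{L}}{\text{quart}} \approx 2.4 \ \text{L}, \qquad \frac{2.4 \ \text{L}}{5 \ \text{L}} \approx 48\%.
$$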
This all seems pretty bizarre today. Washington’s primary complaints were throat discomfort and fever (scholars now believe he had developed what we call “acute epiglottitis”), but the main treatment he received, even demanded, was… bloodletting. Three physicians in total were around him that day — all of them elite doctors of the time — supervising everything. What were they thinking?!
The rationale behind bloodletting:
Among the most cited articles in the history-of-medicine literature is a 1977 essay by historian Charles Rosenberg, “The Therapeutic Revolution: Medicine, Meaning, and Social Change in Nineteenth-Century America.” In it he wrote:
“Therapeutics is after all a good deal more than a series of pharmacological or surgical experiments. It involves emotions and personal relationships and incorporates all of those cultural factors which determine belief, identity, and status. The meaning of traditional therapeutics must be sought within a particular cultural context… Individuals become sick, demand care and reassurance, are treated by designated healers.”
Basically, bloodletting made sense to Washington (the patient) and to his physicians because it fit in well with the prevailing notions of how the human body worked. The body was thought of not as a collection of many kinds of discrete tissues and organs, as we think of it today, but as a single whole in which there was “a system of intake and outgo — a system which had necessarily to remain in balance if the individual were to remain healthy… Equilibrium was synonymous with health, disequilibrium with illness.” These concepts of equilibrium can be traced (in the American and European context) to classical Greek and Roman medical ideas of the four humors, corresponding to the four natural elements, which had to be maintained in perfect balance within the body.
“The American physician in 1800 had no diagnostic tools beyond his senses and it is hardly surprising that he would find congenial a framework of explanation which emphasized the importance of intake and outgo, of the significance of perspiration, of pulse, of urination and menstruation, of defecation… These were phenomena which [the] physician, the patient, and patient’s family could see, evaluate, scrutinize for clues to the sick individual’s fate… Drugs [and medical procedures] had to be seen as adjusting the body’s internal equilibrium, and the drug’s action had, if possible, to alter these visible products of the body’s otherwise inscrutable internal processes.” — Charles Rosenberg, “The Therapeutic Revolution: Medicine, Meaning, and Social Change in Nineteenth-Century America”
This prevalent understanding of ill health as disequilibrium meant that the best way to heal was to adjust the humors in the body and restore a state of equilibrium. But how does one do that? Usually by getting rid of “excess” elements. Hence we had prescriptions like emetics (which caused vomiting) and bloodletting. And how does one make sure that an adjustment is indeed taking place internally? Well, by using the then-current technology: one’s own human senses. In Rosenberg’s words, drugs and procedures had “… visible and predictable physiological effects; purges purged, emetics vomited, opium soothed pain and moderated diarrhea. Bleeding too seemed obviously to alter the body’s internal balance — as evidenced both by a changed pulse and the very quantity of the blood drawn… the patient’s response to a drug [or procedure] could indicate much about [their] condition, while the product elicited — urine, feces, blood, perspiration — could be examined so as to shed light on the body’s internal state.”
The many reasons why people believed bloodletting (a category that included the application of leeches and other methods of drawing blood) to be a completely rational therapy can thus be summarized as follows:
- It rarely killed (patients who did die despite the procedure seldom died right after bloodletting).
- In the absence of any way to look into the intact human body, blood was considered a very powerful marker of the body’s internal workings and state of affairs.
- For both physicians and patients, bloodletting provided a sense of “something being done” to take control of the situation and effect improvements in it. Moreover, this “something” had a long and rich medical tradition behind it.
- Since many ailments are self-limiting, the recovery of the sick person after a certain period of time provided an ex post facto validation of the therapy. Non-recovery was most often interpreted as the disequilibrium being just too strong, or the bloodletting being done inadequately.
So did it “really” work?
A few years ago, with only medical training informing my worldview, I would quite assertively have answered this with a quick “No.” I did not have the “historical empathy” for George Washington and his doctors (and for past therapeutics in general) that I have today. To revisit what Rosenberg said in the first extract above, therapeutics involved “a good deal more than a series of pharmacological or surgical experiments, [including] emotions and personal relationships.”
Unlike most of us in the present, folks in the past were not looking for exact diagnoses and certified treatment regimens from their physicians. They were mainly looking for reassurance and a prognosis, and procedures like purging and bloodletting helped physicians provide these to their patients. The very act of a doctor doing something seemingly powerful made the patient feel that they were in “good hands” and on the way to potential recovery. In the patient-physician encounter, as long as the doctor confidently projected a trajectory of recovery (or even non-recovery), and appeared to do everything in their power to help the patient, things made sense and were generally thought to “work.”
In the case of Washington, mainstream medical opinion and even public opinion debated whether the doctors had bled him “too much,” but the practice of bloodletting itself was not under any serious attack (that challenge came later in the 1800s). So for George Washington, the bloodletting did “not work,” but only in the sense that we might say an appendectomy did not work for a certain patient with an acutely inflamed appendix. We wouldn’t lose confidence in the value of the surgical procedure itself; instead, we would just say that it didn’t work for that individual patient.
To sum it all up, we often do a disservice to people in the past when we mock their ideas as “primitive” or “irrational.” In the case of bloodletting, for example, we saw that people had perfectly rational reasons — according to the beliefs and ideas of those times — for trusting it. (We should be careful, of course, not to include here such episodes as the Witch Trials involving midwives, where the reasons were less “rational” and medical, and more about controlling and exerting power over women.)
Washington is a great example here also because, as a military general, he had mandated smallpox inoculation for his troops, which in the late 1700s was an extremely unusual and radical (as well as successful) decision. The fact that some 20 years later the same person, on developing throat discomfort and fever, asked his physicians to bleed him is thus very telling. When we ridicule people in the past for their ideas and beliefs without being empathetic to their particular contexts, it is our capacity for comprehension, not theirs, that ultimately is in the dock.