Seeking the Good Death
tags: death, health care, medicine, dying, Emily K. Abel
Although most terminally ill Americans define a good death as one that occurs at home surrounded by family, a high proportion of people die alone in hospitals, tethered to machines. And many high technology treatments administered to dying patients impose enormous financial costs and inflict additional suffering without significantly extending life.
One explanation for the ascendance of those therapies invokes the technological imperative: if the technology exists, it must be used. Another indicts insurance incentives that reward physicians for administering procedures but not for talking with patients and families and explaining their options.
History provides a more compelling explanation. Hospitals were peripheral to the dying experience throughout the nineteenth century. When the first government survey was conducted in 1873, the nation had only 120 hospitals, most of which were custodial institutions housing the “deserving poor.” Middle-class patients rarely entered hospitals. Although low-income people had few options, most families were reluctant to entrust dying relatives to such facilities. In addition, hospitals tried to discourage the entry of dying patients.
Most nineteenth-century doctors were well aware that they possessed few effective treatments and often had little choice but to wait for the end, administering whatever pain medicine was available. Popular attitudes helped patients, families, and doctors accept the inevitability of death. The dominant culture continually issued reminders of life’s fragility. The religious tracts, popular health books, and novels flooding the market insistently warned that death could come at any moment and that one must be prepared. Children as well as adults received that message. A poem in an antebellum Sunday School book ended this way: “Lord, grant that I/In faith may die/ And live with thee/Above the sky.” And dying people and their families also believed they could anticipate heavenly reunions. The most famous example is Little Eva in Harriet Beecher Stowe’s Uncle Tom’s Cabin. Convinced she would see Uncle Tom and Mammy again in heaven, Eva died peacefully.
During the late nineteenth and early twentieth centuries, however, medicine’s imperative to avert death increasingly took priority over the demand to relieve pain and suffering at the end of life. Developments in scientific medicine dramatically heightened physicians’ confidence in their curative powers. An accumulation of breakthroughs, including the isolation of the pathogens causing major infectious diseases and the development of dramatic new diagnostic technologies both enhanced medicine’s efficacy and dazzled the public.
Hospitals revised surgical procedures to conform with discoveries about asepsis and installed X-ray machines and clinical laboratories, which publicity photographs prominently displayed. As hospitals increasingly converted themselves into major scientific enterprises, growing numbers of physicians sought to affiliate with them, participate in their governance, and fill their beds with patients. By 1909, the nation had 4,259 hospitals with a total of 421,065 beds. As both hospitals and physicians gained confidence in their ability to avert mortality, concerns about the emotional and spiritual sufferings of people at the end of life receded into the background. Professional rewards increasingly went to those who could forestall death.
The idealization of scientific rationality also helped to alter popular attitudes about death. Emboldened by the new bacteriological knowledge, health officials launched widespread campaigns to convince the public that mortality resulted from human action rather than divine forces. Far from winning praise, patients and relatives who accepted death as God’s will increasingly met criticism.
The deepening chasm between professional and lay knowledge helped doctors conceal bad news. Although little had distinguished the ideas and practice of physicians from those of laypeople throughout much of the nineteenth century, physicians could claim unique competence by the early twentieth century. They alone had access to diagnostic tools, and they spoke a language few patients could understand. Doctors also had a better understanding of diseases as distinct entities in all patients and could more accurately predict the outcome.
By the early twentieth century, improved control of acute infectious diseases began to foster the illusion that medicine could triumph over death itself. Simultaneously, however, the rising prevalence of chronic conditions made any celebration premature. Between 1870 and 1920, the proportion of deaths attributed to chronic, degenerative diseases, such as cancer and heart disease, increased from 7 to 50 percent; by 1940, the figure was more than 60 percent. Nevertheless, both physicians and hospitals tried to concentrate on patients with problems that could be quickly and successfully resolved.
The end of World War II inaugurated a new period of medical triumphalism. America’s victory, abetted by the development of penicillin, radar, and the atomic bomb, had generated unprecedented optimism about the entire scientific enterprise. The war “taught one lesson of incalculable importance,” the Women’s Home Companion reported in 1946. “The lesson--that with unlimited money to spend we can buy the answers to almost any scientific problem.” Federal and private funding for medical research soared.
Partly as a result of the postwar hospital building boom, death and dying increasingly moved into hospitals during the decades immediately following the war. By 1960, 50 percent of deaths occurred in those facilities, and many patients who died elsewhere spent time in hospitals during the last year of life. The spread of private insurance plans enabled patients to fill the new beds. But death did not move into hospitals solely because those facilities grew in size and number. The patient population was much sicker than it had been before the war. Rising expectations of cure encouraged many people to seek hospital placement; a high proportion met death instead.
Many practices established at the turn of the twentieth century and reinforced during the postwar period continue today. Patients and families still refuse to relinquish hope even when the possibility of recovery is remote. In addition, physician evasiveness encourages desperate, last-minute fights for survival. Doctors today are far more likely than their predecessors to reveal grim diagnoses, but most continue to withhold poor prognoses. And medicine’s duty to cure disease still trumps the duty to care for patients whose lives slowly wane. Preserving life remains the primary goal.
“If you come to this hospital, we’re not going to let you die,” promised Dr. David T. Feinberg in 2009. Feinberg was CEO of the Ronald Reagan UCLA Medical Center, a facility distinguished by its high-intensity approach to medicine and the huge amount of money it spends on patients in the last year of life. Groups seeking to humanize the care of dying people must reorder medicine’s basic priorities.