Reading Peter Frankopan's Ambitious Planetary History

Desertification, village of Telly, Mali. Photo Ferdinand Reus, CC BY-SA 2.0

 

The 24 main chapters of The Earth Transformed: An Untold History by British historian Peter Frankopan cover a longer period of history--from “the creation of our planet around 4.6 billion years ago” until late 2022--than any book I’ve read (it begins with a series of excellent maps, the first one displaying the Pangaea Supercontinent of 200 million years ago). Its introduction and conclusion focus on the problems of present-day climate change; Frankopan stresses that all his extensive historical research on human interaction with the environment has left him concerned about our climate future--and humanity’s fate within it.

How concerned? This sentence from the introduction sums it up nicely: “We live in a world teetering on the brink of disaster because of climate change.” And this passage from the introduction captures our present climate predicament as well as anything I’ve seen:

Human impact on the natural environment has been devastating almost everywhere, in almost every way, from water contamination to erosion, from plastics entering the food chain to pressure on animal and plant life that has reached such a high level that the most recent United Nations report talks of declines in biodiversity at rates that are unprecedented in human history, and that threaten an erosion of “the very foundations of our economies, livelihoods, food security, health and quality of life worldwide.”

Or this 2019 quote from António Guterres, Secretary General of the United Nations: “Every week brings new climate-related devastation . . . . Floods. Drought. Heat waves. Wildfires. Superstorms.”

In his conclusion, Frankopan writes that the summer of 2022 was especially alarming—“Record heatwaves in Europe, the worst drought in many decades in Africa, nearly eight times the average rainfall in Pakistan . . . flash floods in Death Valley in the USA (caused by massive rainfall in three hours)…. the highest-ever recorded rate of rainfall in South Korea . . . the wettest year in Australia's modern history,” extremely high winter temperatures in Paraguay and in South Africa, “and a long and severe drought in China that followed the hottest summer on record, which was called the most severe heatwave ever recorded anywhere and was unparalleled in world climatic history.”

Yet, he marvels, many people continue to deny or minimize human-caused climate change. He acknowledges, however, that there has been some progress in various countries, and he stresses that our past and present climate problems have been solvable—if only the collective will coalesces into action. He also mentions the hopes that some people place in geoengineering, including cloud seeding. But he cautions us that human modification of natural weather systems risks (as one 2015 scientific report indicated) “significant potential for unanticipated, unmanageable, and regrettable consequences.” In a recent eight-part fictional Apple TV+ series, “Extrapolations,” a character played by Edward Norton in Episode 4 expresses similar sentiments: “We've treated this planet like an all-you-can-eat buffet for 250 years since we started burning fossil fuels. And changing the chemical composition of the atmosphere is not going to fix it.” Cloud seeding could lead to “changes in rainfall patterns that lead to crop failures and floods [and] . . . extreme weather events leading to mass migrations, social unrest, stress on infrastructure.”

Frankopan also notes the possibility that future unknown events, like nuclear war, could greatly alter our climate, and concludes that the “biggest risk to global climate comes from volcanoes”—he often mentions the historical climate effects of volcanoes, especially their role in decreasing temperatures (see here for more on that effect). Regarding one eruption, which occurred in what is now Indonesia in 1257, he writes that its effects “were global,” affecting such far-away areas as England and “the western flank of the Americas.”

Frankopan believes that generally “we ignore climate and long-run climate patterns or changes altogether when we look at history.” But his new book “attempts to integrate human and natural history,” including climate changes, because he believes “it is fundamentally important if we are to understand the world around us properly.” Using a wide variety of sources--212 pages of endnotes are available on the publisher’s website--he connects environmental changes to all sorts of historical developments including migrations, plagues, living arrangements, political structures, and religious beliefs. For example, he writes that “three of the most lethal pandemics” of the last 2,000 years followed “warmer springs and wetter summers [that] produced the bacterium that caused bubonic plague.” And, as he writes at the end of Chapter 12, “the fundamentals of ecological equilibrium and environmental sustainability underpinned the cultural, political, socio-economic, diplomatic and military histories of individual kingdoms, states or regions. Reliable food and water supplies were central at all times.”

At times, however, several pages may elapse without any mention of climate or the environment, as Frankopan details various political, social, or cultural developments in widely varied parts of the earth including Asia, Africa, and the Americas. As important as he thinks climate has been as a historical factor, he attempts not to overstate its significance. For example, in Chapter 9, he writes, “cities were far more lethal [because of unsanitary conditions] than changes to climate patterns.”

His first chapter is entitled “The World from the Dawn of Time (c.4.5bn–c.7m BC).” In this period before direct human ancestors (the genus Homo) existed, the author tells us that for about “half the earth’s existence, there was little or no oxygen in the atmosphere.” Still long before humans appeared, periods of extreme warming and cooling occurred, and one stage “brought about the extinction of 85 per cent of all species. . . . The single most famous moment of large-scale transformation in the past, however, was caused by an asteroid strike that impacted the earth 66 million years ago on the Yucatan peninsula” in Mexico.

In Frankopan’s second chapter, “On the Origins of Our Species (c.7m–c.12,000 BC),” he states that the timing of Homo sapiens’s origins is disputable: “Our own species may have started to emerge as distinct from Homo neanderthalensis [Neanderthals]. . . . though this is a matter of fierce debate.” Humans first appeared in Africa and then dispersed to other continents, for example, “into South-East Asia, China and beyond, reaching Australia by around 65,000 years ago.” “Most scholars date the arrival of the first modern humans in the Americas to around 22,000 years ago.”

The author intersperses these human movements with accounts of climate-change effects--besides volcanoes, he details all sorts of other causes of changes such as El Niño and La Niña. For example, “agriculture may not have been impossible before the Holocene,” a “long period of warmer, stable conditions” that began roughly 10,000 years ago, “but it suited conditions perfectly after its onset.”

Chapters 3 to 24 deal with a time span more familiar to historians--12,000 BC to AD 2022. But the book’s title, The Earth Transformed: An Untold History, correctly indicates that it is also unique: not a history of some portion of our planet but a global history of the earth, and “untold” because no previous history has integrated the human and environmental journeys together over such a long time span.

Although Frankopan pays sufficient attention to the Americas and his native England, the part of the world he is most familiar with is the Eurasian Steppe, which runs from the Balkans to the Pacific Ocean. Two of his previous books, The Silk Roads (2017) and The New Silk Roads (2020), deal with that area. Here, in Chapter 8, he writes, “Some 85 per cent of large empires over more than three thousand years developed in or close to the Eurasian steppe.” Among the other observations he makes here is one that greatly affects demographics, a topic he often mentions: Tropical climates often provide “a crucible in which infectious diseases could flourish.”

Later on in Chapter 13, “Disease and the Formation of a New World (c.1250–c.1450),” he returns to the Eurasian steppe when he considers the Mongol conquest of many areas. And he writes that it may have “created a perfect environment” for the spread of plague. In the late 1340s, the Black Death spread across “Europe, the Middle East, North Africa,” and probably other parts of Africa, killing “an estimated 40-60 percent of the population.”

Frankopan is not only a global and environmental historian, but also one quite critical of European and Western imperialism and racism, from the time of Columbus to the present. Considering the world around 1500, he writes, “what drove the next cycle in global history was the pursuit of profit,” mainly by Europeans. He also mentions the “‘Great Dying’ of the indigenous populations of the Americas which was caused by violence, malnutrition and disease.” Later, dealing with the half century after 1870, he states that “the dovetailing of evangelical ideas about racial superiority, religious virtue and capitalism was a core element of the way that Europeans, and the British above all, saw both themselves--and the rest of the world.” And in that same period,

“the ecological implications of rapid transformation of landscapes that were motivated by the chase for a fast buck” were “severe and shocking.”

Like the earlier environmental critic E. F. Schumacher, he cites Gandhi on “the ravages of colonialism,” and suggests that modern economics should be based on a less materialistic approach to life. (Schumacher included his essay on “Buddhist Economics” in Small Is Beautiful.)

Regarding slavery, Frankopan estimates that in the 1780s “more than 900,000 souls were sent from the coast of Africa.” “The demand . . . was driven by the vast profits that could be generated from tobacco, cotton, indigo and sugar.” And even now the aftereffects of the racism that helped produce slavery still impact us. U.S. counties that possessed large numbers of slaves in the early 1860s are “more likely today not only to vote Republican, but to oppose affirmative action and express racial resentment and sentiments towards black people.”

The author’s last two chapters prior to his Conclusion cover the period from about 1960 to 2022. From the publication of Rachel Carson’s Silent Spring (1962) until the present, environmental anxieties have continued, at first regarding various forms of pollution and later stressing the dangers of climate change.

Frankopan also reveals that certain types of geoengineering, like cloud seeding, were already being employed by the United States during the Vietnam War in the late 1960s, and that for some time the U.S. Department of Defense has been “the largest institutional producer of greenhouse gases.” Military conflicts, as he points out, come with a “very high” environmental cost--note, for example, Russia’s invasion of Ukraine.

Although there is plenty of blame to go around for what Frankopan considers a woeful minimization of the importance of climate change, in the USA he chiefly blames the Republicans. Despite more than 99 percent agreement among “scientists working in this [climate-change] field,” more than half of the Republican members of the 117th Congress (which ended in January 2023) “made statements either doubting or refusing to accept the scientific evidence for anthropogenic climate change.” In the last sentence of his book the author writes, “Perhaps we will find our way back there [to a sustainable planet] through peaceful means; a historian would not bet on it.”

How Bob Dylan Ran Afoul of the FBI

James Baldwin and Bob Dylan at a dinner of the Emergency Civil Liberties Committee, where Dylan would give a notorious speech in acceptance of the organization's Tom Paine Award.

The Kennedy Assassination

On November 22, a little more than two weeks after the Newsweek article [a derogatory profile of Dylan], John F. Kennedy was assassinated in Dallas. On December 13, Bob Dylan received an award from the Emergency Civil Liberties Committee. Things did not go well.

Problems arose when Dylan, who had been drinking throughout the ceremony, gave a rambling acceptance speech that reads more like an out-loud, unfiltered internal monologue than a thought-through statement of views, let alone the expected thank-you at an awards ceremony. In part, he said:

So, I accept this reward — not reward [laughter], award on behalf of Phillip Luce who led the group to Cuba which all people should go down to Cuba. I don’t see why anybody can’t go to Cuba. I don’t see what’s going to hurt by going any place. I don’t know what’s going to hurt anybody’s eyes to see anything. On the other hand, Phillip is a friend of mine who went to Cuba. I’ll stand up and to get uncompromisable about it, which I have to be to be honest, I just got to be, as I got to admit that the man who shot President Kennedy, Lee Oswald, I don’t know exactly where — what he thought he was doing, but I got to admit honestly that I too — I saw some of myself in him. I don’t think it would have gone — I don’t think it could go that far. But I got to stand up and say I saw things that he felt, in me — not to go that far and shoot. [Boos and hisses]

Before ending his remarks, he scolded the crowd for booing, telling them “Bill of Rights is free speech,” and said he accepted the award “on behalf of James Forman of the Student Non-Violent Coordinating Committee and on behalf of the people who went to Cuba.” That too was met with boos as well as applause.

Dylan’s internal thought process aside, in most situations in 1963 his comments on Cuba alone would have been enough to get him into trouble, but given the proximity to the Kennedy assassination, his remarks about Oswald were unequivocally verboten. As a result, he would be forced to issue a public apology. His apology, though, consistent with Dylan speaking for himself alone, reads as a further elaboration on his own internal thinking:

when I spoke of Lee Oswald, I was speakin of the times I was not speakin of his deed if it was his deed the deed speaks for itself.

Apology or not, the speech had repercussions. Among other things, the incident found its way into the FBI’s files — by way of his girlfriend Suze Rotolo. As a report in her file noted:

ROBERT DYLAN, self-employed as a folksinger appeared on December 13, 1963, at the 10th Annual Bill of Rights Dinner held by the ECLC at the Americana Hotel, New York City. At this dinner, DYLAN received the Tom Paine Award given each year by the ECLC to the “foremost fighter for civil liberties.” In his acceptance speech DYLAN said that he agreed in part with LEE HARVEY OSWALD and thought that he understood OSWALD but would not have gone as far as OSWALD did.

A more elaborate account of the incident showed up in the nationally syndicated column of Fulton Lewis, Jr., which ridiculed the entire event but made sure to get Dylan’s remarks across. For example, Lewis characterized James Baldwin, also honored at the event, as a “liberal egghead whose books dot the best seller list,” and Robert Thompson, another attendee, as “the top-ranking Communist official once convicted of violating the Smith Act.” He then delivered his shot at Dylan:

The ECRC Tom Paine Award went to folksinger Bob Dylan, who wore dirty chinos and a worn-out shirt. He accepted the award “on behalf of all those who went to Cuba because they’re young and I’m young and I’m proud of it.” He went on to say that he saw part of Lee Harvey Oswald “in myself.”

What is striking about the column is that it reads as though Lewis were at the dinner, though he never says as much, nor does he cite any source for what is a very detailed description of the event. So either he failed to mention his attendance — his byline has him in Washington, the dinner was in New York — or he received a rather detailed report from an unnamed source.

All this might be explained by the fact that Lewis had a friendly relationship with the FBI. An FBI memo from October 1963 listed anti-communist writers “who have proved themselves to us,” including journalists Paul Harvey of ABC News, Victor Riesel of the Hall Syndicate, and Fulton Lewis Jr. of King Features Syndicate.

That particular mystery might be answered by information in the FBI file on Bob Dylan, which recent government releases show was indeed created. Specifically, there is an FBI report on the Emergency Civil Liberties Committee, which includes a table-of-contents listing for a report on the dinner. Unfortunately, the actual report is not included in that document, though there is a notation on the informant — coded as T-3390-S — who supplied information on Dylan. Beyond that, there is a report from January 1964 which references a file on Dylan himself, though there he is called “Bobby Dyllon.” Bob Dylan, in other words, was the subject of a more particular kind of FBI attention.

While most writing on Dylan in this period focuses on his personal decisions and behavior, what is clear in looking at the concentrated events in his most political period is that he confronted a considerable amount of scrutiny and hostility. He was ridiculed in the media, kept from performing certain material on television, and had his spontaneous remarks used to justify the opening of an FBI file. Dylan, in other words, was up against more than he realized. In this, he was not alone.

Excerpted with permission from Whole World in an Uproar: Music Rebellion & Repression 1955-1972 by Aaron J. Leonard (Repeater Books, 2023).

The Power of Dependency in Women's Legal Petitions in Revolutionary America (Excerpt)

James Peale, "The Artist and His Family," 1795

Historians have spent decades investigating whether the American Revolution benefited women or provoked changes in women’s status. By and large, white women’s traditional political rights and legal status remained relatively stagnant in the wake of the American Revolution. In some ways, women’s legal status declined over the course of the long eighteenth century. Certain women’s private lives, however, did see some important shifts, especially in regards to family limitation and motherhood. Importantly, the Revolution politicized some women who participated in boycotts, contributed to and consumed Tory and Whig literature, and even acted as spies or soldiers themselves during the war. Women also carefully negotiated their political positions to manage the survival and safety of their families. In the postwar period, elite white women gained greater access to education, though ultimately in service of raising respectable republican sons and their worthy wives. In many ways, however, the lives of American women looked much the same in the postrevolutionary period as they had prior to the war. Despite Abigail Adams’s threat to “foment a rebellion” if women were not included formally in the new American body politic, there would be no great women’s revolution in the late eighteenth and early nineteenth centuries.

Asking whether the Revolution benefited women or brought meaningful changes in their social, legal, and economic statuses, however, cannot fully illuminate the war’s impact on women’s lives. In some ways, this framework is both anachronistic and problematic. Constructing our queries in this way asks too much from a historical period in which inequality and unfreedom were so deeply embedded in patriarchal law, culture, and society as to render such a sea change unlikely at best. Likewise, this line of inquiry presumes that revolutionary-era women collectively desired what first- and second-wave feminists sought for themselves. It also judges the consequences of the Revolution for women from a set of expectations codified as masculine. Certainly, there were a few noteworthy women who sought rights and freedoms for which liberal feminists of the nineteenth and twentieth centuries fought, but the Abigail Adamses, Mercy Otis Warrens, and Judith Sargent Murrays of the American revolutionary era were few and far between.

This long scholarly conversation about whether the American Revolution was centrally a moment of change, stagnation, or decline in women’s lives has framed many historical investigations from the wrong perspective. Ironically, we have been studying patriarchal oppression, resistance to it, and attempts to overcome it from a patriarchal standard all along. We must seek to understand the impact of the American Revolution on women’s lives by framing our inquisition around women’s own worldview, their own needs, aspirations, and desires, even when doing so is uncomfortable to our modern sensibilities. What function did the Revolution serve in women’s lives? How did women interpret the rhetoric of the Revolution? How did they make the disruption and upheaval of this historical moment work to their advantage, with the tools already at their disposal? How did they use the apparatus of patriarchal oppression—namely, assumptions of their subordination and powerlessness—to their advantage? What did they want for themselves in this period, and were they able to achieve it? When the impact of the Revolution is investigated with this shift in perspective, we are able to observe the ways in which women’s individual and collective consciousness changed, even if the Revolution was not radical enough to propel them from their unequal station in American society.

In Dependence asks these questions from a regionally comparative and chronologically wide-ranging perspective, focusing on three vibrant urban areas—Boston, Massachusetts; Philadelphia, Pennsylvania; and Charleston, South Carolina—between 1750 and 1820, or what I refer to broadly as the “revolutionary era.” These three cities serve as ideal locations for a study of early American women’s experiences as their laws, social customs, and cultures varied significantly. Boston, Philadelphia, and Charleston were three of the most populous cities in the American colonies and, later, the early republic, which provided inhabitants with access to burgeoning communities as well as the growing marketplaces of goods, printed materials, and ideas. Massachusetts’s, Pennsylvania’s, and South Carolina’s laws regarding marriage, divorce, and property ownership (and thus their demarcation of women’s rights and legal status) all differed a great deal during this period. I chose to focus my study on urban as opposed to rural areas so as to include in this work impoverished communities, whose members often turned for assistance to city almshouses and other local organizations. Women in each of these three cities had the opportunity to petition their state legislatures for redress, yet because of their varying experiences and racial and class identities, they did so for different reasons, with different access to seats of patriarchal power, and certainly with different outcomes.

The revolutionary era was a period in which ideas about the meanings of independence, freedom, and individual rights were undergoing dynamic changes. Dependence was a fact of life in colonial British America, defining relationships ranging from colonial subjects’ connections to the king to wives’ unions with their husbands. Both parties in these relationships had power—even dependents—and these relationships required a set of mutual obligations. Thus, dependence was not an inherently impotent status. The meaning of dependence shifted, however, with the adoption of the Declaration of Independence. Dependence ceased to be a construct with positive connotations in the American imagination, and likewise became imbued with a sense of powerlessness. The newly independent United States required the allegiance of its people, and adopted the concept of voluntary citizenship rather than involuntary subjectship. Accordingly, the law recognized women’s personhood and, to a certain degree, their citizenship, but it also presumed their dependence, which codified them as legally vulnerable and passive. Dependence, then, became highly gendered, and feminized. Women’s dependent status was likewise contingent on their socioeconomic status, their race, the legal jurisdiction in which they resided, and their relationship to men in power.

Importantly, dependence must not be observed as the ultimate foil to independence. These terms are not abjectly dichotomous to one another, but exist on a fluid spectrum. Situated on this continuum, women firmly asserted their dependence while expressing the “powers of the weak.” While a traditional understanding of “power” implies some form of domination of one party over another through possession, control, command, or authority, this conception obscures the meaning of the word itself while also negating the exercises and expressions of power that do not conform to these standards. If power is also understood as existing on a fluid spectrum, then, an analysis of women’s invocation of the language of dependence in their petitions to state legislatures, courts, local aid societies, and their communities becomes much different.

Notions of power and freedom in early America were contingent upon a person’s intersectional identities. Wealthy, white male enslavers, for example, had different understandings and experiences of freedom than did the Black women they enslaved, and because of the legal structure of the patriarchal state, these white male enslavers held a great deal of power over unfree, enslaved Black women. Like dependence and independence, freedom and unfreedom existed on different ends of the same spectrum. Race, gender, class, religion, region, status of apprenticeship, servitude, or enslavement, and other elements of an early American’s identity shaped their relationship to freedom and unfreedom. Notably, this continuum was deeply hierarchical. Even if enslaved women earned or purchased their legal freedom from the institution of slavery, that free status was still tenuous, as was the free status of any children they bore. Likewise, enslaved women would have viewed freedom differently than their white counterparts. Black women in particular often defined freedom as self-ownership, the ability to own property, to profess their faith freely, and to ensure freedom for their families. Freedom for many enslaved people was a matter of degrees, a game of inches, a process of constant negotiation for small margins of autonomy and independence in an otherwise deeply oppressive system. Even if they obtained documentation that declared them legally free from the institution of slavery, that did not guarantee their perpetual freedom, and it certainly did not grant them equality under the law; that freedom—even if it existed on paper—was tenuous. Additionally, American freedom did not evolve and expand in a teleological manner; in many cases, even in the revolutionary era, freedoms devolved and disappeared for certain marginalized groups of Americans. We must always consider the ways in which Americans’ experiences of their freedoms were not (and in many ways, still are not) equal.

Black women experienced multiple, layered dependencies that were compounded by their race and gender, and especially by the existence of the race-based system of chattel slavery that relied on Black women’s reproductive capacity to enhance the power of white patriarchs. Black women, therefore, were not endowed with the same legal protections, rights, and privileges as their white contemporaries were. Engaging with the sympathies of white patriarchs, for example, was not a functional or effective strategy for Black women, as it was for white women. In order to fully understand how Black women exploited the terms of their intersectional dependencies, then, we must examine the unique experiences of Black women from within these interlocking systems of oppression. The notion that women could—and can still—express power because of their subordinate status and the protection it offers indicates that women have never been completely powerless. Like other historically marginalized groups or individuals, women have been able to express a degree of power, autonomy, and agency over their own lives while still being overtly suppressed by a controlling authority. Thus, dependents expressed power in a variety of ways, including more subtle means such as claiming a public voice or becoming politically active via the submission of petitions. What is especially significant, however, is not that women found power through petitioning various authorities but that they found power in this way through public declarations of their dependent, unequal, and subordinate status.

This excerpt from In Dependence: Women and the Patriarchal State in Revolutionary America is published by permission of NYU Press. 

Carolyn Woods Eisenberg on Nixon's War Deceptions

Fire and Rain: Nixon, Kissinger, and the Wars in Southeast Asia

By Carolyn Woods Eisenberg

Could the U.S. have ended the Vietnam War in 1969, saving the lives of 22,000 Americans and more than one million Asians?

As early as 1967, Secretary of Defense Robert McNamara warned President Johnson the war was unwinnable and recommended that the U.S. stop the bombing of North Vietnam and begin to negotiate seriously. Johnson ignored him and McNamara later resigned.

During his 1968 election campaign, Richard Nixon promised to end the war and bring “peace with honor.” But he did not release any details of how he would do that. As Carolyn Woods Eisenberg, a professor of U.S. History and American Foreign Relations at Hofstra University, notes in her new book, Fire and Rain: Nixon, Kissinger, and the Wars in Southeast Asia, he did not have any private plans in place either. He would rely on his own reputation for “toughness” and the strategic acumen of his new National Security Advisor, Henry Kissinger.

During the next four years, Nixon and Kissinger gradually reduced the number of U.S. troops – in an attempt to defuse mounting public criticism – while stepping up military aid to South Vietnam and ramping up the bombing, eventually sending B-52s over previously off-limits targets including Cambodia and Hanoi. 

For Eisenberg, “The ostensible motivation for these ongoing policy choices was U.S. ‘credibility’ – the need to demonstrate the overwhelming power and resolve of the United States” to friends and enemies around the world.

Despite increased bombing, the North Vietnamese steadily gained ground. The heightened violence and the rise in civilian deaths became, according to the author, “less a demonstration of strength than a sign to much of the world of American weakness and cruelty.”

Fire and Rain details how Nixon and Kissinger deceived the American public – and themselves – through four years of bombardment, on-and-off negotiations and mounting Congressional criticism.    

Her critical portrait of the two men draws on White House tapes, recently declassified memos, telephone transcripts and memoirs from Nixon’s close associates.

Her book provides a fascinating day-by-day account of how the two men fed each other’s ambitions and bloated egos. Kissinger, the former academic, jealously protected his intimate relationship with the crafty, insecure Nixon by continual flattery and, when necessary, with knives-drawn bureaucratic infighting with rivals.

Kissinger carefully watched each of Nixon’s major TV addresses (the President made more than a dozen such speeches in his first four years) and called immediately afterwards to congratulate him. Each time Kissinger lauded Nixon by saying the speech was “a work of art,” “meaty,” “on-point,” and that his delivery was “powerful” or “strong, but not ingratiating,” and that “it brought a lump to my throat.”

Nixon, in turn, often vented his anger by sharing with Kissinger scathing criticisms of those who had “let him down.” When he failed to receive needed support in Congress, he confided to Kissinger that Republican Minority Leader Hugh Scott and Minority Whip Robert Griffin “were a miserable lot… weak leaders.”

Horrible, Horrible

As the tapes reveal, the dialogue between the two men often turned morbid. In 1971, Nixon was contemplating bombing civilian targets in North Vietnam.

“Bomb Haiphong. Go for 60 days of bombing. Just knock the shit out of them,” Nixon mused.

“That’s right,” Kissinger agreed.

“And then everybody would say ‘Oh horrible, horrible, horrible’ (laughs). That’s all right. You agree?” said Nixon on tape.

“Absolutely, absolutely,” Kissinger said.

Nixon carefully played on Kissinger’s rivalry with Secretary of State William Rogers. Nixon, deeply suspicious of “striped pants” diplomats, was determined to set his own foreign policy, with Kissinger serving as his strategic advisor and personal messenger to the Soviets, Chinese and North Vietnamese.

This did not stop the hapless Rogers from trying to do his job as head of the State Department and raising Kissinger’s ire. H.R. Haldeman wrote this in his famous diary:

“K in to see me for his periodic depression about Rogers. This time he’s found Rogers is meeting with (Russian diplomat) Dobrynin tomorrow and K is absolutely convinced that he’s going to try and make his own Vietnam settlement… and take full credit for it. K’s temptation is to confront Bill (Rogers)…Thinks he can scare him.”

Concessions to Brezhnev

As the 1972 elections approached, Nixon was desperate to be able to claim “peace” in Vietnam. But he and Kissinger now had little to bargain with, since the U.S. had withdrawn most of the 500,000 troops that were in Vietnam in 1968. In desperation, he and Kissinger turned to an unlikely partner, the Soviet Union.

Hoping to curry favor with Soviet leader Leonid Brezhnev, Nixon and Kissinger made a series of historic concessions. First, they would unilaterally acknowledge the “Iron Curtain” borders of Eastern Europe as permanent and accept a new Berlin agreement – without consulting their NATO allies. Second, they hastily agreed to a new SALT (Strategic Arms Limitation Talks) accord, granting major concessions to the Soviets. This shocked the existing U.S. arms limitation team headed by the respected Gerard Smith.

The Russians were secretly delighted and amazed how important the long-festering Asian war had become to Nixon, but they refused to slow their huge arms shipments to the North Vietnamese.

With almost all U.S. ground troops gone, Nixon had to rely on the Air Force. In the 21 months prior to the 1972 election, the Air Force dropped more bombs on Vietnam and Cambodia than the total tonnage rained down on Nazi Germany in World War II.

In 1972, the North Vietnamese, concerned about their damaged infrastructure and anticipating that Nixon would get re-elected, returned to the negotiating table. When he met with them in Paris, Kissinger made a big concession: the North could retain all the ground in South Vietnam it had captured and keep its troops in place. An agreement quickly fell into place. In October, just ten days before the election, Kissinger announced to a crowd of reporters, “Peace is at hand.”

With “peace” seemingly secure, Nixon cruised to re-election, clobbering Democrat George McGovern.

No Peace, No Honor

On January 23, 1973, just three days after his second inauguration, he told the American people that “we have concluded an agreement to end the war and bring peace with honor in Vietnam and in Southeast Asia.”

But the promise of peace was another deception. Warfare in Southeast Asia continued, and by 1975 South Vietnam, Cambodia and Laos, once independent states, had all fallen under Communist control. Nixon himself was sinking in quicksand, deeply engaged in the Watergate cover-up. He would resign in disgrace in August 1974.

Nor did the Paris agreement bring honor to the U.S. Eisenberg observes that “the unrestrained use of American firepower had multiplied enemies and discredited friends.”

Why didn’t Nixon and Kissinger make the needed concessions and end the war four years earlier?

The author concludes, “The unwillingness to stop a futile war was partly a result of [their] character and outlook. By temperament both men were drawn to military solutions and had reached a pinnacle of power by virtue of their ‘hawkish’ credentials…they readily embraced violent alternatives.”

The availability of the Nixon White House tapes and other declassified material enables Eisenberg to paint an intimate, day-by-day portrait of both men’s interactions, complete with mood swings and unpleasant epithets.

Given the fact that (to our knowledge) no President since Nixon has taped conversations inside the White House, it may be a long time before we are able to get this kind of objective, detailed report of presidential decision-making.     

No Blood for Oil: Examining the Movement Against the Iraq War

Black Bloc protesters against the ongoing War on Terror, Washington, March 21, 2009

Book Review

David Cortright, A Peaceful Superpower: Lessons from the World’s Largest Antiwar Movement (New Village Press, 2023)

On February 15, 2003, a month before the United States launched its ill-fated “shock and awe” military campaign against Saddam Hussein’s Iraq, the world was witness to a massive, coordinated mobilization of citizens worldwide, the likes of which had never been seen before in opposition to waging war. Fifty-four nations were home to some 600 marches. As Brian Sandberg has noted in a recent HNN piece, Canada and the United States witnessed 250 protests, Europe had 105, 37 took place in the Middle East and Asia, 8 occurred in Africa, Latin America had twice as many as Africa, Oceania saw 34, and, of all places, Antarctica lay claim to 1. Cities throughout the world, large and small, became epicenters for antiwar protestors. All in all, nearly 10 million global citizens said no to war.

Of course that did not stop the Bush II Administration from going to war, insisting that, in the aftermath of the 9/11 attacks and with Hussein’s possession of “weapons of mass destruction,” the United States had the moral imperative and righteous cause to democratize the Middle East and win this “War on Terror.” Many did not agree with this assessment. In fact, five months prior to the invasion one antiwar group, Americans Against War with Iraq, ran a full-page ad in the New York Times with the blaring headline, “Bush’s Weapons of Mass Distraction: War with Iraq,” which was endorsed by hundreds of American citizens urging their elected representatives to reject the invasion. Other groups such as NOT IN OUR NAME, International ANSWER, Win Without War (the author helped found this organization), Code Pink for Peace (reminiscent of the work of Women Strike for Peace during the Vietnam War) and United for Peace and Justice sponsored large-scale demonstrations with protestors carrying signs reading “No Blood for Oil” and “Bring Them Home Alive.”

Once the military invasion was underway and the number of US soldiers killed in action mounted, there were other novel forms of protest. One of the most moving was “False Pretenses.” The traditional pacifist organization American Friends Service Committee first introduced the idea of a memorial by placing 500 pairs of boots at the Federal Building in Chicago to symbolize graphically the number of American soldiers killed at that point in the war. Other protestors constructed mock coffins and blocked street traffic in order to get their point across. Most dramatic, and in the true spirit of civil disobedience, was Grandmothers Against the War, a local group based in New York City. On October 17, 2005, a number of headstrong grandmothers made a valiant effort to enlist in the military at New York City’s Times Square recruiting station. When they were refused entry, all seventeen of them, ranging in age from forty-nine to ninety, promptly sat down and begged to be arrested. They were, but all charges were subsequently dropped.

Adding to the uniqueness of the evolving antiwar movement was the appearance of a grassroots international antiwar community. Acting as decentralized networks known as heterarchies, groups led by MoveOn and WHY WAR? rapidly shared news and strategies with political activists, shaping the movement’s organizing and campaign activities. The internet proved powerful when linking with other more grassroots organizations, especially Veterans for Common Sense, Operation Truth, and Iraq Veterans Against the War. As events moved quickly, these groups got the word out just as fast to counter false narratives and misinformation on the part of the warring power.

No one is better equipped to discuss this movement, its successes, its failures, and its social movement perspectives than scholar-activist David Cortright. Like me, David is a Vietnam War veteran; he understands the sacrifices and cost of military service. As a scholar in his own right he has authored a number of commendable works on peace history and peace activism. Now retired as Professor Emeritus from the University of Notre Dame, he has, in terms both of us would understand, earned his spurs. His book is engaging, and he draws from his own personal experiences, which inform his comparisons between the Vietnam and Iraq antiwar movements. He is the right author to compose the first scholarly analysis, from a perspective both personal and objective, of the movement against the nation's longest-running military conflict, Afghanistan included.

What Cortright seeks to unwrap for readers is how the Iraq antiwar movement represents “a continuation of multigenerational struggles for peace that emerged from networks dating back to the Vietnam era.” He expands his focus to address some of the global opposition to the war, though his primary focus is on the US side of the equation, while explaining how new digital forms of mobilization helped modernize the peace movement and the “landscape of political activism more generally.” He considers the groups that became part of the movement, especially religious communities, trade unions, business leaders, people of color, military members, the Hollywood crowd, women’s organizations and others. What helped facilitate their activism, of course, was the Internet. In striving to make the antiwar movement respectable, moreover, these groups banded together to call for constructive alternatives such as international policing and cooperation through the use of nonmilitary means to counter terrorism. Urging peacebuilding and diplomacy, respect for international law, and attempting to legitimize the role of the United Nations—today a tall order given Putin’s savage attack on Ukraine and China’s unquenchable appetite for military supremacy—the antiwar movement in the United States sought above all to call for honesty and transparency on the part of the nation’s leaders as well as an expanded voice “for civil society in shaping policy decisions.”

It is fair to state that, in the fog of war, as Cortright knows only too well, the crumbling towers smashing down in lower Manhattan, the penetrated walls at the Pentagon, and Flight 93 crashing in Shanksville, Pennsylvania, were still burning in the collective memory of the American public, providing convenient cover for the national government to extend its “war on terror” and once and for all do away with this emergent and pesky threat. But although it was difficult for peace groups to stop the war from happening, Cortright admirably shows how opponents of this war utilized the tools of digital messaging to foster community-based contact in order to create sustainable organizations. In some respects, even though the war would drag on needlessly—some felt endlessly—Cortright’s book shows how mobilization for social change can work and why it is so critical in building political power for change. What both means of mobilization accomplished was helping to change public opinion over time.

But changing public opinion to prevent the war, despite the unprecedented scale and breadth of the movement, proved insufficient at the beginning and, like all previous antiwar movements, illustrates the futility and frustration committed peace activists have always faced in trying to match the war government’s success in capturing the larger media and developing an effective media strategy to sway public opinion. It is in this vein that Cortright ably shows how difficult communicating for peace can be, while at the same time demonstrating that, unlike the Vietnam antiwar movement, for instance, this movement did have some notable successes, the most obvious being that it did not disparage the troops themselves. One of the most valuable lessons the Iraq antiwar movement learned from Vietnam was to oppose the war itself but show respect to the individuals fighting it. After being separated from active duty in 1970, while traveling back to JFK International, I know only too well how I was treated by a flight attendant while in uniform—this despite the fact that the “spitting image” was way overblown. Equally significant, the current movement did not support Hussein, and among Hollywood opponents of this conflict one would be hard pressed to find a “Hanoi Jane.” Where the movement could have scored more points with the confused public would have been in addressing the public’s concerns about the Iraqi dictator. Even Cortright admits that had that been done it “might have helped to attract greater participation from the Jewish community and would have acknowledged the widespread perception of Saddam as evil incarnate.”

Even among the less enthusiastic African American community, however, which saw few participate in antiwar protests, the movement did succeed in terms of rejecting the appeals of military recruiters. Generally, the military had increasingly relied on appeals to social and economic opportunities, including educational benefits, to fill the ranks of the All Volunteer Force by attempting to penetrate the inner cities and less-privileged communities. And women’s groups, long a critical component of the peace movement dating back to the formation of the Women’s International League for Peace and Freedom shortly after World War I, became a powerful force in the Iraq antiwar movement. With Code Pink at the helm, leaders admirably captured the spirit of Jane Addams and Emily Greene Balch and steered the peace ship against the rising tide of the “Bush administration’s ‘testosterone-poisoned rhetoric’.” 

Tragically, and this is where the movement grew in popularity and strength as the conflict continued, the organizers of the antiwar movement kept driving home the point that the Bush administration squandered the “international empathy and support that flowed to the United States in the wake of the terrorist attacks of September 2001.” By invading Iraq, the US lost not only the endorsement of the United Nations but also a number of valuable military allies. And as American casualties and Iraqi civilian deaths mounted, the antiwar movement did help forge a Democratic Party consensus for withdrawal of troops, which ultimately led to Barack Obama’s election and a gradual end to military conflict. It turned out to be a mixed bag, as the withdrawal of troops dragged on for three years while Obama’s policies in Afghanistan were marked by drone warfare and an expanded military presence in that country, making it the nation’s longest war.

Much of what Cortright describes is based on his own long, personal involvement in the antiwar movement, and the events surrounding it are also well known to peace historians. But what makes his account so valuable is the lessons he seeks to convey in the hope of creating a more peaceful world. He bemoans the fact that “the movement made little or no progress on the larger agenda of creating a more peaceful US foreign policy.” Sadly, that has always been the case throughout our history. But what he does take satisfaction in pointing out is that war opponents, using the Internet and community-based action, did generate enough political pressure to bring a gradual end to the conflict. To some degree this highlighted the power of social action in shaping the course of history.

But in light of the war in Ukraine and China’s saber-rattling, are we any closer to establishing “effective” peaceful foreign policies? The fact that the US is working with its NATO allies without boots on the ground in Ukraine indicates that Cortright’s analysis of the movement’s continuing influence should not be so easily ignored. But it also comes with a mixed message. Indeed, the fact that after twenty years 61% of the American public now think the invasion was wrong, while 60 to 70% of Iraqis agree that toppling Saddam was worth it despite the present hardships and the loss of 200,000 lives, should lead one to believe that there is a better option when it comes to creating civic order in our universe. As Merle Curti, the late Pulitzer Prize-winning historian and first chronicler of the American peace movement, so ably put it: “beyond these means of active and passive resistance to war is the perpetual dilemma of what to do when the values of peace are in apparent conflict with decency, humanity, and justice.”

Where the real lesson may lie for challenging the war habit is in writing about, and including in school texts, the history of peace movements in the nation’s past. The sad fact is that historians have not done a very good job of writing about or explaining to their students what peace movements are actually designed to do as agents of social change. Instead they fall victim to allowing defenders of the status quo to denigrate their actions, especially in time of war, while ignoring all the positive things these movements have achieved, socially, economically, and politically, over the past two centuries. The role of the peace movement and its organizations in supporting abolitionism prior to the American Civil War, the role it played in assisting Native Americans with aid and educational means on reservations, the influential role Jane Addams played with the Settlement House movement at the height of urbanization and immigration at the turn of the twentieth century, the efforts of A.J. Muste during the labor organizing drives between the world wars, the influential and substantial contribution that peace advocates made during the modern civil rights struggles—and yes, MLK, Jr. was a member of the Fellowship of Reconciliation—and the part peace groups played in organizing and carrying out nonviolent direct action strategies challenging the construction of nuclear power plants and their threat to the environment are just some examples to consider.

Excerpt: The March to Battle at Fort Sumter

The scene of Jefferson Davis's inauguration as the provisional President of the Confederate States of America, Montgomery, Alabama, February 18, 1861

The march of North and South to a clash at Fort Sumter began with the departure of Senator Jefferson Davis from the government of the United States in the winter of 1860.

Jefferson Davis: I am sure there is not one of you, despite whatever sharp differences there may have been between us, to whom I cannot now say, in the presence of my God, I wish you well and such, I am sure, is the feeling of the people whom I represent.

Leaving the US Senate was an emotional experience for him.

Jefferson Davis: I see now around me some with whom I served long. There have been points of collision but whatever of offense there has been to me, I leave here. I carry with me no hostile remembrance. . . . I go hence unencumbered by the memory of any injury received, and having discharged the duty of making the only reparation in my power for any injury received. . . . [I] bid you a final adieu.

The Senators in the chamber, and all the spectators, roared with enthusiasm; the applause was deafening. Davis, sensing what the future held, sat down heavily in his chair, put his head in his hands, and wept. The man who soon would lead a new country looked very sick that day. Davis had just recovered from yet another debilitating herpes attack and was barely able to stand to deliver his farewell speech.

Murat Halstead, journalist: Why, that is the face of a corpse, the form of a skeleton. Look at the haggard, sunken, weary eye—the thin, white wrinkled lips clasped close upon the teeth in anguish. That is the mouth of a brave but impatient sufferer. See the ghastly white, hollowed, bitterly puckered cheek, the high, sharp cheekbone, the pale brow, full of fine wrinkles, the grisly hair, prematurely gray; and see the thin, bloodless, bony nervous hands? He deposits his documents upon his desk and sinks into his chair, as if incapable of rising.

Visiting British journalist William Russell, who interviewed Davis right after he was inaugurated, did not think much of him.

William Russell: [His face] was thin and marked on cheek and brow with many wrinkles . . . [his left eye is nearly blind] the other is dark, piercing and intelligent. He did not impress me as favorably as I had expected.

In fact, President Davis suffered from herpes simplex, which closed over one of his eyes and debilitated him when he was under stress, particularly throughout the Fort Sumter crisis.

Jefferson Davis: I am suffering under a painful illness which has closely confined me for more than seven weeks and leaves me quite unable to read or write.

Many others who met him for the first time in Montgomery, Alabama, had vastly higher opinions of him. The new Confederate president was always glad to meet people; he told them all what they wanted to hear. He said to one group:

Jefferson Davis: Our people are a gallant, impetuous, determined people. What they resolve to do, that they most assuredly persevere in doing.

He told others that if the North wanted a fight, he was ready to give it to them. But he, himself, was not sure of the extent of his power.

Jefferson Davis: To me, personally, all violence is abhorrent. As President of the Confederate states, my authority is, in many respects, more circumscribed than would be my authority as Governor of Mississippi.

Davis had his problems. When Varina Davis first met her future husband, she wrote to her mother:

Varina Davis: He impresses me as a remarkable kind of man, but of uncertain temper and has a way of taking for granted that everybody agrees with him when he expresses an opinion, which offends me. He is the kind of person I should expect to rescue one from a mad dog at any risk, but to insist upon a stoical indifference to the fright afterward. I do not think I shall ever like him as I do his brother Joe. It was this sincerity of opinion which sometimes gave him the manner which his opponents saw as domineering.

The new Confederate president also had a short, violent temper. He exploded at slight provocations. He was a perfectionist and wanted everybody to do what he thought was best, even if they did not agree with him. He never understood that someone with another opinion simply saw things a different way; people who did not agree with him were just wrong. He expected more and better work from everybody, regardless of circumstance or illness. Davis wanted everybody to be punctual and would stand outside the door to their office in the morning and tell them if they were a single minute late.

One of his biggest faults was that he would humiliate someone and then not understand why they felt humiliated. His wife helped him all she could, and put up with his imperfections. She always believed, though, that he did not have the personal skills to be a leader of any kind, much less the head of a new country.

In February 1861, Davis received a telegram from Robert Toombs, a tall, blustery Senator from Georgia, informing him of his election as President of the Confederacy. He read it to his wife.

Varina Davis: He spoke of it as a man might speak of a sentence of death.

She advised him not to take the job. Yet Varina Davis was a good first lady and performed well for someone thrust into the job. She had been an admired hostess back in Washington and would be again in the Confederacy. She was intelligent, gracious, friendly, and possessed a good sense of humor.

New Friend: She is as witty as she is wise.

Davis always defended the South’s new Constitution.

Jefferson Davis: It was a model of wise, temperate and liberal statesmanship. Intelligent criticism, from hostile as well as friendly sources, has been compelled to admit its excellence and has sustained the judgment of popular northern journals.

No, it did not. Northern journals were outraged by the secessionists and lambasted them in editorials—calling them scoundrels, at best. 

Editor, Detroit Daily Advertiser: Every horse thief, murderer, gambler, robber and other rogue of high and low degree, fled to Texas when he found that the United States could no longer hold him. The pioneers of that state were all threats of one kind or another. . . . [T]hose of them that have escaped hanging or the state prison, and their descendants, are the men who have led the secession movement in that state.

Editor, Boston Journal: Secession is treason.

The Confederate president could see the war clouds forming in the Alabama sky. He blamed the Union.

Jefferson Davis: My mind has been for some time satisfied that a peaceful solution to our difficulties was not to be anticipated and therefore my thoughts have been directed to the manner of rendering force effective.

He understood that if war came, he would be asking men who did not own slaves to lay down their lives to defend slavery. He justified doing so by defending the institution.

Jefferson Davis: A government, to afford the needful protection and exercise proper care for the welfare of a people, must have homogeneity in its constituents. It is this necessity which has divided the human race into separate nations and finally has defeated the grandest efforts which conquerors have made to give unlimited extent to their domain.

Jefferson Davis: The slave must be made fit for his freedom by education and discipline and thus made unfit for slavery. And as soon as he becomes unfit for slavery, the master will no longer desire to hold him as a slave.

And he made an accurate prediction about the coming conflict.

Jefferson Davis: A Civil War will be long and bloody.

It was not just the South that was worried, but the West, too, and no one in the West was more concerned than Sam Houston, the governor of Texas. In November of 1860, he expressed his fears in a letter to a friend.

Governor Sam Houston, Texas: When I contemplate the horrors of Civil War, such as a dissolution of the Union will shortly force upon me, I cannot believe that the people will rashly take a step fraught with these consequences. They will consider well all the blessings of the government we have and it will only be when the grievances we suffer are of a nature that, as free men, we can no longer bear them, that we will raise the standard of revolution. Then the civilized world, our own consciences and posterity will justify us. If that time should come, that will be the day and hour. If it has not—if our rights are yet secured, we cannot be justified. Has the time come? If it has, the people who have to bear the burden of revolution must effect the work.

When their new peaceful homes are the scene of desolation, they will feel no pang of regret. Moved by a common feeling of resistance, they will not ask for the forms of law to justify their action. Nor will they follow the noisy demagogue who will flee at the first show of danger. Men of the people will come forth to lead them who will be ready to risk the consequences of revolution. If the Union is dissolved now, will we have additional security for slavery? Will we have our rights better secured? After enduring Civil War for years, will there be any promise of a better state of things than we now enjoy?

As tensions heightened over Fort Sumter, both Abraham Lincoln and Jefferson Davis began to examine their options—and their consciences.

Jefferson Davis: God forbid if the day should ever come when to be true to my constituents is to be hostile to the Union.

Abraham Lincoln: I am not a war man. I want peace more than any man in this country.

After April 4: The 1968 Rebellions and the Unfinished Work of Civil Rights in DC

© 2023 Kyla Sommers. This excerpt originally appeared in When the Smoke Cleared: The 1968 Rebellions and the Unfinished Battle for Civil Rights in the Nation’s Capital, published by The New Press. Reprinted here with permission.

Rev. Dr. Martin Luther King Jr.’s assassination on April 4, 1968, ignited centuries of grief and anger at American racism. An incalculable number of Black Americans took to the streets to protest this injustice in more than one hundred cities across the United States. The rebellions in Washington, DC, were the largest in the country. The capital endured $33 million in property damage ($238 million adjusted for inflation) and fifteen thousand federal troops occupied the District. Enraged crowds started more than one thousand fires. But what happened after the city stopped burning?

As the activist and future DC mayor Marion Barry declared at a DC City Council hearing in May 1968, the rebellions “created a vacuum and an opportunity.” Something would have to be done to reconstruct portions of DC, but it remained to be determined what would be rebuilt and whose interests would be served in the process. Would DC seize the chance to rectify the structural inequalities that motivated the uprisings?

Thousands of Washingtonians ambitiously grasped this “opportunity” to rebuild the capital as a more just society that would protect and foster Black political and economic power. The majority-Black city’s populace aided their communities during the uprisings and responded with resiliency and determination in the aftermath. DC’s government, community groups, and citizens loosely agreed on a reconstruction process they believed would alleviate the social injustices that were the root causes of unrest.

The rebellions challenged the same powerful institutions that generations of moderate and militant Black activists had previously picketed, boycotted, and sued. Most often, people attacked the most accessible representations of white people’s power over Black communities: white-owned and/or -operated stores, commuter highways, and “occupying” police forces. Black Washingtonians had confronted these manifestations of white political power as they demanded freedom, economic opportunities, good education, accountable policing, voting rights, and political power for over a century. Even though the tactics used by protesters were different, the rebellions predominantly targeted the same groups that Black people had long pressured to change.

After the uprisings, Black Washingtonians and parts of the DC government emphasized the idea that the rebellions were the result of legitimate anger at systemic racism and the government’s failure to address it. Building on this understanding of the upheaval, DC leaders adopted an ambitious plan to resolve many of these long-standing inequities. The effort to rebuild DC seized upon the idea that the people who were most affected by government initiatives should have some control in how those programs were administered.

Three elements of the city’s plan demonstrate how Black Washingtonians used the concept of citizen participation to demand economic and political power. First, after the uprisings the DC City Council held public hearings to listen to the community to determine how it should respond to the rebellions. A group of Washingtonians coordinated with each other to present a clear, compelling narrative of the problems that Black people faced in the capital and the reforms they desired. These solutions included policies that explicitly benefited and even favored Black residents as a way to compensate for the historical discrimination African Americans in DC had endured. The DC City Council adopted most of these suggestions into its blueprint for responding to the rebellions.

Second, Black Washingtonians lobbied for a role in police oversight. Harassment by police officers was one of the biggest issues facing Black people in DC. After police officers killed two Black men in the summer of 1968, Black people protested and demanded action from the city. After a government commission studied the issue, the DC City Council passed legislation that limited when a police officer could fire a gun and created civilian review boards to grant Black community members a guiding role in police hiring and discipline.

Finally, DC incorporated citizen participation into its rebuilding plans for Shaw, a 90 percent Black neighborhood that had been the center of Black Washington since the end of the Civil War. More than 50 percent of Shaw residents were surveyed about how they wanted their community to be rebuilt. The ensuing plans eschewed private development and instead tasked nonprofit groups with building new housing in partnership with the DC government. Black businesses and workers would design and build the residences as well as public amenities like libraries and schools.

This response to the rebellions was very different from the reactions of white and conservative Americans, who considered the events after King’s assassination to be an apolitical crime spree that demonstrated the need for stronger police forces. Washington suburbanites had complained about DC crime for more than a decade. Some had demanded more police even when crime rates were low. Politicians had stoked these fears and used DC crime as a platform to oppose civil rights and encourage larger, more powerful police departments. But after the uprisings, the concern over crime in the capital reached new heights. Some suburban residents refused to even enter the District, and others called for the military to permanently occupy the capital to control crime.

The fears and demands of white suburban Americans greatly affected American politics in the aftermath of April 1968. While President Johnson had previously emphasized large government programs to combat poverty and racial injustice in response to urban upheaval, the president now foregrounded anticrime policies like the Safe Streets Act that ballooned police department budgets and permitted more electronic surveillance. Richard Nixon made crime in DC a core issue in his 1968 presidential campaign. Once elected, Nixon used DC to experiment with different anticrime measures including mandatory minimum sentences and “no-knock” warrants. As other local and state governments modeled these measures, they disproportionately harmed Black Americans and other people of color.

Richard Nixon’s agenda also limited DC’s efforts to rebuild. He destroyed the government programs that made DC’s reconstruction plan possible, slashed funding for urban housing projects, and discouraged citizen participation programs. Development companies were allowed to bid on rebuilding projects, shutting out local nonprofits.

Nonetheless, the plans and efforts of a majority-Black city to rebuild and reform itself deserve consideration, especially as Americans continue to grapple with the crises of racial inequality and police brutality. From the June 2020 protests for racial justice to the insurrectionist attack on the U.S. Capitol on January 6, 2021, recent events have demonstrated that the histories of protest, policing, racial inequality, and self-governance in Washington, DC, are timely and consequential. The 1968 uprisings in DC and the 2020 protests in DC that followed the murder of George Floyd were not comparable in terms of scale—fewer than five hundred people were arrested in connection to the DC protests in 2020, while more than six thousand were arrested in 1968. Still, this history of the 1968 uprisings in the capital helps to explain our current tumultuous moment and offers historical insights on how previous generations have responded to the ongoing crisis of systemic racism.

  


Excerpt: The Akan Forest Kingdom of Asante

Asante Yam Ceremony, from Thomas Edward Bowdich, "Mission from Cape Coast Castle to Ashantee" (London, 1819)

On 19 May 1817, Asantehene Osei Tutu Kwame welcomed for the first time a British diplomatic mission to his capital, Kumasi. The envoys and their African retinue had set out four weeks earlier from Cape Coast Castle, a fortified outpost on the seashore of the Gulf of Guinea, which from the mid-seventeenth century to 1807 had served as the headquarters of Britain’s slave-trading operations in West Africa. Making its way through the tropical forest up one of the eight ‘great roads’ radiating from Kumasi to the outer reaches of Asante’s imperial domains, the mission was instructed to halt at a town 30 miles (48 km) short of the capital while Osei Tutu Kwame presided over what it understood to be ‘the King’s fetish week’. This was the six-day period leading up to akwasidae, one of the two solemn adae ceremonies held in every forty-two-day cycle of the Asante calendar, during which the king withdrew from public gaze into the confines of his palace to commune with his ancestors and thereby ensure the well-being of the Asanteman, ‘the Asante nation’. Following an early-morning visit to the royal mausoleum containing the bones of his hallowed predecessors, he would then re-enter the public realm on what was deemed to be a particularly auspicious day for the conduct of diplomatic and other state business. It was on the afternoon of that day that the four British envoys were permitted to enter Kumasi. Emerging from the forest and the neat patchwork of agricultural settlements surrounding the capital, they were astounded by the spectacular choreographed display that enveloped them.

‘Upwards of 5000 people, the greater part warriors, met us with awful bursts of martial music, discordant only in its mixture; for horns, drums, rattles, and gong-gongs were all exerted with a zeal bordering on phrenzy, to subdue us by the first impression’, one of their number, T. Edward Bowdich, wrote (see pl. XVII). ‘The smoke which encircled us from the incessant discharges of musquetry, confined our glimpses to the foreground; and we were halted whilst the captains performed their Pyrrhic dance, in the centre of a circle formed by their warriors; where a confusion of flags, English, Dutch, and Danish, were waved and flourished in all directions.’ Inching their way through the thronging mass of dignitaries and townspeople, the party ‘passed through a very broad street, about a quarter of a mile long, to the market place’.

Our observations … had taught us to conceive a spectacle far exceeding our original expectations; but they had not prepared us for the extent and display of the scene which here burst upon us: an area of nearly a mile in circumference was crowded with magnificence and novelty. The king, his tributaries, and captains, were resplendent in the distance, surrounded by attendants of every description, fronted by a mass of warriors which seemed to make our approach impervious. The sun was reflected, with a glare scarcely more supportable than the heat, from the massy [i.e. solid] gold ornaments, which glistened in every direction. More than a hundred bands burst at once on our arrival, with the peculiar airs of the several chiefs ….

At least a hundred large umbrellas, or canopies, which could shelter thirty persons, were sprung up and down by the bearers with brilliant effect, being made of scarlet, yellow, and the most shewy cloths and silks, and crowned on the top with crescents, pelicans, elephants, barrels, and arms and swords of gold ….

The prolonged flourishes of the horns, a deafening tumult of drums … announced that we approached the king: we were already passing the principal officers of his household; the chamberlain, the gold horn blower, the captain of the messengers, the captain for royal executions, the captain of the market, the keeper of the royal burial ground, and the master of the bands, sat surrounded by a retinue and splendor which bespoke the dignity and importance of their offices.

Finally, Bowdich and his companions made their way into the presence of the waiting Asantehene.

His deportment first excited my attention; native dignity in princes we are pleased to call barbarous was a curious spectacle: his manners were majestic, yet courteous … he wore a fillet of aggry beads round his temples, a necklace of gold cockspur shells … his bracelets were the richest mixtures of beads and gold, and his fingers covered in rings; his cloth was of a dark green silk … and his ancle strings of gold ornaments of the most delicate workmanship … he wore a pair of gold castanets on his fingers and thumb, which he clapped to enforce silence. The belts of the guards behind his chair, were cased in gold, and covered with small jaw bones of the same metal; the elephants tails, waving like a small cloud before him, were spangled with gold, and large plumes of feathers were flourished amid them. His eunuch presided over these attendants … [and] the royal stool, entirely cased in gold, was displayed under a splendid umbrella ….

The description goes on. As day turned to night, the vast reception continued by torchlight ‘and it was long before we were at liberty to retire’. Bowdich estimated the total number of warriors alone in attendance at thirty thousand.

The British effort to secure a treaty governing commercial and political relations with Asante failed. For the remainder of the nineteenth century the two powers would be drawn into a series of recurring confrontations, until in 1896 the kingdom was occupied by the British, at the height of the European conquest and partition of Africa. Five years later, in the aftermath of a final, futile war of resistance, it was annexed to the older Gold Coast Colony. In 1957, the Gold Coast – with the Asante kingdom at its centre – became the independent nation of Ghana. The 1817 mission may have come to nothing, but Bowdich’s account, published two years later as Mission from Cape Coast Castle to Ashantee, represented the start of an enduring fascination with the great forest kingdom on the part of outside observers. Founded in the late seventeenth century, Asante would become the most powerful and prominent of what a recent study has called ‘fiscal-military states’ in West Africa. The rise of Oyo, Asante, Dahomey and Segu represented a new phase of militarized state-building after the fall of the Songhay Empire in 1591, and the expansion of the Atlantic slave trade marked the end of the ‘medieval’ period and the opening of an increasingly volatile ‘modern’ era in West Africa’s history. Asante was renowned for its fabulous wealth, and Bowdich leaves no doubt as to what that wealth was based on: gold. Heir to the tradition of West African gold production, which in the medieval period had supplied the precious metal to the trans-Saharan trade and from the fifteenth century had drawn acquisitive Europeans to the Atlantic coast, Asante entered world history as an alluring kingdom of gold. By the reign of Osei Tutu Kwame (r. 1804–1823), it was at its peak in terms of wealth, military prowess and imperial reach. The nature of the kingship that emerged from and presided over this wealth in gold is the subject of this chapter.

The extracts quoted above from the description of the entry of the 1817 mission into Kumasi contain important clues to the nature of Asante statecraft. Kingship was based on the accumulation of wealth derived from forest agriculture and gold production, and was reinforced by military prowess and an elaborate and sophisticated government apparatus. It was also enshrined by spiritual power. As Bowdich observed, the Asantehene devoted a large amount of time and effort attending to the spiritual realm: for a start, for twelve days of every forty-two-day adaduanan or monthly cycle he was sequestered within his palace in the company of his departed ancestors. He also presided over a host of other ritual observations, performances and sacrifices, including the great Odwira festival, which served on an annual basis to recapitulate the historic project of Asante kingship. To what extent can Asante be seen to conform to the model of ‘sacred’ kingship as set out by the historical anthropologists David Graeber and Marshall Sahlins in their recent book, On Kings? This question lurks behind much debate among historians of Asante. Pioneering research tended to emphasize Asante’s material base, the complex structures of its office-holding elite and the bureaucratization of its government. Subsequent studies have been critical of this approach, exploring instead the ways in which political power was entangled with the spiritual realm and how the state sought to assert control over belief and knowledge in society. One thing is clear: while Asante at its height possessed formidable coercive or ‘instrumental’ power, it also possessed a dazzling array of ideological or ‘creative’ powers and the ability to project them. Its dynastic rulers were masterful performers of power.

Excerpted from Great Kingdoms of Africa by John Parker, published by the University of California Press. © 2023.

When World War II Pacifists "Conquered the Future"

Bayard Rustin's intake mugshot, Lewisburg Penitentiary, 1945. Rustin was incarcerated for resisting the military draft during the Second World War.

War by Other Means: The Pacifists of the Greatest Generation Who Revolutionized Resistance by Daniel Akst (Melville House, 2022)

Nuclear war moved closer to the realm of possibility in 2019, when the Trump administration withdrew the U.S. from the Intermediate-Range Nuclear Forces Treaty. It became even more conceivable last month, when Russia stopped participating in the New START treaty, which called for Russia and the U.S. to reduce their nuclear arsenals and verify that they were honoring their commitments.

No doubt Max Kampelman would have been alarmed. An American lawyer and diplomat who died in 2013, Kampelman negotiated the first-ever nuclear arms reduction treaties between the two superpowers, in 1987 and 1991. He was also an ex-pacifist who had gone to prison during World War II for refusing to be drafted. There, he volunteered as a guinea pig in a grueling academic study of the effects of starvation.

Kampelman is one of the constellation of pacifists, anarchists, and other war resisters whom we meet in Daniel Akst's fascinating new book, War by Other Means: The Pacifists of the Greatest Generation Who Revolutionized Resistance (Brooklyn: Melville House, 2022). The subtitle suggests one of the difficulties of writing such a book. The war against fascism was certainly one of the most justifiable and enduringly popular wars of all time, yet the people Akst is concerned with opposed it.

They were not admirers of Hitler and his allies; rather, they feared that the highly mechanized, technocratic warfare that was developing in the mid-20th century would turn their own country into something nearly as vile as Nazi Germany (“the adoption of Hitlerism in the name of democracy,” as the Socialist presidential candidate Norman Thomas said). And they made their resistance count for something: opposing the bombing of civilian targets in occupied Europe, pleading for the admission of Jewish refugees by the foot-dragging Roosevelt administration, demanding an end to internment of Japanese-Americans, documenting abuses in mental hospitals to which some were assigned, and campaigning against Jim Crow in the federal prisons that many of them found themselves in.

These pacifists were not famous at the time. While Americans knew generally that some conscientious objectors, or COs, were refusing to serve, very few were aware of the far-reaching political ferment that was going on in prisons, in CO camps established in rural parts of the country, and in the pages of pacifist newspapers and pamphlets that circulated during the war. Some would become well-known much later, however, including future civil rights leader Bayard Rustin, war resister David Dellinger, and their mentor, A.J. Muste, executive director of the pacifist Fellowship of Reconciliation (FOR) and apostle of nonviolence. Better known, marginally, were the Catholic Worker founder Dorothy Day and the radical journalist and political theorist Dwight Macdonald.

Afterward, their influence grew, thanks in part to the tactics and arguments they developed during the war, and in part to the nuclear arms race, which confirmed their warnings about the nature and direction of modern warfare. Many former COs moved directly into the campaigns against nuclear armaments. They helped formulate the strategy of nonviolent resistance that underpinned the Civil Rights Movement and the mass demonstrations and draft resistance that galvanized the campaigns against the Vietnam War. The Congress of Racial Equality (CORE) was founded in 1942 as an offshoot of the FOR and a product of Rustin and Muste’s conviction that ending racial segregation would be the next great struggle after the war ended. The abuse that Rustin and the anarchist poet Robert Duncan withstood owing to their homosexuality draws a through-line from wartime pacifism to the later gay rights movement. The tactics of direct action, civil disobedience, and media-savvy public protest that pacifists developed during World War II would help all of these movements, not to mention environmentalism and AIDS activism, achieve their greatest successes.

Akst’s story begins even before the U.S. entered the war, when the “Union Eight”—Dellinger and seven other students at Union Theological Seminary—refused to register for the draft. They would serve nine months in federal prison at Danbury, Connecticut, and would be in and out of prison and in trouble with the authorities for the remainder of the war. COs staged work stoppages, slowdowns, and out-and-out strikes both in federal prisons and in the rural Civilian Public Service (CPS) camps where many were sent to work on irrigation projects and the like—until they became incorrigible, that is.

Nor was resistance always strictly peaceful. COs were not paid for their work as internees. At one CPS camp in Michigan’s Upper Peninsula, COs responded by launching a campaign of vandalism and sabotage that included clogging toilets, hiding lightbulbs and silverware, and scrawling obscenities. On leave in a local town, one group of “conchies” disabled their vehicle, got drunk at local bars, and got into a fight with a soldier. Some pacifist leaders urged COs to cooperate, at least tacitly, once they were in the camps, but many COs found this impossible. In federal prisons, especially, pacifists showed solidarity with other prisoners—notably African Americans—and struggled to maintain their activism behind bars.

Akst’s protagonists were complex, difficult individuals who quarreled with each other and with friends and family who wanted to keep them out of trouble. As such, their lives did not follow a strict pattern. But Akst has the gift for weaving together the stories of a group of highly distinctive activists—Dellinger, Rustin, and many less famous names—into a lucid narrative while digging deep into their personalities and beliefs.

He pinpoints some similarities: Many of his protagonists had a conversion experience of one or another sort (Muste had multiple conversions during his long life). Many were Quakers or liberal Protestants with intellectual roots that stretched back to 19th century Abolitionism. Many were inveterate dissidents, never ready to declare victory and settle down. Above all, they were seekers; for Macdonald, Akst writes, the war was “a way station on a lifelong ideological pilgrimage,” and this could apply to nearly everyone Akst re-introduces in his book.

If anything brought them all together, it was an emerging philosophy or worldview that Day called “personalism,” and which Akst characterizes as “a way of navigating between … the corpses of capitalism and communism” at a time when the Depression had discredited the one and Stalin’s tyranny had destroyed any confidence in the other. More deeply, it was a way of reconciling the “sacredness and inviolability of the individual” and the need for collective action against injustice and the death cult of war.

In their own way, each of the activists who emerged from the war—even if they no longer adhered to pacifism—believed that “each of us, driven by love, had the power to change the world simply by changing ourselves.” It was a “mushy and idealistic” notion, Akst observes, but his subjects could be quite hardheaded and sensible when it came to organizing, and it had great moral force in the decades after the war, for Martin Luther King, Jr., among many others.

In purely practical terms, the lessons the World War II resisters carried away from the war represented a break from the top-down organizing of the Old Left that is still playing itself out, Akst notes. They were “wary of authority, often including their own, and longed for direct democracy and communitarian social arrangements,” and “cherished the specific humanity of each and every person.” The result was a preference for non-hierarchical, anarchist-inspired organizing that can be traced in the movement against corporate globalization, the Occupy movement, and the Movement for Black Lives.

These inclinations have created their own problems in the years since the war. The New Left that evolved out of the Civil Rights and antiwar movements never managed to win over the increasingly rigid mainstream of the American labor movement. It had trouble, generally, sinking deeper roots into working and oppressed communities looking for immediate political solutions to their problems. And it largely failed to establish institutions of resistance that could endure without being coopted by the State.

Akst grounds his protagonists’ accomplishments as well as their failings in their individual personalities; when your activism is a part of a lifelong intellectual pilgrimage, staying pinned down to one philosophy or strategy is difficult. Nevertheless, “to a great extent Dellinger and his fellow pacifists did conquer the future,” Akst writes, and on a host of issues—racism, militarism, authoritarianism, and the looming threat of the Bomb—they broke through where others were often afraid to make a fuss. Channeling their principles into a more enduring resistance is the necessary work of their successors.

Christopher Gorham Gives the Remarkable Anna Marie Rosenberg the Bio She Deserves

Anna Marie Rosenberg, photographed while touring Korea as an Assistant Secretary of Defense in the Truman administration

The Confidante by Christopher C. Gorham (Kensington Books)

Anna Marie Rosenberg was an Austro-Hungarian Jewish immigrant who was known as “the busiest woman in New York” and “Seven-Job Anna” due to demand for her expertise in public relations and the then-new field of labor negotiations. She was so close to New York’s Republican mayor Fiorello La Guardia that he picked her up in his limousine each morning and dropped her off at her office before continuing to City Hall.

Major corporations hired Rosenberg to settle their labor problems in the strike-prone 1920s and 1930s. “Settling a strike or striking a deal, Anna would command, ‘Pipe down, boys, and listen to me.’ Whether union or management, she would tell them not what they wanted to hear, but what they needed to hear,” writes Christopher C. Gorham, author of the absorbing new biography The Confidante: The Untold Story of the Woman Who Helped Win World War II and Shape Modern America. “The deal done, she would clap her hands together, bracelets jangling, and congratulate the parties, ‘Wunnerful job, gentlemen!’”

Rosenberg first encountered Eleanor and Franklin Roosevelt when he ran for governor of New York in 1928. She joined the campaign as a labor advisor and continued to be an important member of his political circle until his death in 1945. Her influence on presidents continued for another twenty years. According to Gorham, “Anna’s combination of skill and social ease was valued by FDR as he won the presidency, and by her thirties she was the nation’s only woman in charge of implementing massive New Deal programs.”

Rosenberg’s biggest job in the early years of the New Deal was heading up New York’s regional office of the National Recovery Administration, or NRA. “The theory behind the NRA was that as businesses competed for customers, they cut prices and wages in an ever-descending struggle to get the cost of production to its lowest point so as to maximize profit,” Gorham explains. “The issuance and enforcement of hundreds and hundreds of legalistic fair-practice codes and a ban on unfair trade would, so went the theory, stimulate business recovery.” In practice, it was an impossible task that flew in the face of American capitalism. Rosenberg tried her best, but a Supreme Court decision in 1935 found the National Industrial Recovery Act, the enabling legislation of the NRA, unconstitutional, and her job ended.

Her next job was as regional director of the new Social Security Administration in 1936. With three telephones on her desk and an army of hundreds of employees in thirty-two field offices, Rosenberg still found time to hear about the problems of individuals. “Among the 1,050 walk-ins in one week in 1936 was an impoverished elderly couple, worried that they would have to split up after nearly fifty years of marriage,” Gorham writes. Insisting that they speak to “Miss Government Lady,” the couple was taken to Rosenberg, who comforted the sobbing woman. “A few phone calls later, Anna had arranged for enough public assistance to keep the couple in their apartment.”

When America entered World War II, Rosenberg helped devise the War Manpower Commission and served as its New York regional director. She dispatched the overabundant labor in New York to areas of the country with shortages, including Oak Ridge, Tennessee, where the atomic bomb was being built and tens of thousands of workers were needed. In fact, she was in on the Manhattan Project secret, something even FDR’s last vice president, Harry S. Truman, didn’t know about. She was involved in mediating labor strikes and threats affecting crucial war industries, urged FDR to end segregation in defense jobs, and advocated for high-paying defense jobs for women.

FDR also enjoyed her company. She was a pretty, vivacious woman famous for her stylish hats, with a wonderful sense of humor, and she joined the president at many of his daily cocktail parties. Working often from an office in the White House, she had ready access to the president (unlike some members of his own cabinet). The digital version of his calendar, FDR Day by Day, shows her meeting with him 127 times between 1936 and 1945. In comparison, Labor Secretary Frances Perkins, who held a dim view of Rosenberg and considered her a rival, met with the president 89 times in that same period.

Shortly after the D-Day invasion in June 1944, FDR dispatched Rosenberg as his personal emissary to Europe, introducing her in a letter to General Dwight Eisenhower as “my warm friend.”  She toured military hospitals in England, embedded with General George S. Patton’s Third Army (she called him “Georgie”), slept on the ground and ate K-rations, and interviewed soldiers about their post-war aspirations. Returning, she argued successfully for money for college to be added to the G.I. Bill, which had positive repercussions for millions of returning soldiers. FDR asked her to tour Europe a second time as the war was winding down, and though he died before she was due to leave, President Truman sent her on. She dealt with refugee problems and saw the horrors of a concentration camp, which had a profound effect on her as a Jew who would surely have met a similar fate had her family stayed in Europe.

She went on to serve in the Truman administration as assistant secretary of defense, surviving a challenge from communist hunter Joseph McCarthy, and served as an unpaid advisor to Eisenhower and Lyndon Johnson on matters from labor to civil rights. Although she supported John F. Kennedy, he was notorious for the lack of women in his administration, depending on her instead as a master fundraiser. She organized the birthday gala for his forty-fifth birthday and was sitting beside him when Marilyn Monroe sashayed out in a skin-tight gown and sang a breathy “Happy Birthday, Mr. President.”

Gorham’s admiring, gracefully written, and well-documented biography resurrects the life and contributions of a worthy woman who deserves to be remembered.

The Pope at War: Pius XII and the Vatican's Secret Archives

David I. Kertzer: The Pope at War: The Secret History of Pius XII, Mussolini, and Hitler (2022)

 

“The Pope. And how many divisions has he?”

--Joseph Stalin at the 1943 Teheran conference, responding to Winston Churchill’s suggestion that the Pope be involved in post-war planning.

Pope Pius XII (Eugenio Pacelli, 1876-1958) was the most powerful religious figure in Europe during World War II. Based in the tiny state of Vatican City, he held sway over Europe’s 200 million Catholics. Known as a quiet, intellectual man, fluent in four languages, he served from 1939 until his death in 1958.

His legacy has been dominated by one haunting question: could he have done more to save the Jews?

After the war, the Vatican’s propaganda office mounted a coordinated effort to portray Pius XII as a hero, a moral leader who spoke out against anti-Semitism and pleaded with the warring countries to protect innocent civilians, including minorities. The Vatican claimed, however, that Pope Pius XII, isolated inside Fascist Italy, only heard unverified “rumors” about the organized genocide of the Nazis. Thus, he was unable to provide any help, other than offering prayers, for the Jews in Germany, Poland, Hungary and other occupied countries.

In 2009, his defenders even mounted a campaign to have him declared a saint. This effort ran into serious opposition from Holocaust survivors and was put on indefinite hold by Pope Francis in 2014.

In the past two decades, a "Pope Pius XII War" has quietly raged among historians, with accusers and defenders publishing articles and books about the wartime Pope. Those critical of the Vatican include David Kertzer’s The Popes Against the Jews, Peter Godman’s Hitler and the Vatican and Susan Zuccotti’s Under His Very Windows.

In 2022, another book, The Pope and the Holocaust: Pius XII and the Secret Vatican Archives by Michael Hesemann, a German history professor, came out defending the wartime pope and claiming he saved thousands of lives of Jews and other minorities.

Secret Archives

In 2020, after much prodding from historians, the Vatican finally granted access to a vast trove of World War II archives, previously locked away since the end of the war.

This breakthrough resulted in tens of thousands of pages of records, letters, reports and internal memos becoming accessible to scholars. The new evidence was damning. Pius XII had received detailed reports about the death camps and had been asked repeatedly by Jewish leaders, Allied governments, and clergy to intervene. Many visitors pleaded with him to speak out publicly against the Nazi’s mass murders. Later, when Mussolini began stripping Italian Jews of their jobs and property, priests and rabbis begged him to intervene with the dictator.

The answer was always “no.”

In his new book, The Pope at War: The Secret History of Pius XII, Mussolini and Hitler, David Kertzer, a history professor at Brown University, details the dark truths of Pius XII’s wartime actions. Kertzer, who has written six previous books about the Vatican in the twentieth century, was one of the first researchers to access the secret wartime archives.

In The Pope at War, he describes how Cardinal Eugenio Pacelli, who served as the Vatican’s secretary of state from 1930 until his election as Pope Pius XII in 1939, received detailed reports about the Nazis’ persecution of the Jews and political dissenters (including anti-fascist Catholic priests) from the very beginning of the Hitler regime.

Later came Kristallnacht in 1938 and then the organization of huge death camps in Poland and Germany. Throughout this dark period, local priests and diplomats sent hundreds of letters, telegrams and detailed reports of the death camps to the Vatican. But the Pope and his close advisors consistently rejected any effort to protest the killings, either publicly or privately. The Vatican’s powerful propaganda machine (two daily newspapers, a radio network and papal messages) ignored the roundups of Jews, and its only references to the war were anodyne statements calling for warring nations to spare “innocent civilians.”

Public Neutrality

Why was Pius XII so cautious?

Kertzer suggests that while Pius XII was privately shocked, he felt the Vatican and the Catholic Church in Italy and Germany were very vulnerable and could face violent attacks, should he anger either Mussolini or Hitler.

He also feared for the independence of Vatican City. When Italy was unified in the mid-19th century, the Vatican was stripped of the Papal States, a region in central Italy. Tiny (109-acre) Vatican City, including historic St. Peter’s Basilica, lost its sovereignty and became part of the new, unified Italian state.

After Mussolini became prime minister in 1922, he was eager to cement his power as an absolute dictator in a one-party state. Some 99% of Italy’s 44 million people were Catholic, so the church represented a potential threat. He struck a deal with the Vatican in 1929, known as the Lateran Accords, that recognized Vatican City as a sovereign state, independent of Italy. The Vatican was now free to police its own territory and act as a nation-state, establishing diplomatic relations with other nations.

In return, the Vatican agreed to subsume its powerful political party, the PPI (Partito Popolare Italiano), into Mussolini’s Fascist Party. It also agreed to re-organize Catholic Action, a nationwide youth organization, into a training ground for fascist ideology.

Many aspects of fascism appealed to Pius XII and the Vatican hierarchy. Most important was Mussolini’s suppression of Italy’s small but well-organized Communist Party. Second was the Fascist Party’s rejection of modern Europe’s popular culture, including jazz music, avant-garde literature and racy movies. The Vatican was deeply concerned that these would lead to public amorality, particularly sexual promiscuity.

 

1,000 Years of Anti-Semitism

While the official ideology of Italy’s Fascist Party was not as virulently antisemitic as Germany’s Nazi Party, many of Mussolini’s lieutenants were outspoken Jew-haters. They were comfortable in the Catholic Church, which had a 1,000-year tradition of antisemitism. According to early church doctrine, the Jews were condemned to “eternal slavery” for their sin of murdering Jesus and then refusing to accept his teaching.

During the first decade of Mussolini’s dictatorship, Italy’s small (50,000) Jewish population did not face the violent antisemitism unleashed in Nazi Germany. In 1938, however, Mussolini, under pressure from Hitler, began an official purge of Jews from society. Jewish doctors, teachers and civil servants were forced out of their jobs.    

In 1940 Mussolini, following Hitler’s example, ordered the construction of some 200 concentration camps across Italy. The first to be confined in them were the thousands of Jewish and political refugees who had fled Germany, Austria and Czechoslovakia. Within three years, most would be sent to their death in Nazi death camps.

After the successful American and British invasion of Sicily in July 1943, the Fascist Party leadership, with the approval of King Victor Emmanuel III, had Mussolini arrested. Six weeks later, the Italian government formally surrendered to the Allies. The Germans quickly sent an army to occupy northern and central Italy. Under the direction of the Nazi SS, Italian Jews were rounded up, some living only blocks from the Vatican. Many were sent directly to death camps. This was the darkest hour of Pope Pius XII’s reign, as he refused to speak out or order any clandestine resistance by local priests.

On June 4, 1944, the Allies liberated Rome. Pope Pius XII quickly established a liaison with American generals and greeted groups of Allied soldiers in the Vatican. But he still refused to publicly condemn the Nazis, even as they held out in Northern Italy and continued to send Italian Jews to death camps.

Moral Judgment

Kertzer saves his own moral judgment for the last chapter of his book. He states:

If Pius is to be judged for his action in protecting the institutional interests of the Roman Catholic Church at a time of war…his papacy was a success. However, as a moral leader Pius XII must be judged a failure.  At a time of great uncertainty, Pius XII clung firmly to his determination to do nothing to antagonize either (Hitler or Mussolini). In fulfilling this aim, the pope was remarkably successful.

"The Dawn of Everything" Stretches its Evidence, But Makes Bold Arguments about Human Social Life

Excavation site at Çatalhöyük, a proto-urban settlement which dates to approximately 7,000 BCE

Photo Murat Özsoy 1958, CC BY-SA 4.0

Review of David Graeber and David Wengrow, The Dawn of Everything: A New History of Humanity. New York: Farrar, Straus and Giroux, 2021.

The title of The Dawn of Everything announces the book’s grand ambition: to challenge the established narrative of civilization as progress in material comforts and power (for some) and decline (of the others) into greater deprivation and unfreedom. The authors contend that the developments and decisions that led to a world-wide system of mostly hierarchical and authoritarian states need not be considered inevitable, nor the result unavoidable. Through a remarkably wide-ranging synthesis of the last thirty years of work on the Neolithic Age and the transition to agriculture and urban life, Graeber and Wengrow seek to open our political imaginations to recognize other ways of caring for the common good, some of which, they contend, have been realized in the past, and survived for many hundreds of years. They succeed in decoupling urban life from farming, and then cite cases of cities that appear to have been organized along horizontal, egalitarian lines. In doing so, they accomplish part of their goal. However, like other writers of provocative works, in pressing their case to the utmost, Graeber and Wengrow at times strain the evidence, and in castigating the writers of speculative history, they often seem to forget that they are writing speculative history also.

Graeber and Wengrow follow the method of paying attention to groups that have largely been silent or invisible in history because they did not have writing, or did not construct large stone monuments—those who lived in darkness in the interregnum between empires. Providing a more detailed and accurate portrait of such people, not considering them simply as underdeveloped barbarians or savages, can lead to a fuller history of humanity, a history not solely based on a single line of cultural evolution, through which all societies must proceed in a fixed set of stages.

The authors, an anthropologist and an archaeologist, are most successful in contesting the narratives of two developments—the origins of farming and of cities—that have previously been considered to be closely and even necessarily related. In the established and popularly accepted narrative of cultural evolution dominant in the last two and a half centuries, stages of social life succeed each other fairly quickly and decisively. On this view, farming displaced hunting and foraging over perhaps a few generations; moreover, agriculture in its early stages must have included, as appears in the early written record, the use of ploughs and planting, leading to the founding of permanent settlements, the production of surplus food, a greater division of labor, the appearance of craftspeople, priests, and permanent political hierarchies. In addition, hunting and foraging would not have been carried over into the newer social form because they are incompatible with settled life and the requirements of agricultural labor.

By contrast, relying on archaeology of the Neolithic era that has been published in the last three decades, Graeber and Wengrow show that the appearance of farming and the appearance of cities have in some cases been separated from each other by centuries or millennia, that many complex, hybrid forms existed, and that sometimes a people chose to remain in such a hybrid state, or even to return to hunting and foraging after having engaged in agriculture for generations or centuries. It is probable that plant cultivation first developed in many places—for example, along the shores of rivers, lakes, and springs. Flood-retreat farming near rivers required no ploughing, little investment of effort and time, and could serve as supplement to other means of subsistence. It was not likely to lead to private property because different pieces of land would be exposed and be productive each year.

In the early Neolithic, farming probably developed in the valleys of the Jordan and Euphrates as a “niche activity”—one of several forms of specialization, supplements to economies based primarily on wild resources. In some locations, the first steps toward cultivation consisted of (mostly women) observing which plants bore fruit at which time of year, and returning to harvest them in season, perhaps eventually establishing gardens next to temporary dwellings. There is evidence for seasonal alternation of forms of social organization: a hierarchical, patriarchal structure under a single leader during the hunting season, and a more egalitarian, perhaps matriarchal, organization in the season for foraging and gardening, which were mostly performed by women. Graeber and Wengrow contend that this pattern of seasonal variations of social structure held at Çatalhöyük in modern Turkey, “the world’s oldest town” (212) for more than a thousand years.

Even though they might have persisted for centuries, many of these hybrid forms could be considered partial or provisional farming. Some groups, like many northern Californian tribes, which were probably acquainted with agriculture from other tribes, apparently deliberately chose not to pursue the practice, while others, like those in England at the time of Stonehenge around 3300 B.C.E., after a period when they engaged in farming, turned away from it.  Almost all these developments are necessarily conjectural, because such small societies made up of hunters, foragers, and gardeners or small-scale farmers did not produce systems of writing.  

The case of Çatalhöyük indicates how the authors’ arguments about the halting growth of farming and the emergence of non-hierarchical cities complement each other. Just as they cite evidence that many societies maintained themselves in hybrid states combining seasonal or small-scale cultivation with hunting, fishing, and foraging, so they contend that early cities produced neither a division of labor, nor classes based on unequal wealth, nor a bureaucracy to organize the distribution of surpluses, nor a centralized political or religious authority. They cite more than a half dozen sites from around the world that challenge the established narrative of a set of institutions originating at nearly the same time in urban civilizations.

They assert that the early cities of southern Mesopotamia, such as Uruk, provide no evidence of monarchy. The archaeological remains of Taljanky, the largest of the “mega-sites” in Ukraine dating to 4100-3300 B.C.E., with an estimated population of over 10,000, provide no signs of central administration, government buildings, or monumental architecture, no temples, palaces, or fortifications. However, the site presents evidence of small-scale gardening and the cultivation of orchards, some enclosed livestock, as well as hunting and foraging. According to the archaeological record, this town survived and prospered for more than five hundred years.

Teotihuacan in the Valley of Mexico provides perhaps the most striking example of urban life without kingship, central religious authority, bureaucracy, or wide inequalities. At its height, it is thought to have housed a population of about 100,000. In its early centuries (100-300 C.E.), Teotihuacan followed the pattern of other Mesoamerican cities ruled by warrior aristocracies, erecting monumental pyramids and other sacred structures, requiring the work of thousands of laborers and involving the ritual sacrifice of hundreds of warriors, infants, and captives who were buried in the pyramids’ foundations.

Yet the people of Teotihuacan appear to have reversed course around 300 when the Temple of Quetzalcoatl, the Feathered Serpent, was sacked and burned, and work on all pyramids came to a halt. Instead of pursuing the construction of palaces and temples, the city embarked on an ambitious program of building stone housing for the entire population. Each dwelling of about 10,000 square feet with plastered floors and painted walls would have housed 60 to 100 people, ten or twelve families, each with its own set of rooms.

The wall paintings of the new order contain scenes of everyday life, but no representations of warfare, captives, overlords, or kings. These colorful paintings appear to celebrate the activities of the entire community, not the greatness of a royal dynasty. Three-building complexes distributed throughout the city might have been used as assembly halls, suggesting that the unit of organization was the neighborhood, with local councils providing for the construction and maintenance of buildings, overseeing the distribution of necessary goods and services, and performing other public functions. This egalitarian, “republican,” de-centralized social organization—which has been called a “utopian experiment in urban life” (332)—survived for about 250 years before the bonds holding the city together seem to have dissolved, and the population dispersed, perhaps because of tensions between neighboring ethnic, linguistic, and occupational groups.

Graeber and Wengrow cite other, more ambiguous sites as evidence of egalitarian early cities. For example, Mohenjo-daro, founded in the Indus valley near the middle of the third millennium B.C.E., attained a peak population of perhaps 40,000. Its Lower Town, laid out in a grid of nearly straight lines, possessed an extensive system of terracotta sewage pipes, private and public toilets, and bathing facilities. Merchants and craftsmen in the Lower Town possessed metals and gems, signs of wealth absent in the Upper Citadel. On the other hand, the Citadel contained the Great Bath, a pool forty feet long by six feet deep that appears to have been the center of civic life. Excavations of the city have uncovered no evidence of monumental architecture, monarchs, warriors, or priests with political authority. However, the remains do give clear evidence of hierarchical organization, containing three of the four groups that later would be classified as castes in the Rig Veda (c. 1250 B.C.E.): ascetic priests in the Upper Citadel, merchants and laborers in the Lower Town (the absent fourth caste would have been composed of warriors). This hierarchy may not have distinguished groups on the basis of political authority, but it does classify on the basis of purity and cleanliness. Although we do not know how public affairs were administered, it seems a stretch to consider Mohenjo-daro an instance of an early egalitarian city.

This example points to one of the principal limitations of Graeber and Wengrow’s book. In trying to provide a counterweight to a narrative they believe has paid inordinate attention to centralized, authoritarian regimes, they lean toward interpretations that accept the possibility of large-scale, nonliterate, non-hierarchical societies. Like most polemical writers, however, they tend to exaggerate and to strain the evidence. For example, Graeber and Wengrow want to argue that many despotic regimes have been brought down when oppressed people reclaimed their freedom by just walking away from their oppressors. Their argument is in line with the literal sense of the Sumerian word for “freedom,” ama(r)gi—a “return to mother” (426)—or with the verbal phrase for governmental change in Osage—to “move to another country” (469). But the authors cite only two clear examples of such desertions, both from Mississippian civilizations: Cahokia, centered at present-day East St. Louis, where, from 1150 to the city’s collapse in 1350, much of the commoner population deserted a culture based on aggressive warfare, mass executions for the burials of nobles, and strict surveillance of commoners. A similar exodus occurred several centuries later among the Natchez in the Lower Mississippi. But as they seek to generalize this finding, Graeber and Wengrow erroneously maintain that it was similarly possible to reclaim freedom by simply walking away from large empires such as the Roman, Han, or Incan. It was a notorious and bitter complaint among Romans, for instance, that one could not escape the reach of the Emperor, whose power extended to the ends of the known world.

Graeber and Wengrow similarly overreach in order to produce what they call the “indigenous critique” of European civilization. In 1703, an impoverished Baron Lahontan, who had spent ten years as a soldier and traveller in New France, published his Dialogues with a Savage of Good Sense, which recount the author’s conversations with a Native American he calls Adario, who articulates a devastating critique of French civilization. His targets include monarchy, the chasm between rich and poor, the dishonesty and faithlessness of the French, their lack of charity, the absurdities of Christian beliefs, the celibacy of priests, and many other institutions and practices. Lahontan calls Adario “the Rat,” which was also the (non-pejorative) cognomen of the celebrated Wendat (Huron) orator, statesman, and strategist, Kandiaronk, on whom Adario is clearly based. It is true, as Graeber and Wengrow state, that throughout the eighteenth century, European readers assumed that “Adario” was simply a fictional mouthpiece used by Lahontan to avoid persecution or censorship. Europeans, the authors claim, refused to believe that a “savage” Native American could have formulated a thoughtful political and social analysis of European society.

By contrast, Graeber and Wengrow at the other extreme assert that Adario’s criticisms and arguments are entirely Kandiaronk’s, as though no European could advance a forceful critique of their own civilization (it is likely that Adario’s critique derives from both Kandiaronk and Lahontan, more from the former than the latter). The authors go much further to infer that the criticism in the Dialogues constitutes not just one brilliant native’s considered insights, but a systematic judgment of Europeans by Native American political thought. Thus, their “indigenous critique” plays the role of a fully formulated political philosophy to contrast with the emerging narrative of progress. In fact, the “indigenous critique” may also have been in part a European “autocritique” of Enlightenment (as Mark Hulliung names his study of Rousseau’s thought). Graeber and Wengrow assert repeatedly that the “indigenous critique” influenced and perhaps catalyzed Enlightenment social and political thought and revolutionary practice—for which they believe Kandiaronk deserves credit. At the same time, they deplore Enlightenment conjectural histories of early human societies as conservative responses intended to counteract the “indigenous critique.” In this way, they at once implicitly celebrate and explicitly disparage Enlightenment political and historical thought.

In fact, The Dawn of Everything stands in a much closer relation to the Enlightenment conjectural histories than its authors acknowledge. They recognize Rousseau’s importance, blaming him throughout for asserting, even while lamenting, the full-blown simultaneous appearance of agriculture and property, as well as the disappearance of innocent but stupid savages. They also refer to Adam Smith and Adam Ferguson, who proposed influential, clearly demarcated three- and four-stage theories of universal social development. But they do not mention alternate conjectural histories by Germans such as J. G. Herder and Georg Forster, who avoided a single, rigid scheme of social evolution, and argued in different ways that each people follows its own path of development at its own speed. Even among the Scots, James Dunbar questioned the category of savagery, contending that a society termed savage might be morally superior to a “civilized” empire. Through their almost exclusive focus on Rousseau, whose thought was unrepresentative in its utter condemnation of agriculture and property, Graeber and Wengrow may do what they accuse the Enlightenment thinkers of doing to indigenous people: they simplify a complex phenomenon in order to produce a derogatory representation of it.

The conjectural histories of the late eighteenth century were attempting to make sense of the fragmentary and often unreliable accounts, accumulated over the previous two centuries, of a world about which Europeans had previously known nothing or close to nothing. They speculated by necessity; they were thinking about periods for which there were no written records and few material remains. Yet they were speculating responsibly, attempting to work out an understanding of nonliterate societies that did not conform to Biblical accounts, mythical narratives, or dynastic histories, but was based on the best, if scanty, evidence they had before them. In that sense, they took a scientific approach.

Graeber and Wengrow also write responsible speculative history concerning societies about which much remains unknown, again in large part because of an absence of written records. Their speculations are based on many sites, material remains, and methods of analysis that were not available in the eighteenth century. Çatalhöyük was only excavated beginning in the late 1950s, and the Ukrainian mega-sites in the 1970s. It makes sense that Graeber and Wengrow have a different story to tell based on different, more plentiful evidence. In providing a provocative synthesis of the last thirty years of specialized archaeological research, however, their speculations are not more scientific than those of their Enlightenment predecessors. 

In fact, despite their straining of the evidence and occasional glib remarks (calling Rousseau, for example, a “not particularly successful eighteenth-century Swiss musician” [494]), they largely succeed in their primary aim of showing that the currently dominant form of bureaucratic, centralized, warlike state is neither inevitable nor inescapable. Most significant, perhaps, is their report that current research shows that cities with 100,000 people could be organized on egalitarian lines, without central and hierarchical administration by a monarchy, aristocracy, or priesthood. Instead, neighborhood councils based on widespread participation were able to organize peaceful communal life for hundreds of years at Teotihuacan, the Ukrainian mega-sites, the Hopewell Interaction Sphere in Ohio, and Knossos in Crete, where the prominent role played by women appears to have been of signal importance.

In addition, some societies were acquainted with or even practiced full-scale farming but turned away from the complete set of agricultural practices, enjoying greater freedom of thought and action for hundreds or even thousands of years—longer than most empires. Their existence indicates that human groups can evaluate the undesirable consequences of technological innovations and choose not to adopt all means of cultural or territorial expansion, economic growth, and resource exploitation. Indeed, the survival of our species and of others may depend on our developing sustainable forms of democratic self-government and adopting self-imposed restrictions on unchecked growth. Although such forms of social organization are widely dismissed as utopian, by showing their existence at many places and times in the past, this book demonstrates that they are indeed possible. Recognizing that such possibilities actually took shape in the past may encourage the realization of similar egalitarian societies in the future.

Martin Sherwin's "Gambling with Armageddon" Strips away the Myths of Nuclear Deterrence

A US helicopter flies above the Soviet submarine B-59 during the blockade of Cuba, October 28-29, 1962

Martin J. Sherwin, Gambling with Armageddon: Nuclear Roulette from Hiroshima to the Cuban Missile Crisis (Vintage paperback edition, 2022).

The development and the deployment of nuclear weapons are usually based on the assumption that they enhance national security.  But, in fact, as this powerful study of nuclear policy convincingly demonstrates, nuclear weapons move nations toward the brink of destruction.

The basis for this conclusion is the post-World War II nuclear arms race and, especially, the Cuban missile crisis of October 1962.  At the height of the crisis, top officials from the governments of the United States and the Soviet Union narrowly avoided annihilating a substantial portion of the human race by what former U.S. Secretary of State Dean Acheson, an important participant in the events, called “plain dumb luck.”

The author of this cautionary account, Martin Sherwin, who died shortly after its publication, was certainly well-qualified to tell this chilling story.  A professor of history at George Mason University, Sherwin was the author of the influential A World Destroyed: Hiroshima and Its Legacies and the co-author, with Kai Bird, of American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer, which, in 2006, won the Pulitzer Prize for biography.  Perhaps the key personal factor in generating these three scholarly works was Sherwin’s service as a U.S. Navy junior intelligence officer who was ordered to present top secret war plans to his commander during the Cuban missile crisis.

In Gambling with Armageddon, Sherwin shows deftly how nuclear weapons gradually became a key part of international relations.  Although Harry Truman favored some limitations on the integration of these weapons into U.S. national security strategy, his successor, Dwight Eisenhower, significantly expanded their role.  According to the Eisenhower administration’s NSC 162/2, the U.S. government would henceforth “consider nuclear weapons as available for use as other munitions.”  At Eisenhower’s direction, Sherwin notes, “nuclear weapons were no longer an element of American military power; they were its primary instrument.” 

Sherwin adds that, although the major purpose of the new U.S. “massive retaliation” strategy “was to frighten Soviet leaders and stymie their ambitions,” its “principal result . . . was to establish a blueprint for Nikita Khrushchev to create his own ‘nuclear brinkmanship’.”  John F. Kennedy’s early approach to U.S. national security policy―supplementing U.S. nuclear superiority with additional conventional military forces and sponsoring a CIA-directed invasion of Cuba―merely bolstered Khrushchev’s determination to contest U.S. power in world affairs.   Consequently, resumption of Soviet nuclear weapons testing and a Soviet-American crisis over Berlin followed.     

Indeed, dismayed by U.S. nuclear superiority and feeling disrespected by the U.S. government, Khrushchev decided to secretly deploy medium- and intermediate-range ballistic nuclear missiles in Cuba.  As Sherwin observes, the Soviet leader sought thereby “to protect Cuba, to even the balance of nuclear weapons and nuclear fear, and to reinforce his leverage to resolve the West Berlin problem.”  Assuming that the missiles would not be noticed until their deployment was completed, Khrushchev thought that the Kennedy administration, faced with a fait accompli, would have no choice but to accept them.  Khrushchev was certainly not expecting a nuclear war.

But that is what nearly occurred.   In the aftermath of the U.S. government’s discovery of the missile deployment in Cuba, the Joint Chiefs of Staff demanded the bombing and invasion of the island. They were supported by most members of ExComm, an ad hoc group of Kennedy’s top advisors during the crisis.  At the time, they did not realize that the Soviet government had already succeeded in delivering 164 nuclear warheads to Cuba and, therefore, that a substantial number of the ballistic missiles on the island were already operational.  Also, the 42,000 Soviet troops in Cuba were armed with tactical nuclear weapons and had been given authorization to use them to repel an invasion.  As Fidel Castro later remarked:  “It goes without saying that in the event of an invasion, we would have had nuclear war.”

Initially, among all of Kennedy’s advisors, only Adlai Stevenson, the U.S. ambassador to the United Nations, suggested employing a political means―rather than a military one―to secure the removal of the missiles.  Although Kennedy personally disliked Stevenson, he recognized the wisdom of his UN ambassador’s approach and gradually began to adopt his ideas.  “The question really is,” the president told his hawkish advisors, “what action we take which lessens the chance of a nuclear exchange, which obviously is the final failure.”  Therefore, Kennedy tempered his initial impulse to order rapid military action and, instead, adopted a plan for a naval blockade (“quarantine”) of Cuba, thereby halting the arrival of additional Soviet missiles and creating time for negotiations with Khrushchev for removal of the missiles already deployed.

U.S. military leaders, among other ostensible “wise men,” were appalled by what they considered the weakness of the blockade plan, though partially appeased by Kennedy’s assurances that, if it failed to secure the desired results within a seven-day period, a massive U.S. military attack on the island would follow.  Indeed, as Sherwin reveals, at the beginning of October, before the discovery of the missiles, the U.S. Joint Chiefs of Staff were already planning for an invasion of Cuba and looking for an excuse to justify it.

Even though Khrushchev, like Kennedy, regarded the blockade as a useful opportunity to negotiate key issues, they quickly lost control of the volatile situation.

For example, U.S. military officers took the U.S.-Soviet confrontation to new heights.  Acting on his own initiative, General Thomas Power, the head of the U.S. Strategic Air Command, advanced its nuclear forces to DEFCON 2, just one step short of nuclear war―the only occasion when that level of nuclear alert was ever instituted.  He also broadcast the U.S. alert level “in the clear,” ensuring that the Russians would intercept it.  They did, and promptly raised their nuclear alert level to the same status. 

In addition, few participants in the crisis seemed to know exactly what should be done if a Soviet ship did not respect the U.S. blockade of Cuba.  Should the U.S. Navy demand to board it?  Fire upon it?  Furthermore, at Castro’s orders, a Soviet surface-to-air battery in Cuba shot down an American U-2 surveillance flight, killing the pilot.  Khrushchev was apoplectic at the provocative action, while the Kennedy administration faced the quandary of how to respond to it.

A particularly dangerous incident occurred in the Sargasso Sea, near Cuba.  To bolster the Soviet defense of Cuba, four Soviet submarines, each armed with a torpedo housing a 15-kiloton nuclear warhead, had been dispatched to the island.  After a long, harrowing trip through unusually stormy seas, these vessels were badly battered when they arrived off Cuba.  Cut off from communication with Moscow, their crews had no idea whether the United States and the Soviet Union were already at war. 

All they did know was that a fleet of U.S. naval warships and warplanes was apparently attacking one of the stricken Soviet submarines, using the unorthodox (and unauthorized) tactic of forcing it to surface by flinging hand grenades into its vicinity.  One of the Soviet crew members recalled that “it felt like you were sitting in a metal barrel while somebody is constantly blasting with a sledgehammer.”  Given the depletion of the submarine’s batteries and the tropical waters, temperatures in the submarine ranged between 113 and 149 degrees Fahrenheit.  The air was foul, fresh water was in short supply, and crew members were reportedly “dropping like dominoes.”  Unhinged by the insufferable conditions below deck and convinced that his submarine was under attack, the vessel’s captain ordered his weapons officer to assemble the nuclear torpedo for action.  “We’re gonna blast them now!” he screamed.  “We will die, but we will sink them all―we will not become the shame of the fleet.”

At this point, though, Captain Vasily Arkhipov, a young Soviet brigade chief of staff who had been randomly assigned to the submarine, intervened.  Calming the distraught captain, he eventually convinced him that the apparent military attack, plus subsequent machine gun fire from U.S. Navy aircraft, probably constituted no more than a demand to surface.  And so they did.  Arkhipov’s action, Sherwin notes, saved not only the lives of the submarine crew, “but also the lives of thousands of U.S. sailors and millions of innocent civilians who would have been killed in the nuclear exchanges that certainly would have followed from the destruction” that the “nuclear torpedo would have wreaked upon those U.S. Navy vessels.”

Meanwhile, recognizing that the situation was fast slipping out of their hands, Kennedy and Khrushchev did some tense but serious bargaining.  Ultimately, they agreed that Khrushchev would remove the missiles, while Kennedy would issue a public pledge not to invade Cuba.  Moreover, Kennedy would remove U.S. nuclear missiles from Turkey―a reciprocal action that made sense to both men, although, for political reasons, Kennedy insisted on keeping the missile swap a secret.  Thus, the missile crisis ended with a diplomatic solution.

Ironically, continued secrecy about the Cuba-Turkey missile swap, combined with illusions of smooth Kennedy administration calibrations of power spun by ExComm participants and the mass communications media, led to a long-term, comforting, and triumphalist picture of the missile crisis.  Consequently, most Americans ended up with the impression that Kennedy stood firm in his demands, while Khrushchev “blinked.”  It was a hawkish “lesson”―and a false one.  As Sherwin points out, “the real lesson of the Cuban missile crisis . . . is that nuclear armaments create the perils they are deployed to prevent, but are of little use in resolving them.”

Although numerous books have been written about the Cuban missile crisis, Gambling with Armageddon ranks as the best of them.  Factually detailed, clearly and dramatically written, and grounded in massive research, it is a work of enormous power and erudition.  As such, it represents an outstanding achievement by one of the pre-eminent U.S. historians.

Like Sherwin’s other works, Gambling with Armageddon also grapples with one of the world’s major problems:  the prospect of nuclear annihilation.  At the least, it reveals that while nuclear weapons exist, the world remains in peril.  On a deeper level, it suggests the need to move beyond considerations of national security to international security, including the abolition of nuclear weapons and the peaceful resolution of conflict among nations.

Securing these goals might necessitate a long journey, but Sherwin’s writings remind us that, to safeguard human survival, there’s really no alternative to pressing forward with it.

Rediscovering the Lost Midwest (Excerpt)

Barn near Traverse City, Michigan. Photo David Ball, 2005

For too long the American Midwest has suffered from a mixture of scholarly neglect and ridicule. Consistent with the past derision of the region, the Midwest has been greatly neglected by historians, a condition that a small group of academics and other writers, mostly located in the Midwest, has been attempting to remedy in recent years by way of new study groups, journals, and other publications. I count this book in that number. 

Given the prevailing atmosphere of disdain and indifference, readers may be surprised at what a new look at midwestern history reveals. Once the cobwebs are cleared off old journals, long-forgotten records consulted, and the veil of stereotypes pierced, a remarkable world is discovered. In contrast to prevailing clichés and the modern platitudes about backwardness, sterility, racial injustice, and oppression, an in-depth look at the history of the American Midwest reveals a land of democratic vigor, cultural strength, racial and gender progress, and civic energy—a Good Country, a place lost to the mists of time by chronic neglect but one well worth recovering, for the sake of both the accuracy of our history and our own well-being. The Midwest of the long nineteenth century, to state it boldly, constituted the most advanced democratic society that the world had seen to date, but its achievements are rarely highlighted in history texts and indeed seldom mentioned. 

In this old and forgotten Midwest, where theories of democracy advanced so far in practice, there were also, dare I say, elements of idealism. These stemmed from the democratic nature of the Northwest Ordinance but also from the emergence of New Americans, people born and raised in the young republic who had escaped the bonds and constraints of Europe and colonial life with its indentures, slavery, and persisting aristocracy. They scoffed at those who tried to reestablish Old World privileges out on the open frontier of Ohio and Indiana. They recognized the absence of privation, the natural bounty of the region, and the access to fruitful land, a precious rarity in feudal Europe, so the degree to which they embraced boosting and promoting their new region and its expansion and fecundity makes sense. To invoke midwestern idealism is not to imply naiveté. Pragmatism and common sense reigned. But there was a communally agreed to ideal, a model for behavior, a goal to be striven for, a moral code, a way of inspiring the young, a motivation for civic duty, a virtuous patriotism, a recognition of civic obligations, and, perhaps most telling, a willingness to bleed and die for one’s home, especially as against sinful rebels who put the young republic at risk. Above all, there was little of modernity’s corrosive cynicism, the kind that yields indifference and decay. It is hard to pin it down and dissect and quantify it, but this idealism runs through the record of the Midwest and it is a key part of its history, despite its elusiveness to the written word. It is why, during the rocky decades of the later twentieth century, when hard times came, there was so much nostalgia for the old Midwest. This is not to say false feelings or empty sentiment, but a nostalgia grounded in a lived and real experience. 

As profound and successful as midwestern development was, it failed on some fronts. This book is titled The Good Country, not The Perfect Country, and so it examines the failures of the Midwest with regard to women, Native Americans, racial and ethnic minorities, and other matters while also recognizing the context, complexity, and ambiguity of this history along with evidence of substantive advances to remedy these failures. This exercise represents the great challenge of modern historiography, one met by or even acknowledged by too few historians. The history profession in the United States, many would concede, has become too one-sided, too critical, and too focused on American faults and not sufficiently attentive to what would have been considered great achievements in their proper historical setting. 

It is past time for a great correction in the field of American history. A rebalancing of what we think we know is needed to place people and events properly, to understand better what worked and what failed, and to provide hope from the past to those who seek democratic progress in the present. American history was not one long train of abuses and suffering, as it would sometimes seem based on the prevailing sentiments in and outputs from American history departments. Nor was it an uninterrupted ascendance toward perfection, as some critics of academia might presume. It was a mixture of advances and defeats, but more of the former than many recent historians admit. By looking with fresh eyes at the history of the American Midwest, the most historically neglected region in the United States, we can begin to see elements of American history that have nearly vanished from the main currents of historical work in recent decades and begin the great correction that may rebalance our view of the past and rectify the recent distortion of the American story.

Beyond the historical record and the interpretative agendas and disputes of scholars—and more important to all our daily lives—is the need to reflect on the political culture in which we are now immersed. For several decades at least, a common way to frame cultural and political conflict in the United States has been to pit the traditions and practices of Old America against the new and rising and youthful forces of change, to contrast the square against the hip. This has become our most foundational political dynamic and what we continue to fuss about in most of our political and cultural debates. If the assumptions surrounding Old America are wrong, however, it rescrambles a prevalent framing device for modern American politics. A different framing based on a more accurate history might reduce social tensions and democratic logjams by causing the upstart forces of change to draw on the successes of the past instead of dismissing them or, worse, denying their existence. After a long and tortured intellectual journey, the Midwestern intellectual Christopher Lasch came to these realizations and began to worry about the cost of forgetting our once-prevalent civic and communal traditions. He came to see that “his parents’ early-twentieth-century Midwestern world” was rooted and decent and democratic and worth remembering.

By tending to reminders from Lasch and others and by embracing a wide global perspective, one can see the democratic development of the Midwest properly and realize how far advanced the region was vis-à-vis the rest of the world. In the Midwest there was a zealous commitment to educating the masses so that reason and learning could underpin democratic governance. This included college education. Old World social hierarchies and privileges were broken down in the Midwest, fostering a democratic culture. Most people in other places were landless peasants, whereas in the Midwest most people were fee-simple land-owning yeoman farmers. Most people elsewhere had no guaranteed civil rights, unlike the citizens of the constitutional polities of the Midwest. People actively voted in the Midwest. Religious freedom prevailed. A pragmatic and entrepreneurial spirit undergirded the culture. This is why an Ohio orator could reasonably say during the early nineteenth century that Ohio, the first of the midwestern states to emerge during the early republic and a model for those that followed, was the “truest democracy which had yet existed.” When the midwestern regionalist writer William Gallagher said in 1850 that the region was a grand “Experiment in Humanity” where the “freest forms of social development” in the world could be found, it was not puffery. He was objectively and comparatively correct. The emergent midwestern civilization, Gallagher said, could one day enable its citizens to realize “their real dignity and importance in the social scale, by proclaiming to them that they are neither slaves nor nonentities, but true men and women,” which was saying a lot in the world of 1850. Gallagher’s focus on the exceptional democratic character of the Midwest underscores why it is past time for a new look at a region whose history has been lost to the American imagination. It is a history we need now to remind us of our ideals and how many battles we have already won.

Excerpted from The Good Country: A History of the American Midwest, 1800–1900, courtesy of the University of Oklahoma Press.

James M. Scott's "Black Snow" Traces the Line from Tokyo to Hiroshima

Tokyo, after the firebombing of March 9-10, 1945

A former Nieman Fellow at Harvard, James M. Scott is the author of four books about World War II, including Target Tokyo: Jimmy Doolittle and the Raid that Avenged Pearl Harbor, Rampage: MacArthur, Yamashita and the Battle of Manila, and The War Below. In his latest book, Black Snow: Curtis LeMay, the Firebombing of Tokyo and the Road to the Atomic Bomb, Scott details the most destructive air attack in history, the firebombing of Tokyo on the night of March 9-10, 1945, which claimed more than 100,000 lives. His narrative includes the troubled development of the B-29 bomber and the rise of General Curtis LeMay, who developed the low-altitude firebombing strategy. Here, in an interview with History News Network, Scott discusses how and why he wrote the book.

Q.  I am curious about how you got started writing history and why you are particularly interested in the U.S. vs. Japan struggle in World War II.  

My first job out of college was as a public-school English teacher in Japan where I taught middle school in a small town of about 20,000 residents on the main island of Honshu. I volunteered as well to teach a course at night at my town’s community center. In that class, some of my students, who were children during the war, talked to me about the B-29 campaign and having to escape Japan’s cities. It was the first I ever really learned about that part of the war, and I was fascinated.

During my time there, I likewise made a trip to Hiroshima. A couple weeks later, I flew to Hawaii to meet my parents, who were there on vacation. My father had served in the Navy, so the first place we visited was Pearl Harbor. Therefore, in the span of about two weeks, I experienced what for America represented the beginning and the end of World War II. Needless to say, I was hooked.

Q. Some of the most fascinating parts of Black Snow are the first-hand accounts of the Japanese survivors in Tokyo. These voices are often lacking in other WW II histories.  How did you access those records and how did that influence your writing of this history?

I don’t speak much Japanese unfortunately, despite having studied it while living there. I have been very fortunate over the years to develop some great friendships and contacts in Japan, who have always been so gracious to help me in my research, from pulling records to arranging interviews and translators.

The Japanese side of the story is one that I feel has long been overlooked in the examination of the firebombing of Tokyo, despite the fact that there are voluminous materials available to researchers in Japan. There is an entire museum in Tokyo dedicated to the March raid, where survivors often give lectures to school children and visitors. There are great historical accounts and numerous secondary sources as well. All of them, however, are in Japanese, which has limited American researchers from using many of those important sources.

For me, I was particularly interested in what that firestorm was like and how people survived it. I wanted to capture that visceral experience for readers, to take them inside that inferno and show them what it looked like, smelled like, and even sounded like. In short, I wanted to recreate it for readers.

Q. I was surprised to learn that General Curtis LeMay made the decision to begin the firebombing raids on his own initiative at his Pacific base, without explicit approval from Washington D.C. Was this kind of “lone wolf” action a lesson for General Marshall and other war commanders? Did the firebombing raid decision affect the atomic bomb decision process?

LeMay benefited from the unorthodox structure set up to govern the B-29 campaign. Army Air Forces commander Gen. Hap Arnold, from the outset, was adamant that his bombers function as independent operators and not be pulled into Douglas MacArthur’s or Chester Nimitz’s orbit. To do this, he convinced the Joint Chiefs of Staff to allow him to create the 20th Air Force under his direct control. He would then report directly to the Joint Chiefs. Arnold, of course, worked in Washington, so LeMay was his operator in the Marianas.

Arnold, however, suffered a major heart attack in January 1945, and ended up in Florida convalescing. In his absence, LeMay was left with Arnold’s chief of staff, Lauris Norstad. LeMay not only outranked Norstad by a star, but he also didn’t really trust him. This led LeMay, who by nature was a pretty solitary individual, to hold his cards close. He didn’t tell Norstad of his plans to firebomb Tokyo until the day of the mission, when Norstad landed in the Marianas for a visit. 

As for the atomic attacks, LeMay’s operation really served as an important trial balloon to see how the American public would respond to the mass killing of enemy civilians, particularly since this attack occurred soon after the firebombing of Dresden, which remains controversial even today. To the surprise of many in Washington, however, the American public voiced no real objection. “Properly kindled,” Time magazine wrote in 1945, “Japanese cities will burn like autumn leaves.”

Q. Most Americans are aware of how the U.S. dropped the first atomic bomb on Hiroshima, but many may not be aware that the firebombing raids on Japanese cities killed more civilians. What lessons about military decision making, if any, would you like readers to take away from Black Snow?

You are exactly right. So many folks know of the atomic attacks, but are stunned to learn of the incineration of Tokyo, Osaka, Nagoya, and dozens of other Japanese cities in the waning months of the war. The atomic attacks, however, did not happen in a vacuum. Those strikes were the last stop on America’s march toward total war.

Q.  As you note in the Epilogue, General LeMay’s reputation greatly suffered after he agreed in 1968 to be the vice presidential candidate in Governor George Wallace’s campaign for president. In Black Snow you paint a very favorable picture of LeMay as a strategic thinker and a military commander who cared about his air crews. Did your research change your view of him?

Absolutely. Like so many others, I began my project far more familiar with LeMay based on the controversial end of his career than on his wartime service. As I dug into his life, my view of him really changed. He was an incredibly hard worker, who gave tremendously of himself to the war. He rarely saw his family, and he was lucky to sleep more than four hours a night.

He had studied engineering in college, and was a natural problem solver, which is what you need in war. What was also fascinating was to read his efficiency reports in his personnel file, where amazing aviators, like Jimmy Doolittle, wrote that he was one of the best combat commanders produced by the war. You can’t get any higher praise than Jimmy Doolittle. 

Q.  What do most Japanese today think about the firebombing?  Is it taught in history books?  Will your book be published in Japan? The U.S. and Japan are, of course, close allies today, but do some Japanese feel the firebombing was a mistake or unnecessary?  

In Japan, the firebombing, much like here in the United States, has been overshadowed by the atomic attacks. That is evident in the beautiful national museum in Hiroshima compared to the small, privately funded one in Tokyo.

That said, there remains interest in Tokyo and there have been many books published on it by survivors and historians. Many of the survivors I talked to place a lot of blame on Japan’s leaders at that time, who unnecessarily prolonged the war even after it was obvious Japan was defeated. That delay in surrender cost hundreds of thousands of civilian lives.

I would love to see Black Snow published in Japan. I think the book fairly captures the story from all sides and could be a great resource for both Japanese readers and historians.  

Lindsey Fitzharris on the Pioneering Facial Reconstruction Surgeon Who Remade the Faces of Great War Veterans

Today in northeastern France, cemeteries and monuments scattered across the rolling green landscape honor the Allied dead of World War I, but the suffering of the wounded, particularly those with cruelly disfigured faces, has often been ignored or discreetly hidden.

World War I produced an astonishing 40 million casualties, including some 10 million military dead and 20 million wounded. The brutality of the war, the first large-scale conflict with machine guns, accurate rapid-firing artillery, and poison gas, caught political leaders, generals, and medical authorities by surprise. Artillery caused two-thirds of all military injuries and deaths. Soldiers’ bodies, when not entirely obliterated by high explosive shells, were dismembered, losing arms, legs, noses, ears, and even entire faces. While losing a leg or an arm made a man a hero, a devastating head wound that left him without a nose or jaw made him a pariah, a grotesque object of pity, unfit for society.

By the armistice in November 1918, 280,000 men from France, Germany and Britain alone had sustained some form of facial trauma. At the beginning of the war, no one knew how to treat these horrifying injuries. As one battlefield nurse wrote home, “the science of healing stood baffled before the science of destroying.”

In The Facemaker, Lindsey Fitzharris, a medical historian, reveals the suffering endured by these soldiers and details the accomplishments of Dr. Harold Gillies, a London surgeon who pioneered dozens of plastic surgery procedures. Gillies’ innovative techniques gave thousands of greatly disfigured soldiers a new life, with reconstructed faces that allowed them to return home to their families. 

Dr. Harold Gillies

Gillies, born in New Zealand, had graduated from medical school in Cambridge and become a successful ear, nose and throat (ENT) surgeon in London when the war came.

Summoned by the Royal Army Medical Corps, he was charged with treating men with gaping holes in their faces, some missing jaws, who had to be fed through a straw. He soon put together a team of physicians and nurses at Queen's Hospital outside London, where up to a thousand patients could be undergoing treatment.

The word plastic in plastic surgery means "reshaping" and comes from the Greek plastikē. A very primitive form of reconstructive surgery, nose reshaping, was practiced as early as 800 B.C. in India, but relatively few advancements in this specialized field were made for the next 2000 years.

However, by 1914 great progress had been made in other areas of medicine. Antiseptic techniques, such as sterilizing wound sites and frequent washing of hands, were standard in British medical facilities (though antibiotics were still in the future). As the war developed, Allied armies assembled fleets of motorized ambulances and portable triage centers where the wounded could be scanned by X-ray machines. Blood transfusion, although not perfected, was sometimes administered to those with severe blood loss.

Gillies pioneered the use of multiple skin grafts on the face, with the tissue excised from other parts of the body. He invented the tubed pedicle skin graft, also known as the walking-stalk skin flap, in which the skin for the target site is folded into a flap and attached at both ends to preserve the blood supply. Gillies’ team also pioneered the epithelial outlay technique, which provided new eyelids for those whose own had been burned off. The physicians at Queen’s Hospital also introduced new techniques in anesthesia and blood transfusion.

Gillies, concerned about the psychological health of his patients, encouraged the visits of family members and provided training programs for those soldiers who left the hospital with disabilities.

A long recitation of medical procedures can get boring, and Fitzharris skillfully weaves in stories of the young men who were wounded on the battlefield and were fortunate enough to make it to the London hospital.

Private Percy Clare 

Private Percy Clare was advancing at the Battle of Cambrai in 1917 when a bullet passed through his cheek, ripping a large hole in the left side of his face. Blood gushed down his tunic and he collapsed on the ground as the battle raged around him. A medical orderly came by and, eyeing the large head wound and pools of blood, muttered “that sort always dies soon” and moved on. After hours of passing in and out of consciousness, Private Clare was recognized by a friend, who promised to get help. More hours passed as the friend tried to locate a team of stretcher-bearers who would venture out to help Clare. Finally, he was rescued and eventually returned to England, where he underwent a series of operations at Queen’s Hospital, which Dr. Gillies ran. After several hospital stays and a painful recuperation, he was discharged from the army in 1918 and returned to his wife and son. He died in 1950, age sixty-nine.

By the end of the war, Gillies and his colleagues had performed more than 11,000 operations on some 5,000 patients. In 1920 he published a book, Plastic Surgery of the Face, which became an essential textbook for surgeons in many nations.

In World War II, Gillies was again called upon by the British government and organized plastic surgery units at a dozen hospitals. He continued to pioneer plastic surgery techniques. In 1946, he performed one of the first sex reassignment surgeries from female to male, on Michael Dillon (born Laura Dillon). In 1951, he headed a team that completed a male-to-female reassignment surgery.

For American readers, one disappointment will be the absence of any discussion of the American Expeditionary Force and its medical corps. Fitzharris, who holds a doctorate in the history of science and medicine from the University of Oxford, naturally focuses her research on archives in the U.K.

World War I was a world-shattering disaster that ended in an uncertain peace. As Robert Kirby, a historian at Keele University, observed, “Nobody won the last war but the medical services. The increase in knowledge was the sole determinable gain for mankind in a devastating catastrophe.”

Isaac Sears and the Roots of America in New York

An 1884 illustration depicts the Battle of Golden Hill, a skirmish between New York's Sons of Liberty and British soldiers garrisoned in Manhattan. The incident preceded the Boston Massacre by six weeks. 

I no longer even hated Rivington Street but the idea of Rivington Street, all Rivington Streets of all nationalities allowed to pile up in cities like gigantic dung heaps smelling up the world, ambitions growing out of filth and crawling away like worms.

—Al Manheim, in Budd Schulberg, What Makes Sammy Run?

Rivington Street on the Lower East Side has been immortalized as a feverish melting pot in novels, ballads, poems, lyrics, films, and even album covers by everyone from the Beastie Boys to Lady Gaga. But in the late 1700s, when it was originally mapped, the street was an anodyne, pastoral expanse sandwiched between Delancey Street and North Street. Later renamed Houston Street, North Street originally defined New York’s urban outer boundary at the time when, since livestock still outnumbered people, the landscape was indeed punctuated by dung heaps, though they were not uncommonly gigantic by eighteenth-century standards.

Rivington and Delancey Streets run parallel (unlike their namesakes). That the street names have endured is further proof that New Yorkers don’t know—or care—much about their history. Given the Anglophobia of the nineteenth century, at least one of them would have been renamed for, among others, Isaac Sears. (There is a two-block-long Sears Street in the borough of Manhattan, on Randalls Island, but it’s named for a firefighter trainee who died in 2008.) Neither Rivington Street’s undistinguished geography, nor its demography, reflects the gratitude that James De Lancey Jr. intended to convey by naming the thoroughfare for James Rivington. The beleaguered fellow Loyalist and blisteringly pro-British publisher had helped De Lancey dispose of his property once it became apparent that the insufferable British subjugation of New York, which had begun in 1776 and would continue for seven years, could not be sustained indefinitely. All other considerations aside, it’s no surprise that the screenwriter and novelist Budd Schulberg was professionally predisposed to harbor ill will against a street named for a publisher.

Born in London, Rivington emigrated to America in 1760. Thirteen years later, he started publishing the New York Gazetteer. The newspaper began as a relatively objective journal, although Rivington’s personal loyalties were unconcealed. A favorite target of his vilification was the merchant patriot provocateur Isaac Sears, whom he maligned as “a tool of the lowest order; a political cracker, sent abroad to alarm and terrify.” Sears gave as good as he got, denouncing Rivington as “a servile tool, ready to do the dirty work of any knave who purchases.”

James De Lancey Jr. was a native New Yorker, his Huguenot grandfather, Stephen, having fled France and arrived in America in 1686. James Jr. inherited the family’s mercantile business and, like many fellow merchants, opposed Parliament’s heavy- handed taxation. James Jr. opposed the Stamp Act and other barriers to the colony’s commerce, a self-serving mindset that temporarily endeared him to the radicals. But he sacrificed his credibility with patriotic Americans by belatedly agreeing to subsidize the care and feeding of British troops under the Quartering Act. Presciently, he packed his belongings and left New York in April 1775 for England, never to return. Historians still debate whether the ensuing conflict was a revolution, a war for independence, or a civil war, and what proportion of Americans—some say more than half—were neither zealous Loyalists nor passionate patriots. The radicals revolting against an unrepresentative government some 3,500 miles away were the most identifiable by their words and deeds. They were dominated in the city by the triumvirate of Isaac Sears, John Lamb, and Alexander McDougall, men who, Pauline Maier wrote in The Old Revolutionaries (1980), were ambitious when “the obscure might rise to positions of power and prominence” through politics and “played the role of brokers, mediating between the various social and economic groups that made up the community.” Lamb, a writer, was the son of a convicted robber exiled from England. McDougall, a Scottish-born merchant, had been a privateer.

Sears, a fifth-generation New Englander, was born in Massachusetts in 1730. When Isaac was four, his family moved to Norwalk, Connecticut. At sixteen he was apprenticed to a captain; at twenty-two, he was already commanding a sloop that shuttled cargo between New York and Canada. He captained a trading vessel until, in the Seven Years’ War, he was commissioned as a privateer. As the Magazine of American History recounts, his exploits “gave him a great moral ascendancy over his fellow-citizens, and he seems to have fairly won over the title of ‘King’. ” By the early 1760s Sears had profited so handsomely as a privateer that he removed to New York, where he invested in trade with the West Indies. He married Sarah Drake, whose father owned the Water Street Tavern, at Trinity Church. Like so many other reluctant revolutionaries in New York, he seemed the antithesis of the rabble in arms that the British identified with the mobocracy.

Boston and Philadelphia would always maintain a friendly rivalry for the status of America’s cradle of liberty; arguably, New York’s role as the amalgamator of competitive colonies into symbiotic states and the site where the nation’s government was invented has often been overlooked. New York was the only one of the thirteen colonies that the British had seized by force rather than settled in the seventeenth century. The nineteenth-century historian Henry B. Dawson dated the first revolt against the crown to as early as 1681, when New York merchants refused to pay custom duties. On October 18, 1764, the Provincial Assembly of New York was first among the colonies—before Massachusetts in 1770, and Virginia in 1773—to appoint a Committee of Correspondence, to collaborate with its legislative counterparts on the East Coast “on the Subject of the impending Dangers which threaten the Colonies of being taxed by Laws to be passed by Great Britain.”

Britain, under a newly crowned king and an equally stubborn prime minister, hamhandedly forced the colonies to foot the lion’s share of their own defense during and after the Seven Years’ War—without giving the English expatriates and their progeny any say in the matter. Worse still, a 1763 proclamation barring American settlement west of the Appalachians, while its bestowal of title on Native Americans made it a legal benchmark of sorts, infuriated land-grabbing colonists, including George Washington, who would own some thirty-two thousand acres within the circumscribed territory. After the war, Parliament asserted its dominion by vigorously enforcing the Navigation Acts, all but granting Britain a monopoly on trade with America. The following spring, Parliament passed the odious Stamp Act. Effective November 1, 1765, the act required that officially stamped paper be purchased for all legal documents and that tax stamps be affixed to everything from pamphlets to playing cards.

On October 7, 1765, barely three weeks before the Stamp Act was to take effect, nine of the thirteen colonies, prodded by Massachusetts and Virginia, dispatched representatives to a Stamp Act Congress, which convened at New York’s city hall. Even as the Congress was still meeting in New York, the first tax stamps were delivered from England on October 23. Sears, McDougall, and Lamb threatened a licking to anyone who used them. On October 25, delegates to the congress signed a fourteen-point Declaration of Rights and Grievances, which affirmed the supremacy of Parliament but argued that the rights of Englishmen precluded the august body from levying taxes because they could only be imposed by representatives of the people. John William Leonard, writing in his History of the City of New York, 1609-1909 (1910), proclaimed the congress “the beginning of the American union.”

On October 31, one day before the Stamp Act was to take effect, the city’s merchants struck an even greater strategic blow against the crown. Two hundred voted unanimously to boycott British goods altogether until the act was repealed. “New York thus led in the great and effective movement which proved to be America’s greatest commercial attack upon Great Britain,” Leonard wrote. Philadelphia merchants followed suit on November 7; Boston’s on December 3. When the underground Sons of Liberty emerged publicly to export its strategy of defiance to other colonies, the first name on its membership roster was Isaac Sears.

It’s debatable whether Sears and many of his compatriots would have been much more amenable to subsidizing British troops had the Stamp Act and similar levies been imposed by legislators duly elected in the colonies instead of by distant, unrepresentative members of Parliament. He and other self-proclaimed patriots insisted, though, that “taxation without representation” was more than a bumper sticker. It was a matter of principle—although, in truth, because of gender, race, and property qualifications, fewer than one third of the colonists were eligible to vote for their own representatives, which meant that some two thirds of the population would have been taxed without direct representation anyway.

On June 4, 1766, the Sons of Liberty convened on the Commons, ostensibly to mark the king’s birthday and celebrate Parliament’s repeal of the Stamp Act by brazenly erecting a flagstaff called a Liberty Pole directly facing the British barracks—a defiant invitation for the Red Coats to topple it, which they did, three times, only to have the colonists immediately replace it. (After the Common Council refused to give the Sons of Liberty permission for another provocation, Sears bought a plot of land nearby and erected a twenty-two-foot totem on his own property.) On January 18, 1770, an altercation between Sears and several British soldiers posting broadsides belittling the Sons of Liberty as “great heroes who thought their freedom depended on a piece of wood” escalated into what became known (but not famous) as the Battle of Golden Hill—for the “golden grain” grown there in Dutch times—in Lower Manhattan. The British, whose broadsides presumably constituted an exercise of their free speech rights, derided the Sons of Liberty as drunken rabble while the soldiers stoically defended the populace. Accounts of casualties during the ensuing clashes varied widely (including possibly one death and several serious injuries). The better-known Boston Massacre occurred six weeks later.

New Yorkers almost beat Boston to a tea party, too. The first shipload of taxed tea was due in New York on November 25, 1773, and the Sons of Liberty were fully prepped to dump the tea chests overboard as soon as they arrived. But the tea-laden vessel was delayed, blown off course in a storm. By the time the ship was finally sighted off Sandy Hook the following April, Boston had stolen New York’s thunder.

Undeterred, Isaac Sears prevented the tea from being marketed in Manhattan. A ditty at the time by the patriot poet Philip Freneau immortalized his exploits in rhyme:

At this time there arose, a certain “King Sears,”
Who made it his duty, to banish our fears,
He was, without doubt, a person of merit,
Great knowledge, some wit, and abundance of spirit,
Could talk like a lawyer, and that without fee,
And threaten’d perdition, to all that drank Tea.

A year later, in April 1775, Sears was publicly advocating revolution, a defiant act of sedition that inevitably resulted in his arrest. Freed from prison by fellow patriots who paraded him triumphantly through the city’s streets, Sears and his allies commandeered city hall, where they seized five hundred muskets that had recently arrived from England for shipment to British troops in Boston. Less than a week later, Sears and a small army of 350 men raided the custom house, where duties were collected on imports, seized control, and proclaimed that the Port of New York was closed.

The following November, for the second time, Sears violently suppressed free speech—a right that had been won by the printer Peter Zenger when he was acquitted of libeling the royal governor in 1735. (He was tried at New York’s city hall, where in 1789 the First Amendment, which would be enshrined in the Bill of Rights, was approved by Congress and sent to the states for ratification.) After Sears learned that the British governor of Virginia had seized a printing press operated by the nephew of John Holt, the patriot New York publisher, he mobilized a vigilante posse that raided the offices of Rivington’s Gazetteer. The mob confiscated the newspaper’s lead type, recasting it as bullets. (So much for the pen being mightier than the sword.) “Though I am fully sensible how dangerous and pernicious Rivington’s press has been,” Alexander Hamilton complained to John Jay, “I cannot help disapproving and condemning this step.”

Sears repaired to New Haven, then forayed episodically into New York, where he forced Loyalists, including the Reverend Samuel Seabury (the future first American Episcopal bishop), to swear allegiance to the “United States of America.” If, by the spring of 1776, the mission could be judged a success, a Connecticut delegate to the Continental Congress wrote to Samuel Adams, it was “much owing to that Crazy Capt. Sears.” Sears appropriated a British cannon from the Battery and sabotaged efforts to resupply British warships. Infuriated, Vice Admiral Samuel Graves ordered the sixty-four-gun HMS Asia to “fire upon the House of that Traitor, Sears.”

That July 9, after Washington read the newly printed Declaration of Independence to his troops in New York, Sears mustered a mob of patriots to march the mile and a half downtown to Bowling Green, where they lassoed the two-ton statue of King George III, toppled it, cleaved it into portable segments, and carted off most of the gilded lead remnants to Litchfield, Connecticut, to be melted and delivered back to the British in the form of 42,088 musket balls. Toppling an effigy was one thing, but another of Sears’s extremist provocations that same month proved too much: Washington himself thwarted the arrest of William Tryon, New York’s royal governor. (Ironically, Tryon would conspire in a bollixed scheme to kidnap Washington the following spring.)

 

Excerpted from The New Yorkers: 31 Remarkable People, 400 Years, and the Untold Biography of the World's Greatest City. Used with the permission of the publisher, Bloomsbury. Copyright © 2022 by Sam Roberts.

Dangerous Rhythms: Jazz and the Criminal Underworld

“Every civilization is known by its culture and jazz is America’s greatest contribution to the world – it is our ‘classical’ music.”  – Tony Bennett

Beethoven, Bach and Mozart, along with other great classical music composers, had patrons, usually wealthy princes who took great pride in displaying “their” musician and his works.

In the first years of the 20th century, jazz, music deeply rooted in the Black experience, grew in popularity among both Black and white audiences, despite the prevailing racism and segregation laws of the South. Black jazz musicians, who could be arrested by white police for almost any reason, found “patrons” of a different sort in New Orleans (and later in Chicago and Kansas City): local mobsters.

Before the 1920s, jazz was scorned as primitive “colored” music. Mobsters, often Italian, but sometimes Irish or Jewish, saw African Americans as outsiders like themselves and they controlled most of the clubs where fans came to hear the exciting new music – and also to partake in forbidden activities such as gambling, prostitution and illegal drugs.

In his latest book Dangerous Rhythms, author T. J. English details the “symbiotic” relationship between leading Black jazz musicians and local crime bosses including Al Capone, Lucky Luciano and Mickey Cohen.

As English points out, “It is a quirk of history that around the same time that jazz was first taking shape, organized crime was also in its incubation stage...Many white people were as enthused by this new music as African Americans. The idea that jazz could cross over and become a viable source of commerce became a gleam in the eye of gangsters from sea to shining sea.”

Plantation Mentality

English notes that “From the beginning, the relationship was based on a kind of plantation mentality. The musician was an employee for hire, not unlike the waitresses, busboys and doormen.” Through corruption, in the form of payoffs to cops and politicians, the club owners guaranteed the safety of their employees. If there was a police raid, the musicians were quickly sprung from jail.

Louis “Satchmo” Armstrong, the most famous early jazz artist, recalled in his memoir, Satchmo: My Life in New Orleans (1954), that his first paying gig as a cornet player at age 16 in 1917 was in a saloon (really a front for gambling and prostitution) owned by a member of a local Sicilian mob family.

In New Orleans, the birthplace of jazz, the music flourished in the clubs lining the streets of the Storyville neighborhood. Jazz, along with gambling and prostitution, thrived there until the U.S. Navy (worried about its sailors) stepped in in 1917 and closed down the neighborhood, forcing the dispersal of the city’s jazz musicians to points north.

Many jazz musicians settled in Kansas City, another “open city” where corruption allowed widespread gambling, prostitution, bootlegging and all-night jazz clubs. Louis Armstrong wound up in Chicago, where in 1924 he took a long-term gig at Joe Glaser’s Sunset Club, one of the dozen clubs in which Al Capone had an ownership stake. Capone valued the clubs for the cash they generated, but he also enjoyed jazz music and was a frequent visitor at the Sunset Club.

Louis Armstrong knew who Capone was and recalled later that he stood out in the crowd, “a nice little cute fat boy – young – like some professor who had just come out of college to teach.”

Most popular histories of jazz music skip over the importance of the mob’s patronage, although the connection was never a secret – memoirs from Mezz Mezzrow (published 1946) and Armstrong’s 1954 book describe it. 

English is one of the first authors to research this topic in depth and chronicle the arc of the relationship throughout the 20th century. A journalist and screenwriter specializing in crime, he previously wrote Havana Nocturne, Paddy Whacked and Where the Bodies Were Buried; his screenwriting credits include scripts for NYPD Blue and Homicide.

Are you hip?

English draws upon his knowledge of organized crime to detail the mobsters’ tribal loyalties and violent rivalries. He effectively conveys the dangerous, exciting world of the “black and tan” (i.e., integrated) nightclubs. We learn that the word jazz was originally spelled jass and entered the popular lexicon via an editorial in the June 17, 1917, New Orleans Times-Picayune, which denounced the local music as “indecent” and an “atrocity in polite society” that should be suppressed. The expression “hip” entered the underworld jargon in the early days of Prohibition, when one musician might ask another “Are you hip?” to see if they were carrying a hip flask of illegal booze.

Dangerous Rhythms contains many fascinating anecdotes about the early jazz musicians including Sidney Bechet, Fats Waller, Duke Ellington and Billie Holiday. The book also details the operations of the major clubs, including the famous Birdland and the Copacabana in New York City, both controlled by mob-connected figures.

Birdland, located just off Times Square, opened in 1949 and was billed as the “Jazz Corner of the World.” Many seminal live recordings were made in the club, including performances by Charlie Parker, Art Blakey, George Shearing, Stan Getz, Lester Young, Sarah Vaughan and Count Basie.

The symbiotic relationship between jazz performers and gangsters extended well into the 1950s and ‘60s and reached as far as the West Coast where mob-connected institutions such as Ciro’s on the Sunset Strip in Los Angeles and the Sands Hotel in Las Vegas featured famous jazz performers.

According to English, the mob’s control over major nightclubs was finally ended in the 1980s, when “jazz declined as a significant percentage of the country’s entertainment dollar and the mob found other fish to fry.”

In addition, the mob suffered a devastating setback when federal prosecutors went after the leaders of the Five Families in New York. Using the new Racketeer Influenced and Corrupt Organizations (RICO) Act, prosecutors in the “Commission Trial” of the mid-1980s were able to put away key leaders for long prison sentences.

Today, of course, jazz remains very popular, is shaped by a new generation of musicians and is played in prestigious venues from Lincoln Center to the Hollywood Bowl. English concludes that “as it turned out jazz did not need the mob to survive.”

Songs for Sale: Tin Pan Alley (Excerpt)

"Tin Pan Alley" Publishing Houses, W. 28th St., Manhattan, 1910

Ragtime’s rise to national, then international, prominence took a full decade. The first decade of the American century would see the creation and rapid rise of the music industry, which, once in place, both made ragtime a phenomenon and broke the spirits of its innovators. Recorded music needed to be marketed and sold, and songs needed to be written in order to be recorded. Lower Manhattan didn’t lack for a “can do” spirit.

In the 1900s dozens of would-be writers saw Charles K. Harris rolling around in “After the Ball” dollars and, hoping lightning would strike Lower Manhattan again, bought themselves a little office space on West 28th Street, between Broadway and Sixth Avenue. It became a warren of songwriters’ offices that was soon nicknamed “Tin Pan Alley” on account of the noise issuing from multiple bashed pianos, not to mention the wastepaper bins – filling up with abandoned songs – that were being kicked in frustration.[i]

Art and commerce were interchangeable on Tin Pan Alley. “Meet Me in St Louis” was written as an advert for the Louisiana Purchase Exposition, otherwise known as the St Louis World’s Fair, in 1904; it would become a hit all over again in 1944 as the theme to one of Judy Garland’s best-loved films. The very soul of cockney London, Florrie Forde’s “Down at the Old Bull and Bush” (1903) was actually American in origin: “Here’s the little German band, just let me hold your hand” was a lyrical clue. It had been written by Harry Von Tilzer, whose real name was Harry Gumm; his mother’s maiden name was Tilzer, and he’d added “Von” for a bit of Tin Pan Alley class. The song was an ad for Budweiser, brewed by Anheuser-Busch – you can imagine the original jingle. Von Tilzer also gave us the boisterous cheeriness of “Wait ’Til the Sun Shines, Nellie,” first recorded in 1905 by minstrel singer Byron G. Harlan, and fifty years later by Buddy Holly.

The era’s biggest American hits, emanating from the Alley – as yet untouched by Missouri’s ragtime – were largely lachrymose stuff. The sentimental “In the Shade of the Old Apple Tree,” written by one Egbert Van Alstyne, was recorded straight by the Peerless Quartet and Henry Burr in 1904, but was so sappy that it was almost immediately parodied, nearly as sappily, by Billy Murray (“I climbed up the old apple tree, ’cos a pie was the real thing to me”). More blubby yet was 1906’s “My Gal Sal,” a last flourish from Paul Dresser, the writer of “On the Banks of the Wabash,” who cried every time he sang one of his own songs.

The portly Dresser, in his cups, was legendarily generous – to his author brother Theodore Dreiser, to the homeless of New York – and gave all of his songwriting royalties away. He died in 1907, aged forty-eight, and so never lived to see himself played on screen by Victor Mature – in the 1942 biopic My Gal Sal – nor to delight in the fact that his on-screen persona would cavort with Rita Hayworth (the film remains one of Hollywood’s most complete rewrites of history – half of the songs in the film were written by Leo Robin rather than Dresser).

The biggest home-grown name, the most celebrated American composer of the decade, wasn’t really American at all. Victor Herbert had been born in Dublin in 1859 and moved to the States in the early 1890s; by 1898 he had his first operetta on Broadway, The Fortune Teller, featuring “Gypsy Jan,” “Romany Life” and “Slumber On, My Little Gypsy Sweetheart” – telltale titles that gave away his Viennese inspiration. New York remained largely immune to the charms of American music. What it needed was some pride, some self-mythologizing, and the person to do that was a smug-looking man in a straw boater called George M. Cohan.

Cohan became the first undisputed king of Broadway with a batch of songs he wrote in his mid-twenties, between 1904 and 1906, and a two-pronged attack that stood his work apart from Herbert’s light operas and home-grown ballads like the Haydn Quartet’s 1905 recording “In the Shade of the Old Apple Tree.” First, he was heavily patriotic – he was all about the New World. Secondly, he mythologized Broadway as a place of glamour (“Give My Regards to Broadway”). No hokum about apple trees; it was all city slicker sentiments and love for the new century. Cohan had been born on July 4, 1878, which entitled him to a certain amount of loud-mouth chauvinism; in 1904 he wrote the most patriotic pop song of the lot, “Yankee Doodle Dandy.” He reacted to critical reviews of his work with a sharp “So long as they mention my name.” Along what lines did he write his plays, one critic asked. “Mainly on the New York, New Haven and Hartford line.” He’s recognizably modern and even has a statue in Times Square, so why isn’t Cohan’s name better remembered? Well, he damaged his legacy by singing his own songs, which wasn’t a great idea, given his inability to stay in tune. Still, it makes for an entertaining listen today: 1911’s “Life’s a Very Funny Proposition” suggests the odd rising and falling cadence of Bob Dylan, only sung in a half-Scottish, half-French accent.

As American songwriters like George M. Cohan began to create American theatre music free of any debt to Vienna or Gilbert and Sullivan, and Will Marion Cook introduced ragtime rhythms to Broadway with his 1903 show In Dahomey, so the gramophone was reinvented for the burgeoning American age in the shape of the Victrola. Thomas Edison himself had thought that any use of the gramophone beyond dictation was in the realms of novelty, and he had a point: it recorded the human voice much better than it did the violin; for any other use it was a squeaky mechanical toy. Talking Machine World was under no illusions and wrote that “the high-brow element professed to find nothing of merit in the talking machine.” The piano, on the other hand, continued to be a source of spiritual succour beyond the Victorian age. It took the business savvy of Eldridge Johnson of the Victor Talking Machine Company to make the gramophone an equally acceptable and desirable piece of household furniture in the Edwardian age.

Johnson invented several things which any record collector or twenty-first-century vinyl obsessive would be familiar with today: a straight tone arm, a recess in the middle of the disc on which you could place a paper label, and a box under the turntable in which all of the mechanical parts were neatly contained. This was his new record player, and in 1906 it went on the market as the Victrola. It came in a four-foot-tall mahogany cabinet; Edison’s machines looked like industrial lathes by comparison. Soon, President Taft had a Victrola in the White House, and Johnson milked this news for all it was worth, using photos of Taft in his sales literature.

 

[i] The first music publisher to move to the block was the successful M. Witmark and Sons – Isidore, Julius and Jay – who moved uptown from 14th Street to 49–51 West 28th Street in 1893. Others soon moved into close proximity: Paul Dresser and Harry von Tilzer from Indiana; and Charles K. Harris from Milwaukee, who had written the schmaltzy but wildly successful “After the Ball” in 1893. By 1900 West 28th Street had the largest concentration of popular-music publishers in the US. A chance hit and a couple of hundred dollars could secure you an office. Tin Pan Alley quickly became so effective at the publication and distribution of sheet music that publishers in other American cities were marginalised.

Excerpted from Chapter 3 of Let's Do It: The Birth of Pop Music: A History, with permission of Pegasus Books. 

Inflation Opened the Door to American Neoliberalism

In America, it was inflation that opened the door to Milton Friedman’s neoliberalism.

Inflation is usually caused by one of two things: international devaluation or internal dilution of a country’s currency, or widespread shortages of essential commodities that drive up prices enough to echo through the entire economy.

The early 1970s got both, one deliberately and the other as the result of war.

Between 1971 and 1973, President Nixon pulled the United States out of the Bretton Woods economic framework that had been put together after World War II to stabilize the world’s currencies and balance trade. The dollar had been pegged to gold at $35 an ounce, and the world’s other currencies were effectively pegged to the dollar.

But the United States couldn’t buy enough gold to support the number of dollars we needed as our economy grew, so on August 15, 1971, Nixon announced to the nation and the world that he was taking the dollar off the gold standard and putting a 10 percent tariff on most imports of finished goods into the US to deal with the changes in the dollar’s value relative to other currencies.

The immediate result was that the value of the dollar rose as the world breathed a sigh of relief that the “gold crisis” was coming to an end and the dollar would become more portable. But an increased value in the dollar relative to other currencies meant that products manufactured in the US became more expensive overseas, hurting our exports.

At that time, there were 60,000 more factories in the US than today, and Walmart was advertising that everything in their stores was “Made in the USA”: exports were an important part of our economy, and imports were mostly raw materials or “exotic” goods not produced here, like sandalwood from Thailand or French wines.

To deal with the “strong dollar” problem, Nixon announced in December 1971 that the US was devaluing our currency relative to the Japanese yen, German mark, and British pound (among others) by 11 percent. It was the first-ever negotiated realignment of the world’s major currencies, and Nixon crowed that it was “the greatest monetary agreement in the history of the world.”

But we were still importing more and more goods from overseas, particularly cars from Japan, increasing our trade deficit and hurting American jobs that manufactured goods like cars that competed with the Japanese and the Germans. So in the second week of February 1973, Nixon did it again, negotiating a further devaluation of the dollar by 10 percent.

While devaluing the dollar against other currencies didn’t have much immediate impact on products grown or made in the United States from US raw materials, it did mean that the prices of imports (including oil, which was the primary energy supply for pretty much everything in America) went up.

Over the next decade, the impact of that devaluation would work its way through the American economy in the form of a mild inflation, which Nixon thought could be easily controlled by Fed monetary policy.

What he hadn’t figured on, though, was the 1973 Arab-Israeli War. Because America took Israel’s side in the war, the Arab states cut off their supply of oil to the US in October 1973. As the State Department’s history of the time notes, “The price of oil per barrel first doubled, then quadrupled, imposing skyrocketing costs on consumers and structural challenges to the stability of whole national economies.”

Everything in America depended on oil, from manufacturing fertilizer to powering tractors, from lighting up cities to moving cars and trucks down the highway, from heating homes to powering factories. As a result, the price of everything went up: it was a classic supply-shock-driven inflation.

The war ended on January 19, 1974, and the Arab nations lifted their embargo on US oil in March of that year. Between two devaluations and the explosion in oil prices, inflation in the US was running red-hot by the mid-1970s, and it would take about a decade for it to be wrung out of our economy through Fed actions and normal readjustments in the international and domestic marketplace.

But Americans were furious. The price of pretty much everything was up by 10 percent or more, and wages weren’t keeping pace. Strikes started to roil the economy as Nixon was busted for accepting bribes and authorizing a break-in at the Democratic National Committee’s headquarters in the Watergate complex. Nixon left office and Gerald Ford became our president, launching his campaign to stabilize the dollar with a nationally televised speech on October 8, 1974.

Ford’s program included a temporary 5 percent increase in top-end income taxes, cuts to federal spending, and “the creation of a voluntary inflation-fighting organization, named ‘Whip Inflation Now’ (WIN).” The inflation rate in 1974 peaked at 12.3 percent, and home mortgage rates were going through the roof.

WIN became a joke, inflation persisted and got worse as we became locked into a wage-price spiral (particularly after Nixon’s wage-price controls ended), and President Ford was replaced by President Jimmy Carter in the election of 1976.

But inflation persisted, as the realignment of the US dollar and the surge in the price of oil were forcing a market response to the value of the dollar. (An x percent annual inflation rate means, practically speaking, that the dollar has lost roughly x percent of its purchasing power that year.)
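As a rough illustration of that parenthetical (my own back-of-the-envelope arithmetic, not the author’s): the loss of purchasing power implied by an annual inflation rate x is

$$ \text{loss} = 1 - \frac{1}{1+x} \approx x \quad \text{for small } x; \qquad x = 0.123 \;\Rightarrow\; 1 - \frac{1}{1.123} \approx 0.11. $$

So 1974’s 12.3 percent inflation meant a dollar bought roughly 11 percent less at year’s end than it had a year earlier, which is why the shorthand “lost x percent of its value” works well enough for modest rates.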

The inflation rates for 1977, 1978, 1979, and 1980 were, respectively, 6.7 percent, 9.0 percent, 13.3 percent, and 12.5 percent.

In 1979, Margaret Thatcher came to power in the United Kingdom and, advised by neoliberals at the Institute of Economic Affairs (IEA), a UK-based private think tank, began a massive program of crushing that country’s labor unions while privatizing as much of the country’s infrastructure as she could, up to and including British Airways and British Rail.

She appointed Geoffrey Howe, a member of the Mont Pelerin Society and friend of Milton Friedman’s, as her chancellor of the exchequer (like the American secretary of the Treasury) to run the British economy. Friedman, crowing about his own influence on Howe and the IEA’s founder, Sir Antony Fisher, wrote, “The U-turn in British policy executed by Margaret Thatcher owes more to him (i.e., Fisher) than any other individual.”

The ideas of neoliberalism had, by this time, spread across the world, and Thatcher’s UK was getting international applause for being the world’s first major economy to put them into place. Pressure built on President Carter to do the same, and, hoping it might help whip inflation, he deregulated the US trucking and airline industries, among others, in the last two years of his presidency.

Ronald Reagan was elected in 1980, and when he came into office, he jumped into neoliberal policy with both feet, starting by crushing the air traffic controllers’ union, PATCO, in a single week. Union busting, welfare cutting, free trade, and deregulation were the themes of Reagan’s eight years, then carried on another four years by President George H. W. Bush, whose administration negotiated the North American Free Trade Agreement (NAFTA).

America was now officially on the neoliberal path, and Friedman and his Mont Pelerin buddies were cheering it on.

By 1982, inflation was down from 1981’s 8.9 percent to a respectable and tolerable 3.8 percent; it averaged around that for the rest of the decade. Instead of pointing out that it normally takes a supply-shock inflation and a currency-devaluation inflation a decade or two to work itself out, the American media gave Reagan and neoliberalism all the credit. Milton Friedman, after all, had made his reputation as the great scholar of inflation and was a relentless self-promoter, appearing in newspapers and newsmagazines almost every week in one way or another.

Claiming that neoliberal policies had crushed over a decade of inflation in a single year, and ignoring the fact that it was just the normal wringing-out of inflation from the economy, Reagan openly embraced neoliberalism with a passion at every level of his administration. He embarked on a series of massive tax cuts for the morbidly rich, dropping the top tax bracket from 74 percent when he came into office down to 25 percent. He borrowed the money to pay for it, tripling the national debt from roughly $800 billion in 1980 to $2.4 trillion when he left office, and the effect of that $2 trillion he put on the nation’s credit card was a sharp economic stimulus for which Reagan took the credit.

He deregulated financial markets and savings and loan (S&L) banks, letting Wall Street raiders walk away with billions while gutting S&Ls so badly that the federal government had to bail out the industry by replacing about $100 billion that the bankers had stolen.

“Greed is good!” was the new slogan, junk bonds became a thing, and mergers and acquisitions experts, or “M&A Artists” who called themselves “Masters of the Universe,” became the nation’s heroes, lionized in movies like the 1987 Wall Street, starring Michael Douglas.

Reagan signed Executive Order 12291, which required all federal agencies to use a cost-benefit estimate when putting together federal rules and regulations. Instead of considering costs of externalities (things like the damage that pollution does to people or how bank rip-offs hurt the middle class), however, the only costs his administration worried about were expenses to industry.

He cut the regulatory power of the Environmental Protection Agency (EPA), and his head of that organization, Anne Gorsuch (mother of Supreme Court Justice Neil Gorsuch), was, as Newsweek reported, “involved in a nasty scandal involving political manipulation, fund mismanagement, perjury and destruction of subpoenaed documents,” leaving office in disgrace.

Meanwhile, Reagan’s secretary of the interior, James Watt, went on a binge selling off federal lands to drilling and mining interests for pennies on the dollar. When asked if he was concerned about the environmental destruction of sensitive lands, he replied, “[M]y responsibility is to follow the Scriptures which call upon us to occupy the land until Jesus returns.” According to Watt’s fundamentalist dogma, any damage to the environment would be reversed when Jesus came back to Earth and would “[make] all things new.”

Reagan cut education funding, putting Bill Bennett in as secretary of education. Bennett was a big advocate of the so-called school choice movement that emerged in the wake of the 1954 Supreme Court Brown v. Board of Education decision, which ordered school desegregation. All-white private, religious, and charter schools started getting federal dollars; public schools had their funds cut, and Bennett later rationalized it all by saying, “If it were your sole purpose to reduce crime, you could abort every black baby in this country, and your crime rate would go down.”

The Labor Department had been created back in 1913 by President William H. Taft, a progressive Republican, and Reagan installed former construction executive Ray Donovan as its head, the first anti-labor partisan to ever run the department, a position he had to leave when he was indicted for fraud and grand larceny (the charges didn’t stick) related to Mafia associates he was in tight with. As the Washington Post observed when Donovan died, “Carrying out Reagan’s conservative agenda, Mr. Donovan eased regulations for business, including Occupational Safety and Health Administration rules disliked by industry. He withdrew a rule requiring the labeling of hazardous chemicals in the workplace and postponed federal employment and training programs, equal opportunity employment measures, and a minimum-wage increase for service workers. His tenure also saw drastic cuts in the department’s budget and staff.”

That sort of thing happened in every federal agency throughout the Reagan and Bush presidencies; much of their neoliberal damage has yet to be undone.

By 1992, Americans were starting to wise up to Reagan’s scam.

Thousands of factories had closed, their production shipped overseas; working-class wages had stagnated since his first year in office, while CEO salaries exploded from 29 times the average worker’s salary in 1978 to 129 times average worker wages in 1995 (they’re over 300 times average worker wages today); and union membership had dropped from a third of workers to around 15 percent (it’s around 6 percent of the private workforce today).

The Reagan and Bush administrations negotiated the neoliberal centerpiece, the NAFTA treaty (although they called it a “trade agreement” rather than a treaty because it couldn’t get past the constitutional requirement for a two-thirds vote in the Senate to approve all treaties), and wanted it signed the following year, in 1993.

Reprinted from The Hidden History of Neoliberalism with the permission of Berrett-Koehler Publishers. Copyright © 2022 by Thom Hartmann.
