According To U.S. Big Data, We Won The Vietnam War
When the last helicopter rose above the American embassy in Saigon on April 30, 1975, the US had been winning the Vietnam War for over a decade. The data said so.
The strategy had been driven by a simple hypothesis, proven by history: Wars were won by inflicting damage on an enemy until it surrendered. The Pentagon set up metrics to measure that progress, the primary data point being kills (enemy dead), which were tracked both as an absolute number and as a ratio against our own dead. The bigger the ratio, the better the war was going, and reported Viet Cong casualties were generally double or more American losses.
Other metrics that factored into this 1960s big data solution were the tonnage of bombs dropped, ships intercepted trying to run the blockade, and miles of land “controlled” by US or allied forces. These data drove strategic decisions, from aerial sorties to troop numbers and movements, because they produced choices that were scientific, and thereby free of the biases and shortcomings of emotion or belief.
The approach was led by Robert McNamara, who’d used statistical studies to make US bomber runs more efficient during WWII, and then gone to Ford with his wartime teammates, where they became known as the “Whiz Kids.” (McNamara was named Ford’s president in 1960, just before the newly elected President John F. Kennedy recruited him as Secretary of Defense.) The idea that decision making should be informed by data was not new; after all, statistical process control had been a mainstay of American manufacturing since the early decades of the 20th century. But McNamara’s genius was giving it a level of meaning and authority that was shocking in its scope.
It was also wrong...