History News Network

The Promise and Risks of Artificial Intelligence: A Brief History
A Brief History of AI, From Automation to Symbiosis

The Department of Defense strategy for AI contains at least two related but distinct conceptions of AI. The first focuses on mimesis — that is, designing machines that can mimic human work. The strategy document defines AI itself in these terms, as “the ability of machines to perform tasks that normally require human intelligence — for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action.” A somewhat distinct approach to AI focuses on what some have called human-machine symbiosis, wherein humans and machines work closely together, leveraging their distinctive kinds of intelligence to transform work processes and organization. This vision can also be found in the AI strategy, which aims to “use AI-enabled information, tools, and systems to empower, not replace, those who serve.”

Of course, mimesis and symbiosis are not mutually exclusive. Mimesis may be understood as a means to symbiosis, as suggested by the Defense Department’s proposal to “augment the capabilities of our personnel by offloading tedious cognitive or physical tasks.” But symbiosis is arguably the more revolutionary of the two concepts and also, I argue, the key to understanding the risks associated with AI.

Both approaches to AI are quite old. Machines have been taking over tasks that otherwise require human intelligence for decades, if not centuries. In 1950, mathematician Alan Turing proposed that a machine can be said to “think” if it can persuasively imitate human behavior, and later in the decade computer engineers designed machines that could “learn.” By 1959, one researcher concluded that “a computer can be programmed so that it will learn to play a better game of checkers than can be played by the person who wrote the program.”

Meanwhile, others were beginning to advance a more interactive approach to machine intelligence. This vision was perhaps most prominently articulated by J.C.R. Licklider, a psychologist studying human-computer interactions. In a 1960 paper on “Man-Computer Symbiosis,” Licklider chose to “avoid argument with [other] enthusiasts for artificial intelligence by conceding dominance in the distant future of cerebration to machines alone.” However, he continued: “There will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working together in intimate association.”

Notions of symbiosis were influenced by experience with computers for the Semi-Automatic Ground Environment (SAGE), which gathered information from early warning radars and coordinated a nationwide air defense system. Just as the Defense Department aims to use AI to keep pace with rapidly changing threats, SAGE was designed to counter the prospect of increasingly swift attacks on the United States, specifically low-flying bombers that could evade radar detection until they were very close to their targets.


Read entire article at War on the Rocks