When the astronauts of Apollo 11 went to the moon in July 1969, NASA was worried about their safety during the complex flight. The agency was also worried about what the spacefarers might bring back with them.
For years before Apollo 11, officials had been concerned that the moon might harbor microorganisms. What if moon microbes survived the return trip and caused lunar fever on Earth?
To manage the possibility, NASA planned to quarantine the people, instruments, samples and space vehicles that had come into contact with lunar material.
But in a paper published this month in the science history journal Isis, Dagomar Degroot, an environmental historian at Georgetown University, demonstrates that these “planetary protection” efforts were inadequate, to a degree not widely known before.
“The quarantine protocol looked like a success,” Dr. Degroot concludes in the study, “only because it was not needed.”
Dr. Degroot’s archival work also shows NASA officials knew that lunar germs could pose an existential (if low-probability) threat and that their lunar quarantine probably wouldn’t keep Earth safe if such a threat did exist. They oversold their ability to neutralize that threat anyway.
This space-age narrative, Dr. Degroot’s paper argues, exemplifies a tendency in scientific projects to downplay existential risks, which are unlikely and difficult to deal with, in favor of smaller, likelier problems. It also offers useful lessons as NASA and other space agencies prepare to collect samples from Mars and other worlds in the solar system for study on Earth.
In the 1960s, no one knew whether the moon harbored life. But scientists were concerned enough that the National Academy of Sciences held a high-level conference in 1964 to discuss moon-Earth contamination. “They agreed that the risk was real and that the consequences could be profound,” Dr. Degroot said.