Americans of a certain age can think back to the days of their childhood and recall how “Cold War science” was built into their political system and social imagination. The consolidation of technology and power during those days stirred dark fears about the dehumanizing effects of industrialized militaries and computerized decision-making. Now, several decades later, we have become accustomed to such threats, just as we have grown comfortable with the default image of science that produced them. But this image has not adapted easily to the changing political realities of the new century, so we should be willing to revisit the assumptions behind it.

“Believing in” science means little more than trusting its investigators and communicators.

We can look to the film genre known as “tech noir” to be reminded of the relevant sense of Cold War dread. Take, for instance, Jean-Luc Godard’s 1965 film Alphaville, a title that refers to a futuristic capital city governed by computers. The computers of Alphaville keep track of which rooms in the city are occupied; they conduct one-on-one interviews with suspected criminals; they enforce the laws of the land. The story follows the investigative mission of secret agent Lemmy Caution, who comes from the less advanced Outlands to see how the new society works. In one striking scene, we witness a queue of convicted thought-criminals awaiting execution at the side of a large indoor swimming pool. One by one they walk to the end of a diving board, deliver a final statement and are summarily shot into the pool by an armed guard. Each execution is elaborately orchestrated, ending only after a team of knife-wielding water ballerinas dives into the pool to finish off the convict with multiple underwater stabbings. We hear that one of the convicts has been condemned for the crime of weeping at the death of his wife. Before this widower meets his gruesome end, he shouts from the diving board, “One must move ahead to live. Move straight toward those that you love.” Shortly thereafter, the next convict struts onto that same narrow stage and proclaims:

Listen to me, normals! We see the truth that you no longer see. This truth is that the essence of man is love and faith, courage and tenderness, generosity and sacrifice. Everything else is an obstacle created by your blind progress and ignorance!

This impassioned fellow attempts to continue his declaration of truth even as the ballerinas force him down into the darkening water. From scenes such as this, we learn that traditional human concepts – including conscience, emotion, love and poetry – have been declared abnormal and illegal by the new order of Alphaville. The only sanctioned form of human thought is the hard, rational logic that can be analyzed by a computer.

Godard’s film was released just a few years after President Dwight Eisenhower’s farewell address of 1961 – the speech that originated the phrase “military-industrial complex” and warned about the potential problems to be faced in a world governed by the tools of the “scientific-technological elite.” The film adds bite to Eisenhower’s warning by reminding viewers that dystopian technocracy will only ever be the product of human intention, design and complicity. In the words of Alphaville’s gruff-talking central computer, Alpha Soixante, “The acts of men … will gradually destroy them … I am merely the logical means of this destruction.” In other words, humans are victims not of technology but of themselves.


As a physicist coming of age in the 1980s and 90s, I was taught to see negative depictions of science as expressions of unfair stereotypes and unfounded paranoia. After all, so it was said, the sciences that won World War II were the same sciences that ushered in the most prosperous half-century of Western culture. In fact, many in my generation grew up thinking of science as an oracle – an entity that can speak only truth, and valuable truth at that. For proof that science served the good of society, one needed only to recall the Manhattan Project, the Green Revolution and the polio vaccine. It was easy to dismiss the warnings that this oracle, like the Wizard of Oz, was in actuality human through and through.

Now, with the Cold War long behind us, it is important to remember precisely that which we were once conditioned to forget: that the sciences are animated by human purposes and practices and that the institutions of modern science rest on negotiated political agreements. In a time when the integrity of science is being questioned, it will do no good to cling to the oracular mystique and implore the would-be critics of science to “pay no attention to the man behind the curtain.” Science is a human endeavor. “Believing in” science means little more than trusting its investigators and communicators.

It is not difficult to see that human judgment is an integral part of the scientific enterprise. Consider, for instance, what happens when scientists propose ideas to funding agencies that might support their research. The funders normally ask other scientists to evaluate the proposals. As part of this “peer review” process, the referees pass judgment on a variety of claims and concepts, including some that inevitably lie outside their areas of expertise. Besides judging the legitimacy of theories and experiments, they also have to weigh in with opinions on which research questions are most important. This gives them a say in defining the purposes and setting the priorities of scientific investigations. To use an outmoded way of speaking, scientific referees are asked to judge “values” as well as “facts.” And of course, their value judgments have ramifications for everyone else in society. No one should want these scientists to ignore their ethical responsibilities or to pretend that their judgments are imbued with oracular authority.

Still, somehow the old-fashioned oracle metaphor lives on. As I have suggested, this image is a liability in popular discourse about science. It can also give individual scientists distorted understandings of their own roles and responsibilities. It can lead them to think that their jobs are only to run the powerful machines of research, and that the motivations, purposes and moral direction of such work can be left to the employer or the funding agency. There is a degree of romantic freedom in this picture of a scientific contractor. Scientists can take on the work that suits them, just as the mercenary Paladin used to do in the old TV Western series, Have Gun – Will Travel. And, as with any gun-for-hire hero, the scientist is the beneficiary of unquestioned esteem within the general population. On top of this, the scientist enjoys the moral respect traditionally accorded to priests, the rightful keepers and dispensers of vital truths.


This elevated image of the scientist is not intrinsic to the job: it was cultivated by a mid-20th-century arrangement that some have called the “social contract for science.” By the mid 1940s, the United States military’s reliance on science had become obvious through wartime developments of penicillin, radar and the atomic bomb. Yet there was still little organizational support for science on a national scale. To address the problem, President Franklin Roosevelt and, later, President Harry Truman relied on Vannevar Bush, an elite engineer and administrative mastermind who had been a close adviser to Roosevelt throughout the war. Drawing heavily on his connections in Congress, industry and university networks, Bush set out to lay the foundations of a federal science agency. His plan met with congressional challenges over several years, but most of the legislative elements he sought were in place when the National Science Foundation Act was passed in 1950, reports Daniel Lee Kleinman in his Politics on the Endless Frontier (Duke University Press, 1995). Notably, the organizational plan solidified a conceptual distinction between basic and applied research. The NSF would focus on basic research, and most of its oversight would be left to scientists themselves. The individual research grant would become the primary mechanism for funding projects located at universities nationwide. Meanwhile, applied research would be the responsibility of the military and the private sector, where technological development would be most likely to yield gains in national security and economic prosperity.

This plan assured scientists that they would be isolated both from government control and from a direct responsibility for producing social goods. Thus the contract between science and society would give scientists broad freedom to pursue projects of their own design, and in return they would serve as the prospectors and miners of knowledge, building up a standing reserve of facts and know-how, the value of which would be realized when entrepreneurs found a way to put them to good use.

High-minded though it was, this scheme harbored the possibility that some scientists could become alienated from politics and societal decision-making, as well as from the products of their own work. Thus, while the contract might have been intended to establish a cloistered priesthood of scientists, it also had the potential to spawn a new working class – a scientific proletariat.

This tension between the two possible roles of the scientist – as priest and as mineworker – reflects the nation’s complementary aspirations, a split in its collective personality. For a century now, there have been two opposing impulses in the American political psyche. One seeks the rights of seclusion and self-determination; the other wants to build global order through organization and cooperation. By the end of World War II, both impulses had met with frustration on an international scale: Neither vision of national destiny remained unclouded or unthreatened. And the corresponding concepts of personal destiny fared no better. French movie critics of that era noted that a sense of foreboding had permeated American films during the course of the war – this is the origin of film noir. In a 2010 article in Point Magazine titled “No Good Reason,” philosopher Robert Pippin describes this genre’s portrayals of human agency and rationality thus:

[A]lmost no one in noirs behaves in a way that easily fits the reflective and deliberative models of action and agency prominent in philosophy since Aristotle. For one thing, it is clear that in some cases none of the principals have any clear idea at all why they are doing what they are doing. … The standard picture is of people “trapped” either (somewhat paradoxically) by themselves (by who they have become), or by an anonymous and autonomous social order or societal machine or by a vast purposeless play of uncontrollable fortune.

Critics have suggested that the sense of human entrapment in noir films was inspired by the fearsome realities of scientific warfare and the prospect of nuclear annihilation. While that interpretation may seem to put common folk under the thumb of a scientific priesthood, it also allows us to imagine the scientists themselves as victims, or at least cogs, in a political machine. Even today, this image might represent some scientists who still labor under the old social contract and experience its unintended consequences.

The noir antihero that Pippin describes is unable to articulate any good reasons for his or her actions, because the systemic constraints placed on those actions serve no meaningful purpose. Is this a plausible description of a workaday scientist who pursues research grants not out of passion or genuine interest but simply because that is what one does to get tenure and pay the bills? Or of a researcher who has never had to explain why his or her investigations are valuable to society? Perhaps so. In fact, Eisenhower anticipated the emergence of just this kind of antihero-scientist in his farewell speech when he pointed to the day when “a government contract becomes virtually a substitute for intellectual curiosity.” The obvious irony, of course, is that the organizational plan for science in 1950 was intended to give scientists extensive freedom but that it might have led some to sell themselves into servitude.


Rethinking our image of science will therefore require more than a public recognition that scientific institutions function primarily through human judgment and collective decision-making. It will also call for a revised contract that creates a deeper understanding of the moral commitments and personal responsibilities of individual scientists. Christians have names for the values that would be served at both the social and personal levels: shalom and dignity. We have to relearn how to use these terms in our discussions of science. At the moment, our predicament is a bit like that of the young Natacha, who flees to the Outlands with Lemmy Caution in the closing scene of Alphaville. Once she is freed of the central computer’s control, Natacha confesses, “I don’t know what to say. There is a word that I do not know and did not learn. … Help me.”

“Impossible,” replies Lemmy. “You have to get there yourself. If you don’t, you’re as lost as the dead souls of Alphaville.”

She goes silent for a moment, struggling to make sense of her jumbled memories. But she finally brings forth a long-lost word – love – and with this utterance, she begins to recover all of the essential human sensibilities that she had been taught to forget.

If Christians find themselves at a similar loss for words, it might be that a distorted picture of science continues to hold them captive. When looking at science from the outside, one might still imagine that its sole purpose is to deliver infallible truths. Such thinking underestimates the relevance of the interpretations, judgments and decisions that scientists have to make. It holds scientists accountable only for their objective methods and conclusions but not for their wisdom or intentions. If these minimal expectations alone are kept in place, no one should wonder why the broader concerns of personhood remain beyond the vision and vocabulary of many who work on science from the inside.

Matt Walhout teaches physics and astronomy at Calvin College, Grand Rapids, Michigan.