Introduction

Published on Apr 01, 2022

[Artificial intelligence is] the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

—John McCarthy and Marvin Minsky, 1955 [1]

[AI is] the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.

—Association for the Advancement of Artificial Intelligence, 2022 [2]

AI is the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity.

—European Parliament, 2020 [3]

Artificial intelligence (AI) advocates generally describe it in the future tense. By inverting that convention in my 2002 essay, I intended to signal that AI also had a past—by then a half century–long history of extravagant forecasting—which was overdue for critical examination. I also wanted to suggest that, at that point, AI’s future was uncertain, as the field was undergoing a period of critical reassessment.

Almost from artificial intelligence’s inception in the 1950s, AI researchers had been periodically announcing that they were on the threshold of revolutionary discoveries that would radically transform human life as we know it, in ways that we could not begin to grasp. Artificial intelligence would, we were told, create a form of superintelligence many times greater than human intelligence, which would continue to perfect itself through machine learning, leaving us slow-witted humans behind. AI enthusiasts celebrated the prospect, seeing themselves as either creating or bearing witness to the next step in evolution. Sci-fi narratives multiplied and greatly amplified AI futurism, whether as deliverance or as the impending doom of a robotic apocalypse.

Fact or fiction, good or evil: The audacity of the message seemed to infuse AI with independent agency, a godlike mind and destiny of its own. This deflected attention from the actual networks of military, political, and economic interests promoting its development. It also absolved the scientists attending the nativity of the electronic marvel of their hubris, because they cast themselves as mere messengers or apostles serving a higher power.

By the 1990s, however, the air was growing thin. A half century of extravagant promises, substantial public investments, and meager visible returns led to disenchantment with AI’s top-down research paradigm. This early work drew on traditional philosophical studies of logic and reasoning to develop its models, with aspirations to formalize common-sense reasoning processes. [4] For AI, it was a time for rethinking and retooling, which suggested that AI’s hold on the future tense was, at best, tenuous.

It was an opportune moment to rethink my own work on AI as well, which I had first undertaken in the mid-1980s, when enthusiasm for the promise of artificial intelligence was at a peak. The 2002 essay was not, however, intended as an obituary for artificial intelligence. No one expected its advocates to close up shop. But at the time it did seem that the transcendent vision of the first generation of AI researchers—what is now sometimes referred to as “strong” artificial intelligence—was undergoing a radical deflation, and that it might be prudently retired.

My millennial reassessment of AI, “What Was Artificial Intelligence?,” is reproduced here in its original form, without any emendations. [5] It is not a history of the science or mathematics of AI. That lies far beyond my competence. Rather, it is an account of the stories AI scientists have told themselves, each other, and the world about the form of intelligence they hoped to create. In telling those stories, they also tell us a great deal about themselves.

Artificial Intelligence Paratexts and Hype Cycles

In literary theory, a paratext is “a text that relates to (or mediates) another text (the main work) in a way that enables the work to be complete and to be offered to its readers and, more generally, to the public.” It has been described metaphorically as a “threshold” or “vestibule” which allows readers to enter a text. [6]

“What Was Artificial Intelligence?” offers a critical analysis of twentieth-century paratexts of the AI movement: the programmatic descriptions, manifestos, and interviews that AI scientists used to explain what they thought they were doing when they did their research. Parascientific texts are similar to corporate mission statements, which are designed to cultivate and promote positive responses to an enterprise. The target audiences for the parascientific texts of AI are potential public and private funding agencies like the National Science Foundation or the Defense Advanced Research Projects Agency (DARPA), policy makers and administrators of universities and scientific research institutes, as well as scientists in other areas of expertise, science buffs, and journalists. So these parascientific texts do not assume or require competency in the technical aspects of AI. They are, in effect, translations or narrative accounts of the techno-sciences.

Artificial intelligence advocates have been especially prolific in their production of paratexts, perhaps because the technology they are seeking to develop is unprecedented, esoteric, and ethereal, and its delivery date is indefinite (although almost always described as near). According to widely accepted market analytics, new technologies—those that make it to market—typically undergo “hype cycles” of initial over-enthusiasm, followed by disillusionment, and then a plateau of productivity as the product is realistically assessed and its utility demonstrated in the marketplace. [7]

AI chroniclers generally agree that the artificial intelligence movement has gone through two major hype cycles that ended in disappointment. There is little agreement on exact dates or on the specific AI visions that failed to meet expectations. There does, however, seem to be agreement that AI hype-cycle peaks are unusually steep and its troughs exceptionally deep. In fact, the lows are so low that the AI community refers to them as AI “winters,” borrowing the seasonal trope from the Cold War concept of nuclear winter, in which life on planet Earth would be extinguished by nuclear devastation. [8]

The disparity in dating AI low points—AI winters—is a function of whether an analysis focuses primarily on (1) research funding, (2) breakthroughs or failures in specific AI-based technologies or promises, or (3) performance in the marketplace. While these dynamics are interrelated, their timing is sequential, which explains the dating disparities. For our purposes, (1) funding is most relevant, since most early AI research relied heavily on government funding, whose ebbs and flows had an immediate impact on AI researchers. This dates the first AI winter to the early 1970s, when mechanical translation, a Cold War priority, was declared a hopeless failure and US government funding dried up. AI research in the UK also declined during the same approximate timeframe, in response to disappointment in AI’s military applications. [9]

By the end of the 1970s, however, there were buds on the AI tree again, in the US at least, with excitement about the potential of neural networks: AI modeling based on biological, rather than logical, models of intelligence. Interest peaked in the mid-1980s, but AI’s second winter had set in by the early 1990s—and lasted so long that one observer referred to it as a “mini ice age instead of a winter.” [10] Artificial intelligence research was not completely abandoned during AI winters, but funding was scarce and researchers tended to avoid the inflated “AI” label, adopting more modest descriptors such as expert systems, machine learning, informatics, pattern recognition, or knowledge-based systems. [11]

The second winter lasted through the 2008 global financial crisis. The past decade has, however, inaugurated a new AI hype cycle with momentum that dwarfs the two previous cycles. Not only are governments heavily investing in AI for strategic and economic purposes, but there have also been large infusions of Silicon Valley wealth and other private investments in AI research and development. Even more significantly, the US and China have entered into a global competition for AI supremacy, comparable to the US-USSR Cold War space race. [12] The World Economic Forum has also cast AI in a central role in what it is calling the “Fourth Industrial Revolution.” [13] In the current hype cycle, “AI is the new gold.” [14]

Fossil Poetry

My original engagement with AI evolved out of a long-standing interest in the sociology and politics of knowledge—in this instance, the human factors that shape scientific knowledge. In the mid-1980s, a number of path-breaking studies of gender and science were published. I followed that early literature closely until the floodgates it opened made it impossible for any single scholar to follow it all.

At the time, I thought artificial intelligence could offer an especially rich resource for studying the social constituents of science because of its future orientation, as well as its departure from the usual protocols of empirical science. That is, AI does not exist in nature. It can’t be apprehended by the senses, observed in the wild, or dissected in the laboratory. Rather, AI is a projection of the hopes and dreams of AI researchers.

It is science in a formative stage. The development of artificial intelligence can be studied in real time, unlike the subjects of so many classic studies in the history and sociology of science, which reach into the past to expose the social factors in the development of science, usually focusing on discredited or superseded scientific claims. So artificial intelligence seemed an ideal case study for sociologists of knowledge.

AI was also of interest from another perspective. For four decades, global politics had been organized around the generative metaphor of the Cold War. The 1980s were a transformative period, from glasnost to the fall of the Berlin Wall and the demise of the Soviet Union. Change was in the air. Advances in computer technology, including the PC revolution, were also transforming business, as instant international cash transfers became possible. Policy makers and pundits were vying to name this new constellation, to capture its gestalt. The “information age” and “information economy” were gaining traction. If AI could keep its promises, it would be the engine of the future. Because of its resonance within popular culture, references to AI, no matter how vague, seemed to add scientific authority to the ideological thrust of political and economic claims. So the paratexts of AI possessed significant geopolitical relevance in the 1980s, as well as sociological interest, even though “globalization” ultimately won the naming game. Since then, new efforts to fill the void with AI futurism have gained traction.

To summarize, then: during my early studies of AI in the mid-1980s, AI was at the peak of its second hype cycle. My return to the topic in 2000 came at the low point of that cycle, AI’s second winter. Today we are at or near the pinnacle of a third cycle. The tenor of the parascientific texts of AI reflects these temporal locations: The hyperbole soars approaching the peaks and is chastened in the valleys.

Until recently, however, there has been relatively little critical scholarly analysis of artificial intelligence as a social construct and political force. That is now changing, and changing rapidly and dramatically. Writing in 2021, AI scholar Kate Crawford asserted, “A decade ago, the suggestion that there could be a problem of bias in artificial intelligence was unorthodox.” [15]

In 1986, when I first broached the subject, it was not just unorthodox. To suggest that AI models might contain social fingerprints approached heresy. There were, however, a few prominent critics of AI’s inflated claims, most notably computer scientist Joseph Weizenbaum and philosopher Hubert Dreyfus. Nevertheless, to suggest that AI was gendered was beyond the pale, although Weizenbaum seemed to intuit it in his reference to AI modelers as “big children” who have not given up their “sandbox fantasies” or dreams of omnipotence. [16] When he wrote this in 1988, there was no way that his metaphor would have evoked images of “big girl children” doing AI research, let alone indulging dreams of omnipotence. Elsewhere, Weizenbaum is unambiguous about the gender of the “unwashed and unshaven” who “are oblivious to their bodies and the world in which they move” and “exist, at least when so engaged, only through and for computers.” [17] In a 2006 interview, he explicitly identified the masculinist bias of AI dreams of omnipotence and accused some AI extremists of “uterus envy.” [18]

I published some of my early work on artificial intelligence and the information economy; however, these AI studies had almost no resonance. AI was not yet on the radar of social science research. It was only on my own radar because, at that time, my life was disproportionately populated by engineers, programmers, and hardline quantitative social scientists. My many part-time gigs as a graduate student had included drawing computer flowcharts and editing field engineers’ reports. So I was often immersed in tech talk, with its instrumental values of parsimony, efficiency, and economy. To navigate this unfamiliar terrain, I drew on my sociological training and undertook an informal ethnographic study of the dialect of these tribes. It sensitized me to tech talk’s instrumental strengths as well as its blindspots. I also worked on survey research projects and took advanced courses in social science data analysis. As a result, I was keenly aware of the “extracting and abstracting” processes necessary to “clean up” survey research data in order to prepare it for computer processing. I had strong reservations about those efforts, as I suspected that some of the more revealing aspects of human behavior could be found in the anomalies that the clean-up scrubbed away.

My interest in language had deeper roots. I learned early that subtexts are often as important as texts, and sometimes more important. When I began studying AI, the linguistic turn in the social sciences was just beginning. So my own approach to discourse analysis in the 1970s and 1980s drew on an eclectic mix of sources from social science and the humanities. Ralph Waldo Emerson’s description of language as “fossil poetry” and his conception of metaphors as portals of knowledge also made a strong impression. In addition, I benefited from the wisdom of my dissertation advisor, Llewelyn Z. Gross (1914–2014), who studied “socio-logics”—the patterned reasoning processes in natural languages that formal logic does not accommodate—from philosophical and sociological perspectives. [19] Later still, I discovered Lakoff and Johnson’s Metaphors We Live By (1980) and Philosophy in the Flesh (1999), which affirmed and legitimized my intuitive sense that “by their metaphors you shall know them.” [20] In the last thirty years there has been a revolution in the analysis of conceptual metaphors, based on Lakoff and Johnson’s work, which has transformed thinking about thinking and textual analysis in many fields.

These were the sensibilities, along with feminist standpoint theory, that I brought to the analysis of the rhetoric of AI. Add the demographics of computer science in the 1980s. Unlike the earlier era, when women programmers played key roles in developing the field, computer science had become a boys’ club that required round-the-clock devotion in the elite graduate centers—what is now known as living on “Silicon time.” As the legendary lore of MIT grad students’ nocturnal antics epitomizes, most research universities restricted student access to scarce mainframe computer time to late-night hours, and, more often than not, the atmosphere became hostile to women.

That, in summary, is the background to the essay: (1) the disparate timing of my two periods of AI inquiry—the first in the mid-1980s and the second in the early 2000s; (2) my perception of AI’s relevance to the sociologies of knowledge, politics, and gender; and (3) my approach to the study of AI’s power-knowledge through the metaphors of its parascientific texts.

Season Three: Return of the Astronauts

Most researchers revisit their early work with trepidation. We frequently encounter the voices of our former selves as alien, shudder at our stale epiphanies, and find perverse comfort in our low readership numbers on Google Scholar. Nonetheless, I went there again in 2021 because references to artificial intelligence suddenly seemed to be everywhere—not just in AI paratexts or science fiction, but in news stories, in international affairs reporting, in radio and television commercials, and in popular culture more generally. It seemed that globalization had been superseded by the AI gold rush.

The mainstreaming of a new, globalized artificial intelligence hype cycle was clearly underway. I assumed AI had finally crossed the long-awaited threshold and had discredited my early skepticism. I took a deep dive into current AI paratexts to catch up on these new developments and, unexpectedly, found myself in familiar territory: similar tropes, the same breathless expectancy, even more extravagant sandbox fantasies. The new buzz is big data—your data, reader, extracted from your online activity. The AI component is pattern recognition, which abstracts and sorts that data. The methods are statistics and probability.
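To make that concrete for readers outside the field, here is a deliberately minimal sketch of pattern recognition over behavioral data, written in Python. Everything in it is hypothetical: the pages, the labels, and the data are invented for illustration, and it stands in for no company’s actual system, which would use far more elaborate statistical models. The basic move, however, is the one just described: abstract online activity into counts, then sort new activity by those counts.

```python
from collections import Counter

# Hypothetical clickstream data, invented for illustration: the pages
# each user visited, paired with an interest label to be predicted.
sessions = [
    (["camping", "tents", "trail maps"], "outdoors"),
    (["tents", "boots", "trail maps"], "outdoors"),
    (["sci-fi", "novels", "reviews"], "books"),
    (["novels", "poetry", "reviews"], "books"),
]

# "Learning" here is just counting: how often each page co-occurs
# with each label across the recorded sessions.
page_counts = {}
for pages, label in sessions:
    page_counts.setdefault(label, Counter()).update(pages)

def classify(pages):
    """Sort a new session under the label whose pages it most resembles."""
    scores = {label: sum(counts[page] for page in pages)
              for label, counts in page_counts.items()}
    return max(scores, key=scores.get)

print(classify(["boots", "trail maps"]))  # prints "outdoors"
```

Scaled up to billions of traces and far subtler statistics, the same counting-and-sorting underlies the systems described above.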

Nonetheless, there has been substantial AI progress since my last visit. Much of that progress centers on developments in what is now known as “weak” or “narrow” AI, which can do specific tasks with far more speed, volume, and efficiency than any human; Amazon’s recommendations to users, based on their past searches and purchases, are one example. There have been major victories for “strong” AI, too. In 1997, IBM’s Deep Blue computer famously beat world chess champion Garry Kasparov. In 2017, a computer beat the top-ranked player in the game of Go, which is considered vastly more mathematically complex than chess; it also requires strategizing and trial-and-error machine learning, a dynamic sketched in miniature below. These AI developments have applications that extend far beyond games. According to artificial intelligence enthusiasts, they move AI closer to passing the Turing Test, the standard for the achievement of artificial intelligence (thinking machines) set in 1950: the point when machines exhibit intelligent behavior that is indistinguishable from human behavior. Some predict that this AI threshold will be crossed before the end of this decade. Others remain resolute in their skepticism.
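Likewise, “trial-and-error machine learning” can be shown in its most stripped-down form: an agent repeatedly tries moves, observes wins and losses, and gradually settles on whatever works. The sketch below is a schematic analogy only; the toy game and its probabilities are hypothetical, and the Go-playing systems in question actually combine deep neural networks with large-scale tree search and self-play.

```python
import random

# A hypothetical toy game: three moves with hidden win probabilities,
# invented for illustration only.
WIN_PROB = {"a": 0.2, "b": 0.5, "c": 0.8}

def play(move):
    """Return 1 for a win, 0 for a loss."""
    return 1 if random.random() < WIN_PROB[move] else 0

values = {m: 0.0 for m in WIN_PROB}  # estimated value of each move
counts = {m: 0 for m in WIN_PROB}    # how often each move was tried

random.seed(0)
for trial in range(5000):
    # Mostly exploit the best-known move; occasionally explore at random.
    if random.random() < 0.1:
        move = random.choice(list(WIN_PROB))
    else:
        move = max(values, key=values.get)
    reward = play(move)
    counts[move] += 1
    # Nudge the running estimate toward the observed outcome.
    values[move] += (reward - values[move]) / counts[move]

print(max(values, key=values.get))  # settles on "c", the strongest move
```

Note what is absent: the program is given no knowledge of the game itself. Its “strategy” is nothing more than running statistics over outcomes.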

The emergence of AI skeptics is the real news—and from my perspective, the good news for humankind. If, as Crawford claims, criticism of AI bias was unorthodox a decade ago, it is becoming increasingly mainstream today. When I wrote “What Was Artificial Intelligence?” I had Weizenbaum, Dreyfus, and computer scientist Bill Joy to lean on. [21] Today there is a significant community of critical scholars studying many aspects of the artificial intelligence movement.

In a 2000 jeremiad in Wired magazine (cited more fully in my essay), Joy, co-founder and chief scientist of Sun Microsystems, warned that the technologies being developed in the twenty-first century—genetics, nanotechnology, and robotics—would be so powerful, accessible, and amenable to abuse that they could pose a greater threat to humankind than the weapons of mass destruction of the twentieth century. He saw the astronautic fantasies of late twentieth-century AI scientists, which called for abandoning an overpopulated, contaminated, and warming Earth in favor of interplanetary colonization or, alternatively, for merging with and becoming robots, as forms of denial that abdicate responsibility for life on Earth. Joy pushed the panic button in hopes of initiating public dialogue about techno-futures which, to that point, had been shaped without it: by military strategists, military contractors, scientists, engineers, and his fellow tech entrepreneurs.

That dialogue now exists, and it is not confined to academic conferences, seminars, and computer labs, although robust research agendas are underway in those venues—far too robust to explore here, unfortunately. There is also a vibrant cyber-activist community responding to the surveillance regimes and authoritarianism enabled by digital technologies, working to create more just, equitable, transparent, and accountable forms of digital democracy. International critique and regulation of big tech companies are already well advanced, and the United Nations has made combating gender bias in AI a priority, noting that research has “unambiguously” found gender biases in AI training data sets, algorithms, and devices. [22] Similar racial biases have also been well documented, including in devices already deployed in policing and criminal justice contexts.

There are, however, very powerful forces aligned against these critical communities, with very deep pockets to fund political campaigns, lobbying, advertising, and public relations to keep the current AI hype cycle spinning. The twenty-first-century titans of tech are the spiritual grandsons of the “big children” with sandbox fantasies whom Weizenbaum described. Unlike their forebears, however, they have the resources to indulge their astronautical fantasies of omnipotence and immortality. They are creating their own space programs, commissioning plans for colonizing the moon, and even exploring the possibility of producing an alternative universe, a virtual reality “metaverse,” where the humans left behind can go for fun and games when the physical world becomes too boring or unpleasant. To wit, Jeff Bezos, Elon Musk, and Richard Branson have created their own space programs, and Bill Gates and Mark Zuckerberg are working on creating a “metaverse.” Some of these men also generously fund philanthropic initiatives, some to advance their own policy agendas, but others presumably from altruistic motives. They have many sand pails, but in a democracy, policy is made by representatives of the public, in theory at least to serve the common good.

Most of the new AI critics are not Luddites. They are not pushing to abandon advanced computer research or to retire the robots. They are, however, calling for rejection of artificial intelligence’s post-human eschatology, and for replacing it with one that embraces and advances “[a] human form of life [that] is fragile, embodied in mortal flesh, time-limited, and irreproducible in silico.” [23]

To begin this reclamation of our tools and toys, artificial intelligence critic and policy expert Frank Pasquale calls for replacing the hype of AI with IA, “human intelligence augmentation.” He proposes a four-point set of “laws of robotics” to supersede science fiction writer Isaac Asimov’s 1942 laws for machines, developed in his short story, “Runaround.”

They are:

(i) “Robotic systems and AI should complement professionals, not replace them.”
(ii) “Robotic systems and AI should not counterfeit humanity.”
(iii) “Robotic systems and AI should not intensify zero-sum arms races.”
(iv) “Robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).” [24]

Pasquale writes from a different temporal and disciplinary location than mine, and he uses different words, but he draws the same conclusion as “What Was Artificial Intelligence?”

Coda

Rereading the 2002 essay, I did not blush. I found it timely, in places even eerily prescient. That is not a brag. It is a testament to the longevity of the hyperbolic mythopoetics of the artificial intelligence movement—from their embryonic inception in Turing’s 1950 essay, to their embellishment by the self-proclaimed descendants of Golem, to their inheritance by the present masters of the digital universe. While the essay is of some historical interest, it remains a relevant brief for the kind of humane and inclusive IA that Pasquale and many other critical AI scholars are now seeking.
