Electric Dreams Introduction: The Dialectic of Technological Determinism

Is Resistance Futile?

Why do we think what we think about computers? A computer is just a tool. Or, more specifically, a medium – a means of processing and communicating information. It lets us do so with incredible speed and efficiency, but in principle, the hardware is as open-ended as a blank piece of paper. Just as the tiny wood fibers of a sheet of paper can absorb ink into any pattern a user might create, the binary circuits of a computer can store any input.

But we rarely think of a computer as a “blank slate,” and for good reasons. The writer beginning a manuscript with pencil and paper sees only the blank page, two-dimensional and practically weightless. As I sit here at my computer, on the other hand, the means of production making my intellectual work possible are almost overwhelming: a keyboard clacks at every stroke, a mouse sits off to the side, a heavy monitor takes up much of the desk, and the big, rectangular processor makes whirring, chugging noises (and on rare, unpleasant occasions, suddenly stops working altogether). My entire office is oriented around the demands of my computer: the keyboard sits on a pull-out shelf to be at the ergonomically correct height for my hands (I have carpal tunnel syndrome), the monitor is positioned to avoid glare from any window, and behind my desk, nests of wires snake from monitor to processor to printer to speakers to modem to phone line to power supply to . . .

Of course, the seeming transparency of the act of writing with pencil and paper hides its own complex social processes. The chair and desk had to be built, shipped, and bought. That pencil began long ago as a tree, before being whittled down, processed, packaged, and sold. As design historian Henry Petroski has shown, even our current pencil is the result of centuries of technological refinement.[1]

And I’ve only been talking about the physical technology. Even more significant is the rich cultural matrix in which any medium is embedded. Learning to use a computer may seem like a daunting task for many; but to use a pen and paper, you need to have learned how to read and write, a far more complex – and still far from universal – skill.

In short, it takes a lot of work to produce any “blank slate,” and no two media get there the same way. The difference with computers is that the social processes creating that slate are still visible. Americans don’t take computers for granted yet.

This difficulty with computers creates inequity – what’s known as “the digital divide.”[2] Some people can comfortably use computers, many others can’t. Since this uneven distribution of technological expertise corresponds so closely to disparate levels of education, wealth, and social status, the importance of computers in contemporary life in many ways reinforces unequal social relations. Well-off kids from the suburbs grow up around computers; poor children are less likely to get that level of exposure. Boys are more likely than girls to be encouraged to explore computing.

But the fact that computers aren’t taken for granted yet also offers an opportunity. Users of almost any communications technology are typically alienated from that technology in contemporary society. We read our books, watch TV, talk on the telephone, with very little awareness of how these processes work, or how they might function differently. Science Studies theorists such as Bruno Latour, Wiebe Bijker, and Trevor Pinch refer to this as the “black box effect,” through which the choices and conflicts which produce a technological object are hidden in a walled-off machine whose operations are simply taken for granted.[3] While most users are alienated from computers, too, the difference is that they know they’re alienated. The very clunkiness of computers creates a level of self-consciousness about the computing process; there’s no way to pretend it all simply happens “naturally.”

Once the use of a technology becomes “natural,” the battle to shape the uses and meanings of that technology is to a large degree finished. Specific systems of practices have become essentialized, so that historically contingent processes are now seen as inherent to the medium. Take the example of television. We all think we know what “television” is. But that’s not what TV had to be. As scholars such as Raymond Williams and Ithiel de Sola Pool have demonstrated, TV could have been developed, marketed, and regulated as a two-way, open system, like the phone system, in which every viewer is also a broadcaster. Instead, of course, TV (at least in the US) became a centralized, one-way form of communication.[4] I’d argue that the point at which the battle was lost was when the way that TV was structured – the contingent result of political and economic struggles – became reified into the common-sense notion that that’s simply what TV “is.”

This is not to say that the cultural meanings of every technology aren’t continually being challenged, struggled over, and altered. Current political battles over the regulation of media ownership, for example, show how the meanings of television continue to be contested terrain.[5] But the early stages of flux, when meanings haven’t yet stabilized, are the most critical moments in defining the shape of new technologies. These are the moments when strategic interventions have the most opportunity to effect real change, before the powerful inertia of naturalization sets in. This is the era we are still in with regard to the personal computer.

It is in the interests of many of the pundits of computer culture to presume that the ultimate meanings of computers are already settled. Pre-empting the moment when we will simply take those meanings for granted, they suggest the future is already inevitable. Their argument rests on the logic of technological determinism, which presumes that certain supposedly unstoppable technical capabilities of computers will necessarily determine their future use. Thus, MIT Media Lab chief Nicholas Negroponte, author of the surprise best-seller Being Digital, insists that the nature of computers as digital storage devices will determine how they are used in the future.[6] Likewise, one of his corporate sponsors, AT&T, produced a series of now-famous Blade Runner-esque visions of the future in the mid-1990s – a globetrotting mother tucking her child in via a long-distance TV phone transmission, an executive sending a fax from the beach – all with the insistent tag line, “You will.” Notice how even the futurists’ favorite verb, “will,” so quickly slips from the future tense into the command tense.

But the future hasn’t happened yet. Computers are not yet taken for granted. We still aren’t sure what we think of them. It’s all still up for grabs. And so the struggles over the way computers are used, regulated, and understood are crucially important.

The Utopian Sphere and the Dialectic of Technological Determinism

Why are the struggles over the meanings of computers so important? Well, of course, because computers hold an important place in American culture today – to say nothing of the American economy. But beyond this, I believe the debates over computers are particularly significant because they are where we talk about the future. Debates over the meanings of computers are notoriously speculative. This has the disadvantage of encouraging ahistorical, ungrounded hype, but it’s also part of what makes cyberculture so compelling and important. The debates over cyberculture take place in what I call the utopian sphere: the space in public discourse where, in a society which in so many ways has given up on imagining anything better than multinational capitalism, there’s still room to dream of different kinds of futures.

Let me clarify “utopian sphere.” My claim is not that cyberspace is a medium where one can transcend the bounds of race, gender, age, etc., and discover a kind of utopia on earth. (Although this is exactly what much internet hype still baldly proclaims.) As we shall see, cyberspace is much more grounded in the “real world,” in all its inequities and injustices, than this fantasy would admit. Rather, the debate over the uses and meanings of computer technology is one of the few forums within the contemporary public sphere where idealized visions of the future can be elaborated without instant dismissal.

Here’s an example: contemporary debates over unemployment. The range of acceptable discourse on this topic in the American public sphere – what Pierre Bourdieu calls “doxa”[7] – is depressingly narrow. Liberals argue for faster economic growth to create more jobs. Conservatives insist this isn’t worth the risk of higher inflation. The imaginative scope of possible remedies is astonishingly thin: a decrease in the Federal Reserve rate, perhaps a modest government jobs program. What’s beyond the scope of debate is the current structure of the economy as a whole. At a time when increases in productivity as the result of new technologies are making more and more workers unnecessary, should we scale back the 40-hour work week? Or perhaps, at a point when many people’s labor is simply unwanted and unneeded, should we reconsider the “work ethic” altogether, and try to find other ways in which to define social worth? These ideas can be broached on the fringes of academic discourse,[8] but not in the talk shows, newspapers, and magazines of mainstream America.

However, there is one public space in which these possibilities can be explored: the discourse over future uses of new technologies. While the left has no room in the American public sphere to critique the merits of the 40-hour work week, writers who call themselves “futurists” do have an open space to suggest that, some day, many of our needs will be cared for by machines, and 40 hours (or more) of labor a week will be unnecessary. Couched in this science-fictional language, there’s even room to suggest that capitalism itself might some day be rendered obsolete. In as ubiquitous a dream of the future as Star Trek, for example, the replicator, a machine which can produce a copy of any object, appears to have done away with the market. The economy instead seems to work under the principle “from each according to his ability, to each according to his needs.”[9]

This form of futurism may seem hopelessly technologically determinist, less about the future we want than the future machines will give us. Capitalism may disappear in Star Trek, but only because of a deus ex machina: the replicator. But the presumption of technological determinism actually functions as a cover, authorizing a safe space in which to articulate utopian values. The public religion of technology can momentarily suspend the “pragmatist” doxa which derides utopian projects as impossible and utopian thinking as a foolish waste of time. It opens up a space – a utopian sphere – where we can imagine what we might want the future to look like.

Thus, technological determinism is double-edged. On the one hand, it is in many ways politically disabling because it denies human agency and proclaims specific changes to be inevitable. At the same time, however, the rhetoric of technological determinism is often what opens up room for utopian speculation. This dynamic is what I call the dialectic of technological determinism. Again and again in the pages that follow, we will see this tension in the cultural history of computing, as promises of a brighter future through technology both open up and shut down utopian thinking.[10]

My notion of a “utopian sphere” borrows from two strands of theoretical discourse: post-Marxist theorists’ examination of the utopian dimensions of mass culture and Jürgen Habermas’s conception of the “public sphere.”[11]

The German critic Ernst Bloch pioneered the examination of the utopian elements of mass culture in The Principle of Hope. Arguing against a model of ideology as simply “false consciousness,” Bloch insists that all ideology must contain the seeds of utopian desires – hints of a vision for a better world. Without this vision, it could not win the consent of its subjects. In contemporary capitalist culture, however, this utopian impulse is suppressed and diverted into consumerism, nationalism, and other oppressive ideological formations. This is the dynamic described by Fredric Jameson, influenced by Bloch, in his essay “Reification and Utopia in Mass Culture.”[12] For Bloch, the goal of the critic is to bring this utopian longing to the surface, and explore its possibilities. As Douglas Kellner writes, “Critique of ideology, Bloch argues, is not merely unmasking (Entlarvung) or demystification, but is also uncovering and discovery: revelations of unrealized dreams, lost possibilities, abortive hopes – that can be resurrected and enlivened and realized in our current situation.”[13]

One might argue that today, the once-radical-seeming gesture of searching for glimpses of Utopia in late-capitalist mass culture has been repeated so often in the field of cultural studies that it is in danger of seeming little more than a meaningless cliché. If Utopia is everywhere, why does it even matter where we look? This is the redundancy Meaghan Morris identified back in 1990 in her essay “Banality in Cultural Studies.”[14] Part of the problem is that rarely do cultural studies critics distinguish the specific properties of the Utopias they identify, beyond a vague hope of transcendence or a gesture towards radically egalitarian politics. But all Utopias are not identical; different visions embody different values and different political priorities. The notion of a utopian sphere offers room to see Utopia not as a fuzzily defined goal, but as a site of debate and conflict, out of which emerge different distinct visions of the future.

Granted, the utopian sphere does not operate the way Habermas’s public sphere does.[15] Habermas’s model of a public sphere rests on the idea of a civil dialogue conducted in broad daylight, in a transparent language where all questions can be clearly engaged and evaluated. The utopian sphere I’m trying to identify, by contrast, is a shadow sphere. At times, its ideas may be explicitly expressed, as in the manifestoes of the open source movement described in Chapter Nine; more often, visions of the future are more vague and obscure, as in Star Trek’s fuzzy post-capitalist fantasy. The utopian sphere is at times a utopian unconscious, repressed and barely visible without the work of critical recovery.

My goal is to drag cyberculture’s utopian sphere into the light of the public sphere – to draw out the hopes and fears implicit in computer culture’s visions of the future, unravel the rhetoric of technological determinism, and evaluate underlying political ideals and assumptions. One purpose of this work, to be sure, is ideological demystification – to puncture hollow promises. But it is also a project of ideological recuperation – an attempt to draw ideas and inspiration from one of the few corners of late capitalist culture that hasn’t lost the capacity to dream of different futures, and in so doing to rejuvenate a public sphere shriveled by neoliberal cynicism.

Fredric Jameson notes in The Seeds of Time that under late capitalism Utopia is in some sense unrepresentable.[16] These days, he points out, it seems easier to imagine the end of the world than the end of capitalism. The utopian visions I discuss here are all inevitably partial, conflicted, and compromised. But they’re as close as American culture gets to a utopian discourse; working through their promise and limitations can help us to get beyond them. As Henry Jenkins and David Thorburn write in the introduction to their collection, Democracy and New Media,

The utopian rhetoric predicting an imminent digital revolution is simplistic and often oblivious to complex historical processes. But its tenacious, diverse history is instructive and significant. For one thing, such pervasive talk about revolutionary change implies some fundamental dissatisfaction with the established order. Even if we believe that the concept of a digital revolution is empty rhetoric, we still must explain why a revolution, even a virtual one, has such appeal. A surprising range of thinkers on the right and the left have used the notion of “the computer revolution” to imagine forms of political change. Examining the rhetoric of digital revolution, we may identify a discourse about politics and culture that appears not only in academic writing or in explicitly ideological exchanges, but also in popular journalism and science fiction. This rhetoric has clear political effects, helping to shape attitudes toward emerging technologies. And even if such discourse is not an accurate measure of the impact of new media, it may nonetheless nourish serious discussion about core values and central institutions, allowing us to envision the possibility of change. Utopian visions help us to imagine a just society and to map strategies for achieving it.[17]

Much contemporary left discourse, as Meaghan Morris elsewhere points out, suffers from “insatiable critique,” the knee-jerk impulse to deconstruct without offering any more constructive alternative.[18] The analysis of the utopian sphere suggests how the tools of cultural studies can do more than incessantly demystify (an increasingly superfluous project in an age of cynical reason)[19] or discover forms of “resistance” defined only by their negation of hegemony. They can help identify and cultivate the spaces out of which new movements may emerge.

What’s So Special About Cyberculture?

By emphasizing discourse about computers as a utopian sphere, I don’t mean to imply that other aspects of American culture don’t contain glimpses of Utopia. In fact, following Bloch and Jameson, I would argue that just about any mass cultural text must inherently have a utopian aspect, if it is to connect with the desires of its audience. This is why cultural studies’ repeated discovery of the utopian (or, in a different framework, the resistant) in specific popular texts can start to feel so redundant. It’s shooting fish in a barrel. There’s always a utopian aspect to popular culture. Mass culture is often categorized as “escapism.” This is usually meant as a dismissal, but it actually reveals the power of popular texts to transport audiences momentarily to a different, better world. That moment of escape is a utopian moment, even if the rest of the text circumscribes and undercuts the power of that glimpse of a better way of life.[20]

Cyberculture, then, is not different in kind, but in degree. It is a space with more room for the utopian: a discourse where visions of the future may be more explicitly elaborated, and where different visions of the future may come into conflict. I don’t mean to presume that cyberculture is necessarily the only discourse where this kind of utopian thinking takes place. I hope, in fact, that this work will encourage other critics to trace other utopian spheres. But cyberculture, with its emphasis on the future, is a good place to start, and a fertile ground for new political visions.

Cyberculture follows in a long tradition of technological utopianism in American culture. Debunkers of cyberhype often draw this connection. James W. Carey and John J. Quirk in their essay “The Mythos of the Electronic Revolution” wryly trace a history of hollow promises and bad faith back to the development of electricity:

. . . the rhetoric of the electronic revolution . . . attributes intrinsically benign and progressive properties to electricity and its applications. It also displays a faith that electricity will exorcise social disorder and environmental disruption, eliminate political conflict and personal alienation, and restore ecological balance and a communion of humans with nature. [21]

Carey and Quirk aren’t wrong to demystify the ideology of technological progress. Indeed, much of this book does similar work.[22] But they don’t tell the whole story. It’s not surprising that utopians’ promises often look foolish in retrospect – their dreams unfulfilled, their hopes exploited, their faith in technology misplaced. But against the inertia of everyday life, theirs are the voices which insist that history isn’t yet over, that these are not the best of times, that the world can still be changed. The rhetoric of technological progress has galvanized the progressive movements of modern times, as the following examples demonstrate.

Edward Bellamy’s hugely popular 1888 utopian novel Looking Backward helped set the agenda for Progressive Era reform. It was the second-best-selling book of the nineteenth century in the United States, after Uncle Tom’s Cabin.[23] Franklin Rosemont writes,

Well into the new century, up to the eve of the First World War, the great majority of those who brought something new and original to the cause of working-class emancipation in the United States [including Eugene Debs, Charlotte Perkins Gilman, Upton Sinclair, and many others] . . . were those whose first steps as radicals had been guided by what Elizabeth Cady Stanton called “Edward Bellamy’s beautiful vision of the equal conditions of the human family in the year 2000.”[24]

Likewise, the technological futurists of the 1920s and 1930s were not all simply technophilic apologists for capitalism. Andrew Ross’s study of early twentieth century futurists delineates three groups – technocrats, socialists, and progressives – each of whom offered a critique of business as usual:

At a time when science and technology were becoming the primary rationales for capitalist growth, technocrats, socialists, and progressives each assumed, in a publicly visible way, that they were the historical heirs to a tradition of technological futurism – a tradition not at all adequately described by today’s derogatory term “technophilia.” For technocrats, it was a tradition in which expertise, rationality, and knowledge challenged the arbitrary diktat of capital; for socialists, it was a tradition in which the technological forces of production undermined the existing social order even as they reinforced it; and for progressives, it was a tradition in which technology was the ally of democratization and the enemy of limited production for profit.[25]

Ross contrasts the “critical technocracy” of early science fiction with the dystopianism of the contemporary science fiction genre of cyberpunk. Cyberpunk largely follows the spirit of Carey and Quirk, relentlessly demystifying the “naïve” technophilia of its forebears. But Ross argues that this abandonment of utopia simply cedes positive visions of the future to corporate control. “Cyberpunk literature, film, and television express all too well the current tendency to unhitch the wagon from the star, to disconnect technological development from any notion of a progressive future. In doing so, they leave the future open to those for whom that connection was and still is a very profitable state.”[26] Ross wrote this warning in 1991, before the dawn of the internet age. But it helps explain the corporate takeover of the cyberpunk vision through Wired magazine and dot-com hype, as we’ll see in Chapter Eight.

Like much contemporary cybercultural studies, my own work has been greatly inspired by one particularly influential contemporary utopian vision: Donna Haraway’s “Cyborg Manifesto.” First published in Socialist Review in 1985, and subsequently anthologized in many collections and college coursepacks, Haraway’s essay sounded a call for a new kind of socialist feminist analysis of technology and culture. Rejecting the nostalgia and technophobia of much of the 1980s left, she called for a forward-thinking perspective which embraces the radical possibilities in the interface of human and machine. She concluded, famously, “I would rather be a cyborg than a goddess.”[27]

The Circuit of Culture

Electric Dreams is a genealogy: an attempt to better understand the debates of the present by identifying the traces they carry of the past. I trace the struggles over the meaning of computers by examining key episodes in the cultural history of computers, slicing through a series of carefully chosen texts to reveal cross-sections of cultural conflicts and tensions.

To address what I hope to be a diverse readership, I have tried not to presume specialized knowledge. So, I’ll spend some time explaining basic technical concepts in computing. Likewise, while my work concentrates on a series of key moments in the history of computing, I’ll try to fill in the spaces in between, to give a sense of how these moments fit into a broader narrative.

I began my work on computer culture in the 1990s by writing about computer games, cyberpunk literature, and other contemporary topics. As I began to survey the state of cyberculture criticism, however, I found a frustrating lack of historical perspective. The thrill of speculating about the future made it easy to ignore the past. I decided I wanted to complement my contemporary critique with a more historical understanding of how computers have come to mean what they mean today. Inspired by works on the cultural history of technology such as David Nye’s Electrifying America and Lynn Spigel’s Make Room for TV, I decided to study not only contemporary computer culture, but also the social and cultural roots of computing.

I started my research by surveying a wide range of primary sources. I visited the Smithsonian Museum of American History’s Division of Information Technology and Society, whose collections include owners’ manuals for early PCs, user group newsletters, and an original Altair. At the Museum of Broadcasting in New York and the John W. Hartman Center for Advertising History at Duke University, I viewed archives of dozens of television commercials and hundreds of magazine ads for computers. I read every article about computers in Time, Newsweek, Life, Consumer Reports, and other general interest publications published between 1950 and 1985, as well as coverage of computers in alternative press publications such as The Whole Earth Review and Rolling Stone. I also studied the personal computing publications which emerged in the 1970s and 1980s, such as Dr. Dobb’s Journal, Byte, Creative Computing, and MacWorld.

I’ve never been a conventional historian, however. My training is in literary and cultural studies. My goal has not been to craft a master narrative through a comprehensive assemblage of primary source materials, in the manner of such valuable historical works as Martin Campbell-Kelly and William Aspray’s Computer or Paul Ceruzzi’s A History of Modern Computing. Rather, my method is to focus more closely on specific texts, to understand them in greater detail. The close examination of these texts, in turn, may reveal cultural tensions and ideological undercurrents invisible from the heights of a grand narrative.

In Doing Cultural Studies: The Story of the Sony Walkman, Paul du Gay and his co-authors outline “the circuit of culture”: the five interlinked processes through which every cultural text or object passes.[28] These processes include:

  • Production: the economic and labor structures under which the object is created and manufactured.
  • Consumption: the social context in which consumers purchase the product and integrate it into their lives.
  • Regulation: the legal and political framework which shapes how the product is distributed and used.
  • Identity: the ways the product contributes to the formation of subjectivities.
  • Representation: the discourse through which ideas about and images of the object are expressed and debated.

All of these processes are continually cross-linked and intertwined in feedback loops. There can be no consumption without production. Consumer response, in turn, influences future production. Identity depends on the process of representation, which depends on both the production and consumption of signs. Regulation determines the institutional structures which constrain and define all the processes, while those processes may in turn reshape regulatory practices. Any work of cultural analysis must engage all these processes to understand the full context of its object of study.

Nonetheless, each work of cultural study must choose where to start – where on the circuit of culture to anchor its examination. Different areas of focus lead to different methodologies. Most histories of computers, including Computer and A History of Modern Computing, have concentrated on the production process, tracing the economic and technological development of the computer industry. Other works of cybercultural studies have focused on the consumption process, through ethnographic research (Nancy Baym’s Tune In, Log On) or psychological study (Sherry Turkle’s Life on the Screen). Works of legal analysis have examined the system of intellectual property which underlies the regulatory process (Lawrence Lessig’s Code and Other Laws of Cyberspace; Siva Vaidhyanathan’s Copyrights and Copywrongs), while works by feminists (Sadie Plant’s Zeroes and Ones) and critical race theorists (Lisa Nakamura’s Cybertypes) have studied the role of computers in the formation of identity.

All of these perspectives are valuable, and inform my work greatly. The approach of this book, however, begins in an examination of the fifth process: representation. My method is rooted in the close analysis of a range of texts about computers, including books, films, magazine articles, television shows, advertisements, and software itself. The value of this approach is that it can show us details that other perspectives can miss. Individual texts bear the traces of their cultural contexts, in both their surface meanings and their suppressed subtexts. Many of the texts this book will examine will reveal themselves, under close examination, to be rich cross-sections of conflicting visions of computing, repositories of American society’s hopes and fears about new technologies.

The perspectives of all the other links in the circuit of culture will inform my study of representations of computers. In turn, this angle may provide new insight into all the processes along the circuit of culture. My discussion of the sexual politics of Desk Set and 2001 in Chapter Three, for example, can help us understand how representations of computers have influenced the construction of gender identity. Likewise, the discussion of the rhetoric of Moore’s Law in Chapter Four can elucidate the assumptions and dynamics underlying the production of the personal computer. The analysis of the semiotics of computer games in Chapter Six opens up new ways to think about the experience of gaming for consumers of computer software. And the discussion of the discourse of Napster and Linux in Chapters Nine and Ten can help us understand the possible implications of these software systems for the regulation of intellectual property.

Having decided to anchor my study in the analysis of a series of key texts, I have picked my texts carefully, selecting works which stand at crossroads in the cultural history of personal computers, at the emergence of new visions of computing and of the future. In particular, I have chosen texts which seem to have been critically influential in the subsequent cultural history of computers. Let me clarify influential. While all of these texts were significant interventions in computer discourse, I don’t mean to imply that subsequent developments in the cultural history of computers were caused by these texts. This work isn’t meant to be an anti-materialist intellectual history. Rather, I mean that these are texts which seemed to speak to large audiences, and whose traces can be seen in subsequent texts. Why these texts turned out to be “influential” – why the visions of computing in these texts continued to hold currency – is a more complicated historical question. While I have been anxious to critique the pitfalls of technological determinism, I can’t offer as reassuringly coherent an alternative. Still, while insisting on the contingency of history, I have tried to point to some of the broader intersecting factors influencing the cultural history of computers by situating my discussion of these texts in the broader context of the entire circuit of culture.

I have also looked for texts of semiotic richness. Having concluded that almost all the texts I encountered in my research were characterized by tension and contradiction, I looked for paradigmatic texts, texts which best seemed to crystallize surrounding tensions and contradictions. One might think that my interest in semiotic complexity would be at odds with my desire to study texts which reach large audiences – the former pointing to “sophisticated” literary texts, the latter to “simple” popular ones. But, to the contrary, I have found the popular texts I’ve studied to be deeply complex artifacts. In fact, popular texts by their nature are invariably sites of internal tension and conflict. For one thing, popular media such as film, television, and computer games are collaborative media, demanding the input of multiple contributors with differing viewpoints. In addition, to appeal to a broad and heterogeneous audience, popular texts must be able to encompass diverging perspectives, while speaking to deeply held hopes and fears.

My definition of “text” stretches very widely in the chapters that follow, including novels, essays, films, television shows, advertisements, magazine articles, and computer programs. I have purposely chosen as broad a range of texts as I could, to attempt to capture the range of computer culture. Whatever kind of text I study, my goal remains much the same: to unravel the conflicted visions of the future in computer culture.


Electric Dreams is composed of three sections. Part I, Mainframe Culture, examines ideas about computers from Charles Babbage’s “invention” of the difference engine in the mid-nineteenth century through the 1960s. In this era, computers were massive, institutionally owned machines run by cadres of trained specialists. Part II, The Personal Computer, looks at the emergence in the 1970s and 1980s of the personal computer as a broadly available consumer product. Part III, The Interpersonal Computer, turns to the rise of the internet.

Part I, Mainframe Culture, consists of three chapters. Chapter One, “Charles Babbage and the Politics of Computer Memory,” investigates the historical memory of cyberculture, tracing the contested legacy of Charles Babbage, the so-called “father of computing.” Babbage’s nineteenth-century inventions, the difference engine and the analytical engine, presaged many of the developments of modern computing. But his machines were considered failures in their day, and nobody followed in his footsteps. Many twentieth-century pioneers of computing were not even aware of Babbage’s work. Babbage poses a challenge to cyberculture’s reigning historical model of technological determinism, which presumes that if a machine can be built, it inevitably will be built, with inevitable social and political consequences. To engage contemporary debates over the memory of Babbage, the chapter examines several texts, including William Gibson and Bruce Sterling’s The Difference Engine, a science fiction novel which explores the contingency of history by positing an alternate version of events in which Babbage was successful, ushering in a steam-driven Information Age in the midst of Victorian England.

Chapter Two, “Ideologies of Information Processing: From Analog to Digital,” charts the transition from the “analog” machines of the 1930s, which used continuous measurement devices (akin to the slide rule) to solve complex equations, to the digital machines of the 1940s now widely thought of as the first true computers. These machines processed information as discrete units, represented in binary form as 0s and 1s. I argue that underlying this transition rests a set of ideological assumptions in computing culture that value quantifiable, reified “precision” over the fuzziness of the natural world. The chapter then traces the legacy of this transition in contemporary cyberculture, contrasting the technophilic fetishization of digital media such as the compact disc with the nostalgic appeal of vinyl records, hip-hop turntable scratching, and other signifiers of the analog.

Chapter Three, “Filming the Electronic Brain,” turns to popular film to chart the utopian hopes and dystopian fears inspired by the emerging computer industry in the 1950s and 1960s. It begins by looking at Desk Set, a 1957 Hepburn/Tracy comedy in which the introduction of a computer (then called an “electronic brain”) into the research library of a television network provokes widespread job anxiety. While the film’s narrative optimistically concludes that the computer will only be a boon to the librarians, this putative happy ending fails to successfully contain the undercurrent of anxiety provoked by the specter of deskilling and technological unemployment. By contrast, the 1968 film 2001 reverses text and subtext. While the narrative places the computer HAL in the role of the villain, the film’s prediction of a future of sentient machines inspired a generation of computer science researchers who were undeterred by the narrative’s technophobic warning. The chapter also examines the gender anxieties raised by this new, disembodied form of intelligence. Desk Set positions its computer as a romantic threat, while 2001 codes HAL as a threat to heteronormativity.

Part II, The Personal Computer, turns to the rise of the PC as a commodity available for individual purchase. Chapter Four, “The Many Creators of the Personal Computer,” examines the range of groups whose competing visions influenced the development of what we now know as the PC. It looks at the early failed attempts by hobbyists and computer manufacturers to introduce computers into the home, the rise of the “time-sharing” model among mainframe programmers, the development of a technotopian vision of the democratization of information by the People’s Computer Company and other California-based organizations, the production of the microprocessor by semiconductor companies looking to expand their markets, the coining of “Moore’s Law” to naturalize the industrial practice of planned obsolescence, and finally the emergence of the first PCs out of the subculture of electronics hobbyists.

Chapter Five, “Apple’s 1984,” looks at how the PC moved from being an esoteric hobbyist’s device to a fetishized, mass-produced commodity. It examines the approaches used to advertise computers by the early PC manufacturers, centering on a discussion of “1984,” the blockbuster ad which introduced the Apple Macintosh. The enormously influential spot, directed by filmmaker Ridley Scott, ran nationally only once, during the 1984 Super Bowl, but remains one of the best-remembered and most influential television commercials ever produced. The commercial succeeded in popularizing the California technotopians’ vision of the personal computer as a tool for the democratization of information. In the process, however, it denuded that vision of its broader political critique, replacing a community-based ideal of shared information processing with an individualist fantasy of empowerment through consumption. This new ideological formulation paved the way for the libertarian technohype that dominated cybercultural discourse through the 1990s.

Chapter Six, “The Rise of Simulation Games,” looks at computer programs themselves as texts. It concentrates on the “simulation” genre of computer games which emerged in the late 1980s, including SimCity and Civilization. Drawing on film theory, reader-response theory, and the work of Donna Haraway and Fredric Jameson, it examines how computer games blur the boundary between reader and text. It argues that much of the pleasure of computer games comes from the feedback loop generated by the constant interaction between user and machine, which creates a distinct form of “cyborg consciousness.” It concludes that computer games offer a distinct opportunity to develop new aesthetic practices which may be more capable than older cultural forms of representing the complexity of late capitalist society.

Finally, Part III, The Interpersonal Computer, turns to the rise of the internet. Chapter Seven, “Imagining Cyberspace,” looks at the diffusion and changing meanings of “cyberspace,” a term coined by science fiction writer William Gibson. In Gibson’s 1984 novel Neuromancer, the term has dual, linked meanings, encompassing what we would now call “the internet” and “virtual reality.” In the late 1980s, as “VR” became a hot fad, “cyberspace” became a useful way to describe the simulated environment produced through the combination of 3D goggles, motion-sensitive gloves, and other associated technologies. When the internet emerged as a mass medium in the mid-1990s, however, the use of the term shifted to this new, less tangible terrain. The chapter suggests that VR was perhaps a transitional point in the development of our notion of the internet, offering a more accessibly embodied model of an information landscape. It warns, however, that the transition from VR to the internet involves a possible loss: whereas VR at least acknowledged a link between mind and body, the internet offers the fantasy of completely disembodied knowledge – and in the process, often occludes the still very real bodies of programmers, communication workers, and users who make the internet possible.

Chapter Eight, “Dot-com Politics,” examines the ideology of the Dot-com boom by looking at Wired, Silicon Valley’s magazine of record in the 1990s. Wired, it argues, came to prominence by forging a common ground between “hackers” interested in exploring the democratic potential of new information technologies, and communication executives hoping to exploit these new media for maximum profit. That common ground was techno-libertarian utopianism – a smug faith that unchecked laissez-faire capitalism could both maximize personal liberty and solve social ills through economic and technological development. (The irony of this rhetoric was particularly rich, given the enormous role of government funding in the rise of the computer industry and the internet.) Both groups got something out of this exchange: hackers won access to finance capital, while industrialists gained access to the subcultural capital which made cyberculture hip and marketable. The evasions of this ideological fantasy, however, eventually came home to roost, as the Dot-com crash widely exposed its inadequacy and hypocrisy.

Chapter Nine, “Napster and Beyond,” turns from the computer industry to the music industry, to examine some of the broader consequences of the digitization of culture. It examines the rise and fall of the Napster online music service, which allowed millions of users to share digitally recorded songs over the internet. The crisis provoked by Napster, it argues, opened up the music industry as a utopian sphere, a space in which to experiment with new models of the relationship between culture and commerce. The chapter looks at four paradigms: first, the current, waning CD-oriented model, which conceives of music as a tangible physical commodity; second, the pay-per-download model followed by online services such as iTunes and the new Napster, which conceive of music as an intangible bundle of rights; third, the subscription model of the competing online service Rhapsody, which conceives of music as a utility; and fourth, the file sharing model of the original Napster and its descendants, which conceives of music as folk culture.

While the dot-com boom offered a technotopian vision based on a narrowly self-interested understanding of economic and technological development, Chapter Ten, “Linux and Utopia,” looks at the alternate cybertopian vision offered by proponents of “open source” software such as the Linux operating system. Such software, often called “free software,” is licensed in a way which allows any user to freely copy, distribute, and/or modify it. Linux is developed collaboratively, among a large group of volunteer programmers around the world, communicating via the internet. It has emerged as a viable alternative to Microsoft Windows and Apple OS X, particularly in developing countries. Linux takes the hacker ideal of the free flow of information and pushes it to its logical conclusion, offering a critique of the regime of copyright. The chapter looks at the debates within the Linux community between idealists who see Linux as a model for a new vision of intellectual property and economic relations, and accommodationists interested in incorporating Linux into the conventional framework of capitalism.

Finally, the conclusion, “Cybertopia Today,” looks at several current cybertopian visions, including alternate fuel systems, blogging, and smart mobs.


[1]. Petroski, The Pencil.

[2]. See Lenhart et al., “The Ever-Shifting Internet Population”; Mossberger et al., Virtual Inequality.

[3]. See Latour, Science in Action; Pinch and Bijker, “The Social Construction of Fact and Artifacts”; Winner, “Upon Opening the Black Box and Finding It Empty.”

[4]. See Williams, Television; Pool, Technologies of Freedom.

[5]. See, for example, McChesney, The Problem of the Media.

[6]. Negroponte, Being Digital.

[7]. See Bourdieu, Outline of a Theory of Practice.

[8]. See Aronowitz and DiFazio, The Jobless Future.

[9]. The quote, of course, is from Marx. For more on the economics of Star Trek, see Friedman, “Capitalism: The Final Frontier.”

[10]. This dynamic is a particularly heightened version of the process described by Jameson in “Reification and Utopia in Mass Culture.”

[11]. See Habermas, The Structural Transformation of the Public Sphere.

[12]. See also Jameson, Marxism and Form.

[13]. Kellner, “Ernst Bloch, Utopia and Ideology Critique.” See also Kellner and O’Hara, “Utopia and Marxism in Ernst Bloch.”

[14]. Morris, “Banality in Cultural Studies.”

[15]. Whether contemporary American discourse contains anything like Habermas’s vision of a public sphere, though, is an open question. See Robbins, ed., The Phantom Public Sphere; Schudson, The Power of News.

[16]. Jameson, The Seeds of Time, xii.

[17]. Jenkins and Thorburn, “Introduction.”

[18]. Morris, “Cultural Studies and Public Participation.”

[19]. See Sloterdijk, Critique of Cynical Reason; Zizek, The Sublime Object of Ideology.

[20]. Richard Dyer eloquently makes this point in his examination of the utopian in Hollywood musicals, “Entertainment and Utopia.”

[21]. Carey with Quirk, “The Mythos of the Electronic Revolution,” 116.

[22]. For further demystifications of cyberhype, see Pfaffenberger, “The Social Meaning of the Personal Computer”; Winner, The Whale and the Reactor.

[23]. Jacoby, Review of Looking Backward.

[24]. Rosemont, “Edward Bellamy,” 83.

[25]. Ross, Strange Weather.

[26]. Ross, Strange Weather, 135.

[27]. Haraway, “Manifesto for Cyborgs,” 181. Chapter Five will more fully address Haraway’s cyborg theory.

[28]. du Gay et al., 3. du Gay cites a similar approach developed by Richard Johnson in “The Story So Far: And Further Transformations?”
