
Chapter Two: Ideologies of Information Processing: From Analog to Digital

 

So far, I haven’t discussed how a computer processes information in much detail. In this chapter, I want to turn to this more technical subject, to look at the ideological conflicts underlying a critical transformation in information processing: the shift from analog to digital which began in the 1940s and 1950s, and which continues to this day.

Instead of the proliferation of Babbage’s digital computing devices, the nineteenth century saw the slow development of more limited calculating machines. The cash register was invented in 1879; by the turn of the century, it was used by most store owners in the United States.[1] In 1890, the U.S. Census Bureau hired Herman Hollerith to design a machine capable of collating demographic information about millions of people. Hollerith designed a system in which each person’s demographic information was stored on an individual punch card. His successful company would ultimately change its name to International Business Machines, or IBM for short.

By the 1920s and 1930s, scientists were designing more sophisticated machines to handle more complex mathematical questions. The most famous of these was the differential analyzer, built by Vannevar Bush and his students at MIT in the 1930s. While many machines of this era were developed to calculate only specific equations, the differential analyzer was designed to be a more flexible machine. As Campbell-Kelly and Aspray write, it could address “not just a specific engineering problem but a whole class of engineering problems that could be specified in terms of ordinary differential equations.”[2]

Machines such as the differential analyzer were in many ways what we would now call “computers.” But most histories of computing pass quickly over these devices, treating them as a wrong turn in the development of computing technology.[3] That’s because, unlike Babbage’s devices, these were analog machines. Not until the era of World War II were the first digital computing devices successfully built. The transition from analog to digital after World War II (or, to put it another way, the return to Babbage’s digital conception of computing) was a critical point in the history of computing, and the distinction between analog and digital continues to be a crucial – and ideologically loaded – concept in contemporary computing culture.

 

Analog and Digital

 

Let’s start by clarifying the two terms. A useful explanation of the distinction comes from Stan Augarten’s Bit by Bit:

 

[D]igital and analog . . . describe different methods of counting or measuring various phenomena, and the distinction between them is best illustrated by two gadgets that are found in almost every car: a speedometer and an odometer. As a recorder of miles traveled, an odometer is a digital device, which means that it counts discrete entities; as a measurer of miles per hour, a speedometer is an analog device, because it keeps track of velocity. When we count things, regardless of what those things may be, we are performing a digital operation – in other words, using numbers that bear a one-to-one correspondence to whatever it is we’re enumerating. Any device that counts discrete items is a digital one. By contrast, when we measure things, whether to find their weight, speed, height, or temperature, we are making an analogy between two quantities. Any gadget that does this is an analog one. Scales, rules, speedometers, thermometers, slide rules, and conventional timepieces (the kind with hands) are all analog instruments, whereas odometers, . . . mechanical calculators, and the overwhelming majority of electronic computers are digital devices.[4]

 

Digital computers process information through mathematical calculations, following set, clearly defined rules (called “algorithms”), just as humans do when calculating with pencil and paper. The way in which analog computers process information is more difficult to explain. Here’s Herman H. Goldstine’s The Computer from Pascal to von Neumann:

 

[A]nalog machines depend upon the representation of numbers as physical quantities such as length of rods, direct current voltages, etc. . . . The designer of an analog device decides which operations he wishes to perform and then seeks a physical apparatus whose laws of operation are analogous to those he wishes to carry out. He next builds the apparatus and solves his problem by measuring the physical, and hence continuous, qualities involved in the apparatus. A good example of an analog device is the slide rule. As is well-known, the slide rule consists of two sticks graduated according to the logarithms of the numbers, and permitted to slide relative to each other. Numbers are represented as lengths of the sticks and the physical operation that can be performed is the addition of two lengths. But it is well-known that the logarithm of a product of two numbers is the sum of the logarithms of the numbers. Thus the slide rule by forming the sum of two lengths can be used to perform multiplications and certain related operations.[5]
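To make the principle Goldstine describes concrete: a slide rule multiplies because the logarithm of a product equals the sum of the logarithms of its factors. Here is a minimal Python sketch of that idea (my own illustration, not part of Goldstine’s account):

import math

def slide_rule_multiply(a, b):
    # Add the two "stick lengths" (logarithms), as sliding the graduated scales does mechanically.
    combined_length = math.log10(a) + math.log10(b)
    # Reading the answer off the scale amounts to undoing the logarithm.
    return 10 ** combined_length

print(slide_rule_multiply(3, 7))   # approximately 21.0, within floating-point rounding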

 

Larry Owens explains how the differential analyzer operated:

 

The . . . machine consisted of a long table-like framework crisscrossed by interconnectible shafts . . . Along one side were arrayed a series of drawing boards and along the other six disc integrators. Pens on some of the boards were driven by shafts so as to trace out curves on properly positioned graph paper. Other boards were designed to permit an operator, who could cause a pen to follow a curve positioned on a board, to give a particular shaft any desired rotation. In essence, the analyzer was a device cleverly contrived to convert the rotations of shafts one into another in a variety of ways. By associating the change of variables in an equation with a rotation of shafts, and by employing an assortment of gearings, the operator could cause the calculator to add, subtract, multiply, divide, and integrate.[6]

 

Like the slide rule, early analog computing machines could each process only a single type of equation; for each new problem, a new physical model had to be built. But more sophisticated machines such as the differential analyzer could solve a wide range of problems – any that could be expressed in terms of differential equations.[7] As Alan Bromley writes,

 

In that the Differential Analyzer can be set up to solve any arbitrary differential equation and this is the basic means of describing dynamic behavior in all fields of engineering and the physical sciences, it is applicable to a vast range of problems. In the 1930s, problems as diverse as atomic structure, transients in electrical networks, timetables of railway trains, and the ballistics of shells, were successfully solved. The Differential Analyzer was, without a doubt, the first general-purpose computing machine for engineering and scientific use.[8]

 

Digital and Binary

 

Before going further, one other clarification is in order here: between digital and binary. Any discrete numerical system is digital. When I calculate a math problem on paper, I’m performing a digital operation in base ten, the “decimal” system. The “binary” system is another digital system: base two. It uses only two numerals, 0 and 1, rather than decimal’s ten numerals. Just as base ten represents quantities by sequencing 0 through 9 in columns representing, from right to left, 10⁰ (ones), 10¹ (tens), 10² (hundreds), and so on, base two represents quantities by sequencing 0 and 1 in columns representing 2⁰, 2¹, 2², and so on. Thus, the number 2 in base ten is written as 10 in base two; 5 in base ten is equivalent to 101 in base two; and so on. Base two can represent any quantity as a series of zeroes and ones.
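For readers who want to see the arithmetic spelled out, here is a minimal Python sketch (an illustration added here, not part of the historical material) that builds a base-two representation by repeatedly dividing by two and collecting the remainders:

def to_binary(n):
    # Represent a non-negative base-ten integer as a string of 0s and 1s.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))   # each remainder is the next binary digit, rightmost first
        n //= 2
    return "".join(reversed(digits))

print(to_binary(2))   # "10"  -- 2 in base ten is 10 in base two
print(to_binary(5))   # "101" -- 5 in base ten is 101 in base two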

         Babbage’s original design used base ten, as did some of the early digital computers of the 1940s. But computer engineers soon concluded that storing numbers in base two was far more practical: electronic components that need only distinguish two states are simpler and more reliable than components that must distinguish ten. Today, “digital technology” almost always means information translated into binary code and stored as a series of “on” and “off” charges, representing zeroes and ones, in electronic media such as magnetic disks and silicon chips. It’s worth remembering, though, that digital does not inherently mean binary. There are possible digital systems with more than two states. Morse Code, for example, is a trinary system, made up of three states: no pulse, short pulse (dot), and long pulse (dash).[9] The binary system in a sense is the ultimate, Manichean extension of the logic of the digital. Just as digitization slices the holistic analog world into discrete, precise units, binarization represents those units through combinations of only two categories: 0 and 1.

 

From Analog to Digital

 

The demands of World War II inspired the development of new computing technologies. While most of this research took place in universities, it was bankrolled by the American and British militaries, motivated by the need for fast, accurate calculation of ballistics information for artillery and antiaircraft systems.[10] While Vannevar Bush perfected his analog differential analyzer at MIT, J. Presper Eckert and John Mauchly developed the Electronic Numerical Integrator and Calculator, or ENIAC, at the University of Pennsylvania’s Moore School. By most historians’ accounts, ENIAC was the first true digital computer.[11]

         ENIAC was not completed in time to be used in the war. But the technical challenges of the conflict convinced the military to continue to pursue computing research. In fact, as Paul Edwards writes, “from the early 1940s until the early 1960s, the armed forces of the United States were the single most important driver of digital computer development.”[12]

At the beginning of this period, however, it was by no means certain that this development would center on digital rather than analog computers. Analog computers seemed to have many advantages over the new digital computers.[13] The war had been fought with analog devices such as automatic pilots, remotely controlled cannon, and radar systems. As a result, many more engineers were versed in analog than in digital techniques. And, as Edwards points out, “analog computers integrated very naturally with control functions because their inputs and outputs were often exactly the sort of signals needed to control other machines (e.g., electric voltages or the rotation of gears).”[14] Digital computers, on the other hand, required new digital-to-analog conversion techniques in order to control other machines. In addition, they were larger, more expensive, and less reliable than analog computers.

         Digital computers, however, held the promise of greater precision, speed, and objectivity. Analog machines can be built to operate with great precision, but they are vulnerable to physical wear on their parts. And like any ruler, they offer only an approximation of measurement, limited by the fineness to which they are graduated. A properly functioning digital machine, on the other hand, will always give the same answers with numerical exactitude. And while an analog machine is limited to processing equations for which a physical model can be built, a digital machine can process any algorithm. (Of course, as we’ll discuss further, any algorithm is only as good as the information it’s been fed.) By the end of the 1940s, digital computing research projects were winning funding, while analog systems were languishing.[15] In 1950, MIT shut down Bush’s differential analyzer.

         Most histories of computing treat the transition from analog to digital as the inevitable result of the superiority of digital technology. But Aristotle Tympas, in his study of the history of the electrical analyzer, argues that the transition from analog to digital was a matter of ideology, not necessity.[16] Tympas describes the battle between analog and digital laboratories to secure military funding as a matter of bureaucratic infighting rather than technological merit. He quotes one of the digital researchers, George E. Valley, describing one critical confrontation:

 

The director of the competing [analog] laboratory spoke first. . . . He had not learned, as I had learned . . .that in such situation you stuck a cigar in your face, blew smoke at the intimidating crowd, and overawed the bastards. . . . From that time on the [digital] Lincoln system was truly accepted, but if anyone thinks that SAGE was accepted because of its excellence alone, that person is a potential customer of the Brooklyn Bridge. It was accepted because I shouted an impolite order at the leader of the competition, and he obeyed me. We were at the court of King Arthur, and I have prevailed.[17]

 

         The transition from analog to digital computing had its trade-offs. Brute calculating force replaced hands-on experience. As one life-long worker in electric power transmission put it:

 

Digital computers and software development provide a tremendous increase in the calculating capability available to those performing network studies. In my opinion, however, this leap forward was not entirely beneficial. It resulted in the substitution of speed of calculation for the brains and analytical skill of the calculator; a decrease in the generation of innovative ideas and methods for the development of the network; and an emphasis on the process of the calculation rather than the functioning of the network. I believe that in some ways, digital computers have been harmful to the development of creative ideas for power system development.[18]

 

Likewise, Warren Weaver, director of the Natural Sciences Division of the Rockefeller Foundation, rued the passing of the differential analyzer:

 

[I]t seems rather a pity not to have around such a place as MIT a really impressive Analogue computer; for there is vividness and directness of meaning of the electrical and mechanical processes involved . . . which can hardly fail, I would think, to have a very considerable educational value. A Digital Electronic computer is bound to be a somewhat abstract affair, in which the actual computational processes are fairly deeply submerged.[19]

 

Historian of technology Larry Owens, in “Vannevar Bush and the Differential Analyzer: The Text and Context of an Early Computer,” examines Bush’s machine as a kind of text. He concludes that the differential analyzer reflects the values of early twentieth century engineering culture. Engineers’ training of that era emphasized the crafts of machine-building and penmanship, and valued the tangible engagement with mathematical concepts offered through the drawing board and the slide rule. In that spirit, the differential analyzer drew out its solutions with pen on paper, in elegant curves. “Forged in the machine shop, the analyzers spoke the Graphic Language while they drew profiles through the landscape of mathematics.”[20] The digital computer, by contrast, embodied an alternate set of values, reflecting the scientific and military culture of the post-World War II era: abstract rather than concrete, intangible rather than tactile.

 

Analog vs. Digital Today

 

Fifty years after the transition from analog to digital computing, digital technologies continue to hold out the promise of perfection. What began in the 1940s with mainframe computers continues to this day, as music, books, and other media are relentlessly “digitized.” Media recorded in digital format promise sharper clarity and less distortion (although this depends on the precision of the recording mechanism). And because digital files are stored as numerical quantities, they aren’t subject to degradation in quality as copies are made. A fifth-generation digital sound file, properly copied and error-corrected, is identical to the first, while a fifth-generation cassette recording invariably will have more tape hiss than musical information. Digital data can also be easily transferred through a multitude of channels – telephone lines, fiber-optic cable, satellite transmissions – again without loss of quality. The result is an enormous economy of scale: almost all of the costs are in the general infrastructure, rather than in the individual product. All these advantages lead cyberpundits such as Nicholas Negroponte to proclaim that an unstoppable, transforming era of digitization is upon us.

         But a backlash has emerged against the digital bandwagon. Defenders of analog argue that digital’s promise of precision is often just an illusion. A digital clock may be able to give you a readout to the tenth of a second, but that doesn’t mean it’s actually more accurate than an old-fashioned clock with hands. The ease with which digital information can be copied can lead to the illusion that the cost is “free,” but to download a “free” MP3 audio file from the internet first requires quite a bit of infrastructure – a computer, monitor, speakers, modem, network connection, internet account, and appropriate software. One way digital delivery achieves its economy of scale is by transferring much of the infrastructure cost to the consumer – instead of just a CD player, you need all of the above to enjoy the benefits of online audio.

         The most vocal proponents of analog are music fans who trumpet the virtues of vinyl recordings over CDs and other digital formats.[21] CDs overtook vinyl as the record industry’s best-selling medium in the 1980s, on a wave of publicity championing the superiority of digitally recorded music. (It didn’t hurt that the major label distributors stopped allowing retailers to return unsold copies of vinyl records, making it impossible for most stores to continue to stock vinyl versions of albums.)[22] CDs work by “sampling” a sound wave – taking a series of snapshots of sonic information, at a rate of 44,100 samples per second. When this choppy accumulation of sound snippets is played back, it appears continuous, much in the way a film, composed of still frames, creates the illusion of fluid motion when projected at twenty-four frames per second. Vinyl, on the other hand, is an analog medium. It records sound waves continuously, replicating the waves themselves in the grooves of the disc. Vinyl enthusiasts argue that the piecemeal nature of digital recording creates an arid, crisp sound, as the subtleties between samples drop out. Vinyl, by contrast, preserves a warmer, more whole sound. Even the imperfections of vinyl – the hiss and pop of dust and scratches on the surface of the disc – can be seen as virtues, reminders of the materiality of the recording and playback process. Much contemporary dance music takes advantage of this evocation of materiality, sampling hiss-and-pop-filled vinyl drum loops to give beats a “dirtier” sound. (My all-digital synthesizer, the Roland MC-505 “Groovebox,” even has a “vinylizer” feature, which adds simulated vinyl sounds to a track.)[23]
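To give a sense of what “sampling” means in practice, here is a small Python sketch (a simplified illustration of the general idea, not a description of any actual CD player) that takes 44,100 snapshots per second of a continuous sine wave, reducing a smooth analog curve to a list of discrete numbers:

import math

SAMPLE_RATE = 44100    # CD-audio samples per second
FREQUENCY = 440.0      # a 440 Hz sine tone, standing in for a continuous sound wave

def sample_tone(duration_seconds):
    # Take discrete "snapshots" of the continuous wave, one every 1/44,100th of a second.
    n_samples = int(SAMPLE_RATE * duration_seconds)
    return [math.sin(2 * math.pi * FREQUENCY * i / SAMPLE_RATE) for i in range(n_samples)]

snapshots = sample_tone(0.01)   # ten milliseconds of "sound"
print(len(snapshots))           # 441 discrete numbers stand in for the continuous wave

Whatever happens to the wave between one snapshot and the next is simply absent from the recording – precisely the loss that vinyl enthusiasts object to.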

         Eric Rothenbuhler and John Durham Peters make the provocative argument that analog recording is inherently more authentic than digital, because “the phonograph record and the analog magnetic tape do contain physical traces of the music. At a crude level this is visible with the naked eye in the grooves of the record . . . . The hills and valleys of those grooves are physical analogs of the vibrations of music.”[24] By contrast, a CD contains only numbers. The CD player translates those numbers by following a decoding scheme established by the corporations which developed CD technology, Sony and Philips. “These numbers are related to waveforms by a convention arrived at in intercorporate negotiations and established as an industry standard; but they could be anything.”[25] Record albums, Rothenbuhler and Peters conclude, “bear the trace of a body and have an erotics impossible to CDs. . . . To favor phonography is to favor a particular kind of hermeneutic, one attentive to conditions of embodiment.”[26]

         The real world is analog, the vinyl enthusiasts insist. Digital, by offering the fantasy of precision, reifies the real world. This complaint can be extended to a more global critique of computer culture: the binary logic of computing attempts to fit everything into boxes of zeros and ones, True and False. Many ethnographers of computer programmers have remarked on how the binary mindset of computer programming seems to encourage a blinkered view of the world. As Tracy Kidder writes in The Soul of a New Machine, “Engineers have . . . a professional code. Among its tenets is the general idea that the engineer’s right environment is a highly structured one, in which only right and wrong answers exist. It’s a binary world; the computer might be its paradigm. And many engineers seem to aspire to be binary people within it.”[27]

Computer usability expert Donald Norman argues that the root of users’ frustrations with modern technology lies in the conflict between digital and analog modes of information-processing. Computers are digital, but people are analog. In his rejoinder to Negroponte, “Being Analog,” Norman writes,

 

We are analog beings trapped in a digital world, and the worst part is, we did it to ourselves. . . . We are analog devices following biological modes of operation. We are compliant, flexible, tolerant. Yet we people have constructed a world of machines that requires us to be rigid, fixed, intolerant. We have devised a technology that requires considerable care and attention, that demands it be treated on its own terms, not ours. We live in a technology-centered world where the technology is not appropriate for people. No wonder we have such difficulties.

 

From Bivalence to Multivalence

 

The rift between analog and digital, then, echoes a series of familiar oppositions:

 

analog                digital

slide rule             calculator

pencil                  keyboard

paper                  screen

material              ideal

modern               postmodern

natural                technological

real                     virtual

index                   symbol[28]

female                 male

soft                     hard

hot                      cold

holistic                atomizing

nostalgic             futuristic

neo-Luddite        technotopian

body                   mind

goddess              cyborg

 

But to neatly divide the world into categories of analog and digital is already to concede to digital’s binary logic. And as with most binary oppositions, this one is susceptible to deconstruction. As Jonathan Sterne points out, “[t]oday’s sound media – whether analog or digital – embody and extend a panoply of social forms. It does not matter whether the machine in question uses magnetic particles, electromagnetic waves, or bits to move its information: sound technologies are social artifacts all the way down.”[29]

Some of the most interesting recent developments in computer science suggest ways out of binary thinking. The field of “fuzzy logic” is one attempt to develop an alternative. Bart Kosko, author of Fuzzy Thinking, writes,

 

. . . in much of our science, math, logic and culture we have assumed a world of blacks and whites that does not change. Every statement is true or false. Every law, statute, and club rule applies to you or not. The digital computer, with its high-speed binary strings of 1s and 0s, stands as the emblem of the black and white and its triumph over the scientific mind. . . . This faith in black and the white, this bivalence, reaches back in the West to at least the ancient Greeks. . . . . Aristotle’s binary logic came down to one law: A OR not-A. Either this or not this. The sky is blue or not blue. It can’t be both blue and not blue. It can’t be A AND not-A.[30]

 

In contrast to bivalent Aristotelian logic, fuzzy logic is “multivalent.” In fuzzy logic, an object may be both A AND not-A. The root of “fuzzy logic” is in the concept of “fuzzy sets,” developed by Lotfi Zadeh.[31] Fuzzy sets are sets whose elements belong to them to different degrees. An object may be partly A, and partly not-A. As Daniel McNeill and Paul Freiberger write, “Suppose two people in a living room are watching Bonfire of the Vanities on the VCR. The (fuzzy) set of annoyed people in the room is Sam/0.85 and Pam/0.80. Its complement, the set of not-annoyed people, is Sam/0.15 and Pam/0.20.”[32]
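McNeill and Freiberger’s example translates directly into code. In this small Python sketch (a hypothetical illustration using their membership values), a fuzzy set is simply a mapping from members to degrees of membership, and its complement assigns each member one minus that degree:

# Each person belongs to the fuzzy set "annoyed" to a degree between 0 and 1.
annoyed = {"Sam": 0.85, "Pam": 0.80}

# The complement, "not-annoyed," assigns each person 1 minus their membership degree.
not_annoyed = {person: round(1.0 - degree, 2) for person, degree in annoyed.items()}

print(not_annoyed)   # {'Sam': 0.15, 'Pam': 0.2} -- Sam is 85% annoyed AND 15% not-annoyed

Unlike a classical set, nothing forces each person to be wholly inside or wholly outside the set; A and not-A overlap.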

Proponents of fuzzy logic compare its perspective to that of Eastern philosophies which reject binarism. Kosko suggests the yin-yang symbol, in which black and white are inextricably intertwined, as “the emblem of fuzziness.”[33] Fuzzy logic also fits well with the challenges to traditional scientific positivism offered by such developments as Heisenberg’s uncertainty principle and Gödel’s incompleteness theorem, which demonstrate that not all scientific statements can be proven to be true or false.[34]

 

[Insert Figure 2.1 here. Caption: The yin-yang symbol]

 

Beyond the philosophical challenges offered by fuzzy logic, the system has also turned out to be of great practical value. Ian Marshall and Danah Zohar write,

 

Suppose engineers want to make an intelligent traffic light that can time itself to change from red to green at different intervals, depending on how light or heavy the traffic flow is. The binary switch of a digital computer is too crude to do this. Binary switches are either on or off. But fuzzy chips that allow traffic lights to readjust constantly have now been invented. They also delicately adjust subway control systems, the loading sensors of washing machines, the contrast buttons of TV sets, and a whole host of other “smart” machines.[35]
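As a rough sketch of how such a controller might reason (a deliberately simplified illustration of the general idea, not the actual systems Marshall and Zohar describe), the following Python snippet grades traffic as partly “light” and partly “heavy” and blends a short and a long green interval in proportion to those degrees:

def membership_heavy(cars_waiting):
    # Degree to which traffic counts as "heavy": 0 at 5 cars or fewer, 1 at 25 cars or more.
    return min(1.0, max(0.0, (cars_waiting - 5) / 20))

def green_light_seconds(cars_waiting):
    # Blend two crisp durations in proportion to the fuzzy membership degrees.
    heavy = membership_heavy(cars_waiting)
    light = 1.0 - heavy                 # complement: the degree of "light traffic"
    return light * 15 + heavy * 60      # a weighted average, not an on/off switch

for cars in (3, 12, 30):
    print(cars, round(green_light_seconds(cars), 1))   # 3 -> 15.0, 12 -> 30.8, 30 -> 60.0

The light’s timing shifts continuously with the traffic rather than flipping between two fixed settings – which is what distinguishes the fuzzy controller from a crude binary switch.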

 

         Fuzzy logic does not abandon the world of the digital – fuzzy chips are digital devices, not analog ones. And in some ways, fuzzy logic may not be quite as fuzzy as proponents like Kosko claim. The mathematics of fuzzy sets allow for states between A and not-A. But they still presume the ability to know as an absolute truth where a value lies between A and not-A. In the VCR example, fuzzy logic allows for a state between annoyed and not-annoyed. But how can we know with mathematical certainty that Sam is precisely 85% annoyed, and Pam 80%? Perhaps we are only 90% certain that these numbers are accurate. But how certain can we be of that 90% figure? Pursuing this question further leads to an infinite regress of uncertainty – illustrating, to some of fuzzy logic’s critics, the futility of attempting to deconstruct Aristotelian logic while retaining the fantasy of mathematical precision.[36]

         But if fuzzy logic does not truly transcend the prison-house of digital thinking, perhaps this is appropriate. Rather than an outright rejection of binarism, it may be best seen as an accommodation with it: a fuzzy bridge between the worlds of analog and digital.

 


[1]. Lubar, InfoCulture, 296.

[2]. Campbell-Kelly and Aspray, Computer, 63.

[3]. For an extreme example, see Kidwell and Ceruzzi, Landmarks in Digital Computing, which chooses to omit analog computers entirely. (There’s no companion Landmarks in Analog Computing.)

[4]. Augarten, Bit by Bit, 13.

[5]. Goldstine, The Computer from Pascal to von Neumann, 39-40.

[6]. Owens, “Vannevar Bush and the Differential Analyzer,” 72.

[7]. See Campbell-Kelly and Aspray, Computer, 63.

[8]. Bromley, “Analog Computing Devices.”

[9]. Downey, “Virtual Webs, Physical Technologies, and Hidden Workers.”

[10]. See Edwards, The Closed World.

[11]. ENIAC’s place of honor is subject to some debate. In 1973, a U.S. patent judge declared Iowa State physics professor John V. Atanasoff and his graduate student Clifford Berry, who together built the Atanasoff-Berry Computer in the period from 1939 to 1942, to be the “inventors” of the computer. For a strong defense of Atanasoff and Berry, see Shurkin, Engines of the Mind. Most computing historians, however, argue that Atanasoff’s machine was not flexible enough to count as a real computer. See Augarten, Bit by Bit; Campbell-Kelly and Aspray, Computer; Ceruzzi, The History of Modern Computing. Meanwhile, British history books often give the honor to Colossus, the powerful code-breaking machine built by British codebreakers at Bletchley Park during World War II.

[12]. Edwards, The Closed World, 43.

[13]. See Edwards, The Closed World, 66-70.

[14]. Edwards, The Closed World, 67.

[15]. See Edwards, The Closed World, 76-79.

[16]. Tympas, “From Digital to Analog and Back.” See also Tympas, The Computor and the Analyst. Tympas argues that there’s a politics behind the canonical historiography: historians of computing have considered only the labor of a technical elite of digital designers, rather than the large numbers of hands-on analog laborers. “[W]e know practically nothing of the computing labor of a great mass of men possessing computing skills because the computing machines with which they worked are rendered historically unimportant on the grounds that their a posteriori designation belongs to an inferior technical class – analog computers.” (Tympas, “Perpetually Laborious,” 78.)

[17]. Valley, Jr., “How the SAGE Development Began,” 224. Quoted in Tympas, “From Digital to Analog and Back,” 45-6.

[18]. Casazza, The Development of Electric Power Transmission. Quoted in Tympas, “From Digital to Analog and Back,” 47.

[19]. Weaver, correspondence to Samuel Caldwell. Quoted in Owens, “Vannevar Bush and the Differential Analyzer,” 66.

[20]. Owens, “Vannevar Bush and the Differential Analyzer,” 95.

[21]. See Friedman, “Vinyl”; Perlman, “Consuming Audio.”

[22]. Negativland, “Shiny, Aluminum, Plastic, and Digital.”

[23]. On the subject of synthesizers, a similar debate rages between partisans of analog and digital synths. While prized for their “unnatural” electronic sounds when first introduced, today analog synths are celebrated by aficionados for the “warmth” of their tone, compared to the “cold” tone of digital machines that attempt to replicate the old analog sound with samples. See Colbeck, Keyfax.

[24]. Rothenbuhler and Peters, “Defining Phonography,” 246.

[25]. Rothenbuhler and Peters, “Defining Phonography,” 245.

[26]. Rothenbuhler and Peters, “Defining Phonography,” 258-9. On the other hand, Rothenbuhler and Peters do acknowledge the democratic potential in the increased manipulability of digital recordings, which can easily be sampled, chopped up, and recontextualized. Citing Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction,” they conclude, “The manipulability characteristic of digital recording and playback spells both the dream of democratic co-creation and the nightmare of lost nature. As in Benjamin’s analysis of technical reproducibility, we here encounter both new possibilities for audience engagement and the loss of an aura.” (Rothenbuhler and Peters, “Defining Phonography,” 252.) We will return to the democratizing potential of digital music in Chapter Nine, when we discuss Napster and other digital music distribution systems.

[27]. Kidder, The Soul of a New Machine, 146. For more on the binary mindset of computer programmers, see also reporter Fred Moody’s I Sing the Body Electronic: A Year with Microsoft on the Multimedia Frontier, and programmer Ellen Ullman’s searching memoir, Close to the Machine: Technophilia and Its Discontents. For an even more critical perspective, see the neo-Luddite critiques discussed in Chapter Eight.

[28]. The distinction between index and symbol, drawing on the semiotic scheme of Charles Peirce, is suggested by Rothenbuhler and Peters, “Defining Phonography,” 249.

[29]. Sterne, The Audible Past.

[30]. Kosko, Fuzzy Thinking, 5.

[31]. Zadeh, “Fuzzy Sets.”

[32]. McNeill and Freiberger, Fuzzy Logic, 37. Quoted in Gehr.

[33]. Kosko, Fuzzy Thinking, 14.

[34]. See Ligorio, “Postmodernism and Fuzzy Systems”; Negoita, “Postmodernism, Cybernetics, and Fuzzy Set Theory.”

[35]. Marshall and Zohar, Who’s Afraid of Schrödinger’s Cat?, 162.

[36]. See Paul Rezendes, “Keeping an Eye on the Scientists: Bart Kosko’s Fuzzy Thinking Tries to Save Logical Positivism.”
