    And the Word Became Mechanical



    This is Chapter 18 of The Future Does Not Compute: Transcending the Machines in Our Midst, by Stephen L. Talbott. Copyright 1995 O'Reilly & Associates. All rights reserved. You may freely redistribute this chapter in its entirety for noncommercial purposes. For information about the author's online newsletter, NETFUTURE: Technology and Human Responsibility, see http://www.netfuture.org/.

    On a black night in the early 1980s, a fierce scream congealed the darkness deep within MIT's Artificial Intelligence Laboratory. The late-working engineer who went to investigate discovered Richard Stallman -- one of the nation's most brilliant programmers -- sitting in tears and screaming at his computer terminal, "How can you do this? How can you do this? How can you do this?" /1/

    The image is no doubt provocative, revealing as it does a striking urge to personify the computer. And yet, perhaps we make too much of such occurrences. After all, the computer is hardly unique in this respect. Don't I curse the door that jams, implore my car's engine to start on an icy morning, and kick a malfunctioning TV with more than just mechanical intent? The fact is that we manifest a strong tendency to personify all the contrivances of our own devising. The computer simply takes its place among the numerous other objects to which, with a kind of animistic impulse, we attribute life.

    This may be the point worth holding on to, however. Once we acknowledge our anthropomorphizing compulsions, we must immediately grant that the computer is ideally designed to provoke them. Whatever our considered, philosophical understanding of the computer and its intelligence, we also need to reckon with this "animistic" tendency -- at least we do if we seek self-knowledge, and if we would prevent our own subconscious impulses from infecting our philosophical inquiries.

    The embodiment of intelligence

    Anyone can write a program causing a computer to display stored text and stored images on a screen. So, at a bare minimum, a computer can do anything a book can do -- it can present us with a physics textbook, provide road maps, entertain us with stories, exhibit art reproductions, and so on. It's true that few of us would choose to read War and Peace on our terminal screens. Nevertheless, much of what we experience from a computer, ranging from the individual words and icons that label a screen window, to the content of email messages, to the text of Shakespeare, is in fact the computer's "book" nature -- that is, its relatively passive ability to display stored text. The only intelligence here is the same, derivative intelligence that books may be said to possess.

    Lacking any better term, I will call this wholly derivative intelligence of the computer its book value. However, the notion extends to other sorts of stored material in the computer besides text -- for example, voice recordings and video images. Just as a book displays "someone else's" intelligence and not its own, so also do the tape recorder and television. In what follows, I'll use "book value" to include all such derived content.

    No one would claim that its book value represents what the computer itself does. Those who talk about how computers will be able to think and converse are not merely referring to the way tape recorders and books "speak." They have in mind an autonomous activity of the computer -- an activity directly expressing intelligence.

    So I will use these terms -- "book value" and "activity" -- as rough markers capturing the distinction I'm after. While I have had, and will have, a great deal to say about the logical or mathematical character of computational activity, here I propose to look at certain peculiarities of book value in our culture. It is a curious fact that, today, book value -- the detached word -- readily takes on a life of its own, whether or not it is associated with the computer. At the same time, we seem strongly inclined to adopt an anthropomorphizing or superstitious stance toward this detached word.

    Getting computers to think the easy way

    Several years back I spent some time monitoring the USENET newsgroups dealing with artificial intelligence (AI). Most of those who participated in the discussions were engineers or academics pursuing professional work in AI. One contributor described his undertaking this way:
    I am currently writing a program that allows the user to build a network consisting of thoughts and relations. The user starts by building a thought node. Each thought node contains a pointer to a list of relation nodes. Each relation node contains a pointer to another thought node. Every time a new thought is created, a relation node is added to the relation list of the current thought....

    What are we to make of this language? A few innocent-sounding sentences and suddenly we have a computer dealing with "thoughts" and relations -- all in the context of discussion about artificial intelligence! One easily overlooks the fact that the speaker is apparently talking about nothing more than a tool for creating a cross-referenced outline or network diagram. The computer itself is no more dealing in thoughts than does a typewriter.
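
    The structure he describes, it should be said, can be written down in a few lines of ordinary code. Here is a minimal sketch of my own (the names are invented, since the original program is not available to quote), and it does nothing a cross-referenced card file could not do:

    class Thought:
        # A node holding a label and pointers to relation nodes.
        def __init__(self, label):
            self.label = label
            self.relations = []

    class Relation:
        # A pointer from one thought node to another.
        def __init__(self, target):
            self.target = target

    def add_thought(current, label):
        # Create a new "thought" and add a relation from the current one.
        new = Thought(label)
        current.relations.append(Relation(new))
        return new

    # A "network of thoughts" is just linked records:
    root = Thought("start")
    add_thought(root, "computers deal in thoughts")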

    Despite his loose language, this contributor may just conceivably have had properly humble intentions when he submitted his ideas for consideration. The same cannot be said for the writer who informed his colleagues that he had hit upon just the ticket for giving computers free will. It requires that we write a program for a "decision system with three agents":

    The first agent generates a candidate list of possible courses of action open for consideration. The second agent evaluates the likely outcome of pursuing each possible course of action, and estimates its utility according to its value system. The third agent provides a coin-toss to resolve ties.

    Feedback from the real world enables the system to improve its powers of prediction and to edit its value system.

    So much for free will. So much for the problem of values -- "ought" versus "is." So much for the question of who these "agents" are that consider possible courses of action, understand likely outcomes, and apply values. It is all beautifully simple.
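
    And it is as simple to write as it sounds. The sketch below is my own rendering of the proposal (the option generator and the utility estimator are stubs the believer must imagine filled in); nothing in it touches the questions just raised:

    import random

    def decide(situation, generate_options, estimate_utility):
        # The three "agents": generate candidates, score them, coin-toss ties.
        options = generate_options(situation)                       # agent one
        scored = [(estimate_utility(opt), opt) for opt in options]  # agent two
        best_score = max(score for score, _ in scored)
        ties = [opt for score, opt in scored if score == best_score]
        return random.choice(ties)                                  # agent three

    # "Feedback from the real world" would amount to adjusting
    # estimate_utility after observing outcomes -- a curve fit, not an "ought."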

    This same contributor rebuts the notion that computers cannot experience feelings. His appeal is to diagnostic messages -- the words of advice or warning that programmers instruct the computer to print out when, for example, a user types something incorrectly.

    A diagnostic message is a form of emotional expression. The computer is saying, "Something's wrong. I'm stuck and I don't know what to do." And sure enough, the computer doesn't do what you had in mind.

    One wonders: are we up against an exciting new understanding of the human mind here, or an animistic impulse of stunning force? Do we confront theory or superstition? The USENET discussions in which such observations as these are launched continue month after month in all seriousness. The messages I have cited here were selected from hundreds of similar ones. We might hope that this is no more than the all-too-frequent descent of electronic discussion groups to a lowest common denominator. But there is evidence that the problem goes far beyond that.

    Natural ignorance

    Professor Drew McDermott, himself an AI researcher, published an essay in 1981 entitled "Artificial Intelligence Meets Natural Stupidity." In it he remarked on the use professional researchers make of "wishful mnemonics" like UNDERSTAND or GOAL in referring to programs and data structures. He wondered how we would view these same structures if we instead used names like G0034. The programmer could then "see whether he can convince himself or anyone else that G0034 implements some part of understanding." In a similar vein, he described one of the early landmark AI programs: "By now, `GPS' is a colorless term denoting a particularly stupid program to solve puzzles. But it originally meant `General Problem Solver', which caused everybody a lot of needless excitement and distraction. It should have been called LFGNS -- `Local-Feature-Guided Network Searcher.'" He went on to say,
    As AI progresses (at least in terms of money spent), this malady gets worse. We have lived so long with the conviction that robots are possible, even just around the corner, that we can't help hastening their arrival with magic incantations. Winograd ... explored some of the complexity of language in sophisticated detail; and now everyone takes `natural-language interfaces' for granted, though none has been written. Charniak ... pointed out some approaches to understanding stories, and now the OWL interpreter includes a `story-understanding module'. (And, God help us, a top-level `ego loop.') /2/
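
    McDermott's test is easy to run on oneself. In the contrived illustration below (my own, not code from any of the systems he names), the two functions are identical to the machine; only the first name tempts us to say the program "understands":

    def UNDERSTAND(sentence):
        # "Understanding" here is nothing but splitting text on whitespace.
        return sentence.split()

    def G0034(sentence):
        # The same code under a neutral name -- and the aura evaporates.
        return sentence.split()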

    I once sat in a conference where the head of a university computer science department was asked how a computer could possess a selfhood and a knowledge of itself. He immediately replied that it is easy enough to create a program variable giving the computer a name for itself, and to cause the computer to associate the record of its past activity with this name. Thus, "since [the computer program] has a term for itself, it can remember what it did, and can tell you what it has thought about." So much for the nature of memory, personal experience, and selfhood.
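
    Taken literally, the department head's proposal comes to something like the following sketch (the names are my own invention):

    class Machine:
        def __init__(self, name):
            self.name = name          # a "term for itself"
            self.history = []         # a record of its past activity

        def act(self, action):
            self.history.append(action)

        def describe_self(self):
            # "It can remember what it did, and can tell you what it has
            # thought about" -- that is, it concatenates strings.
            return self.name + " did: " + "; ".join(self.history)

    m = Machine("SELF-1")
    m.act("sorted a list")
    print(m.describe_self())          # a name plus a log, not a selfhood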

    These stories show that it is not only casual users who are liable to mistake book value for what the computer itself does or understands. If a word flashing on the screen suggests to our cooperating minds a native intelligence within the machine, sophisticated engineers appear similarly susceptible to the name of a variable in a program listing. One is reminded of the word's powerful evocations during ancient times. As we are often told, our ancestors did not always distinguish clearly between word and thing. The word bore within itself some of the inner, spiritual significance of the thing to which it referred. But this only makes our own behavior all the more puzzling, for we are certainly inclined to reject what we take to be the superstitions of the ancients. And we do not believe -- or at least we say we do not believe -- our machines to be possessed by spirits.

    What, then, can we make of a technically trained person who casually attributes thoughts and feelings to a computer on the basis of its diagnostic messages? Has he become pathologically superstitious about machines, or has he instead become pathologically separated from his own world of thoughts and feelings, which he reduces to a collection of vague abstractions of the sort he can easily attribute to a computer? Or are these two sides of the same phenomenon?

    Computerized therapy

    One of the classic, if now hackneyed, episodes in early AI work occurred in the mid-1960s, when Professor Joseph Weizenbaum of MIT wrote a language analysis program called ELIZA. He initially designed the program to imitate a Rogerian psychotherapist -- a relatively easy task, he tells us, because much of the therapist's technique "consists of drawing his patient out by reflecting the patient's statements back to him." In ELIZA's case, the patient would type comments on a typewriter, and the computer would respond in kind. One conversation between ELIZA and a young lady went like this (ELIZA's responses are in capital letters):
    Men are all alike.

    IN WHAT WAY

    They're always bugging us about something or other.

    CAN YOU THINK OF A SPECIFIC EXAMPLE

    Well, my boyfriend made me come here.

    YOUR BOYFRIEND MADE YOU COME HERE

    He says I'm depressed much of the time.

    I AM SORRY TO HEAR YOU ARE DEPRESSED

    It's true. I am unhappy.

    DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY

    I need some help, that much seems certain.

    WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP

    Perhaps I could learn to get along with my mother.

    TELL ME MORE ABOUT YOUR FAMILY

    My mother takes care of me

    WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU

    My father.

    YOUR FATHER

    You are like my father in some ways.

    WHAT RESEMBLANCE DO YOU SEE

    You are not very aggressive but I think you don't want me to notice that.

    WHAT MAKES YOU THINK I AM NOT VERY AGGRESSIVE

    You don't argue with me

    WHY DO YOU THINK I DON'T ARGUE WITH YOU

    You are afraid of me.

    DOES IT PLEASE YOU TO BELIEVE I AM AFRAID OF YOU

    My father is afraid of everybody

    WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER

    Bullies.

    DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE

    As Weizenbaum points out, the computer's language analysis was very primitive. Its programming reflected a single context -- that of the therapeutic dialog -- and its "understanding" consisted substantially of keying in on certain words and drawing on a set of stock responses. (Of course, it also had to do some basic grammatical parsing of English text.)
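
    Such keyword-and-stock-response machinery is easily reproduced today. The fragment below is a toy reconstruction of my own -- not Weizenbaum's actual program, which was written in MAD-SLIP rather than Python -- but it works in just the way described above:

    import random
    import re

    # Each rule pairs a keyword pattern with canned replies;
    # {0} echoes a matched fragment of the user's own words.
    RULES = [
        (r"\bI am (.*)", ["I AM SORRY TO HEAR YOU ARE {0}",
                          "HOW LONG HAVE YOU BEEN {0}"]),
        (r"\byou are (.*)", ["WHAT MAKES YOU THINK I AM {0}"]),
        (r"\bmother\b|\bfather\b|\bfamily\b",
         ["TELL ME MORE ABOUT YOUR FAMILY"]),
        (r"\balways\b", ["CAN YOU THINK OF A SPECIFIC EXAMPLE"]),
    ]
    STOCK = ["IN WHAT WAY", "PLEASE GO ON", "WHAT DOES THAT SUGGEST TO YOU"]

    def respond(line):
        for pattern, replies in RULES:
            match = re.search(pattern, line, re.IGNORECASE)
            if match:
                groups = [g.rstrip(".?!").upper() for g in match.groups()]
                return random.choice(replies).format(*groups)
        return random.choice(STOCK)   # no keyword found: a stock reply

    print(respond("It's true. I am unhappy."))
    # Prints, for example: I AM SORRY TO HEAR YOU ARE UNHAPPY

    Nevertheless, Weizenbaum reports on the "shock" he experienced upon learning how seriously people took the program: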

    Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room. Another time, I suggested I might rig the system so that I could examine all conversations anyone had with it, say, overnight. I was promptly bombarded with accusations that what I proposed amounted to spying on people's intimate thoughts .... I knew of course that people form all sorts of emotional bonds to machines .... What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people. /3/

    There is a sense in which we must agree with Weizenbaum -- a sense that is central to my own argument. Yet I suspect he would acknowledge another side to the issue. How delusional is it to assume an intelligence behind the use of language? Language, as we all learn in school, is one of the chief distinguishing features of man. We simply never come across language that has not issued, in one way or another, from a human mind. If I find a series of words neatly impressed upon the sand of a desert island, I will conclude -- no doubt correctly -- that I am not alone. There is another human speaker on the island.

    Furthermore, we have become fully attuned to mechanically mediated human communication. While telephone, radio, or TV might hopelessly disorient a time-traveling Roman, we take it as a matter of course that these devices put us in touch with other people. Weizenbaum's secretary, quite undistracted by the mechanical contrivances she was dealing with, immersed herself from habit in the meaning of the text addressed to her, and she felt (with good justification) that this text originated in another mind, one that had considered how to respond to just the sorts of comments she was making. What she was most likely not doing was considering explicitly whether she was speaking with the computer itself, or a programmer, or some other person. She was simply conversing with words. Who was behind them didn't matter. The episode may say more about the pervasive and accustomed anonymity of our society than anything else.

    Words in the void

    The word has increasingly detached itself from the human being who utters it. This detachment received a huge impetus with the invention of the modern printing press in the fifteenth century. The phonograph, telephone, radio, and TV encouraged an ever more radical separation. Today, even in live musical performances, lip-synching is common -- who knows whether it is a recording or a human larynx from which the sound arises? (Actually, with or without lip-synching, the immediate source of the sound is a loudspeaker rather than a larynx, which is why the controversy over lip-synching is mostly irrelevant.)

    If a phone connection puts me at one mechanical remove from my conversational partner, a recorded phone message more than doubles the indirection, for here there is no possibility of interaction. But, no, that's not quite true. As phone systems become ever more sophisticated, I am allowed to push buttons in an increasingly articulate manner. At the same time, the spliced-together snippets of recorded speech manage to respond in an increasingly intelligent manner. And, like Weizenbaum's secretary, I follow along with all the seriousness of someone conversing with a real person, even if I am more or less aware of the arrangement's limitations. This awareness will no doubt attenuate with time, even as the mechanical devices gain in deftness.

    Many in our society have only recently experienced the shock that comes when one first realizes that the "person" who rang the phone is really a recording. But those among us who are already accustomed to recordings will readily acknowledge a certain process of acclimatization: as the collage of recorded and "real" voices becomes more and more intricate, and as the underlying programming responds more and more flexibly to our needs, we make less and less of a distinction between the various levels of genuineness. We are comfortable doing business with the words themselves.

    The System speaks

    We have, it seems, long been training ourselves to disregard the distance between the verbal artifacts of intelligence and their living source. The words themselves are source enough. Little of the day goes by without our being assaulted on one side or another by disembodied words speaking for we know not whom. Street signs, billboards, car radios, Walkmans, newspapers, magazines by the grocery checkout stand, televisions in every room, phone callers we have never met, movies, video games, loudspeakers at public gatherings, and -- if we work with computers -- a Noachian deluge of electronic mail, news, network-accessible databases, and all the other hidden vessels of the information that is supposed to empower and liberate us.

    We live out much of our lives under the guidance of these words-as-artifacts. How can we do otherwise? How can I pierce behind the intricate mechanism mediating the words I hear, so as to discern the true speaker? How can I discover more than a few fragments of the individuality of Peter Jennings or Tom Brokaw behind the formulas and technology of the evening news? (Think how different it would be to watch and listen as the cameras inadvertently picked up a half-hour's off-the-air conversation between Brokaw and one of his colleagues. And how different again to participate in a face-to-face exchange with them.)

    We are so used to words that have become disconnected from their human source that we scarcely notice the peculiarities of our situation. But we do notice on at least some occasions, as the unflattering stereotype of the bureaucrat makes clear. When I quibble with a disenfranchised clerk over some irrelevant regulation, with whom am I speaking? Not the clerk himself, for he has no authority to speak on his own account. (That is probably the main cause of my frustration; I thought I was going to converse with a person.) But if my conversation is not with the clerk, who is it with? Nobody, really -- which is why I quickly begin to blame the System.

    The fact is that, like Weizenbaum's secretary, I am merely conversing with words, and these words are produced by a vague mechanism neither I nor anyone else can fully unravel. Yes, the words somehow originate with human beings, but they have been subjected to a kind of organizational/mechanical processing that renders them simplistic, too-logical, slightly out of kilter. They are impossible to trace, and so I don't try. Why try? It is the disembodied words that determine my fate. They are the reality.

    One way to think of the challenge for our future is this: how can I work toward a society in which every transaction is as deeply human a transaction as possible? To make an exchange human is to reduce the distance between words and their source, to overcome the entire mediating apparatus, so that I am myself fully present in my words. Even when stymied by a bureaucracy, I can at least choose to address (and respect) the human being in front of me. This will, in fact, encourage him to step out of the System in some small way rather than retreat further into it as a defense against my anger. On the other hand, I support the System just so far as I give further impetus to the automated word.

    Superstition

    I began this chapter by asking what stands behind our "animistic" urge to personify mechanical devices. I then distinguished between the book value of the computer and its native activity. The remainder of the chapter to this point has focused upon book value: the ease with which both user and programmer attribute book value to the computer as if it were an expression of the computer's own active intelligence, and the degree to which the word has detached itself from human beings and taken up an objective, independent life within our machines, organizations, and systems.

    There may appear to be a paradox here. On the one hand, the increasing objectification of what is most intimately human -- our speech; on the other hand, an anthropomorphizing liaison with our mechanical contrivances. But, of course, this is not really a paradox. When speech detaches itself from the speaker, it is indeed objectified, cut off from its human source. But it still carries -- if only via our subconscious -- some of its ancient and living powers. And so its association with machinery readily evokes our personification of the machinery.

    We may think ourselves freed from superstition. But if we take superstition to be a susceptibility to the magical effects of words, then it would be truer to say that ours is the age in which superstition has come into its own. Having reduced the word to a dead abstraction residing outside ourselves, we subject ourselves to its invisible influence. Our single largest industry, centered on Madison Avenue (but also operative in every corporate Marketing Department), is dedicated to refining the instruments of magical control. Seeming to be alien, a hollow physical token and nothing more, approaching us only as a powerless shape from without, the word nevertheless has its way with us. We detach the word from ourselves and it overpowers us from the world.

    Nor is it an accident that the great social and political ideologies have arisen only during the last couple of centuries. These elaborate word-edifices, detached from their human sources, sway and mobilize the surging masses. The passionate believer in an -ism -- what is it he believes in? An idea from which all human meaning has been purged, leaving only an abstraction and the controlling passion of belief itself. That is what makes the ghastly, inhuman contradictions possible: a communist workers' paradise to be achieved by disfranchising or massacring the workers; a capitalist common good to be achieved through the universal cultivation of selfishness.

    Religion and ideology

    Religion, too, has doubtless taken on a more ideological character over the last few centuries -- as suggested, for example, by sectarian fragmentation. But the more "primitive" phases of religion contrast strikingly with the genesis of modern -isms. The prophets spoke in similes, images, koans, symbols, and parables; their words were accessible only to those "with ears to hear." The words had to be meditated upon, slowly penetrated by means of an answering word within the hearer. And the believer was ever driven back to the human source for understanding; the words of the prophet were inseparable from his life. They were not written on the subway walls, nor on religious billboards.

    It is quite otherwise with ideology. For Communist revolutionaries around the world, not much depended on the person of Marx or Lenin, or on the authenticity of words attributed to them. Faith was vested in empty generalities -- the proletariat, the revolution, the classless society. Words that change the course of the world are no longer bound to their human source. Unlike the symbol and parable -- and much more like scientific descriptions -- they have become abstract and capable of standing by themselves, for they are nearly contentless. They are automatic -- fit to be spoken by the System -- and therefore the human subconscious from which the System has arisen is invited to supply the real meaning.

    Some people believe we have seen the end of ideology. My own fear is that we are seeing its perfection. The disembodied word no longer requires even the relatively impersonal support of the activist's cell, the political movement, the faceless bureaucracy, the machinelike corporation, the television evangelistic campaign. The machine itself, uncannily mimicking the human being, now bears the word alone -- and we call it information. Our -ism is declared in our almost religious devotion to a life determined, not from within ourselves, but by divine technological whim.

    It is easy enough to see a danger in the reaction of Weizenbaum's secretary to ELIZA. For although it is an understandable reaction, given our culture's detachment of the word, it is clearly not a healthy one. She evidently proved blind to the laughable limitations of her therapist and the essential artificiality of the exercise, because she could not distinguish properly among book value, mechanical activity, and human presence. And having lost her ability to trace word to speaker, she must have lost as well some of her ability to deal truly with human beings in general; for the human being as productive spirit, as the source of the word, had at least temporarily escaped her reckoning.

    We may scorn her for that, but it would not be wise. Better to note that the test she failed becomes daily more subtle, and that the rest of us, too -- whether in passively absorbing television commercials, or beating our heads against petty bureaucracies, or allowing electronic mail to put us into "automatic mode" -- fail it every day.

    References

    1. Cobb, 1990.

    2. McDermott, 1981: 145-46.

    3. Weizenbaum, 1976: 3-4.
