Processed World #13

Issue 13: April 1985 http://www.processedworld.com/


Sweet Relief

fiction by jake

In an extraordinary world, her day was the most ordinary possible. She walked to work, passing shops, offices, and galleries, each evenly-lit inside and restrained and symmetrical on the outside in the modern style. Her own work building said "Gresham'' on the outside, and the inside was made of white tile and wallboard and partitions. This was early morning in the city, when the light was golden and hesitant; it did not yet stretch curvaceously around tall buildings the way it did in late afternoon, the time of long shadows and, for office workers, stupor.

The thing is, she thought to herself while hanging up her coat and moving to the office coffee pot, you've got to get your mind more active--take a class or something, if you can bear sitting in a classroom for three hours after sitting for eight at a typewriter. She thought this often. Behind a fog, other secretaries were making their sporadic dull morning-talk. But what kind of class? She had never gotten past this question.

Limousines gliding down the avenue outside her window might have been strange black water birds, with an occasional white swan. . .but inside, the proud and the powerful sit, she thought, catching a glimpse of a hand holding a telephone receiver inside one of the murky windows. She smiled slightly, her attention drawn back to the swan image: there was nothing very angry or willful about her. She loved what could take her away from the world.

Rolling paper in the platen, she began to think idly: weight, weight, you've got to lose some weight. . .running it like a chant through her head. After typing lists of stock numbers and prices for an hour or so, she vaguely began to think about the thing, hoping it would not come over her, but it did. This kind of antsiness in her stomach was not hunger, but it made her rise like a robot and walk to the vending machines down the hall. This urge is hopeless to fight, she thought, once it comes on. It blew in like a squall from the lonely spaces in her brain, and while eating, in the hall or in the bathroom away from co-workers, she stared straight ahead, vacantly, and it was pleasant.

Well, that's it, she thought, swallowing the last of the three candy bars and crumpling the wrappers. Now the argument-with-self would ensue: No, no, no, I told you not to eat that crap! But it's so awful here, no one even talks to me, and I'm wasting my life! How can you deny yourself this trifling pleasure when this room and your whole daytime existence is so sour? Well, isn't your nighttime existence a zero deal too? And do you know why? Because you're such a blimp! Oh c'mon! Is that a real reason or just an excuse?

During the argument, her face was smooth; she bit her lip the tiniest bit, but that could have indicated concentration over the paperwork which she was now taking to task.

Lunchtime was better; it was with Lucinda, a co-worker who had lots of children at home who wore her out. Oh, they throw themselves on me from the moment I get home till the time I fall asleep, she was saying as they sat on the park bench not 20 feet from the noisy avenue. Lucinda laughed a lot and her exhaustion was not evident. Her long black hair got into her sandwich and they both laughed. Then they had to go back inside for the next half of the day, which was always the worst.

She had forgotten about the other thing that happened sometimes when she felt in lighter spirits, like after a nice lunch. It drove her crazy. Surely it won't keep me from work, she thought, but then it started. A huge feeling of horniness leapt upon her. It made her feel her nipples against her blouse and the creases behind her knees. It made her want to laugh insanely at the office--the absurd, stultifying cubicles, alphabetical files, and all the silly people with pointy shoes and impeccable grooming.

If only to dash out the door and into the little park, she thought. If only to strike up a conversation with someone there, something simple about feeding pigeons! Someone out there who doesn't have a boring existence like this, someone who could tell me what daytime is really like!

Asking if anyone would like anything from the deli, she ran out quickly and brought back a soda pop for the receptionist and cookies for herself. She wolfed them down while shuffling through the papers. Afterwards, through the greasy, stuffed feeling, she felt the thick beating of her heart. The thing had returned, and she began to rock very slightly and slowly back and forth in her chair, one foot tucked under her, typing all the while. Sweat rose to her forehead; the rest of the office was a clicking machine far away behind a blue fog. She got up and went into the bathroom. But I don't have to go to the bathroom, she thought, sitting there.

Oh damn you! Why do you have to get so out of line! Why? What if somebody saw that? Then you're really gonna be in trouble. . .you'll have to quit! You're completely unhinged, you idiot! I can see it now. . .dropped out of the workforce at age 22 due to uncontrolled masturbation. . .oh god, what is wrong with you?

But as she argued with herself, her anxious fingers began tugging and digging and massaging. She was afraid someone would come in. If I could just get rid of this tension and get rid of it fast, she thought. Then work would be easier. . .to concentrate on. Each rising and falling breath was shortened and then the outlandish became the exciting: Do you know where you are and what you're doing? Oh, if those nags even knew! You're crazy you cunt, cunt. . .cunt!

For a full minute she dropped limply there on the toilet, then suddenly gasping as if she'd heard terrible news, she got up quickly and went to her cubicle.

Now it was 3:30 and there was no more stalling to do, no more change for the vending machines. You better do some exercising, you slob, she thought vaguely, feeling tired. Maybe I need a shrink. . .it's some kind of compulsive condition. No one had looked at her at all when she had come back into the room. Who cares what they think. . .why do I have these grotesque urges?

Outside, she could see shadows growing long and the sky began to glow purple and red behind dark cigar-shaped clouds. Dusk was coming and the city would churn away into the night. Somewhere out there, life was going on.

What should I have for dinner?

--Jake

Table of Contents

Letters
from our readers

Kelly Call Girl
fiction by kelly girl

Sweet Relief
fiction by jake

The Way It Was
reminiscence by ana logue

The Oppressed Middle
review by lucius cabins

Poetry
by henry calhoun jr, barbara schaffer, acteon blinkage & ligi

Graffiti
photo essay by zoe noe & acteon blinkage

Mind Games
article by tom athanasiou

Once More Unto the Bridge, Dear Friend
tale of toil by primitivo morales

Hot Under The Collar
high-tech workers, eradicating tv, good 'zines

The Way It Was

Ana Logue reminisces about being a temporary worker in New York in 1965.


Temping in the Office of the Past

Whenever I see "Carmen" I am reminded of the factory-like New York office where I worked in the summer of 1965. Like Bizet's tobacco factory, it was hot (there was a drought and air conditioning usage was rationed to save water; the city's slogan was "Don't flush for everything"), the workers were all female, and life was startlingly real outside the doors we would rush through at 4:45 in the afternoon.

Johnson was president, the war in Vietnam was "escalating," and the Olin Mathiesson Chemical Corporation, a major gunpowder producer, hired me, through the Olsten temp agency, to tear carbon papers from bills of lading and stuff envelopes seven hours a day, at $1.35 an hour ($.10 above the minimum wage).

I was 18 and had just finished my freshman year of college. In those days young women were called girls, and I was very much a girl. I was not in love, I do not think I thought about love. A stranger to passion, but not to the joys of making out in the back seats of big American cars, my disappointments were not deep, my faith in my future infinite.

Back then jobs were plentiful and rents were cheap. It took me one day to find work, one week to find a three-room, furnished apartment on West 20th Street, two subway stops from Greenwich Village, for $80.00 a month. I shared the apartment with Terry, another college girl in New York for the summer. Terry knew how to type and found a job as a secretary. She started in her office as a temp, but her boss decided to hire her full time without paying the agency fee. That meant she could not receive any phone calls at the office. One never knew when the agency would be calling to see if she was there.

Every morning, dressed in a skirt and blouse or dress and wearing nylons, despite the heat, I would take the subway to Columbus Circle and walk west on 57th Street to 10th Avenue to an immense four-story loft building where Olin had its billing department on the third floor. The first two floors were a Thom McAnn shoe warehouse.

The modular office had not yet been invented. I worked in a completely enclosed room in the middle of the floor that, except for its size, might have been a broom closet. I had never been in a room without windows before. It was something I never got used to. How often did I raise my eyes from my work and instinctively search the walls for sunlight!

As you entered this room you saw two rows of desks, all facing the door. On the right, where I sat at the last desk, were the five carbon-tearers. We were all between the ages of 18 and 20. Our job was to separate the carbons from a white original and three multi-colored copies. The blue copies went in one pile, the greens in another, and the yellows in a third. The whites we folded and stuffed into envelopes. When we had a respectable number of stacks of paper in front of us, we would bring them to baskets on a table near the supervisor's desk and pick up some more forms to be separated. I do not know what happened to them next.

The five desks in the left row supported comptometer machines which looked like a cross between an electric typewriter and a cash register. In those early days of office automation, they were a kind of "dedicated" bill processor. The women who operated these machines were the professionals to whom we unskilled carbon-tearers always deferred.

The supervisor's desk was on the wall next to the door, facing the workers, like a school teacher facing a classroom.

It seems incredible to me now, eleven women in one room, seven hours a day, five days a week, five of us doing totally mindless work, five of us having to concentrate on our work, and one watching. All in that closed space.

It seemed incredible to me then, too. I could tolerate the job because it was only for a few months, but what about the others? I don't remember anyone ever complaining. Three of the five carbon tearers lived at home, were engaged to be married or had serious boyfriends, and would, presumably, quit on marriage or childbirth. The fourth was a college-student temp like myself. The comptometer operators, on the other hand, were in their twenties and thirties and mostly married. (The husbands all worked in blue-collar jobs, which were common at the time but low status in those status-conscious years.)

New York is a profoundly ethnic city. Ethnic identity is as important there as public school affiliation is to the English upper-class. Ethnically, we were quite a mix. Our supervisor, Miss Glenda Briggs, was a very thin, white, southern lady of about 40. The comptometer operators: one Yugoslav, one German, a New York black, a Jamaican black, and a Puerto Rican. The carbon tearers: two Jews, two Germans, one Puerto Rican.

Socially, as a group, we had nothing in common. I had discovered "pot" that summer, and Terry and I spent most of our time hanging out in the Village. We both went to school in Michigan and friends from out-of-town were forever crashing in our apartment. Everybody played the guitar that year and real life started after 5pm. Monday mornings I would take a capsule of Dexamyl before leaving the apartment. On speed, mindless, repetitive work can almost be satisfying. I never discussed my home life at the office.

But we talked a lot at work. Kelly, one of the comptometer operators, was pregnant. She had already had one miscarriage, so the talk had to do with her health and what the doctor had said. I listened hard to the secrets of womanhood.

Mostly the talk was about what each had cooked for dinner last night and what they would make this evening. Having no interest in food, I found this very boring and depressing. Then it came about that I invited some friends for dinner, and I didn't know how to cook. I explained my problem to the women at work, and Marie, the Yugoslav, gave me a recipe for meatloaf (ground beef, bread crumbs, onions, eggs, and tomato sauce) that I still use.

The other major topic was television. Since we didn't have a TV, I couldn't participate in those conversations either.

The images come back, after twenty years, incompletely. But I remember these women better than any others I have worked with since. I remember that Janet, the Jamaican, always had a perfectly coiffed bouffant. One day I complimented her for it, and she laughed and said it was a wig. I remember that Gretchen, one of the Germanic carbon-tearers, was tall, pale, and flat, and had very thick ankles. She was also stupid and mean. Arrogance in her (in everybody?) was a display of a limited mind.

Marie called her 12-year-old son up every afternoon from the phone on the supervisor's desk. The conversation was always the same: what are you doing, what do you have for homework, I'll make chicken (or beef, or stew) for supper. How I pitied that child, how sad I was for the mother whose life revolved around him. (Now I, like Marie, call my son every afternoon, to affirm my existence, my real life, that has nothing to do with the work at hand.)

Karen, the other temp, was something of an enigma. She was the first person I had ever met who could only speak in clichés. She talked a lot, was friendly, but never said anything. Once I asked her what her agency was paying, and she answered, "I never discuss money." She had told us that she had been going to a college upstate but had had to move back home after her married sister had died. "But how did she die?" I finally asked. "Well," she drawled in her sing-song voice, "she went shopping for some panties at Gimbels, and she had just had a baby, and nobody knows what was going on in her mind, but she jumped in front of a BMT train."

Inez, the carbon-tearer with the most seniority, was my only real friend on the job. She was a 19-year-old Puerto Rican woman who didn't speak Spanish. She had suffered for this, she confided, because her teacher thought she was cheating by being in Spanish 1. Inez had gone to City College for one year and had majored in history. But she was now engaged to Robert, who was studying business administration, and she had dropped out to make some money so they could marry. But since she had taken an academic course in high school, she didn't have any marketable skills. We used to talk about what we read in the newspaper and play gin rummy during our breaks and lunch hours.

Glenda, our supervisor, sticks in my mind in her navy suits and white blouses and her prematurely white hair always perfectly curled. She had moved with the company from down South and lived with her mother, whom she had brought with her. In my eyes she had the strange power of tragic gentility and spinsterhood.

When a comptometer operator left her job, presumably for marriage or motherhood, the policy had been to train the carbon-tearer with the most seniority to replace her. The last woman to move up in the ranks this way was Carol, a street-smart black woman whose sharp tongue belied the women's sewing-circle politeness that usually prevailed. But, as soon as she was trained, Carol gave notice. She was moving on to a better paying job with another firm.

Management's response to Carol's ingratitude was worthy of a modern, capitalist Solomon. Henceforth there would be no more on-the-job training; all future openings for comptometer operators would be filled from the outside. This was devastating for Inez, who was next in line to be promoted, and everyone in the office, including Glenda, expressed their regrets.

Carol was replaced by Dorothy. Dorothy dressed like a beatnik--pierced ears, wide skirts--and was very unhappy with whatever it was that had fated her to this job. She bragged about her weekends at Cape Cod to women who had never heard of the place but knew she was bragging. She was extremely unpopular. Even I, who sympathized with her aspirations, was afraid to talk to her lest I became contaminated in the eyes of the others. Besides, I was the lowliest and youngest of temps, and she did not look to me for help.

Glenda, a very diplomatic boss who could act like one of the girls without ever forgetting who she was, also knew how to put people down. She had no use for Carol, or later Dorothy, the office rebels, and used sarcasm to turn everyone against them. It all seemed dreadfully unfair.

But the strongest image is of female camaraderie: the giggling, the tensions, the occasional outburst of emotion. Normally we ate our sandwiches in the employees' cafeteria, but on paydays the 45-minute lunch break was extended to one hour, so we could cash our checks. Then (and also when it was someone's birthday) we would all go to lunch together at an Italian restaurant and even have a cocktail. How lovely it was to go out together in a group, laughing, taking up the whole sidewalk, in the sunshine!

--by Ana Logue

The Oppressed Middle

review by lucius cabins

SCENES FROM CORPORATE LIFE: The Politics of Middle Management by Earl Shorris, 1981, Penguin Books.

During my time as a temp in downtown San Francisco, I worked for many different managers. I never became particularly friendly with them, but I did find ways to "manage'' my managers. Mostly they left me alone as long as they got the work they wanted out of me.

Though I never was close to any managers, it was obvious that most of them suffered the same intimidation and hassles that I faced as their peon. But if bosses were as oppressed as I was, I reasoned, why were they so willing, even eager, to carry out the ridiculous dictates of the company? How had they turned into complacent embodiments of corporate policies? Why were they so ready to enforce completely arbitrary policies which oppressed them as much as me? It couldn't just be the money, or could it?

Scenes From Corporate Life, a detailed exploration of the corporate manager's life, is an attempt to answer these questions. The book, which originally had the same title as this review, depicts the duplicity, shallowness, manipulations, and general stupidity that prevail among managers. The portrait will be familiar to anyone who has labored in the office world. Earl Shorris (who was a long-time middle manager himself) argues convincingly that common business practices produce corporations which are essentially totalitarian institutions.

For Shorris, totalitarianism is the process of destroying autonomy. Corporate totalitarianism idolizes efficiency in its bureaucracies and takes its ideology from industrial psychology, management textbooks and classes. The result is a microworld where the autonomy of human beings is systematically thwarted.

Among his vignettes he describes techniques effective in intimidating and controlling both managers and knowledge workers. For example, the annual bonus system is used almost as a piece-rate kind of motivation for middle-level employees. And yet, because of the company's need to keep people off guard and unsure of themselves, the awarding of bonuses is often arbitrary and out of line with actual performance. The ubiquitous "secret'' salary works to keep people separate, competing more intently against what they imagine the others are getting rather than banding together to demand the same higher pay. "To make atoms of the mass, corporations have no more obvious device than keeping secret men's earnings.''

But "men do not merely acquiesce, they choose to live under totalitarian conditions. . . out of fear, mistaking its effect upon them because they do not think of the meaning of their actions.'' Managers have accepted an externally-imposed definition of happiness (i.e. material wealth, career advancement) provided by The Organization and its leaders. In so doing they have ceded their autonomy as free human beings to an abstract end and reduced themselves to mere means. In sacrifices "for the company'' Shorris identifies the essential ingredient of a totalitarian society: human beings actively, even willingly, participating in self-delusion and renunciation of their own freedom, in exchange for a false sense of security.

"In the modern world a delusion about work and happiness enables people not only to endure oppression but to seek it and to believe that they are happier because of the very work that oppresses them.''

A rather dry philosophical analysis of totalitarianism and corporate life prefaces the bulk of the book, which features 40-odd vignettes of typical managerial dilemmas, followed by Shorris' observations. Some of the scenes involve very high-level executives, others involve first-line supervisors. Together, they illustrate the pathetic dark side of a manager's worklife: isolation, loneliness, the "need'' to avoid seeing their oppression, the "desire'' to obey corporate mores. The author inadvertently reveals himself in many of his observations as an example of the very dynamics he criticizes.

• An executive who's working overtime to redo an error-filled report by a sales analyst has a hysterical internal monologue of desperation and frustration. Shorris notes that loneliness has less to do with solitude than it does with social atomization. "The loneliness that destroys men by atomizing them comes when they are among the familiar faces of strangers. . . At the heart of the loneliness of business one finds the essence of the notion of property: competition. . . Loneliness, terrible, impenetrable, and as fearsome as death, incites men to cede themselves to some unifying force: the party, the state, the corporation. All lonely creatures are frightened; to be included provides the delusion of safety, to cede oneself masks the terror of loneliness, to abandon autonomy avoids the risk of beginnings.'' Aren't these the same reasons people join cults and various "extremist'' groups?

• A middle-class manager who grew up to stories of his mother bringing food to his father at the factory where he was in a sit-down strike. . . has come to blame unions for inflation, and the US's sagging position in the world market. During a strike he crosses a picket line to jeers of "Scab!!'' and has a crisis of will. He nearly becomes catatonic when he gets into his office. The point here is that the manager, unlike the striking workers, has no social support system. This manager knows it since he grew up in a militant union household.

• A public relations man and his friend, an engineer, have fights through the years about the way different processes or products are described to the public; the engineer wants more technically precise language, the PR man wants to make an impact by keeping things simple. The author notes the use Nazi Germany made of simplifications (and could also have put in some analysis of how Reagan and Co. do the same). What emerges is an insightful glimpse of language: "Simplifications are perfectly opaque. . . simplifications impose 'one-track thinking' upon the listener; they cannot be considered. . . In its use as propaganda, language passes from the human sphere to that of technology. Like technology. . . it does not recognize the right to autonomous existence of any person but the speaker. To disagree with the language of the technological will is to disobey.'' But one can, and Shorris does, disagree with and disobey the language of the technological-propagandistic will.

The power of totalitarian thinking, according to Shorris, is a belief in the ultimate perfectibility of the world, a resolution into certainty that will provide happiness for all forever. This pursuit of perfection reminds me of the engineer's pursuit of complete automation, or the biologist's pursuit of "better'' life forms through genetic engineering. The goal is to eliminate contingency, uncertainty, freedom. "Totalitarianism begins with a concept greater than man, and even though this concept is his perfection, the use of man as a means robs him of his dignity. To raise man up to perfection by debasing him is a contradiction: totalitarian goals of perfection are logically impossible.''

Against totalitarianism "stands the beckoning of human autonomy, with its promise of the joy of beginnings and the adventure of contingency. . . All rational men know that no matter how they choose they cannot eliminate unhappiness or achieve perfection in the world.'' One of Shorris' key points is that human society is inevitably imperfect because it is intrinsically complex, unpredictable, full of ambiguities. He rejects all systems or utopias, whether that of Rousseau, Plato, or Marx, on the grounds that such goals reduce human life to a means toward the abstract ends found in the philosophers' minds.

But Shorris, perhaps over-involved, exaggerates the power and control of the "system.'' For example, he thinks the totalitarian system has become so efficient and dominant that it no longer depends on hysteria, war, murder or hate to enforce its power. Yet he realizes that total efficiency is an impossible pursuit doomed to ultimate failure. In fact, totalitarian thinking is hysterical and does depend on hate, war and murder (look at the US campaign against Nicaragua). Totalitarian governments or executives depend on these emotional bulwarks. Without hate, war and fear, their power would erode rapidly.

Because he overestimates its power Shorris is too pessimistic about resistance to the system. His claims that "The sudden and apparently unprovoked dismissal of a few people or even of one person makes the rest docile. . .'' and "Only those who can put aside thought and misconstrue experience survive'' are obviously not always true. Otherwise how did Shorris survive? Many of us with experience in the corporate office world have despaired when co-workers go along with the most absurd demands and expectations with barely a peep, but we have also seen people question and revolt against what enslaves them. Individuals retain their autonomy, in spite of the best efforts of bosses to intimidate it out of existence.

The Manager's Bias

Shorris writes from a distinctly managerial perspective. For example, he thinks we live in a materially-glutted world. Although there is certainly a lot of waste and ostentatious wealth, there are many places in the world where there is "not enough'' for basic, intelligent survival. The real glut in most people's lives is one of twisted images and not goods.

His narrow view of economic reality nonetheless leads Shorris to an important perception: ". . .economic necessity. . . demands the creation of Sisyphean tasks: nothing comes to have as much value as something. . .'' In particular, the "nothing'' of value is information. Too many people are engaged in the production and circulation of utterly useless information. And from this perception, he draws conclusions about the general uselessness of most office work. The computer also stands naked: "The computer has not led to a revolution in any area but records retention and retrieval in a society that already suffers from the retention and retrieval of too much useless information. . . The major effect of these time-saving devices has been the necessity of finding ways to waste time.''

From within the decision-making structures that have produced the rationalization of work processes, Shorris comments on the motivations of efficiency experts. Most workers assume management experts are consciously hostile to the workers' well-being, and there are certainly individuals who have been. But Shorris defends industrial psychologists and management theorists as honest fellows trying to improve company operations who inadvertently create oppressive conditions for workers. Evil or not, the hostility toward workers is built into their jobs. If you work for them, you realize their honesty or dishonesty isn't the point. It's what they do.

Being distant from the shop-floor realities of the factory, Shorris romanticizes the blue-collar worker's life and the reality of the modern trade union as well. Underlying this romanticization is his notion of "alienation.'' Since he rejects materialist philosophy, he also rejects the Marxist analysis of alienation. In Capital, alienation stems from the division between the individual and the products of his or her labor, and from the chasm between the individual and the system of social reproduction. For Shorris, alienation is a feeling, the essential component of human consciousness: "It is man's capacity to feel alienated that makes him human. . . Alienation as part of man's consciousness always leads him toward freedom and improvement of the material conditions of his life. . . he enjoys the inevitable discontent of consciousness, for he can compare his life to his infinite imagination.''

Shorris contends that this feeling of alienation is precisely the autonomous subjectivity that the totalitarian corporation attacks. Since the 19th century, work has been rationalized repeatedly, but only in the white-collar world has that process been extended to workers themselves. Factory work has involved rationalization of the workers, too, but Shorris' roots in the office prevent his seeing this as clearly.

Shorris believes that, contrasted to office workers, blue collar workers are dignified and relatively free. He claims that trade unions have provided a buffer between factory workers and company goals for rationalizing work and ultimately the workers. For Shorris, unions are basically democratic, flexible institutions which have adapted very successfully to the modern capitalist economy. In so doing, they have insulated the factory worker from fear, which is the crucial element in the rationalization of men.

In his enthusiasm for his analysis of unions and alienation, Shorris goes overboard. For example, "Such business tactics as multinational manufacturing, 'Sunbelt strategy,' mergers and acquisitions, or diversification have less and less effect on industrial plants and workers as unions learn to defend their members from the threats to wages and stability arising from new business situations.'' This is patently ridiculous. A brief look at the steel industry and the Rust Bowl of Ohio-Pennsylvania or the copper industry of Arizona belies this silly claim.

These assertions are reminiscent of the wistful longing for something better that is more typically associated with the frustrated low-level employee. In this case, however, it is the voice of an oppressed manager looking back down the social hierarchy for what seems to him to be a relatively idyllic life. It would be bad enough if he stopped at those comments, but he doesn't. Because so many factory workers with whom he has talked define their "real'' lives according to what they do outside the wage-labor arena, Shorris concludes the union worker is "a man very much like the creature dreamed of in Marx's German Ideology: he does one thing today and another tomorrow. . . he is human and free, paying but one fifth of his life to enjoy the rest of his days, and doing so for only twenty-five or thirty years until he retires. . . the life. . . for the worker in communism is beginning to be real for many blue collar workers. Leisure exists, and the blue collar worker enjoys his leisure without real or symbolic constraints.'' Huh?!! Sound like any blue collar workers you know?

Human Thought: Seed of Revolt?

Ultimately, Shorris pinpoints human oppression not in social institutions but in human nature itself, and concludes that ". . .the primary task of freedom is no less than for man to overcome his own nature, to do his business in a way befitting a creature capable of transcending himself.''

His strong point is the analysis of why people go along with the absurdity of modern corporate life. More than most, he has described the mechanisms of domination and control. But in typical liberal and "idealistic'' fashion, he sees the solution in simply thinking:

"Only in thinking can man recognize his own life. In that alienated moment he is the subject who knows his own subjectivity. . . Only the thinking subject, who cannot be a means, can know when he has been made a means in spite of himself. . .''

When it comes to solutions or recommendations, the only specific suggestion he makes is that managers should see their subordinates as equals in order to see themselves as the equals of their superiors. ". . .it requires that a man see himself and all others as subjects, creatures who began the world when they came into it and continue to be potential beginners.''

But no mention is made of the social system, part of which he has so assiduously taken apart during the book. It's as if he himself cannot identify his own oppressor: "Without knowledge of their oppressors, men cannot rebel; they float, unable to find anything against which to rebel, incapable of understanding that they are oppressed by the very organization that keeps them afloat.'' We hear nothing of capitalism, wage-labor, the state, or existing social institutions in general, as being at the root of the problems. Instead, he ultimately seeks to explain totalitarianism and corporate life in terms of individual psychology.

Shorris hopes for a world of subjects freely contesting among themselves. This "human condition'' is one of constant change and interpersonal conflict. While I agree that perfection in human society is an unattainable and oppressive goal, I think he takes far too fatalistic an attitude about human possibilities. Whereas we might be able to create a society of great material abundance and a lot more fun, with far less work and virtually no coercion, if we can get together enough to organize it, Shorris settles for the discontented, alienated thoughts of the lone thinker.

Changing minds is essential, but changing life takes collective action.

--Lucius Cabins

Mind Games

article by tom athanasiou

The world of artificial intelligence can be divided up a lot of different ways, but the most obvious split is between researchers interested in being god and researchers interested in being rich. The members of the first group, the AI "scientists,'' lend the discipline its special charm. They want to study intelligence, both human and "pure,'' by simulating it on machines. But it's the ethos of the second group, the "engineers,'' that dominates today's AI establishment. It's their accomplishments that have allowed AI to shed its reputation as a "scientific con game'' (Business Week) and to become, as it was recently described in Fortune magazine, the "biggest technology craze since genetic engineering.''

The engineers like to bask in the reflected glory of the AI scientists, but they tend to be practical men, well-schooled in the priorities of economic society. They too worship at the church of machine intelligence, but only on Sundays. During the week, they work the rich lodes of "expert systems'' technology, building systems without claims to consciousness, but able to simulate human skills in economically significant, knowledge-based occupations. (The AI market is now expected to reach $2.8 billion by 1990; AI stocks are growing at an annual rate of 30%.)

"Expert Systems''

Occupying the attention of both AI engineers and profit-minded entrepreneurs are the so-called "expert systems.'' An expert is a person with a mature, practiced knowledge of some limited aspect of the world. Expert systems, computer programs with no social experience, cannot really be expert at anything; they can have no mature, practiced knowledge. But in the anthropomorphized language of AI, where words like "expert,'' "understanding,'' and "intelligence'' are used with astounding--and self-serving--naivete, accuracy will not do. Mystification is good for business.

Expert systems typically consist of two parts: the "knowledge base'' or "rule base,'' which describes some little corner of the world--some "domain'' or "microworld''; and the "inference engine,'' which climbs around in the knowledge base looking for connections and correspondences. "The primary source of power. . .is informal reasoning based on extensive knowledge painstakingly culled from human experts,'' explained Doug Lenat in an article that appeared in Scientific American in September 1984. "In most of the programs the knowledge is encoded in the form of hundreds of if-then rules of thumb, or heuristics. The rules constrain search by guiding the program's attention towards the most likely solutions. Moreover. . .expert systems are able to explain all their inferences in terms a human will accept. The explanation can be provided because decisions are based on rules taught by human experts rather than the abstract rules of formal logic.''
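The two-part architecture Lenat describes can be sketched in a few lines: a "knowledge base'' of if-then rules and an "inference engine'' that chains them until nothing new follows. This is only a minimal illustration of the idea, not any actual 1980s system; the diagnostic rules are invented.

```python
# Minimal sketch of an expert system's two parts: a rule base of
# if-then heuristics, and an inference engine that "climbs around"
# in it. Rules here are invented for illustration only.

def forward_chain(rules, facts):
    """Fire any rule whose conditions are all known facts, adding its
    conclusion, and repeat until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical heuristics in the style of a trouble-shooting system.
rules = [
    ({"engine overheats", "coolant low"}, "suspect coolant leak"),
    ({"suspect coolant leak", "puddle under car"}, "replace radiator hose"),
]

derived = forward_chain(rules, {"engine overheats", "coolant low", "puddle under car"})
print(derived)
```

Note that because each conclusion traces back to a named rule, such a system can "explain'' its inference by reciting the chain of rules that fired--which is all the explanation Lenat's quote promises.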

The excitement about expert systems (and the venture capital) is rooted in the economic significance of these "structural selection problems.'' Expert systems are creatures of microworlds, and the hope is that they'll soon negotiate these microworlds well enough to effectively replace human beings.

Some recent expert systems, and their areas of expertise, are CADUCEUS II (medical diagnosis), PROSPECTOR (geological analysis), CATS-1 (locomotive troubleshooting), DIPMETER adviser (sample oil well analysis), and R1/XCON-XSEL (computer system sales support and configuration). Note that the kinds of things they do are all highly technical, involve lots of facts, and are clearly isolated from the ambiguities of the social world.

Such isolation is the key. If our sloppy social universe can be "rationalized'' into piles of predictable little microworlds, then it will be amenable to knowledge-based computerization. Like automated teller machines, expert systems may soon be everywhere:

• In financial services like personal financial planning, insurance underwriting, and investment portfolio analysis. (This is an area where yuppie jobs may soon be under direct threat.)

• In medicine, as doctors get used to using systems like HELP and CADUCEUS II as interactive encyclopedias and diagnostic aids. These systems will also be a great boon to lawyers specializing in malpractice suits.

• In equipment maintenance and diagnosis. "Expert [systems] are great at diagnosis,'' said one GE engineer. In addition to locomotives, susceptible systems include printed circuit boards, telephone cables, jet engines, and cars.

• In manufacturing. "Expert systems can help plan, schedule, and control the production process, monitor and replenish inventories. . ., diagnose malfunctions and alert proper parties about the problem.'' (Infosystems, Aug. '83).

• In military and counterintelligence, especially as aids for harried technicians trying to cope with information overload.

But Do They Work?

If these systems work, or if they can be made to work, then we might be willing to agree with the AI hype that the "second computer revolution'' may indeed be the "important one.'' But do they work, and, if so, in what sense?

Many expert systems have turned out to be quite fallible. "The majority of AI programs existing today don't work,'' a Silicon Valley hacker told me flatly, "and the majority of people engaged in AI research are hucksters. They're not serious people. They've got a nice wagon and they're gonna ride it. They're not even seriously interested in the programs anymore.''

Fortune magazine is generally more supportive, though it troubles itself, in its latest AI article, published last August, to backpedal on some of its own inflated claims of several years ago. Referring to PROSPECTOR, one of the six or so expert systems always cited as evidence that human expertise can be successfully codified in sets of rules, Fortune asserted that PROSPECTOR's achievements aren't all they've been cracked up to be: "In fact, the initial discovery of molybdenum [touted as PROSPECTOR's greatest feat] was made by humans, though PROSPECTOR later found more ore.''

Still, despite scattered discouraging words from expert critics, the AI engineers are steaming full speed ahead. Human Edge software in Palo Alto is already marketing "life-strategy'' aids for insecure moderns: NEGOTIATION EDGE to help you psyche out your opponent on the corporate battlefield, SALES EDGE to help you close that big deal, MANAGEMENT EDGE to help you manipulate your employees. All are based on something called "human factors analysis.''

And beyond the horizon, there's the blue sky. Listen to Ronald J. Brachman, head of knowledge representation and reasoning research at Fairchild Camera and Instrument Corporation: "Wouldn't it be nice if. . . instead of writing ideas down I spoke into my little tape recorder. . .It thinks for a few minutes, then it realizes that I've had the same thought a couple of times in the past few months. It says, "Maybe you're on to something.''' One wonders what the head of knowledge engineering at one of the biggest military contractors in Silicon Valley might be on to. But I suppose that's beside the point, which is to show the dreams of AI "engineers'' fading off into the myths of the AI "scientists''--those who would be rich regarding those who would be god. Mr. Brachman's little assistant is no mere expert system; it not only speaks natural English, it understands that English well enough to recognize two utterances as being about the same thing even when spoken in different contexts. And it can classify and cross-classify new thoughts, thoughts which it can itself recognize as interesting and original. Perhaps, unlike Mr. Brachman, it'll someday wonder what it's doing at Fairchild.

Machines Can't Talk

The Artificial Intelligence program at UC Berkeley is trying to teach computers to do things like recognizing a face in a crowd, or carrying on a coherent conversation in a "natural'' language like English or Japanese. Without such everyday abilities--so basic we take them completely for granted--how could we be said to be intelligent at all? Likewise machines?

The culture of AI encourages a firm, even snide, conviction that it's just a matter of time. It thrives on exaggeration, and refuses to examine its own failures. Yet there are plenty. Take the understanding of "natural languages'' (as opposed to formal languages like FORTRAN or PASCAL). Humans do it effortlessly, but AI programs still can't--even after thirty years of hacking. Overconfident pronouncements that "natural language understanding is just around the corner'' were common in the '50s, but repeated failure led to declines in funding, accusations of fraud, and widespread disillusionment.

Machine translation floundered because natural language is essentially--not incidentally--ambiguous; meaning always depends on context. My favorite example is the classic, "I like her cooking,'' a statement likely to be understood differently if the speaker is a cannibal rather than a middle American. Everyday language is pervaded by unconscious metaphor, as when one says, "I lost two hours trying to get my meaning across.'' Virtually every word has an open-ended field of meanings that shade gradually from those that seem utterly literal to those that are clearly metaphorical. In order to translate a text, the computer must first "understand'' it.

TA for Computers

Obviously AI scientists have a long way to go, but most see no intrinsic limits to machine understanding. UCB proceeds by giving programs "knowledge'' about situations which they can then use to "understand'' texts of various kinds.

Yale students have built a number of "story understanding systems,'' the most striking of which is "IPP,'' a system which uses knowledge of terrorism to read news stories, learn from them, and answer questions about them. It can even make generalizations: Italian terrorists tend to kidnap businessmen; IRA terrorists are more likely to send letter bombs.
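The kind of generalization attributed to IPP can be caricatured in a few lines: tally (group, method) pairs across a pile of already-parsed "stories'' and report each group's most frequent method. The stories below are invented, and the real IPP read raw news text, which this sketch makes no attempt to do.

```python
# Toy illustration of IPP-style generalization over hand-coded
# "stories" (group, method). All data invented for illustration.
from collections import Counter

stories = [
    ("Italian", "kidnapping"),
    ("Italian", "kidnapping"),
    ("Italian", "letter bomb"),
    ("IRA", "letter bomb"),
    ("IRA", "letter bomb"),
]

def generalize(stories):
    """Return each group's most common method -- a crude stand-in for
    'Italian terrorists tend to kidnap businessmen.'"""
    by_group = {}
    for group, method in stories:
        by_group.setdefault(group, Counter())[method] += 1
    return {group: counts.most_common(1)[0][0]
            for group, counts in by_group.items()}

print(generalize(stories))
```

What the sketch makes obvious is that the "generalization'' is frequency counting over pre-digested symbols--which is exactly the question the next paragraph raises about what, if anything, such a system understands.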

How much can we expect a program like IPP to learn? How long will it be before its "understanding'' can be "generalized'' from the microworld of terrorism to human life as a whole? In what sense can it be said to understand terrorism at all, if it cannot also understand misery, violence, and frustration? If it isn't really understanding anything, then what exactly is it doing, and what would it mean for it to do it better? Difficult questions these.

The foundation stone of this "IPP'' school of AI is the "script.'' Remember the script? Remember that particularly mechanistic pop psychology called "Transactional Analysis''? It too was based upon the notion of scripts, and the similarity is more than metaphorical.

In TA, a "script'' is a series of habitual stereotyped responses that we unconsciously "run'' like tapes as we stumble through life. Thus if someone we know acts helpless and hurt, we might want to "rescue'' them because we have been "programmed'' by our life experience to do so.

In the AI universe the word "script'' is used in virtually the same way, to denote a standard set of expectations about a stereotyped situation that we use to guide our perceptions and responses. When we enter a restaurant we unconsciously refer to a restaurant script, which tells us what to do--sit down and wait for a waiter, order, eat, pay before leaving, etc. The restaurant is treated as a microworld, and the script guides the interpretation of events within it; once a script has been locked in, the context is known, and the ambiguity tamed.
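As a data structure, a script is little more than an ordered list of expected events; "taming ambiguity'' means that once the script is selected, any steps a story never mentions are filled in by default. The following is a minimal sketch with invented step names, not the representation any particular system used.

```python
# A minimal "script" in the AI sense: an ordered list of expected
# events for a stereotyped situation. Step names are invented.

RESTAURANT_SCRIPT = ["enter", "be seated", "order", "eat", "pay", "leave"]

def fill_in(mentioned, script=RESTAURANT_SCRIPT):
    """Expand a partial story into the full script, labeling each step
    as mentioned outright or inferred by default."""
    mentioned = set(mentioned)
    return [(step, "mentioned" if step in mentioned else "inferred")
            for step in script]

# "She entered the restaurant and paid" -- the script supplies the rest,
# including the inference that she ate.
for step, status in fill_in(["enter", "pay"]):
    print(step, status)
```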

But while behavior in a restaurant may be more or less a matter of routine, what about deciding which restaurant to go to? Or whether to go to a restaurant at all? Or recognizing a restaurant when you see one? These problems aren't always easy for humans, and their solution requires more than the use of scripts. In fact, the research going on at Berkeley is specifically aimed at going beyond script-bound systems, by constructing programs that have "goals'' and make "plans'' to achieve those goals. Grad students even torture their programs by giving them multiple conflicting goals, and hacking at them until they can satisfy them all.

Anti-AI

The academic zone of AI is called "cognitive studies.'' At UC Berkeley, however, cognitive studies is not just AI; the program is interdisciplinary and includes philosophers, anthropologists, psychologists, and linguists. (The neurophysiologists, I was told, have their own problems.) Specifically, it includes Herbert Dreyfus and John Searle, two of the most persistent critics of the whole AI enterprise. If Cal hasn't yet made it onto the AI map (and it hasn't), it's probably fair to say that it's still the capital of the anti-AI forces, a status it first earned in 1972 with the publication of Dreyfus' What Computers Can't Do.

Dreyfus thinks he's winning. In the revised edition of his book, published in 1979, he claimed that "there is now general agreement that. . . intelligence requires understanding, and understanding requires giving the computer the background of common sense that adult human beings have by virtue of having bodies, interacting skillfully in the material world, and being trained into a culture.''

In the real world of AI, Dreyfus's notion of being "trained into a culture'' is so far beyond the horizon as to be inconceivable. Far from having societies, and thus learning from each other, today's AI programs rarely even learn from themselves.

Few AI scientists would accept Dreyfus' claim that real machine intelligence requires not only learning, but bodies and culture as well. Most of them agree, in principle if not in prose, with their high priest, MIT's Marvin Minsky. Minsky believes that the body is "a tele-operator for the brain,'' and the brain, in turn, a "meat machine.''

The Dark Side of AI

"Technical people rely upon their ties with power because it is access to that power, with its huge resources, that allows them to dream, the assumption of that power that encourages them to dream in an expansive fashion, and the reality of that power that brings their dreams to life.''

--David Noble, The Forces of Production

As fascinating as the debates within AI have become in recent years, one can't help but notice the small role they allocate to social considerations. Formal methods have come under attack, but generally in an abstract fashion. That the prestige of these methods might exemplify some imbalance in our relationship to science, some dark side of science itself, or even some large social malevolence--these are thoughts rarely heard even among the critics of scientific arrogance.

For that reason, we must now drop from the atmospherics of AI research to the charred fields of earth. The abruptness of the transition can't be avoided: science cloaks itself in wonder, indeed it provides its own mythology, yet behind that mythology are always the prosaic realities of social life.

When the first industrial revolution was still picking up steam, Frederick Taylor invented "time/motion'' study, a discipline predicated on the realization that skill-based manufacturing could be redesigned to eliminate the skill--and with it the autonomy--of the worker. The insight behind today's expert systems--that much of human skill can be extracted by knowledge engineers, codified into rules and heuristics, and immortalized on magnetic disks--is essentially the same.

Once manufacturing could be "rationalized,'' automation became not only possible, but in the eyes of the faithful, necessary. It also turned out to be terrifically difficult, for reality was more complex than the visions of the engineers. Workers, it turned out, had lots of "implicit skills'' that the time/motion men hadn't taken into account. Think of these skills as the ones managers and engineers can't see. They're not in the formal job description, yet without them the wheels would grind to a halt. And they've constituted an important barrier to total automation: there must be a human machinist around to ease the pressure on the lathe when an anomalous cast comes down the line, to "work around'' the unevenness of nature; bosses must have secretaries, to correct their English, if for no other reason.

Today's latest automation craze, "adaptive control,'' is intended to continue the quest for the engineer's grail--the total elimination of human labor. To that end the designers of factory automation systems are trying to substitute delicate feedback mechanisms, sophisticated sensors, and even AI for the human skills that remain in the work process.

Looking back on industrial automation, David Noble remarked that "men behaving like machines paved the way for machines without men.'' By that measure, we must assume ourselves well on the way to a highly automated society. By and large, work will resist total automation--in spite of the theological ideal of a totally automated factory, some humans will remain--but there's no good reason to doubt that the trend towards mechanization will continue. Among the professions, automation will sometimes be hard to see, hidden within the increasing sophistication of tools still nominally wielded by men and women. But paradoxically, the automation of mental labor may, in many cases, turn out to be easier than the automation of manual labor. Computers are, after all, ideally suited to the manipulation of symbols, far more suited than one of today's primitive robots to the manipulation of things. The top tier of our emerging two-tier society may eventually turn out to be a lot smaller than many imagine.

As AI comes to be the basis of a new wave of automation, a wave that will sweep the professionals up with the manual workers, we're likely to see new kinds of resistance developing. We know that there's already been some, for DEC (Digital Equipment Corporation), a company with an active program of internal AI-based automation, has been strangely public about the problems it has encountered. Arnold Kraft, head of corporate AI marketing at DEC: "I fought resistance to our VAX-configuration project tooth and nail every day. Other individuals in the company will look at AI and be scared of it. They say, "AI is going to take my job. Where am I? I am not going to use this. Go Away!' Literally, they say "Go Away!''' [Computer Decisions, August 1984]

Professionals rarely have such foresight, though we may hope to see this change in the years ahead. Frederick Hayes-Roth, chief scientist at Teknowledge, a Palo Alto-based firm, with a reputation for preaching the true gospel of AI, put it this way: "The first sign of machine displacement of human professionals is standardization of the professional's methodology. Professional work generally resists standardization and integration. Over time, however, standard methods of adequate efficiency often emerge.'' More specifically: "Design, diagnosis, process control, and flying are tasks that seem most susceptible to the current capabilities of knowledge systems. They are composed largely of sensor interpretation (excepting design), of symbolic reasoning, and of heuristic planning--all within the purview of knowledge systems. The major obstacles to automation involving these jobs will probably be the lack of standardized notations and instrumentation, and, particularly, in the case of pilots, professional resistance.'' Hayes-Roth is, of course, paid to be optimistic, but still, he predicts "fully automated air-traffic control'' by 1990-2000. Too bad about PATCO.

Automating the Military

On October 28, 1983, the Defense Advanced Research Projects Agency (DARPA) announced the Strategic Computing Initiative (SCI), launching a five-year, $600 million program to harness AI to military purposes. The immediate goals of the program are "autonomous tanks'' (killer robots) for the Army, a "pilot's associate'' for the Air Force, and "intelligent battle management systems'' for the Navy. If things go according to plan, all will be built with the new gallium arsenide technology, which, unlike silicon, is radiation resistant. The better to fight a protracted nuclear war with, my dear.

And these are just three tips of an expanding iceberg. Machine intelligence, were it ever to work, would allow the military to switch over to autonomous and semi-autonomous systems capable of managing the ever-increasing speed and complexity of "modern'' warfare. Defense Electronics recently quoted Robert Kahn, director of information processing technology at DARPA, as saying that "within five years, we will see the services start clamoring for AI.''

High on the list of military programs slated to benefit from the SCI is Reagan's proposed "Star Wars'' system, a ballistic missile "defense'' apparatus which would require highly automated, virtually autonomous military satellites able to act quickly enough to knock out Soviet missiles in their "boost'' phase, before they release their warheads. Such a system would be equivalent to automated launch-on-warning; its use would be an act of war.

Would the military boys be dumb enough to hand over control to a computer? Well, consider this excerpt from a congressional hearing on Star Wars, as quoted in the LA Times on April 26, 1984:

"Has anyone told the President that he's out of the decision- making process?'' Senator Paul Tsongas demanded.

"I certainly haven't,'' Kenworth (Reagan science advisor) said.

At that, Tsongas exploded: "Perhaps we should run R2-D2 for President in the 1990s. At least he'd be on line all the time.''

Senator Joseph Biden pressed the issue over whether an error might provoke the Soviets to launch a real attack. "Let's assume the President himself were to make a mistake. . .,'' he said.

"Why?'' interrupted Cooper (head of DARPA). "We might have the technology so he couldn't make a mistake.''

"OK,'' said Biden. "You've convinced me. You've convinced me that I don't want you running this program.''

But his replacement, were Cooper to lose his job, would more than likely worship at the same church. His faith in the perfectibility of machine intelligence is a common canon of AI. This is not the hard-headed realism of sober military men, compelled by harsh reality to extreme measures. It is rather the dangerous fantasy of powerful men overcome by their own mythologies, mythologies which flourish in the super-heated rhetoric of the AI culture.

The military is a bureaucracy like any other, so it's not surprising to find that its top level planners suffer the same engineer's ideology of technical perfectibility as do their civilian counterparts. Likewise, we can expect resistance to AI-based automation from military middle management. Already there are signs of it. Gary Martins, a military AI specialist, from an interview in Defense Electronics (Jan. '83): "Machines that appear to threaten the autonomy and integrity of commanders cannot expect easy acceptance; it would be disastrous to introduce them by fiat. We should be studying how to design military management systems that reinforce, rather than undermine, the status and functionality of their middle-level users.''

One noteworthy thing about some "user interfaces'': Each time the system refers to its knowledge-base it uses the idiom "you taught me'' to alert the operator. This device was developed for the MYCIN system, an expert on infectious diseases, in order to overcome resistance from doctors. It reappears unchanged, in a system designed for tank warfare management in Europe. A fine example of what political scientist Harold Laski had in mind when he noted that "in the new warfare the engineering factory is a unit of the Army, and the worker may be in uniform without being aware of it.''

Overdesigned and unreliable technologies, when used for manufacturing, can lead to serious social and economic problems. But such "baroque'' technologies, integrated into nuclear war fighting systems, would be absurdly dangerous. For this reason, Computer Professionals for Social Responsibility has stressed the "inherent limits of computer reliability'' in its attacks on the SCI. The authors of Strategic Computing, an Assessment, assert, "In terms of their fundamental limitations, AI systems are no different than other computer systems. . . The hope that AI could cope with uncertainty is understandable, since there is no doubt that they are more flexible than traditional computer systems. It is understandable, but it is wrong.''

Unfortunately, all indications are that, given the narrowing time-frames of modern warfare, the interplay between technological and bureaucratic competition, and the penetration of the engineers' ideology into the military ranks, we can expect the Pentagon to increasingly rely on high technology, including AI, as a "force and intelligence multiplier.'' The TERCOM guidance system in cruise missiles, for example, is based directly on AI pattern matching techniques. The end result will likely be an incredibly complex, poorly tested, hair-trigger amalgamation of over-advertised computer technology and overkill nuclear arsenals. Unfortunately, the warheads themselves, unlike the systems within which they will be embedded, can be counted upon to work.

And the whole military AI program is only a subset of a truly massive thrust for military computation of all sorts: a study by the Congressional Office of Technology Assessment found that in 1983 the Defense Department accounted for 69% of the basic research in electrical engineering and 54.8% of research in computer science. The DOD's dominance was even greater in applied research, in which it paid for 90.5% of research in electrical engineering and 86.7% of research in computer sciences.

Defensive Rationalizations

There are many liberals, even left-liberals, in the AI community, but few of them have rebelled against the SCI. Why? To some degree because of the Big Lie of "national defense,'' but there are other reasons given as well:

• Many of them don't really think this stuff will work anyway.

• Some of them will only do basic research, which "will be useful to civilians as well.''

• Most of them believe that the military will get whatever it wants anyway.

• All of them need jobs.

The first reason seems peculiar to AI, but perhaps I'm naive. Consider, though, the second. Bob Wilensky, a professor at UC Berkeley: "DOD money comes in different flavors. I have 6.1 money. . . it's really pure research. It goes all the way up to 6.13, which is like, procurement for bombs. Now Strategic Computing is technically listed as a 6.2 activity [applied research], but what'll happen is, there'll be people in the business world that'll say "OK, killer robots, we don't care,' and there'll be people in industry that say, "OK, I want to make a LISP machine that's 100 times faster than the ones we have today. I'm not gonna make one special for tanks or anything.' So the work tends to get divided up.''

Actually, it sounds more like a cooperative effort. The liberal scientists draw the line at basic research; they won't work on tanks, but they're willing to help provide what the anti-military physicist Bruno Vitale calls a "rich technological menu,'' a menu immediately scanned by the iron men of the Pentagon.

Anti-military scientists have few choices. They can restrict themselves to basic research, and even indulge the illusion that they no longer contribute to the war machine. Or they can grasp for the straws of socially useful applications: AI-assisted medicine, space research, etc. Whatever they choose, they have not escaped the web that binds science to the military. The military fate of the space shuttle program demonstrates this well enough. In a time when the military has come to control so much of the resources of civil society, the only way for a scientist to opt out is by quitting the priesthood altogether, and this is no easy decision.

But let's assume, for the sake of conversation, that we don't have to worry about militarism, or unemployment, or industrial automation. Are we then free to return to our technological delirium?

Unfortunately, there's another problem for which AI itself is almost the best metaphor. Think of the images it invokes, of the blurring of the line between humanity and machinery from which the idea of AI derives its evocative power. Think of yourself as a machine. Or better, think of society as a machine--fixed, programmed, rigid. The problem is bureaucracy, the programmed society, the computer state, 1984.

Of course, not everyone's worried. The dystopia of 1984 is balanced, in the popular mind, by the utopia of flexible, decentralized, and now intelligent computers. The unexamined view that microcomputers will automatically lead to "electronic democracy'' is so common that it's hard to cross the street without stepping in it. And most computer scientists tend to agree, at least in principle. Bob Wilensky, for example, believes that the old nightmare of the computer state is rooted in an archaic technology, and that "as computers get more intelligent we'll be able to have a more flexible bureaucracy as opposed to a more rigid bureaucracy. . .''

"Utopian'' may not be the right word for such attitudes. The utopians were well meaning and generally powerless; the spokesmen of progress are neither. Scientists like Wilinsky are well funded and often quoted, and if the Information Age has a dark side, they have a special responsibility to bring it out. It is through them that we encounter these new machines, and the stories they choose to tell us will deeply color our images of the future. Their optimism is too convenient; we have the right to ask for a deeper examination.

Machine Society

Imagine yourself at a bank, frustrated, up against some arbitrary rule or procedure. Told that "the computer can't do it,'' you will likely give up. "What's happened here is a shifting of the sense of who is responsible for policy, who is responsible for decisions, away from some person or group of people who actually are responsible in the social sense, to some inanimate object in which their decisions have been embodied.'' Or as Emerson put it, "things are in the saddle, and ride mankind.''

Now consider the bureaucracy of the future, where regulation books have been replaced by an integrated information system, a system that has been given language. Terry Winograd, an AI researcher, quotes from a letter he received:

"From my point of view natural language processing is unethical, for one main reason. It plays on the central position which language holds in human behavior. I suggest that the deep involvement Wiezenbaum found some people have with ELIZA [a program which imitates a Rogerian therapist] is due to the intensity with which most people react to language in any form. When a person receives a linguistic utterance in any form, the person reacts much as a dog reacts to an odor. We are creatures of language. Since this is so, it is my feeling that baiting people with strings of characters, clearly intended by someone to be interpreted as symbols, is as much a misrepresentation as would be your attempt to sell me property for which you had a false deed. In both cases an attempt is being made to encourage someone to believe that something is a thing other than what it is, and only one party in the interaction is aware of the deception. I will put it a lot stronger: from my point of view, encouraging people to regard machine-generated strings of tokens as linguistic utterances, is criminal, and should be treated as criminal activity.''

The threat of the computer state is usually seen as a threat to the liberty of the individual. Seen in this way, the threat is real enough, but it remains manageable. But Winograd's letter describes a deeper image of the threat. Think of it not as the vulnerability of individuals, but rather as a decisive shift in social power from individuals to institutions. The shift began long ago, with the rise of hierarchy and class. It was formalized with the establishment of the bureaucratic capitalist state, and now we can imagine its apotheosis. Bureaucracy has always been seen as machine society; soon the machine may find its voice.

We are fascinated by Artificial Intelligence because, like genetic engineering, it is a truly Promethean science. As such, it reveals the mythic side of science. And the myth, in being made explicit, reveals the dismal condition of the institution of science itself. Shamelessly displaying its pretensions, the artificial intelligentsia reveals as well a self-serving naivete, and an embarrassing entanglement with power.

On the surface, the myth of AI is about the joy of creation, but a deeper reading forces joy to the margins. The myth finally emerges as a myth of domination, in which we wake to find that our magnificent tools have built us an "iron cage,'' and that we are trapped.

Science is a flawed enterprise. It has brought us immense powers over the physical world, but is itself servile in the face of power. Wanting no limits on its freedom to dream, it shrouds itself in myth and ideology, and counsels us to use its powers unconsciously. It has not brought us wisdom.

Or perhaps the condition of science merely reflects the condition of humanity. Narrow-mindedness, arrogance, servility in the face of power--these are attributes of human beings, not of tools. And science is, after all, only a tool.

Many people, when confronted with Artificial Intelligence, are offended. They see its goal as an insult to their human dignity, a dignity they see as bound up with human uniqueness. In fact, intelligence can be found throughout nature, and is not unique to us at all. And perhaps someday, if we're around, we'll find it can emerge from semiconductors as well as from amino acids. In the meantime we'd best seek dignity elsewhere. Getting control of our tools, and the institutions which shape them, is a good place to start.

--Tom Athanasiou