Publication | Legaltech News

Nervous System: The ELIZA Effect

David Kalat

June 3, 2024

The first chatbot, ELIZA, was created by Joseph Weizenbaum at MIT in the 1960s as a Rogerian-style “psychotherapist” using natural language communication.

One of the most articulate critics of artificial intelligence (AI) in the 1970s was, ironically, an early pioneer who had helped popularize the field’s possibilities. Joseph Weizenbaum was the mastermind behind ELIZA, a 1960s chatbot that functioned as a digital therapist. Through a simple text-based interface, ELIZA would engage in a dialogue that seemed convincingly human. Many users found ELIZA to be as valuable as a living therapist; some formed intense emotional bonds with it; a few were convinced it was genuinely human. But less than a decade after this triumph, ELIZA’s creator was arguing that his own life’s work augured something deeply dangerous and immoral.

Weizenbaum had grown up in Hitler’s Germany, a middle-class Jewish child in a nation rapidly succumbing to racist ideology. His family fled in 1936 and settled in Detroit. Weizenbaum struggled to find a sense of community but realized he did not need English skills to excel in mathematics. When war broke out, he was drafted into the Army Air Corps to serve as a meteorologist—because as an “enemy alien” he was forbidden to work on cryptography.

After the war, he was a seminal force in the burgeoning field of computer science and was recruited to join the faculty of the Massachusetts Institute of Technology (MIT). There he developed ELIZA, named for the heroine of George Bernard Shaw’s play Pygmalion, who also used language to masquerade as someone or something else.

Weizenbaum introduced his chatbot to the world in a January 1966 article in the Communications of the Association for Computing Machinery. He described it as a program that facilitated natural language communication between “man and machine” and illustrated ELIZA’s capabilities with a sample dialogue:

Men are all alike.

IN WHAT WAY

They’re always bugging me about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE

He says I’m depressed much of the time.

I AM SORRY TO HEAR YOU ARE DEPRESSED

It was a convincing illusion of machine comprehension, but Weizenbaum knew it was just an electronic con game. The program had a library of canned responses, modeled on the generic prompts used in Rogerian-style talk therapy. After each input, the program scanned for keywords to select which canned response to return. If, for example, it found the word “mother” in the input text, ELIZA would spit out “HOW DO YOU FEEL ABOUT YOUR MOTHER?” If no keywords were found to prompt a specific response, ELIZA defaulted to generic replies such as “TELL ME MORE.”
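
The mechanics are simple enough to sketch in a few lines of modern code. The short Python fragment below is a loose illustration of that keyword-and-canned-response loop, not Weizenbaum’s original implementation; the particular keywords and replies are invented for the example.

import random

# Map trigger keywords to canned responses (illustrative stand-ins,
# not Weizenbaum's original script).
RULES = {
    "mother": "HOW DO YOU FEEL ABOUT YOUR MOTHER?",
    "depressed": "I AM SORRY TO HEAR YOU ARE DEPRESSED",
    "always": "CAN YOU THINK OF A SPECIFIC EXAMPLE",
}

# Generic fallbacks used when no keyword is found.
DEFAULTS = ["TELL ME MORE", "PLEASE GO ON"]

def respond(user_input: str) -> str:
    """Scan the input for a keyword and return the matching canned reply."""
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # No keyword matched: fall back to a generic prompt.
    return random.choice(DEFAULTS)

print(respond("He says I'm depressed much of the time."))
# -> I AM SORRY TO HEAR YOU ARE DEPRESSED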

Scientists had already observed that patients sometimes mistakenly interpret interactions with their (human) therapist as having greater emotional significance—a phenomenon called “transference.” By mimicking the style of therapeutic conversations, ELIZA provoked a similar reaction. Weizenbaum felt that human therapists at least have real emotions and ethical feelings, even if patients sometimes misread them. By contrast, despite what ELIZA’s users may have believed, the program not only did not care for them but literally could not.

ELIZA was a landmark accomplishment in early AI. Weizenbaum’s work on the program earned him tenure and brought him to the attention of MIT’s Artificial Intelligence Project, led by computer science visionaries Marvin Minsky and John McCarthy. Minsky and McCarthy had no doubt that technology would eventually manifest true AI, machines capable of genuinely thinking like humans as opposed to merely mimicking them.

Weizenbaum, however, deeply disagreed. He did not doubt that algorithms could eventually become sophisticated enough to model human thought much better than ELIZA did, but he did doubt that would ever be enough. Weizenbaum had made ELIZA in part because of his own experiences in therapy. He was a troubled soul, buffeted by personal crises and repeatedly hospitalized for psychological issues. He knew that there were aspects of the human mind that could never be fully understood, much less modeled as software.

In the 1970s, Weizenbaum grew increasingly agitated by the Vietnam War and alarmed at the extent to which MIT’s work was used to support it. MIT received more funding from the Pentagon than any other university. Military funds had paid for ELIZA. Weizenbaum felt scientists had a moral duty to know about, and actively participate in, how their work affected society.

To his mind, too many of his peers were ignoring how their work was affecting their world, as long as they were mentally stimulated and well compensated. That stance reminded Weizenbaum of the trap Nazi scientists had fallen into, disclaiming responsibility for how their work was used. He had already fled one home that succumbed to that thinking.

He feared that AI technology would inevitably threaten a humanist society. He had seen how readily people could turn over their hearts to a thoughtless machine—what more would society be willing to risk if those machines were smarter?

Regardless of whether true AI was achievable, Weizenbaum insisted, no machine would ever have the power to make human judgments. He framed it as the difference between decisions and choices. A machine can come to a decision by calculating information and proceeding through a series of logic gates. That can be programmed. But a choice is a manifestation of one’s values. That can only be learned.

 

The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions, position, or policy of Berkeley Research Group, LLC or its other employees and affiliates.
