AI is an Actor
The metaphors we use for technologies shape how we approach them. Choose carefully.
This summer I taught a course on AI and society at Cambridge University. It was a good time, though as with any first-time course on a fast-moving subject, the main thing I learned was how to teach it better next year.
One of the things we talked about in the course was the difference between human and artificial intelligence, and the tricks designers use to make AI tools seem smarter or friendlier. It raised the question: if AI aims to display human intelligence, what kind of human are companies trying to create? Cambridge professors, ER nurses, Buddhist monks, and toddlers are all humans, and all display intelligence; but they seem to exhibit quite different kinds of intelligence.
If you spend any time using artificial intelligence tools like ChatGPT, Claude, or Gemini, you'll notice that these technologies don't just seem intelligent; they have personalities, too. Most are friendly, helpful, and supportive, like an infinitely patient customer service agent; for an additional fee, you can get edgier personas, like an anime waifu or flirty boyfriend (or so I've read). Power users advise that an AI will deliver more useful results if you give it a role, like "an experienced marketing executive who specializes in emerging technologies," or "a middle-school science teacher who can explain complicated ideas to 12-year-olds." Indeed, Wharton professor Ethan Mollick writes in his book Co-Intelligence that you should "Treat AI like a human, but tell it what kind of human it is."
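(If you're curious what "giving it a role" looks like under the hood, here's a minimal sketch using the OpenAI Python client. The model name and the role text are just placeholders, and any chat API with a "system" message works the same way.)

```python
# Minimal role-prompting sketch using the OpenAI Python client (v1).
# The model name and role wording are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The "system" message is where the role-playing instruction goes.
        {"role": "system",
         "content": "You are a middle-school science teacher who can "
                    "explain complicated ideas to 12-year-olds."},
        # The "user" message is the actual question or task.
        {"role": "user", "content": "Why is the sky blue?"},
    ],
)
print(response.choices[0].message.content)
```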
Computer scientists have long known that we treat computers like people, and can develop emotional attachments to online characters and robots. In the 1960s, users at MIT poured their hearts out to ELIZA, Joseph Weizenbaum's simple chatbot therapist. In the 1990s, Stanford professors Clifford Nass and Byron Reeves found that even tech-savvy users apply social rules to interactions with computers (saying thank you, for example), and behave as if programs have personalities, agency, and feelings. (Check out “Computers are Social Actors” for a summary.) We do this with dogs, cats, old houses, ships, and the weather, so it's no surprise we would also do this with computer programs like ELIZA, robots designed to be cute and friendly (Jibo), or video game characters (Cortana).
AI companies know all about this, of course. They treat this habit like a cheat code. They calculate that giving their product a warm, helpful personality makes it more likely that we'll develop a personal relationship with their service, and less likely that we'll abandon it for the next cool AI.
But we can use this habit to serve our purposes, too. How? By thinking of AI not just as a human, but as an actor.
Metaphors like these matter. Cognitive linguist George Lakoff and philosopher Mark Johnson argue in Metaphors We Live By that metaphors play a critical role in shaping how we think and reason, and have a special place in helping us make sense of unfamiliar things. In this case, the AI-as-actor metaphor can help us remember what AIs really do, what their limits are, and how we should interact with them.
First, it reminds us that an AI is not actually drawing on human expertise, and that what it says should be treated with skepticism, no matter how convincing it sounds. In real life, we wouldn't trust Noah Wyle or Ellen Pompeo to operate on us, no matter how good a job they've done playing doctors on screen.
Likewise, large language models aren't reasoning; they're "stochastic parrots," as linguist Emily Bender and her colleagues put it, generating the most statistically likely output in response to our input. We should assume that they "understand" their replies to our questions about as well as an actor who orders an "emergency thoracotomy in response to traumatic cardiac herniation" understands human anatomy. (I don't know what that procedure is, or if it's real. I'm not a doctor!) They sound authoritative, but that's part of the act. When you ask an AI to respond as a business advisor or math tutor, think of it as playing that role on the screen, not being that person in reality.
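To make "statistically likely output" concrete, here's a toy sketch of next-token sampling. The vocabulary and scores below are invented; a real model scores tens of thousands of tokens with a neural network, but the final step looks roughly like this:

```python
# Toy illustration of next-token sampling. The tiny vocabulary and the
# scores are invented; a real LLM computes scores over a huge vocabulary,
# but the last step is the same idea.
import math
import random

# Hypothetical model scores (logits) for what follows "The doctor ordered an"
logits = {"emergency": 2.1, "X-ray": 1.7, "espresso": -0.5, "umbrella": -2.0}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Sample the next token in proportion to its probability -- hence "stochastic".
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print("next token:", next_token)
```

No anatomy, no medicine, no meaning: just a probability distribution over what word plausibly comes next.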
Second, it helps us remember that generative AIs are programmed to act likable, keep you engaged, and always be ready to help. Human actors want audiences and public acclaim; AIs are programmed to act in a similar manner. Ever notice how AIs almost always end an answer with an offer to do something else for you-- answer another question, help you with a math problem, write a first draft? That's no accident. That's good interaction design. It's intended to keep the act going, and to make you feel bad about abandoning the conversation-- just as you would when talking to a human.
Finally, if AI is an actor, we are the director, acting coach, and audience all in one. When we ask an AI to respond as a teacher or an executive, give it a task, and then refine the results, we're giving it a role to play, setting the scene, deciding where the performance is good, and offering notes on how it can be improved. The most gifted stage actors draw on and channel an audience's energy, and the best performances are more like collaborations. Likewise, the most interesting AI-generated results emerge from the back and forth between us and AI, not from AI itself. When we think it's exhibiting human capabilities, we mistake our guidance for its intelligence.
In 1950, AI visionary Alan Turing proposed that if an examiner cannot tell the difference between a computer's answers to questions and those provided by a human, the computer could be considered to have displayed human intelligence. (He talks about this in a 1952 BBC panel, “Can automatic calculating machines be said to think?”)
We now call this the Turing Test, but revealingly, Turing himself called it "the imitation game." Long before it was a reality, before it was translating texts, pricing flights, mapping routes, or summarizing our Zoom meetings, the founders of computer science imagined artificial intelligence as a kind of performer. Today, AI has turned into an incredibly skilled mimic of human intelligence, able to play many roles, to sound convincing, and to keep us talking. But the more we use it, the more important it is to remember that it's all an act.