Human-like programs abuse our empathy – even Google engineers aren’t immune

Publish: 7:13 PM, June 18, 2022 | Update: 7:13 PM, June 18, 2022

It’s easy to be fooled by the mimicry, but consumers need transparency about how such systems are used
The Google engineer Blake Lemoine wasn’t speaking for the company officially when he claimed that Google’s chatbot LaMDA was sentient, but Lemoine’s misconception illustrates the risks of designing systems in ways that convince humans they see real, independent intelligence in a program. If we believe that text-generating machines are sentient, what actions might we take based on the text they generate? That belief led Lemoine to leak confidential transcripts from the program, for which he is currently suspended from the company.
Google is decidedly leaning into that kind of design, as seen in Alphabet CEO Sundar Pichai’s demo of that same chatbot at Google I/O in May 2021, where he prompted LaMDA to speak in the voice of Pluto and share some fun facts about the ex-planet. As Google plans to make this a core consumer-facing technology, the fact that one of its own engineers was fooled highlights the need for transparency about how these systems work.
LaMDA (its name stands for “language model for dialogue applications”) is an example of a very large language model, or a computer program built to predict probable sequences of words. Because it is “trained” with enormous amounts of (mostly English) text, it can produce seemingly coherent English text on a wide variety of topics. I say “seemingly coherent” because the computer’s only job is to predict which group of letters will come next, over and over again. Those sequences only become meaningful when we, as humans, read them.
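To see how prediction alone can yield seemingly coherent text, consider a toy sketch (not how LaMDA actually works; real large language models use neural networks over vast corpora, while this hypothetical illustration just counts which word most often follows another in a tiny sample):

```python
from collections import Counter, defaultdict

# A tiny "training corpus" of words.
training_words = ("the cat sat on the mat "
                  "the cat ate the fish "
                  "the dog sat on the rug").split()

# Count, for each word, which words followed it in training.
follows = defaultdict(Counter)
for prev, nxt in zip(training_words, training_words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in training.
    return follows[word].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

The program has no idea what a cat is; it only reproduces statistical patterns from its input. Any "meaning" in the output is supplied entirely by the human reading it, which is the point of the paragraph above.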
The problem is that we can’t help ourselves. It may seem as if, when we comprehend other people’s speech, we are simply decoding messages. In fact, our ability to understand other people’s communicative acts is fundamentally about imagining their point of view and then inferring what they intend to communicate from the words they have used. So when we encounter seemingly coherent text coming from a machine, we apply this same approach to make sense of it: we reflexively imagine that a mind produced the words with some communicative intent.
Joseph Weizenbaum noticed this effect in the 1960s in people’s understanding of Eliza, his program designed to mimic a Rogerian psychotherapist. Back then, however, the functioning of the program was simple enough for computer scientists to see exactly how it formed its responses. With LaMDA, engineers understand the training software, but the trained system includes the effects of processing 1.5tn words of text. At that scale, it’s impossible to check how the program has represented all of it. This makes it seem as if it has “emergent behaviours” (capabilities that weren’t programmed in), which can easily be interpreted as evidence of artificial intelligence by someone who wants to believe it.

Emily M Bender is a professor of linguistics at the University of Washington and co-author of several papers on the risks of massive deployment of pattern recognition at scale.