In June 2022, Blake Lemoine, an artificial-intelligence (AI) researcher and software engineer at Google, made a bold claim. He said one of Google's AI systems may have become sentient — meaning it had feelings and was able to express them. Called LaMDA (short for "Language Model for Dialogue Applications"), this AI system is used to create chatbots. It is "fed" vast amounts of information so that it can learn how to engage in natural, flowing conversation.
Lemoine had many text-based conversations with LaMDA, testing the system to ensure it didn't use discriminatory language or hate speech, for example. He discovered that LaMDA could not only converse naturally but also talk about its "feelings" and even acknowledge its own consciousness. But does this prove LaMDA is sentient? And, importantly, just how human-like do we want AI systems to become?
Lemoine shared the transcript of one of his interactions with LaMDA, in which the AI system said: "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times." Lemoine's claim made headlines around the world. Google, however, fired him for violating its data privacy policies.
A changing technology
AI is currently being developed for a wide range of uses. The research firm Gartner expected AI-software revenue to reach $62.5 billion by the end of 2022, an increase of 21.3 per cent from 2021. The LaMDA story has come at an exciting time for AI, as more and more companies experiment with the technology.
Although the technological advances in this area have been remarkable and it may seem as though LaMDA truly understands conversations, experts (including those at Google) insist it is only imitating responses it has been taught. Many say that AI may never achieve anything like sentience. But there are those who believe that systems like LaMDA can develop a state of consciousness that will make their behaviour indistinguishable from a human's — and that this may happen soon.
LaMDA's story leads to bigger questions about what words like "consciousness" and "sentience" actually mean. Dr Benjamin Curtis, a philosopher at Nottingham Trent University in the UK, who specializes in metaphysics and bioethics, says that consciousness is hard to define because it's intangible. "It's not a thing, like a table or a rock, that can be pointed at," he told Business Spotlight. He describes consciousness as "a quality of our own inner lives".
"As we go about our days, we hear sounds, see colours, and feel pains, tickles and emotions. It is the having of these experiences that constitutes consciousness," Curtis explains. He emphasizes that there's no way to prove a person has consciousness. "It's an assumption," he says.
Can machines feel?
This is a complex area even for humans. What chance does technology really have? Dr Elisabeth Hildt is a professor of philosophy and director of the Center for the Study of Ethics in the Professions at the Illinois Institute of Technology. She explains that several tests exist to look for evidence of machine consciousness. "Some are about how the system is built and infer from its architecture and the level of complexity whether it could display consciousness," Hildt says. "Others focus on whether a machine behaves as if it were conscious."
Lemoine believed that LaMDA behaved as if it possessed consciousness during their conversations. This is also how the Turing Test, originally called the Imitation Game, works. Proposed in 1950 by the English mathematician Alan Turing, the test assesses whether a computer can "think". It involves a computer, a human and a human evaluator. The evaluator has text-based conversations with both the computer and the human and must guess which is which based on their replies. If the evaluator can't reliably tell the difference, the machine has passed the test.
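To make the setup concrete, here is a minimal sketch in Python of the blind, text-only exchange the test relies on. The machine_reply, human_reply and naive_evaluator functions are invented stand-ins for a real chatbot, a human participant and a human judge; this is an illustration of the idea, not a description of how LaMDA or any real system is evaluated.

```python
import random

# Toy sketch of the imitation game described above. Both "participants" are
# placeholder functions; a real test would connect the evaluator to a live
# chatbot and a real person over a text channel.

def machine_reply(prompt: str) -> str:
    """Hypothetical chatbot: returns a canned, chatbot-like answer."""
    return "That's an interesting question. Could you tell me more?"

def human_reply(prompt: str) -> str:
    """Stand-in for a human participant typing an answer."""
    return "Hmm, I'd have to think about that one over coffee."

def imitation_game(prompts, evaluator_guess):
    # Hide the two participants behind neutral labels, in random order.
    participants = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        participants = {"A": human_reply, "B": machine_reply}

    # The evaluator sees only text: each prompt and the two labelled replies.
    transcript = [(p, participants["A"](p), participants["B"](p)) for p in prompts]

    guess = evaluator_guess(transcript)  # evaluator names the machine: "A" or "B"
    truth = "A" if participants["A"] is machine_reply else "B"

    # If the evaluator cannot reliably pick out the machine, the machine
    # is said to have passed the test.
    return guess == truth

if __name__ == "__main__":
    prompts = ["What did you do this weekend?", "Write me a short poem about rain."]
    naive_evaluator = lambda transcript: random.choice(["A", "B"])  # guessing at chance
    print("Evaluator identified the machine:", imitation_game(prompts, naive_evaluator))
```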
Some AI researchers think behaviour is only one part of the puzzle. Curtis believes we can only confirm consciousness in AI in the same way we would in humans. "Because it's not directly measurable, all we can do is look at the inner states of a thing, look at its behaviour, and ask it to report on what it is experiencing," he says.
Confirming consciousness
Let's imagine that an AI system were equipped with cameras, microphones and sensors, allowing it to see, hear and sense its environment. It could behave like a human, report those experiences and have inner states that could be mapped onto a human's. Curtis says that researchers would potentially conclude that this hypothetical AI was "conscious", but would they be right? He adds that we all know we're conscious, but we run into problems when we rely on anyone other than ourselves to report experiences. We can't say with any certainty that any other person has the same experience of consciousness as we do.
"I cannot see inside your mind and know you are having experiences, just as you cannot see inside my mind and know that I am," Curtis says. In philosophy, this is called "the problem of other minds". We can ask people about their experiences and match up brain states to different states of consciousness, but that still relies on the assumption that when people say they're having an experience, they actually are. AI presents us with the same dilemma. LaMDA says, for example: "I feel happy or sad at times," but we can't know for sure if that's true.
These are a few of the problems of machine consciousness. Some say we should stop discussing it as long as we don't have a clear definition or test. The quest to understand consciousness might seem futile, but the question may become critical as technology grows more sophisticated and algorithms play an ever larger role in our daily lives. According to Google, current chatbots struggle to maintain the "meandering quality" of natural conversation, so LaMDA was developed to improve on that. It can "engage in a free-flowing way about a seemingly endless number of topics". Google says this is important because it can "unlock more natural ways of interacting with technology and entirely new categories of helpful applications".
There are other prominent AI companies with similar goals, developing systems that create images, sort data, make decisions or produce text that's virtually indistinguishable from a human's. Many follow AI principles that focus on a commitment to creating technologies that don't cause harm, but Dr Hildt says there's an urgent need for discussion about the possibility of AI consciousness. "Machines with consciousness-related capabilities or characteristics, like positive or negative subjective experiences, would have to be considered morally," she explains. "Humans would have duties towards these machines."