People tend to accept robots with humanlike characteristics up to a point. Then, things get strangely uncomfortable.
Robots have appeared in film for more than 100 years, first depicted in the silent serial “The Master Mystery,” starring magician-turned-wannabe-actor Harry Houdini. Previously referred to as “automatons” before “robot” became commonplace, these metal machines have been portrayed as delightful helpers à la C-3PO and WALL-E and as villains, like the T-800 from “The Terminator” or VIKI from “I, Robot.”
Whether a robot is “good” or “bad” isn’t the ultimate indicator of whether we fear it. Sometimes, a robot needs only human characteristics and qualities for people to feel unsettled by it. Sigmund Freud popularized the term “uncanny” in a 1919 essay describing that feeling toward objects such as dolls and wax figures. Robots soon followed.
“Roboticist Masahiro Mori came up with the notion of the ‘uncanny valley,’” said Jaime Banks, an associate professor of advertising and brand strategy in Texas Tech University’s College of Media & Communication. “The uncanny valley is this pattern where, as something becomes more human, we have more positive feelings about it up to a point. When it is kind of human but not quite human, that makes us really uncomfortable. It’s that strange familiarity, or familiar strangeness.
“This has been shown with robots, zombies, dolls, people with plastic surgery, all sorts of things that we have this idea in our head about what a human looks like, and it doesn’t quite make it there—but it’s also not far enough away to be comfortable.”
Media influence
Banks—whose research focuses on human-machine communication emphasizing social cognition in human-robot interaction, especially in relation to cooperation, trust, mind perception and moral judgments—noted that the media people consume can influence their ability to trust or fear robots and/or artificial intelligence (AI).
“Our reactions to some computer-generated images (CGI), which are common in media, can have an effect similar to how we tend to see robots,” Banks said. “A lot of times, we know things don’t look quite right, but can’t quite explain why. A famous example is ‘The Polar Express,’ where the characters are just a little bit creepy, or Disney characters like Elsa in ‘Frozen,’ with her strange facial proportions. But, as we watch them, we can become a bit desensitized to their weirdness and sort of forgive them for being weird, in light of their actions or personalities. That might suggest that we could possibly be desensitized to robots over time by different types of media exposures or in-person exposures, and studies do support this idea.”
Movies and television shows have often cast robots or AI as protagonists or villains, and those portrayals have shaped the way people view them in the real world.
“Media can prime or help to create our ideas about what robots are and if they are OK,” Banks said. “We’ve been trained very well by entertainment media narratives to think that if we give robots too much power, they’re going to take over the world and kill all humans. So, some researchers think that uncanny responses to robots are really feelings of existential threat. But, some people are perfectly fine with those ideas and even those fears, which is where the scientific work comes in—from beliefs, traits and experiences, what it is that might predispose a person to be more or less fearful or more or less accepting of a robot?”
Banks says abjection, a related concept, sometimes gets confused with the uncanny valley.
“Abjection is theorized to happen right before fear,” she said. “It’s that feeling of rejection when we can’t easily fit something into a symbolic category. We know what humans are, and we know what safe machines are (like our cars or phones). But social robots are strange because we know they are machines but they act sort of like humans. In cases like these, when we can’t fit something into a category, some hypotheses suggest our brains kind of freak out. By nature, our brains like to be efficient with our limited cognitive resources, so when we can’t draw on shorthand decisions (like quick categorizations), that can be uncomfortable and we might end up rejecting the uncategorizable thing altogether.
“It’s not necessarily even a reaction to the thing itself; it’s to our own inability to process it and figure out what kind of thing that is. Some folks theorize that abjection, that rejection of it because it doesn’t fit, is what happens in the moments before we actually become conscious of it and feel a conscious fear for something.”
Curve of the uncanny
Since Mori first proposed the “uncanny valley” curve in 1970, research has sought to determine whether it actually exists: whether people’s acceptance of robots really does track their likeness to humans in that shape.
“There is some evidence to support the uncanny valley curve that shows that, as human-likeness increases, so does our liking of a robot, before it hits that ‘weird’ point and liking drops dramatically,” Banks said. “There also are studies that don’t support the existence of that curve. Often, these studies use graphics software to create incremental combinations from 0% human to 100% human to show different combinations of humanness and machine likeness to determine the point where people start to get weirded out.”
However, other studies suggest that the shape of the uncanny curve may be a bit different.
“In one of the more recent studies on uncanny feelings, researchers from the Air Force Academy evaluated people’s responses to 250 robots that actually exist in the world today and are humanlike to some degree, rather than simulated characters,” Banks said. “That study suggests that we may actually have two valleys in our liking of robots—the theorized big valley (a lot of disliking) when it’s not-quite-human, but also another small valley (a bit of disliking) when it’s not-just-robotic.
“So, there are different patterns we see in the data sometimes. But, in general, the notion of the uncanny valley is fairly well-accepted—that we are uncomfortable with the not-quite-humanness of robots.”
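Mori drew this pattern as a curve of affinity against human likeness. As a purely illustrative sketch (the linear baseline and the valley positions, widths and depths below are invented for the figure, not taken from Mori’s essay or the study Banks describes), the one-valley and two-valley shapes can be mocked up in a few lines of Python:

```python
# Toy model of the uncanny valley: affinity rises with human likeness,
# minus one or two Gaussian-shaped "valleys" of discomfort. All numbers
# here are illustrative assumptions, not fitted to any study.
import numpy as np
import matplotlib.pyplot as plt

def affinity(h, valleys):
    """Affinity toward an agent with human likeness h in [0, 1].

    valleys: iterable of (center, width, depth) tuples; each subtracts a
    Gaussian dip from a baseline that grows linearly with h.
    """
    base = h  # assume liking grows roughly linearly with human likeness
    dips = sum(d * np.exp(-((h - c) ** 2) / (2 * w ** 2))
               for c, w, d in valleys)
    return base - dips

h = np.linspace(0.0, 1.0, 500)

# Mori's classic single valley: strong discomfort just short of fully human.
plt.plot(h, affinity(h, [(0.80, 0.06, 1.2)]), label="one valley (Mori)")

# The two-valley pattern: an extra, shallower dip where a robot is
# "not-just-robotic" but still far from humanlike.
plt.plot(h, affinity(h, [(0.35, 0.06, 0.3), (0.80, 0.06, 1.2)]),
         label="two valleys")

plt.xlabel("human likeness (0 = machine, 1 = human)")
plt.ylabel("affinity (arbitrary units)")
plt.legend()
plt.show()
```

Shrinking a valley’s depth toward zero flattens the curve, which is one way to picture the disagreement in the data: studies that find no valley are consistent with shallow or absent dips, while Mori’s prediction corresponds to a deep dip just short of full human likeness.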
Why robots and AI make good “bad guys”
From HAL to Skynet, AI and robots have been some of the most memorable villains in cinematic history. Banks speculated as to why they make such good “bad guys.”
“There’s a lot of evidence that our enjoyment of entertainment media comes from what happens to the characters,” she said. “We like it when bad stuff happens to bad people, and we like it when good stuff happens to good people. In the same way that robots sit on that line between human and not-human, maybe robots also are easy to put in that middle space between good and not-good. Scholars have suggested that this kind of moral ambiguity can make characters more relatable and interesting.
“But robots sit on another kind of middle ground. Robots are easy to understand in our own terms because, to some extent, they act like us and can look like us, but they’re still easy to consider as ‘other.’ That is, they’re ‘like-us’ so they’re easy to wrap narratives around that still make sense to us, and we can understand them as agents. But they’re also ‘not-like-us,’ so it’s OK if they’re bad and it’s OK for us to like them for being bad. So, that may be a way of disengaging from the badness so we can still enjoy what’s happening but put them in a box that says the rules are different for them.”
That’s not to say that AI and robots can’t also make good protagonists.
“In the same way, that may be why they make good characters, too—WALL-E, Optimus Prime and all of the good robots we like,” Banks said. “They may be entertaining because they are surprising to us. People may think, ‘Oh, that robot is endearing because it’s not supposed to be good and noble like that, and it turns out that it is.’”
Robots in society
The TV show “Black Mirror” aired an episode titled “Metalhead,” in which the protagonist tries to flee from murderous, robotic “dogs.” Those robotic dogs were based on real-life robots created by Boston Dynamics: Spot and BigDog. Roombas, self-driving cars and automated factories are just a few examples of how actual robots have become ingrained in society.
With the COVID-19 pandemic, hospitals adapted and started using robots to “wheel in” a doctor for a virtual visit. These robots also can take a patient’s vital signs. While the use of robots in typical human-to-human spaces is relatively new in the U.S., Japan has embraced robots in such roles for years.
“In Japan, with an increasingly aging population, there is a trend toward developing robots whose job is to support health care,” Banks said. “For instance, there is a bear-shaped robot named ‘Robear’ designed to support eldercare—for instance, helping people stand up and lifting them from beds to wheelchairs. That makes it a serious workhorse with a very cute face. That’s where cultural nuance and context become important. In a culture where kawaii aesthetics are appealing, and in a health care context where cleanness and feeling safe are valued, a bright white, slow-moving, cute robot may well fit the bill. Now, in a different culture or for a different purpose—like a robot as a social companion—different designs or behaviors may be more important.”
Should we fear robots?
Author Andrew Smith wrote, “People fear what they don’t understand and hate what they can’t conquer.” Perhaps some people fear robots because they are too humanlike; perhaps it is because they are not human enough.
“This is a little bit of a debate in the field, as to whether and how robots should be designed in relation to humans’ ability to understand them,” Banks said. “It’s referred to as the explainable AI movement. From one camp, robots are great tools. They can do all sorts of things; they can help us in all sorts of ways. But that camp feels like we shouldn’t humanize them because they don’t have the capacity to do some of the things that humans can do, and by making them too shiny and pretty and humanlike, we will expect too much from them that they actually can’t fulfill for us…at least, not yet.
“And then, another perspective considers it fine to make them seem humanlike because that’s a necessary condition for them to be integrated into society. That is, we won’t accept them as actors or agents in the world if they don’t act like us and follow our norms and, to some extent, look like us.”
The way a robot looks and behaves also can influence whether people accept it or are scared of it.
“When we consider the idea of ‘anthropomorphism,’ or how humanlike something is, we tend to only think about appearance, but movement matters, too,” Banks said. “If a robot looked perfectly human and sounded perfectly human but then was a little bit jerky or something, that probably would weird us out. If its movements aren’t fluid, if it doesn’t walk the right way or if it has odd textures, that could be weird.”
As a scientist, Banks doesn’t want to be prescriptive and say whether people should or should not be fearful of robots and AI. She does, however, see potential for a middle ground.
“I’m not sure it’s all that different for robots compared to how we react to humans,” Banks said. “We don’t see a stranger and automatically trust them or fear them. We use a bunch of heuristics, our past experiences, stereotypes and new interactions to gain information and try to make a judgment about whether someone is safe or if we should be fearful of them.
“Fear is an interesting question to me, rather than something that should or shouldn’t be there. Our job as social scientists is to try to understand the dynamics of fear, the dynamics of attraction, the dynamics of trust and all sorts of other human experiences. These are important because our research—along with the work of ethicists and philosophers—can be used to answer questions about how to build appropriate policies and practices around the design, production, marketing, and integration of robots as they may enter society in different ways.”
However, a little fear isn’t a bad thing.
“I think things will get really interesting if intelligent machines end up playing important roles in human social spheres,” Banks said. “But, as with people, a certain amount of fear can be a useful thing. Fear can make us think critically and carefully and be thoughtful about our interactions, and that would likely help us productively engage a world where robots are key players.”