It’s like you’re in the movies. You’re pretending to be a police officer in training. If you see someone holding a gun, you have to shoot. But if you shoot someone innocent, you’ll be in big trouble. You cautiously go down a garden path. Aha! Someone emerges from the bushes. Wait – that’s not a person. It’s a robot. What’s it carrying – is it a gun? Do you shoot? Your decision may be powerfully influenced by one thing: was the robot white or black?
The way robots are designed, and our attitudes and behaviour towards them, reveal a lot about how we perceive race and gender in the human sphere. A recent research paper exploring these themes, using a video game in which participants were instructed to shoot at targets they perceived as hostile, found that people were more likely to shoot if they perceived the robot to be black.
The paper, “Robots and Racism”, by Dr Christoph Bartneck from the University of Canterbury, New Zealand, Professor Robert Sparrow from Monash University and other researchers from around the world, adapted a long-established research tool known as the shooter bias paradigm to see how people perceive robots.
Robots are no longer the stuff of science fiction, shouting “Danger, Will Robinson”, as in the enduring riff from the (recently reprised) ’60s television show Lost in Space. They are reality, and are being used in wildly different ways, from academic research tools to sex robots and companions for the lonely-hearted.
As to the first use: in the standard shooter bias study, participants are shown a series of images of young men, both black and white, armed and unarmed, and are asked to shoot all the armed targets. This new paper took the paradigm further by adding robots into the equation. Before the shooting test, participants recruited online in the United States were shown images of two robots, one brown and the other white.
They were asked to indicate whether the robots had a race, with options including “White or European American”, “Black or African American” and “Asian American”. There was also an option to tick “Does not apply”. Despite this, most participants ascribed a race to the robots, identifying them as either black or white. Attributing race to the robots affected their behaviour later in the study, when the shooter experiment was run.
Participants were shown images of both humans and robots, armed and unarmed, and asked to shoot the armed targets. The researchers found that participants were quicker to shoot armed black targets than armed white ones, regardless of whether the target was a human or a robot. Similarly, they were more likely to refrain from shooting unarmed white targets than unarmed black ones. This suggests that people impose their social stereotypes and impulses about race onto robots.
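Shooter bias studies of this kind typically compare average reaction times across conditions. The following is a minimal sketch of that comparison, with entirely invented numbers; real studies use many trials per participant and formal statistical tests.

```python
# Minimal sketch of a shooter-bias reaction-time comparison.
# The reaction times below are invented for illustration only.
from statistics import mean

# Hypothetical reaction times (milliseconds) for correctly shooting
# an armed target, grouped by the target's perceived race.
reaction_times_ms = {
    "armed_black": [512, 498, 530, 505, 520],
    "armed_white": [548, 561, 539, 555, 544],
}

# Average reaction time per condition.
averages = {cond: mean(times) for cond, times in reaction_times_ms.items()}

# A positive difference means participants were quicker to shoot
# armed black targets -- the pattern the study reports.
bias_ms = averages["armed_white"] - averages["armed_black"]

print(averages)
print(f"Participants were {bias_ms:.0f} ms quicker to shoot armed black targets")
```

In this toy data the gap is a few dozen milliseconds; the published paradigm treats such latency differences, along with error rates on unarmed targets, as the measure of bias.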
Although the participants were from the United States, where social anxieties about police conduct towards different races run high, Robert Sparrow, a philosopher from Monash University, says it’s unlikely that the results would have differed had the study been run in Melbourne.
“Because these robots look human, people tend to categorise them in the same way they categorise other people,” he said.
Given recurring claims about “Sudanese gangs” or “Muslim terrorists” roaming the streets of Melbourne, Australian participants might well have revealed their own racial anxieties had they taken part in this study.
Friederike Eyssel, one of the researchers on the Robots and Racism study, found a similar bias in Germany, where prejudice against Turkish people persists. German participants formed favourable impressions of a robot if it was presented with a German name and “developed” at a German university, and viewed the robot less favourably if it had a Turkish name and was “developed” in Turkey.
“If you prime people to understand that the robot is a part of a particular ethnic group, they import stereotypes that shape their responses to that robot,” says Sparrow.
An internet search for humanoid robots shows that many of them are coloured white, or at least made of a shiny metallic material. “I don’t think the engineers are setting out to build robots to be white or Caucasian,” says Sparrow. “I think they’re setting out to build robots that look futuristic.”
However, he argues that our ideas about the future can be racist in various ways. “I think people think that the future is shiny and white. So they look to science fiction films and use that aesthetic to shape their design.”
He believes the science fiction aesthetic reflects the racial and political environment of the time.
“If you look at representations of the future, particularly in classic science fiction, the future is always thought of as being racially white.”
For Hugh Kingsley, the managing director of The Brainary, a development and distribution company for technology, robots can be white for several reasons. The robots sold on its website are mostly white, which creates a common brand image. However, some robots, like the EZ Robot range or the pet MiRo-E range, have customisable “skins”.
“I think everyone’s feeling their way about what colour robots should be,” he says. “Should they be blank, and you do your own [adjustments]? I think because [robots] are still a relatively new phenomenon, we’re learning our way. The ones that will be more successful than the others, will be where the companies spend time understanding what the end users are looking for.”
One popular robot distributed by The Brainary in Australia is the NAO robot, a cute humanoid figure the size of a small child. Like many other robots, it is mostly white, with coloured accents here and there. NAO is used in both educational settings and in healthcare, and its adorable appearance is no accident.
“I can’t speak for the designer personally,” says Kingsley, “but I think what makes NAO so popular is its size. It’s small, it’s knee-high roughly. When a human being is standing up, it’s just like a little child. And I think because of its cute look, it’s not seen as being anything particularly frightening.”
As the robot often works with children in STEM workshops or in physical rehabilitation at hospitals, a cute look is necessary in order to keep everyone comfortable and to keep the children engaged with the task at hand. “In health, the robot needs to look empathic, definitely not creepy,” says Kingsley, “and it also needs to be sincere.”
NAO is meant to be gender neutral, but people struggle with that. “Our clients like to give them names,” says Kingsley. Assigning gender to the robot was a way for people to show their acceptance of NAO into their social space. “I know one organisation in South Australia put pink bows on them.”
It’s hard for robots to escape gender attribution, according to Rob Sparrow. “Gender ambiguity is hard to maintain. People are quick to look for cues to ascribe gender to robots. If you make a robot do the cleaning, for example, people will assume that it’s female. Gender is such a fundamental category for our social relationships that human beings will always try to work out gender to figure out how they should respond to the machine.”
Sparrow argues that robotics itself stems from an almost religious impulse to create artificial lifeforms in the image of humans. As a result, it’s almost impossible to escape the gender binary.
Kingsley would prefer robots with their own distinctive look. “I’ve seen a lot of companies try to mimic the look of a human,” he says, “and I suspect that that’s more fearful for some people because then it’s hard to know if it’s a real person or a robot.”
Robots that look distinctively robotic, without dipping into the uncanny valley, would help human beings become less apprehensive. “A lot depends on the look and feel and size of a robot.”
There is one type of ultra-human robot attracting a lot of love. The sex robot.
SEXPO’s theme for 2018 was “Feel the Future”. One of the star attractions was Harmony, an advanced robotic companion doll with blonde hair, squishy skin and a personality. She is distributed by RealDoll and is integrated with AI developed by RealBotix. The company has also developed an app, so that customers who can’t afford the robot can still talk to her, like a sexy version of Siri or Alexa.
More than 25,000 people attended SEXPO in Sydney, making it the “best performing SEXPO in over five years”, and revealing that there is a desire to welcome technology into all aspects of our lives.
“The success of SEXPO Sydney 2018 says to us that patrons are looking to embrace technology in their private lives and are open to discussing new things,” says Bentleigh Gibson, the national event director of SEXPO.
“Aussies are excited about educating themselves, trying new things and opening up in their relationships and personal lives.”
According to RealDoll’s website, a wide range of people are interested in sex robots, including futurists, art collectors, filmmakers and scientists. Sparrow says that the target audience for these robots isn’t the lonely, sexually inept men that you see on TV, but doll fetishists and people open to sexual experimentation. Which, judging by the crowds at SEXPO, is a lot of people.
“Robots are a way of talking about what it means to be human,” says Sparrow. As a result, sex robots are a way for human beings to grapple with gender politics. “Our society is deeply sexist and is much more concerned with the appearance of women than the appearance of men.”
As the “essence” of a woman is defined by her body in this society, it’s easier for developers to capture a woman’s essence and put it in a robot than to capture a man’s. Despite this, Sparrow believes that sex robots aren’t functioning as replacement women. Instead, they’re really masturbation aids.
“It’s essentially a life-sized latex sex doll that can speak and respond to voice,” he says. “They can’t walk around. They might provide you with sexual release but most people can achieve sexual release without needing a doll.”
From educators to sexual assistants, social robots span a broad range of roles and represent different aspects of humanity. They’re often regarded as the future and seen as better than human. But Kingsley says it all depends on how they are programmed. “The example I like to use is the internet. Seemingly, the internet is loaded with opportunity, but sadly, the worst of humanity uses it to steal identity and do all sorts of things. I think the same is with the robots.”
Sparrow agrees. Having done research on military robots, he notes that robots in war are believed to make better soldiers because they follow orders, don’t feel fear and aren’t motivated by revenge or racial hatred. But they do have programmers and commanders who may, themselves, be racist or fearful.
As a result, robots can still be subject to humanity’s most primitive emotions. He notes that artificial intelligence can’t escape human weakness either, because modern AI systems are trained on data sourced from people online. This means AI systems often have human biases built into them.
In 2016, Microsoft released a chatbot named Tay onto Twitter as a way to understand conversational language on the internet. The more users interacted with the bot, the more it could learn and understand. In just 24 hours, that chatbot reflected the worst of humanity, repeating Nazi sentiment and anti-feminist rhetoric.
“The quality of your systems is highly dependent on the quality of the data that is used to train them,” says Sparrow. “If you use real world human behaviour to train your AI systems, you will end up with something that has all the vices that humans do.”
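Sparrow’s point can be illustrated with a toy example. The sketch below uses invented, deliberately skewed training data and a naive word-count “classifier”: the code itself is neutral, but it faithfully echoes back whatever skew the data contains.

```python
# Toy illustration of bias inherited from training data.
# The training examples are invented and deliberately skewed:
# "group_b" co-occurs with negative labels far more often.
from collections import Counter

training = [
    ("group_a engineer", "positive"),
    ("group_a doctor", "positive"),
    ("group_a criminal", "negative"),
    ("group_b engineer", "positive"),
    ("group_b criminal", "negative"),
    ("group_b criminal", "negative"),
    ("group_b criminal", "negative"),
]

# Count how often each word appears alongside each label.
counts = {}
for text, label in training:
    for word in text.split():
        counts.setdefault(word, Counter())[label] += 1

def predict(word):
    # Predict the label most often seen with this word in training.
    return counts[word].most_common(1)[0][0]

print(predict("group_a"))  # "positive" -- the majority label in its examples
print(predict("group_b"))  # "negative" -- the data's skew, echoed back
```

The bias here lives entirely in the data, not in the algorithm, which is precisely why real-world AI systems trained on human behaviour inherit human prejudices.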
With robots slowly becoming a mainstream presence in our society, we still need to work on their race and gender dynamics. But, as Kingsley optimistically suggests, with a slow period of adjustment and a lot more research, robots will take on a form that we can all be comfortable with.