Illustration by Sam Chivers

This article is a preview from the Winter 2015 edition of New Humanist.

Every new technology brings with it a new kind of accident. So said the French writer and thinker Paul Virilio in his theory of the “integral accident”. The invention of the train is the invention of the derailment, the invention of the aeroplane is the invention of the plane crash. It’s an illuminating, if bleak, way of looking at progress.

Was the invention of the robot, therefore, the invention of the robot uprising? Without a doubt. Even before Karel Čapek gave us the word “robot” in the 1920 play R.U.R. – which concerns a robot revolt – humanoids created to serve us have been defying our commands. It’s a theme at least as old as the Jewish folktale of the Golem and Mary Shelley’s Frankenstein. Nowadays the tale is retold every year, sometimes several times. In a six-month period of 2015 we were treated to the films Ex Machina and CHAPPiE, the Channel 4 drama Humans, and the Terminator franchise’s latest dalliance with the law of diminishing returns. And the cultural horizon is getting closer. In 1982, Blade Runner imagined “replicants” walking undetected among us in 2019. Now, 2019 is a lot nearer. Humans, Ex Machina and CHAPPiE are all set in approximately the present day.

But what if the robot-revolt trope is too dominant in our cultural discourse about humanoid machines? What if it has, in fact, obscured several more urgent ethical questions that we should address before autonomous robots walk among us? The advent of genuine androids or replicants has the potential to spark an existential crisis for us humans. This crisis could have far-reaching consequences for how we treat each other and view ourselves – consequences that reach back to questions we have asked about human relations since the Enlightenment.

First, however, there’s a basic question of practicality. “I think we’re still decades away from creating something that even gets close to looking like a real human being,” says Gustav Hoegen. “If you want to create something you stand in front of and can’t tell the difference, that’s going to take a long time.” Hoegen knows – he is an animatronics designer and special effects artist who specialises in building realistic human heads for films including Ridley Scott’s Prometheus (2012). Robotics companies come to Hoegen for advice on how to make their humanoids more human.

The most pressing problem, Hoegen says, is human skin. “[Synthetic] skin never wrinkles up the way real skin does; it’s always slightly limited in its movement no matter how soft the silicone is.” These problems are compounded by the mechanisms that need to be built under the skin for the robot to reproduce realistic facial expressions.

“The human head has 50, 60, maybe hundreds of muscles, all working in sync, and we are always limited in terms of the space inside the head,” Hoegen says. “What we use to make the skin move is still quite crude, and you want as many areas moving [together] as possible, and they start fighting with each other.”

This isn’t such a problem when designing a special effect for the cinema, as it will only appear for a minute and perform one or two expressions. But a truly realistic human face would need a very wide range of movement to recreate different expressions. The more flexible it has to be, the more mechanisms must be jammed in. And that’s just the face. It’s hard enough to make an independent, upright bipedal robot walk at all; matching the instinctive ease of the human gait is another order of difficulty altogether.

However, there is a clear cultural expectation that humanoid robots will soon walk among us, perhaps so convincing that they go undetected. And globally, teams of roboticists persist in trying to develop ever more realistic anthropoids. In January this year Toshiba unveiled ChihiraAico, a robot built to resemble a 32-year-old Japanese hostess, intended for reception and retail applications. In Hong Kong this April, Hanson Robotics demonstrated Han, a face and shoulders modelled on a white male, capable of a large number of expressions, and of simulating eye contact. Han’s creator David Hanson imagines a wide range of uses, starting with medical training, a field in which there is established demand for realistic dummies.

There are practical reasons for designing robots modelled on ourselves. We have shaped our environment to suit beings in our form: stairs made for bipeds, doors of a height and width that fit us, handles and switches shaped to our hands. Robots at large in our world would do well to follow that form to be truly versatile. And this ability to interact with the human world extends to the humans living in it. It is assumed that humanoid robots will be easier for humans to relate to; this is certainly the thinking behind a robot like ChihiraAico, explicitly intended to facilitate communication between “humans and non-humans”. They are a user interface for artificial intelligence – software with a human smile.

But Hoegen suspects that these practical concerns might shield a more basic, and more human, motive: the desire to show that it can be done. “I think it’s just plain human ambition on their part to create something that is realistic, and to prove to the world they can do it.”

To achieve a robot that can pass flawlessly as human would prove something about robotics and prosthetics. It would also disprove something about humans. We would be clearly and resoundingly shown to be imitable, which is to say, not inimitable. Our ability to recognise each other as human would undergo a revaluation.

“We want to try to build robots like us to see whether we can or whether we can’t,” says Richard Ashcroft, professor of bioethics at Queen Mary University of London. “That tells us something about the nature of human uniqueness.” This question underpins much of the theoretical debate over artificial intelligence and robotics, and is, Ashcroft says, “perfectly respectable”. “A lot of work in philosophy turns on trying to provide impossibility proofs and one way to try to defeat that is to say, ‘Well, look, here, I built one! You can’t have an impossibility proof because here is one!’” The field of human uniqueness would, for better or worse, be diminished. Looking like a human would no longer be something that only humans can do.

When that day arrives, it will bring with it dangers. The theoretical discussion has naturally focused on the important, and eye-catching, Terminator consideration: what happens if the robots rise up and destroy us? But that’s really a question about artificial intelligence, and its prominence might have obscured more subtle but equally vital pitfalls awaiting us when perfectly formed replicants walk among us, an ethical area that Ashcroft says remains “under-explored”.

There is now consideration of whether robots might have rights. On this, Ashcroft follows the utilitarian philosopher Jeremy Bentham’s principles on animals. In 1789, Bentham wrote: “The question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?”

“If we can talk meaningfully about robots suffering,” Ashcroft says, “then we have good reason to be concerned about the ethical nature of our relationship with the robot from the robot’s point of view.” But even if a robot can be shown to experience something akin to suffering, don’t expect that to clarify our relationship with them. We’re still figuring it out with animals, which can and do experience suffering. We’ve been coexisting with them for more than 10,000 years. We still haven’t thoroughly settled the question of suffering in other human beings, for that matter.

Suffering – and the possibility of revolt – could simply be treated as an engineering problem, and designed away. But this raises a whole series of further questions. If replicants are to be inserted into particular societal roles, what does it say about those roles? Books and films – and sometimes real-life projects – give us an idea of the first jobs that replicants will fill: customer service, care work, domestic service, sex work. This is practically a directory of the worst-treated, lowest-paid, lowest-status non-manual jobs available. Proposing that these fields could be taken over by replicants is, Ashcroft says, “doubly insulting”: first, the suggestion that they don’t even have to be done by humans; secondly “it takes as a given that other human beings are untrustworthy or unreliable, and seeks a design solution to that, rather than looking at the ways they are made to be unreliable or untrustworthy by the conditions in which they have to work.”

This is an extreme form of what Evgeny Morozov calls “solutionism”: the application of a complex technological workaround to a problem as an alternative to addressing underlying social or political causes of that problem. But so what? If the people doing these jobs are subject to low pay, low status and poor conditions, and vulnerable to worse exploitation and abuse, surely mechanising the jobs out of existence does everyone a favour?

Put bluntly: no. The double insult outlined by Ashcroft could be compounded by actual injury in the form of a worsening deal for human workers in general. If there is a class of people – pseudo-people, anyway – that can be freely mistreated and exploited, those attitudes will be carried into other spheres. And the humans could end up being treated worse than the robots. As Ashcroft puts it: “If you have a robot that you consider a highly valuable piece of equipment, and a person that you consider basically with contempt, you’ll treat the human worse, not better.” The problem of humanity’s scorn for its workforce won’t be solved by creating workers it is safe to scorn, and humans won’t be automatically raised into a privileged category. Čapek, after all, wrote about humanoid machines in order to talk about the treatment of human workers in a world being transformed by the assembly line and scientific management.

Look at the ethical questions closely enough, and a horrifying truth begins to emerge. In trying to build human-like workers that won’t tire, neglect their responsibilities or rise up, we might not be trying to solve the problems of robotics. We might be trying to solve the problems with human slaves. Perhaps one of the reasons why the robot uprising is such a prevalent idea in culture is the sneaking suspicion that we might deserve it.

Very few roboticists are alive to these dangers. One exception is Professor Alan Winfield, an expert in swarm robotics at the Bristol Robotics Laboratory at the University of the West of England and author of Robotics: A Very Short Introduction. Unusually for a person in his field, he has fundamental objections to humanoid robots in principle, let alone in practice.

Underpinning this is what he calls the “brain/body mismatch problem”. “Right now we can build robots that look fantastically lifelike, that look much like human beings, but whose behaviour in no way matches their appearance,” Winfield says. “If you look at a robot and it looks like a person, then not unreasonably you expect it to behave like a person. They are no way as smart as a person. It’s unethical to build robots whose behaviour, in other words intelligence, is such a huge mismatch with their appearance. Whether deliberately or innocently, it’s a deception.”

This is particularly concerning when a robot might be placed in contact with vulnerable people such as children, elderly people with dementia, and so on, who might not understand that they are interacting with a machine. “If [the robot is] displaying, demonstrating behaviours that suggest that it cares for them, for instance, for you and I that would be fine because we would know that it’s fake. But there are grounds for worrying about vulnerable users who may not understand that a machine doesn’t in fact love them.” Even without immediately harmful consequences, vulnerable people would be subjected to a sham.

Remember, here, that care work is one of the fields prominently cited as a possible target for a robot workforce – and also that “relatability” is one of the main reasons roboticists are striving towards perfect replicants. The rest of us might not be so susceptible, but we are not immune, Winfield warns. “The truth is that humans are uniquely sensitive to things that look like humans. So even a perfectly smart adult will perhaps unconsciously respond to cues. We might rationally know that this robot is a robot and not a beautiful woman, for instance, but whether we like it or not we will be affected by its appearance, and our expectations following that appearance. So, to get a little bit more controversial, I think we should not be building android robots, and we should not be building gendered robots either.”

The trouble with robots simulating emotional responses and human facial cues, Winfield says, is that it leads to a “lopsided” relationship with the human users. The emotions we respond with are genuine. “It’s like having a relationship with a person who is utterly, utterly insincere,” says Winfield. “The robot has no choice but to be insincere, because it’s a robot, and it cannot have feelings, it cannot empathise, or love or care about a person.” In that predicament, we can recognise an extreme form of the psychic harm that David Foster Wallace identified in the insincere “service smile” and its equivalents. These faked niceties, Wallace wrote in 1995, have a sinister cumulative effect: “It messes with our heads and eventually starts upping our defences even in cases of genuine smiles and real art and true goodwill. It makes us confused and lonely and impotent and angry and scared. It causes despair.”

All this applies with particular intensity to sex robots. If culture is anything to go by, it’s pretty clear that as soon as convincingly human robots appear, people (men) are going to have sex with them. Sex robots have been a persistent trope from The Stepford Wives (1975) and Blade Runner to Steven Spielberg’s A.I. Artificial Intelligence (2001) and Channel 4’s Humans. The hugely questionable trade in silicone-skinned “real dolls” suggests a ready market already exists. Winfield insists it’s a path we shouldn’t go down, echoing many of Ashcroft’s concerns about how the treatment of robot workers would rebound on our fellow humans: “It encourages a complete objectification – which sounds absurd: how can you objectify an object? But you’re not objectifying an object, you’re objectifying the thing the object represents, which for a sex robot would be a man or a woman.” A Campaign Against Sex Robots, fronted by robot anthropologist and ethicist Dr Kathleen Richardson of De Montfort University, was launched in September this year.

As far as sex robots are concerned, Ashcroft takes his lead from Immanuel Kant – again working from the philosopher’s consideration of our behaviour towards animals. “[There are] two linked problems, one being the effect on the person that you’re using, the other being the effect on yourself,” he says. John Stuart Mill, writing after Kant, laid out detailed principles for what constitutes “harm” to another individual, and what should thus be proscribed. “We’re all brought up, following Mill, to be concerned with the problem of harm, and from that point of view if you’re just doing it to a robot you aren’t harming a person, so where’s the problem? We forget the effect on yourself, which is from Kant’s point of view just as important. In a sense it means reducing oneself to a mere object. In sex itself that can be part of the negotiation – but there’s no negotiation when it’s between a human and a robot.”

And this can be extended to cover the whole field of non-suffering, obedient robot slaves. Winfield says that there has been some discussion of the ethics of robot slavery, although almost entirely from the robots’ point of view. Some authors have even averred that if robots are programmed to want to serve, it would be unethical to prevent them from serving. But the repugnance of slavery doesn’t lie exclusively in what it does to the slave, although that is repugnant enough. There’s also the decay it causes in the master. Winfield himself builds cooperative swarm robots – completely unanthropoid, resembling motorised hockey pucks. He tries to program them to behave ethically, for instance by intervening when they see another robot about to cause harm to itself. “They are not full moral agents like you or I – I refer to them as ethical zombies, in the sense that they haven’t chosen to act as they do, and they certainly don’t reflect on their actions afterwards. They’re pathologically ethical.”

We, however, are capable of reflecting on our actions – and so, having reflected, what can be done about this replicant trouble? Winfield says that ten years ago his lab conducted experiments looking at human interactions with a variety of robots, from the then state-of-the-art humanoids to simpler, stylised forms. “And there’s reasonably good evidence that cartoon-like humanoids are as humanoid as you need to be in order for people to be comfortable.”

Hoegen suggests that robot designers themselves might be realising the value of the uncanny. “They tend to go a lot deeper into the psychological level [than film people], asking how people can interact with [a robot] – and the question is should it be really realistic, or should we keep it slightly uncanny? And the reaction to [an uncanny robot] has been better, in a bizarre way.”

For Ashcroft, this opens up exciting possibilities: “I’m genuinely interested to see what other forms intelligent life might take.” The solution to some of these concerns might be as simple as letting robots be robots. Or it may be time to regard the “uncanny valley” surrounding the human likeness not as an obstacle to be bridged, but as a defensive moat to be respected.