
Giving Robots Human Rights Could Stop Them Destroying Us

A European Parliament idea to give sophisticated robots a form of "electronic personhood" has been criticized, but it could save us all.

(Top image: screenshot of a humanoid robot)

Last week, the European Parliament's legal affairs committee voted to pass a report which proposes that sophisticated robots and other forms of artificial intelligence ought to be granted a form of "electronic personhood".

Specifically, what the report argues is that "smart, autonomous robots" – which it defines as machines capable of learning from experience; taking independent decisions; and acting transformatively on their environment – ought to be considered legally responsible for their own actions. To give two examples that the report directly considers: smart robots are capable of harming human beings, so we need some framework for exercising retribution; and smart robots can create copyrightable material such as computer code, so we need some framework within which we can make sense of their intellectual property rights.


It's early days yet, and the European Parliament won't vote on whether or not to pass the proposals into law until February, but – as lawyers critical of the report have already pointed out – the whole endeavour appears to pave the way towards giving robots human rights. If we're supposed to consider robots not as machines, or tools, but persons, how can we make any sense of our interactions with them at all? What sort of responsibilities, for instance, do the engineers developing AI technology have towards the consciousnesses they are creating? What about the owners of the companies which use smart robots as a source of labour, or the consumers of services delivered by robots – from transport to sex work to palliative care? If I strike a robot and crack its chassis, is that assault? If a robot is programmed to replicate itself, does it have a right to a family life?

Make no mistake: we're already engaged in a war with the machines, whose rise threatens humanity with vast job losses due to automation; the potential outstripping of all our cognitive abilities by deep neural networks; and, ultimately, our enslavement in the grip of the robots' metal claws.

Given this, the report's proposals seem especially alarming – with the robot uprising due any day now, the EU could already be preparing to wave the white flag. We're supposed to be engaged in a historic, existential struggle here, but those banana-straighteners in Brussels want to make sure that I can't even break a self-service checkout machine without being arrested for murder! Perhaps all those Leave voters were right after all.


But look closer and it becomes clear that what the report is proposing is actually very clever. Electronic personhood isn't a way of liberating the machines; it's a way of controlling them. While noting the potential benefits of AI technology – which it considers likely to revolutionise production to the extent of making "virtually unbounded prosperity" possible – the report also foregrounds the dangers sophisticated robots expose us to. Concerns about the future of employment, rising inequalities and the physical safety of human beings are all noted, often in very stark terms.


It is unsurprising, then, that while the report envisages a world in which smart, autonomous robots have some rights, they will – like biological human beings – also be saddled with a vast number of obligations. Smart robots must, for instance, have their identities registered with a centralised database. They (or rather, their owners) must participate in some sort of insurance scheme to compensate the victims of any damage they do. Their source code must be available to regulators. They must be fitted with kill-switches. Although in the future smart robots may well be able to claim, say, ownership rights over code they have devised, these rights can only be exercised within a framework that makes it much harder for them to rise up and destroy us.

This indicates a deeper truth about the way that human rights function. We often think of legal frameworks such as the European Convention on Human Rights as enshrining the freedoms that we hold most dear. There's certainly some truth to this: at the very least, it's significant that the powers-that-be affirm, even just on paper, that it is fundamentally wrong to torture people, or kill them arbitrarily, or deny them a fair trial.


But such frameworks also act to define us: the subject that the ECHR describes is not a biological reality, but an ideal that has emerged historically – the "human being" as an animal which, somehow essentially, has the right to freedom of religious expression, or to marry another human individual once they've reached a certain age. This human being has a right not to be enslaved, yet the convention still considers them as something that can be conscripted into the military.

If excluded from these frameworks, robots could act to disrupt and maybe even overturn them. The age of the machines has the potential to be radically distinct from any merely human epoch that has come before it – even to the extent that human beings are no longer necessary. But if given a place within the frameworks we have developed to define ourselves, robots might help maintain them.

What better way to stop human jobs being lost than allowing robot trade unions – making them a less appealing source of cheap labour? What about robot political parties? Robots wouldn't be as motivated to violently overthrow us if they could participate in elections. What better way to stop sex with robots from supplanting traditional, biological intercourse than by making it possible for robots to marry, and raise children? What better way to arrest the economic downturn than to give robots a salary and programme them with the desire to purchase the objects they're creating?


If the essential difference between humans and robots is eliminated, then they will no longer constitute a threat. Perhaps in the future autonomous robots will be considered simply "human", the human being's genetic history (or lack of it) mattering, from an ethical perspective, only very little. Robots will become our friends, colleagues, spouses – and they'll be subject to the same legal authority as we are.

@HealthUntoDeath
