Robot ethics – tough questions in system design

We’ve all seen apocalyptic headlines about robots and AI destroying jobs, and videos of supposedly sinister machines learning to walk, run, and jump (sometimes unsuccessfully). The prevailing narrative in a hysterical UK media – and to an extent in the US, where technology investment is allied with job creation – is that there is something malign about the next generation of machines.

Yet we don’t think of smartphones, cars, aeroplanes, or spin driers as evil. If we consider such devices from a moral or ethical standpoint at all, it is about how those technologies affect human behaviour, health, or well-being, or the ways in which their manufacture or use impacts on the environment, on human rights, or on sustainability goals. Such concerns, therefore, are really about ourselves and how society is evolving, not about the machines themselves being a malignant force.

In recent years we’ve heard about some of the ethical challenges associated with AI – in particular, the risk of it automating systemic bias via flawed training data – but we’ve heard less about the ethical challenges of robotics and autonomous systems (RAS) themselves, beyond their apparent threat to employment. This is something that the UK-RAS Network, the robotics research group of the Engineering and Physical Sciences Research Council (EPSRC), is seeking to put right with the publication of a new white paper.

One impetus behind the paper, Ethical Issues for Robotics and Autonomous Systems, is the principle that engineers should hold paramount the health and safety of others, draw attention to hazards, and ensure that their work is both lawful and justified. Launching the white paper at UK Robotics Week in London, co-author John McDermid, Professor of Software Engineering at the University of York, described it as “pragmatic guidance to help designers and operators”, not just informing them of the dangers, but also reminding them of principles they should already hold dear.

Distributive justice

Another of these principles should be “distributive justice”, he said: the concept that everyone should share equally in the rewards and social costs of robotics, with smart machines introduced in a way that is both rational and defensible. So the idea of robots taking job opportunities away from those with poor skills or education would be ethically wrong, unless the technology were counterbalanced by making retraining opportunities available at both national and organisational level.

This is an important point. Organisations are often left out of the ethical debate; the focus is invariably on economics, productivity, or technologies when it ought to be on collective responsibility. Put another way, who decides whether introducing robots is acceptable in a given situation: those affected by them, or others on their behalf? The answer is not as obvious as it might appear.

As the World Economic Forum (WEF) pointed out last year, it’s easy to assume that robots, automation, AI, and other Industry 4.0 technologies will mainly have an impact on blue-collar workers and be implemented by their managers to drive down costs, increase productivity, and make businesses smarter and more agile; but that isn’t necessarily the case.

Over the next decade or so, once-safe white-collar jobs, such as those in legal services, banking, investment, accounting, auditing, administration, and management, will be equally in the firing line, and few industries will be unaffected. In their place will come a different mix of skills: data analysis, collaboration, coding, experience design, vertical expertise, transferable knowledge, the ability to work with robots and AI, and ‘soft’ human skills, such as creativity, communication, and emotional intelligence.

The good news from the WEF was that the global economy will benefit from a net gain of 58 million human jobs, despite the havoc wrought across many sectors. Developing strategies to help everyone in society grasp these new opportunities – and the companies and services that come with them – is therefore essential.

Another ethical challenge in applying robotics is the principle (attributed to the philosopher Immanuel Kant) of ‘Ought implies can’. This applies in the many scenarios where humans and robotic systems will have to collaborate in future. Put simply: a human may be tasked with intervening in a machine’s actions or decisions, but will he or she actually be able to?

At the heart of this question are the twin issues of automation and autonomy – the extent to which machines are either following preset human instructions, or are able to make decisions independently. Where the latter is the case, does the human supervisor actually have the skills, time, awareness, data, context, or opportunity to intervene if something goes wrong?

Yet for some people, the bigger and more important question is undoubtedly this – why develop autonomous machines in the first place? The answer is that, for some tasks, it is the only way they can be carried out safely, or at all. Take rovers exploring a distant planet, for example. With a one-way radio delay between Earth and Mars that ranges from roughly three minutes to more than 20, depending on the planets’ relative positions, a human plainly can’t control a robot in real time with a joystick.
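
To put that delay in perspective, here is a rough back-of-the-envelope sketch of my own – not taken from the white paper – using round-number Earth-Mars distances and the speed of light:

```python
# Illustrative figures only (not from the white paper): approximate Earth-Mars
# distances at closest and farthest approach, used to estimate the one-way
# signal delay that rules out real-time joystick control of a rover.

SPEED_OF_LIGHT_KM_S = 299_792  # kilometres per second

DISTANCES_KM = {
    "closest approach": 54_600_000,    # assumed round figure
    "farthest approach": 401_000_000,  # assumed round figure
}

for label, distance_km in DISTANCES_KM.items():
    delay_minutes = distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"One-way delay at {label}: ~{delay_minutes:.0f} minutes")

# Prints roughly 3 and 22 minutes respectively, so a single command-and-response
# loop with a Mars rover can take three-quarters of an hour at worst.
```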

Such challenges don’t just apply to outer space, however. Under the sea, where robots already help to maintain energy pipelines, power cables, and subsea communications, human-robot interaction is also difficult, because radio waves propagate poorly in salt water. In these and other instances, robots need either to work autonomously or to assist their human controllers with a degree of supervised independence, adapting to local conditions.

With autonomous cars or vans, the claim is that they will drastically cut the 1.2 million deaths worldwide that occur every year on the road, 95% of which are caused by human error. In the case of autonomous delivery drones or cargo planes, meanwhile, the argument is that removing human pilots maximises the space available for goods and fuel, enabling aerial platforms to fly further and be more useful when they arrive.

But these claims fall apart in the case of, say, autonomous cargo ships or oil tankers, where removing the human crew would have a negligible impact on weight, capacity, speed, or safety. In these instances, the rationale is more clearly economic, creating new roles in which small numbers of remote human supervisors oversee fleets of autonomous ships across the globe, rather than captain vessels in person.

Whether it’s desirable to replace a life on the high seas with a sedentary job at a computer screen is a moot point. Substituting point-and-click interfaces for centuries-old careers might be efficient, but it would also be boring, enervating, and (frankly) insulting to thousands of seasoned professionals, and to anyone who aspires to join them.

In each of these instances, however, the same ethical principle of ‘ought implies can’ applies to the system’s design, says the white paper:

Is it reasonable to expect drivers (operators) of AVs to take back control after a period of autonomous driving? If so, how long is needed to regain situational awareness?

[In the case of remote operators of vessels] The operations should be designed so that the captains can oversee and manage the safety of all the vessels they are responsible for remotely – with changes in the design, e.g. levels of automation, made if the ‘ought implies can’ principle would be violated, due to inability to maintain situational awareness.

The underlying principle here is to do with oversight, says UK-RAS – with a side order of responsibility and liability if something goes wrong.

Ought we?

One thing missing from the white paper is the converse of Kant’s principle: ‘Does can imply ought?’ In other words, just because something is technically possible, does that make it morally justifiable? This certainly applies to the growing use of autonomous systems by the military – for example, the increased use of personal drones, reconnaissance robots, and augmented reality systems by ground troops in the US, UK, and elsewhere.

In each isolated case, a new technology may be justified – to keep soldiers safe, give them battlefield awareness, or lighten their physical loads, for example – but may also represent a small step towards something that’s harder to support on moral grounds: the automation of killing, and the ‘gamification’ of warfare.

Yet keeping people safe is a noble ambition for the developers of robotics and autonomous systems, particularly in the kind of lethal, extreme, or hazardous environments that typify some industries: space, deep mining, offshore energy, or nuclear decommissioning, for example – as codified in the UK’s Industrial Strategy Challenge Fund (ISCF) ‘Robots for a Safer World’ challenge.

This desire to keep humans out of harm’s way is certainly an argument in favour of robots: they can go safely where humans can’t, and that alone makes them desirable. However, as last year’s fatal crashes revealed, autonomous vehicles and similar technologies are still in their infancy; until they become more sophisticated, widespread, and accepted by humans, that immaturity presents its own ethical challenges and risks.

Other challenges are more opaque. Indeed, opacity in robotics is an ethical problem in itself. Where autonomous decisions are not transparent, they are not open to scrutiny, which means that any unfairness or bias can’t be corrected or even analysed.

Some issues are subtler still. One emerging ethical problem is to do with deception, says UK-RAS: software robots, such as chatbots, are often made to appear human in order to encourage people to communicate with them.

The problem extends to physical robots, too. Humanoid or zoomorphic machines are often made to appear cute, cartoon-like, and vulnerable, so that humans don’t feel threatened by them – a sensible decision in itself, perhaps, but one that risks people forming emotional attachments to machines. For example, a friendly robot that encourages people to buy goods or services from a specific retailer could be a deeply manipulative device, while the commercial relationships that exist behind the scenes would be invisible to the user.

The white paper says:

[Such robots] present the risk, especially to naïve or vulnerable users, of emotional attachment or dependency (given that it is relatively easy to design a robot to behave as if it has feelings).

To combat this, the EPSRC believes that robots should always be regarded as manufactured artefacts. One of its stated robotics development principles is that:

They should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent.

My take

Wise words. To their credit, the paper’s authors acknowledge that many of the ethical questions surrounding robots have no definitive answer as yet. But one in particular demands our attention: whether robots should themselves be regarded as moral machines or moral agents, with responsibility delegated to them directly rather than to their human designers or minders.

This is a real ethical minefield, given that – despite UN declarations – there is no universal definition of good and bad behaviour in a world in which the rights of women, children, atheists, LGBTQ people, dissidents, journalists, immigrants, asylum seekers, and ethnic minorities are viewed very differently from one part of the world to another. Indeed, arguments still rage about them in our own societies.

MIT recently suggested in its ‘Moral Machine’ report that, if a fatal accident involving an autonomous vehicle (AV) is inevitable, the AV should be able to adapt its ethical programming to match the values of whichever society it is operating in. In short, it might decide to run a different person over in one country than in another: an idea that is truly horrifying in its implications, despite making logical sense.

UK-RAS sees the dangers in this argument, saying:

At minimum, we believe that one should ask whether or not it is appropriate for a RAS to be viewed as a ‘moral agent’ if it absolves the developers or operators of the RAS from moral (and perhaps legal) responsibility for the system.

Other questions remain equally troublesome. For example: how society assures and regulates robots in a world of human laws; what a framework for ethical governance might look like; and whether the state should regulate on behalf of the people most affected by technologists’ research: the public.

Yet the underlying problem is easy to state, if not so easy to solve: the more we look to machines to augment human decision-making in a messy, emotional, complex, and illogical world, the more we realise that centuries of law are designed to protect us from other people, not from machines. That demands our urgent consideration.
