One of the big questions facing society today is how to ensure that AI serves and benefits humanity. A programme of ethical and responsible development would help prevent problems such as the automation of bias and discrimination, and the rise of inscrutable machines. But how can that be brought about?
This was the subject of a public debate held last week at Imperial College London by a panel of technologists and ethicists. Dr Joanna Bryson, who teaches AI ethics at the University of Bath, set out a key theme of the evening:
I got into ethics when I noticed people being weird around robots, because I’m a psychologist. I now realise that the problem is this: we don’t understand ethics or what it means to be human.
‘Being weird around robots’ refers to people wrongly perceiving machines to be human, and developing emotional responses to them, she suggested – a problem that has roots in a century of sci-fi lore about robots, perhaps. As a result, many in the robotics and AI sector now favour the development of machines that stress their artificial nature, rather than present themselves as being ‘human’.
As Bryson suggested, a critical challenge in defining what ethical AI development looks like comes down to defining ethics itself and understanding our own humanity. Part of that challenge lies in the absence of consensus on a whole host of questions in the human world, suggested Reema Patel, Programme Manager at the Royal Society for the Encouragement of Arts, Manufactures and Commerce (RSA). The RSA’s DeepMind partnership is exploring the role that citizens might play in developing ethical AI. Patel said:
We don’t know what the future looks like. Asking the public what they think isn’t just about commissioning an opinion poll. It really requires engaging with the uncertainty of the question. It really affects us all, our values, our choices, the trade-off that we make as a society. It’s so important to find new ways of engaging citizens in informed debate about the future of AI.
This is because AI is starting to make decisions that would otherwise be made by humans, she explained, which adds new layers of ethical complexity. Here, the RSA’s citizen outreach programmes may help, she said:
One of the things we’re doing at the RSA is convening citizen juries: randomly selected groups of people, to deliberate on particular ethical issues. And we’re looking in this instance at the application of AI to the criminal justice system to better understand the parameters. And we’re looking at the issue of how AI is influencing democratic debate.
The reason for doing this is not just to find, ‘what does a citizen think?’ or, ‘what do people think off the top of their heads?’ […] Understanding what could create a moral consensus in any particular cultural and ethical context has to happen; we have to find new ways of doing that. So we’re prototyping, we’re experimenting. But in any space where there is no agreed moral consensus, the question must come back to the developer: is this a decision that an AI machine should be making?
Good question. Patel expressed her concern about AIs that “resemble human competency”, suggesting that this poses “really interesting questions about what it is to be human.” She shared the story of a colleague’s three-year-old child who attempted to engage a phone in an emotional conversation, asking, “Siri: do you love me?”
Bryson suggested that some technologists could be using people’s emotional response to AIs to abdicate their responsibility for flaws and errors in their systems:
If you have a product that is dangerous, do you sell it today? And I feel like that because people are being fooled – they think intelligent means ‘person’, and so people are ready to say, ‘Siri: do you love me?’ – a lot of companies are trying to get out of what would, obviously, have been their responsibility of due diligence.
Networks of intelligence
Professor Andrew Blake, Research Director at the Alan Turing Institute, suggested that it was the arrival of deep learning networks earlier this decade that brought questions about ethical development to a head:
Deep learning networks are three times as effective in image recognition and three times as effective in speech recognition, but they are black boxes – even more so than previous technologies. If you have black boxes deciding the meaning of a word, then you don’t worry too much about understanding the rules, but if the same black box is deciding whether to give you credit or not at the bank, then you want to be able to challenge that, and it becomes much more important whether you can do that.
But he said that this challenge has inspired researchers, rather than hindered them:
There is a lot of thinking going on about how you break open these black boxes and design them from the beginning to be ‘less black’, or if you can pair up the black box with a shadow system that is more transparent. The Turing Institute is all over this.
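One way to picture that ‘shadow system’ idea is surrogate modelling: training a simple, transparent model to imitate a black box’s predictions so its decision logic can be read and challenged. The sketch below is a minimal illustration of the general technique, not anything the Turing Institute has published; the synthetic data and model choices are assumptions for demonstration only.

```python
# Minimal sketch: pair an opaque model with a transparent "shadow" (surrogate) model.
# Assumes scikit-learn is available; the dataset and model choices are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic "credit decision" data: applicant features, binary approve/decline label
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The black box: a small neural network whose internal reasoning is hard to inspect
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
black_box.fit(X_train, y_train)

# The shadow system: a shallow decision tree trained to imitate the black box's outputs
shadow = DecisionTreeClassifier(max_depth=3, random_state=0)
shadow.fit(X_train, black_box.predict(X_train))

# How faithfully does the transparent model reproduce the black box's decisions?
fidelity = (shadow.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Shadow model agrees with the black box on {fidelity:.0%} of test cases")

# The tree's rules can be read and challenged, unlike the network's weights
print(export_text(shadow, feature_names=[f"feature_{i}" for i in range(8)]))
```

The trade-off is fidelity: the simpler the shadow model, the easier it is to challenge, but the less exactly it reproduces the black box it is meant to explain.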
However, Bath University’s Bryson cautioned that neither the existence of black-box solutions nor improvements in their development should encourage programmers to be complacent:
I don’t think deep learning is the end of responsibility. After all, you audit accounting departments without knowing how the humans’ synapses are connected! Even if deep learning was a complete black box, we could still put a ring around what it’s allowed to do. We’ve been dealing with much more complicated things than AI – people – for a long time.
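Bryson’s ‘ring’ can be read as a constraint layer: whatever an opaque model recommends, a human-defined policy decides what it is actually allowed to do. Below is a minimal sketch of that idea; the decision labels and limits are hypothetical, purely for illustration.

```python
# Minimal sketch of "putting a ring around" an opaque model: whatever the model
# recommends, a human-defined policy layer decides what is actually acted on.
# The actions and limits here are hypothetical, for illustration only.

ALLOWED_ACTIONS = {"approve", "refer_to_human", "decline"}
MAX_UNREVIEWED_CREDIT = 5_000  # any larger approval must go to a person

def constrained_decision(model_action: str, amount: float) -> str:
    """Apply hard, human-set limits to a black-box recommendation."""
    if model_action not in ALLOWED_ACTIONS:
        return "refer_to_human"   # unknown output: never act on it blindly
    if model_action == "approve" and amount > MAX_UNREVIEWED_CREDIT:
        return "refer_to_human"   # outside the ring: escalate, don't automate
    return model_action

print(constrained_decision("approve", 12_000))  # -> refer_to_human
print(constrained_decision("decline", 1_000))   # -> decline
```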
The question remains: is there anything fundamental about AI that sets it apart from any other type of software? Bryson responded:
First of all there is this question, what is intelligence? And I use a very simple definition, which is when you generate an action based on sensing. When you’re able to recognise a context and exploit it, or recognise a situation. But this is only one small part of what it is to be human.
We aren’t building artificial humans, but we are increasing what we can act on and what we can sense. That’s what AI is doing. So there are two ways I would say that AI is different. One, we are able to perceive, using AI, things that we couldn’t perceive before. And companies and governments can perceive things about us, and we can perceive things about each other, as well as about ourselves.
This isn’t just about discovering secrets, it’s also about discovering regularities that nobody knew existed before. The other is to do with how you set a system’s priorities so it’s not just passive. That’s what we call autonomous, when it acts without being told to act.
Machines that learn
However, this uncovering of hidden ‘truths’ about people is a controversial area, if for no other reason than AI’s predictions may be wrong or untestable in the real world. Citizens may have no insight into why they’ve been rejected for a job or for life insurance, for example, or why the police are knocking on the door.
A key ethical challenge lies in an increasingly important subset of AI: machine learning. While many AI systems themselves may be well designed, the training data that some use may contain unconscious human biases or assumptions. One example is the COMPAS algorithm that is already being used in the US justice system, which research has suggested is replicating and automating systemic human bias against black Americans when issuing sentencing advice.
Imperial College’s Professor Maya Pantic developed the theme:
What’s worrying is the bias in the data. If the data is biased in any way, this bias will be picked up and it will be propagated through all of the AI’s decisions. For example, if jobs are always given to people from certain areas… then you will continue predicting that these people from these areas will be getting the jobs.
We need to discuss this issue with the government. We need to have something like auditing of the software, because a lot of things can go wrong, especially if we cannot have explicit machine learning – systems that are open boxes, not black boxes.
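Pantic’s call for software auditing maps onto a practical check: measuring whether a system’s decisions differ systematically between groups. The sketch below shows one such audit metric (the comparison of selection rates sometimes called a disparate impact test); the data and column names are hypothetical, and a real audit would involve far more than this single number.

```python
# Minimal sketch of one bias-audit check: compare a model's positive-decision rate
# across groups in the data. Column names and values are hypothetical illustrations.
import pandas as pd

# Imagine these are a model's hiring recommendations, tagged by applicant postcode area
decisions = pd.DataFrame({
    "area":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

# Selection rate per group
rates = decisions.groupby("area")["hired"].mean()
print(rates)

# A common rule of thumb (the "four-fifths rule"): flag the system if any group's
# selection rate falls below 80% of the most-favoured group's rate.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}" + (" - flag for review" if ratio < 0.8 else ""))
```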
Bryson cautioned against over-simplifying the problem, but acknowledged that a serious challenge in IT development today is organisations using AI, in effect, to reignite behavioural wildfires that human society had previously extinguished:
I don’t think that the bias itself is different. What you are saying is that human culture creates biased artefacts, but that’s been true all along and AI is no different. The difference comes from this weird over-identification [that people have with machines]. Machine learning is one way we programme AI, but some people are using this ‘magic dust’ to go back to things that we have previously outlawed – like the persistence of stereotypes, such as who gets hired for what jobs.
Imperial College’s Pantic added:
If you think about computer science, it’s white males. It’s ten per cent females, and an even lower percentage of non-white people. We shouldn’t use technology that is built by such a small minority of the population.
The evening’s most pragmatic perspective came from an unusual source: the Rev Dr Malcolm Brown, who sits on the Archbishops’ Council of the Church of England. He said:
The difference is authority, surely. If we treat the AI that has a bias built into it as somehow overruling our human judgement, then that’s different to, say, an HR director, who can be challenged. I’m reminded of something that [British politician] Tony Benn said should be asked of everyone in power: ‘What power do you have, who gave you that power, whose interests do you serve with that power, to whom are you accountable, and how can we get rid of you?’ These are interesting questions [that could be applied to AI].
With AI, the advance that makes this problematic involves manifestations of power. Is that power in the hands of the people who create the AI, or is it in those of the user? Where do responsibility and accountability lie, and how do we change that if it goes wrong? These are the areas where we are floundering.
My take
It’s ironic that it took a man of God – the Church of England’s Brown – to raise the question of human power, authority, and responsibility, when God could be described as the ultimate black box solution. Brown’s suggestion that we should be able to ask who holds the power in any AI system, to whom they are accountable, and – crucially – how we can get rid of them, was the best idea on the table.
A private company is accountable to its shareholders, not the general public, and this problem will be significant in the years ahead. Take the insurance sector, for example, which might be tempted to push up its profits by making more and more people uninsurable. Would any application of AI in this field be fair or transparent? And what if the system is wrong in its predictions?
As AI has more and more power invested in it by human beings – often by companies who don’t ask enough questions of themselves or their vendors – having a robust mechanism in place that gives us clear answers to these questions is the most sensible, and the most human, response to ‘magic dust’ strategies.
AI regulation and software auditing within a future evolution of GDPR could be the best option. After common sense, of course.
Image credit – YouTube
Disclosure – Some of the speakers’ unscripted comments have been edited for sense and grammar.