The implementation of artificial intelligence (AI) across all areas of daily life, from mobile applications to job recruitment, has brought with it significant ethical implications. Its impact is such that lawmakers, businesses and governments are scrutinising the technology to establish boundaries and rules that safeguard individuals' rights.
At the recent CIO UK Artificial Intelligence Summit, Professor Lokke Moerel, Senior of Counsel at Morrison & Foerster and one of Europe's leading AI lawyers, set out the ethical dilemmas facing AI implementation and the need to open up black boxes to identify biases.
She opened her talk by introducing two dilemmas to her audience. In the first, data scientists at a pharmaceutical company faced a situation in which a product was in short supply. The firm's executives asked them to predict how best to distribute the product to limit the impact of the shortage.
The data team reorganised the distribution with a positive outcome and a minimal number of complaints. In theory, the black box algorithm was successful and all stakeholders were satisfied.
However, Professor Moerel explained, a closer look at the reorganised distribution revealed that it had in fact deprioritised certain postal codes – those in underprivileged neighbourhoods, whose communities were less likely to complain as a result of social alienation. Who is liable in this situation?
“‘The AI did it’ is not an acceptable excuse,” Professor Moerel said. “Algorithmic accountability implies an obligation to report and justify algorithmic decision-making and to mitigate any negative social impacts or potential harms.”
The second scenario that the academic described was Amazon's much-talked-about AI-powered recruiting tool, which overwhelmingly favoured male candidates over female ones. The underlying cause of the bias was that the algorithm had been trained on historical data in which male candidates had been favoured over their female counterparts.
“If your data is biased – ‘one-sided’ – the algorithm will be biased,” declared the lawyer. “You can’t get all your historical data in because all your historical data is biased … It’s very hard to get clean data.”
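As a minimal illustration of that point – using invented data, not Amazon's actual system – the Python sketch below "trains" a naive model on one-sided historical hiring records. Because the history favours one group, the model simply reproduces that preference when scoring new candidates.

```python
# Hypothetical illustration: a model built on one-sided historical data
# reproduces the historical preference. The data and logic are invented.
import random

random.seed(0)

# Historical hiring records: (group, hired). The history is biased –
# candidates in one group were hired far more often than the other.
history = [("male", random.random() < 0.8) for _ in range(500)] + \
          [("female", random.random() < 0.2) for _ in range(500)]

def train(records):
    """Naive 'model': learn each group's historical hire rate."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)

# New, equally qualified candidates are scored very differently,
# purely because of the bias baked into the training data.
print({group: round(score, 2) for group, score in model.items()})
# roughly {'male': 0.8, 'female': 0.2}
```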
The Digital Revolution is the new Industrial Revolution
In Professor Moerel’s view, there are great similarities between the first Industrial Revolution and today’s digital one. Although the former brought with it technological progress, it also created child labour, pollution and poor working conditions.
New technologies needed numerous iterations before they worked reliably and could be properly regulated.
“The first car had a person walking in front of it with a red flag because it didn’t have brakes,” she said. “Think about what society did then: roads, airbags, rules to regulate vehicle safety… Artificial intelligence is that first car without brakes. How it looks today is not how it will look half a year from now, after we start cracking the black box.”
Although it might be tempting to adopt a pessimistic view of AI, the scholar stressed that the technology is only taking its first steps and there is still a long way to go. As was the case in the Industrial Revolution, some of the problems may take years, even decades, to address adequately.
Above all, Professor Moerel stressed the need for personal accountability. Even when decisions are made by machines, humans must remain accountable for the data fed into them in the first place.
“If you end up in court and your answer is ‘the algorithm did it’, it won’t be accepted,” she said. “It’s one of your tools and you have to deal with it, making sure it comes to the right solutions: you must justify your algorithmic decision-making and mitigate the negative effects. That is your task.”
According to Professor Moerel, at the core of the AI ethics dilemma lies privacy. From a legal perspective, data is an intangible asset that nobody owns: there are no intellectual property rights in data.
“The bizarre thing is that all the other fundamental rights – non-discrimination, freedom of speech, etcetera – are folded into the assessment of whether you can process the data for data protection purposes,” she said. “That’s why it’s all about privacy.”
AI feeds on great amounts of data, which it analyses to make predictions. The professor cited an example: people who suffer from obesity share a particular set of characteristics, and those characteristics are turned into predictions. The predictions then lead to actions, such as insurance companies raising premiums.
This implies a major shift in the legal burden of proof: anyone with those characteristics is predicted to become obese. How can people then prove that they won't be?
“The question is, do I get a chance to prove the algorithm was wrong?” Professor Moerel asked the audience. “There are a number of challenges, including unforeseen applications and discrimination.”
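The pipeline she describes can be sketched in a few lines of Python. Everything here is invented for illustration – the characteristics, weights and threshold are not taken from any real insurer's model – but it shows how a prediction derived from someone's current attributes triggers an action before the predicted outcome has happened.

```python
# Illustrative only: attributes, weights and threshold are hypothetical.

def obesity_risk(profile: dict) -> float:
    """Toy risk score derived from a person's current characteristics."""
    weights = {"bmi": 0.05, "sedentary_hours": 0.02, "family_history": 0.3}
    return sum(weights[k] * profile.get(k, 0) for k in weights)

def premium(base: float, profile: dict, threshold: float = 1.5) -> float:
    """The prediction drives the action: above the threshold, the premium
    rises now, before the predicted outcome has actually occurred."""
    return base * 1.25 if obesity_risk(profile) > threshold else base

person = {"bmi": 29, "sedentary_hours": 10, "family_history": 1}
print(premium(1000.0, person))  # 1250.0 – the individual must now disprove the prediction
```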
Although Professor Moerel disagrees with calls for extensive legislation, she welcomes the introduction of data protection measures such as the GDPR.
“You need to mitigate impact on individuals plus society as a whole, but we need an ethical assessment, not just a compliance exercise – that’s not enough,” she said. “There are many examples where companies were legally compliant, they got the consent, and still made everyone upset.”
She added: “Law is what you may or may not do – what you are allowed to do – and ethics is what you should do. It’s a different assessment. … Ethics are quite stable.”
If organisations and governments are transparent about the data they are using, then there is hope for better use of AI. The bottom line is that black boxes shouldn't contain unfair biases and should instead benefit every individual and society as a whole. In addition, algorithms need to be auditable and people must be accountable for them.
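One concrete form such an audit can take – sketched below with made-up numbers, and using postal codes as in the pharmaceutical example – is a disparate-impact check: compare favourable-outcome rates across groups and flag the system when a group falls below a chosen threshold, such as the commonly cited four-fifths rule.

```python
# Hypothetical audit sketch: outcome counts per postal code are invented.
# The four-fifths (80%) rule is used here only as an example threshold.
from collections import Counter

def disparate_impact(outcomes, threshold=0.8):
    """Compare each group's favourable-outcome rate to the best-off group."""
    totals, favourable = Counter(), Counter()
    for group, favoured in outcomes:
        totals[group] += 1
        favourable[group] += favoured
    rates = {g: favourable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate / best, rate / best >= threshold) for g, rate in rates.items()}

# e.g. deliveries prioritised per postal code in the shortage example
decisions = [("1011", True)] * 90 + [("1011", False)] * 10 + \
            [("1104", True)] * 55 + [("1104", False)] * 45
print({g: (round(r, 2), ok) for g, (r, ok) in disparate_impact(decisions).items()})
# {'1011': (1.0, True), '1104': (0.61, False)} – the second postal code is flagged
```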
“We will overcome all the downsides of AI like we did with the Industrial Revolution but you have to be open about the negatives or the concerns so you can address them,” Professor Moerel concluded. “If you don’t, you’ll get a big backlash.”