GDPR — How does it impact AI?

Now that the GDPR is over one year old, Eric Winston from Mphasis looks at the interaction between AI and the GDPR.

The vast scope of the GDPR has raised fresh challenges – chief among them the complex interaction between AI and the regulation. In particular, this shines a spotlight on Article 22, which concerns automated profiling and decision-making, where the incorrect use of personal data can have serious ramifications for the individuals concerned.

The problem is that many existing AI systems take automated decisions without user consent. Since data is the engine behind AI, Article 22 affects every industry hoping to use the technology to drive efficiencies through automation.

In an increasingly data-reliant business landscape, how can organisations reconcile the advent of disruptive technologies and their inherent risks while remaining fully compliant?

The rise and rise of AI

AI continues to escalate in its influence worldwide and is revolutionising business processes in a way that is no longer theoretical or the material of science fiction – but tangible and immediate.
With the EU representing as much as 21% of global GDP in 2019, EU-based organisations have no choice but to strike the right balance between reaping the benefits of AI and managing it to ensure there are no unintended consequences.

In the UK, the government has championed the flourishing AI sector, underscoring the country’s position as a true leader in emerging technologies, and is working towards making the UK a global centre for data-driven innovation. According to a recent forecast, AI can be a major contributor to growth, with the potential to add £232bn to the UK economy by 2030.

However, it’s a tale of two halves. Despite UK businesses across sectors increasingly adopting and investing in AI as part of their future, most are largely unprepared for it. Worryingly, findings reveal that fewer than half of businesses have protocols in place for implementing AI in a safe and ethical way.

This has led the Information Commissioner’s Office to issue a rallying call to action for industry leaders to unite in helping to establish a new framework for data protection for the use of AI. It’s vital that businesses promote greater transparency and integrate data protection measures by design and default into their AI strategies. This is firmly on the agenda for key sector players who are leading by example – for instance, a new code of conduct for the use of AI in the NHS was recently launched to ensure that only the safest and best systems are used.

An evolving relationship

Aiming to instil responsible practices, Article 22 prescribes that AI – including profiling – cannot be used as the sole decision-maker in choices that have legal or similarly significant impacts on individuals’ rights, freedoms and interests. For instance, an AI model cannot be the only step in deciding whether a borrower qualifies for a loan.

There are exceptions to the rule in scenarios where the decision is necessary for entering into a contract, when Union or member state law authorises such decisions – for example, to detect tax fraud – or when the data subject gives his or her explicit consent. Under the first and third exceptions, the individual can still contest the automated decision and obtain human intervention.
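
To make the shape of this rule concrete, the sketch below encodes the exceptions as simple boolean checks. It is a minimal, hypothetical illustration – the class and function names are assumptions, not part of the regulation or any compliance toolkit – and a real assessment would require legal review rather than a boolean test.

```python
from dataclasses import dataclass

# Hypothetical, simplified encoding of the Article 22 exceptions described
# above; names and structure are illustrative, not a legal implementation.

@dataclass
class DecisionContext:
    necessary_for_contract: bool  # decision needed to enter into a contract
    authorised_by_law: bool       # Union or member state law, e.g. tax fraud detection
    explicit_consent: bool        # data subject has given explicit consent


def solely_automated_decision_permitted(ctx: DecisionContext) -> bool:
    """A solely automated decision is only permitted if an exception applies."""
    return ctx.necessary_for_contract or ctx.authorised_by_law or ctx.explicit_consent


def must_offer_human_intervention(ctx: DecisionContext) -> bool:
    """Under the contract and consent exceptions, the individual retains the
    right to contest the decision and obtain human intervention."""
    return ctx.necessary_for_contract or ctx.explicit_consent
```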

Beyond this, organisations face a process of trial and error in applying this to their own systems, with the added pressure that even the smallest mistake can have very damaging consequences.

The grey areas of data protection

If one were to play devil’s advocate, automated decision-making is often justified, such as when an AI tool rejects a job application because the applicant has not provided sufficient information. However, Article 22 is triggered if the AI tool rejects the application even though the applicant supplied all the necessary information and met the criteria for the job.

The crucial determinant here is at what stage of the automated decision-making process the application was rejected – and why.

Undoubtedly, the GDPR is a step in the right direction, as it empowers individuals to regain ownership of their personal data. However, one of the major criticisms of the game-changing regulation is its ambiguous language, which leaves room for serious misinterpretation.

Article 22 is designed with an admirable objective at its core: to prevent unfair bias or discrimination from entering into a decision. Profiling, as part of AI decision-making, can have serious repercussions when it involves collecting and processing sensitive data such as race, age, health information, religious or political beliefs, shopping behaviour and income.

If misused, the darker side of automated profiling means that the more vulnerable segments of society will bear the brunt of any negative outcomes.

Addressing the conundrum

As a very first step, there is a need to ensure that the Article is understood correctly by all, not just to uphold corporate reputations but – most crucially – to safeguard individuals. Education is still needed to help businesses translate the requirements of the GDPR into their real use cases.

As the GDPR evolves to provide greater clarity surrounding AI, the onus is on data controllers to carry out regular quality checks of their automated systems. The guidelines on conducting Data Protection Impact Assessments (DPIAs) can help to ensure that remedial action is taken promptly to manage any negative impact. Other checks should include auditing algorithms for errors and allowing users to contest a decision.
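
As an illustration of what one such quality check might look like, the sketch below assumes a hypothetical log of automated decisions with an `approved` flag and a column identifying a sensitive attribute; the column names and tolerance are assumptions, and the actual metrics and thresholds would be set out in the DPIA.

```python
import pandas as pd

def approval_rate_gap(decisions: pd.DataFrame, group_col: str) -> float:
    """Gap between the highest and lowest approval rates across groups."""
    rates = decisions.groupby(group_col)["approved"].mean()
    return float(rates.max() - rates.min())

def audit_passes(decisions: pd.DataFrame, group_col: str, max_gap: float = 0.1) -> bool:
    """Return True if outcomes stay within the agreed tolerance; a False
    result would trigger prompt remedial review of the automated system."""
    return approval_rate_gap(decisions, group_col) <= max_gap

# Example: audit_passes(decision_log, "age_band") run on a schedule would
# flag the system whenever approval rates diverge too widely between groups.
```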

Another effective solution might be for companies simply to design around Article 22 altogether: build the AI system so that the process flows back one step, with the system assembling the relevant inputs and a recommendation while the people concerned review them and make the final decision.
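
A minimal sketch of that "flow back one step" pattern, reusing the earlier loan example, might look like the following. `LoanApplication`, `score_model` and `human_review` are illustrative names rather than any particular product's API; the point is simply that the model recommends while a person decides.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class LoanApplication:
    applicant_id: str
    features: Dict[str, float]

def decide(application: LoanApplication,
           score_model: Callable[[Dict[str, float]], float],
           human_review: Callable[[LoanApplication, float], bool]) -> bool:
    """The model only produces a score; a human reviewer sees the inputs and
    the score and takes the final, legally significant decision, so the
    process is never solely automated."""
    score = score_model(application.features)
    return human_review(application, score)
```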

It’s important to recognise that adopting the GDPR and the continued, successful growth of AI are not mutually exclusive; they can be complementary. Regulation will not stem the advance and potential of next-generation technology as long as people and businesses are well prepared and focus on the GDPR’s underlying principles: protecting the privacy of individuals and acting ethically. This can lead to an enhanced customer experience and even greater adoption of AI into the mainstream.

Only when organisations put a premium on gaining – and keeping – customer trust can they truly harness the power of AI in tandem with the GDPR.

By Eric Winston, Executive Vice-President, General Counsel and Chief Ethics & Compliance Officer at Mphasis
