A.I. - should our children be worried about Artificial Intelligence?

Artificial Intelligence (AI) seems to be the current flavour of the month and is being sprinkled liberally onto everything from Apple's Siri and Amazon's Alexa to medical diagnoses, self-driving cars, and even making burgers and helping you decide whom to marry.

It is perhaps the most complex, multifaceted challenge of our time. It is underpinned by technologies that are themselves advancing rapidly and, depending on who you speak to, will result in one of two startlingly contrasting futures: AI will either help us solve all our problems (in which case there won't be much left for us to do) or it will kill us all (in which case there will be no problems left to solve). The jury is still out on which way it will go!

While Hollywood and science fiction often portray AI as robots with human-like characteristics in some distant dystopian future (think 'Terminator', 'I, Robot' or 'Ex Machina'), many of us are unaware that this technology is quietly and relentlessly creeping into our lives as we speak.

We interact with it daily when making our Amazon purchases ('people who bought that also bought this') or when we wonder how Facebook newsfeeds are so well curated and customised just for us. You are probably also unaware that every time you use the Underground, the CCTV cameras operated by Transport for London can serve as the front end of powerful AI-based facial recognition algorithms that identify you in a fraction of a second (the justification for invading your privacy: so that you are not mistaken for a terrorist); or that, by some estimates, over 80% of the trades on stock exchanges are now handled by AI.

Not only is AI remarkably good at these tasks but every day it is getting better, smarter and wider in scope. This type of AI, properly known as narrow AI (or weak AI), is designed to perform a single narrow task (e.g. only facial recognition, only internet searches or only driving a car). However, the long-term goal of many researchers is to create artificial general intelligence (AGI, or strong AI).

While narrow AI may outperform humans at its specific task, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task. For many people this is a difficult issue to fathom. How will we compete with a technology that advances at electronic speed and becomes ever more intelligent, when we biological humans can only evolve at Darwinian speed? And how will we control it (which some say we must if we are not to become subservient to these 'robot overlords'), when the government institutions tasked with introducing policies and laws to protect our safety and welfare move at an even slower, glacial pace?

Once AI reaches AGI level, it is quite conceivable that most major decisions could be made, to a greater or lesser degree, by AI – much as we today rely almost exclusively on GPS when travelling, while the old-fashioned road atlas is consigned to the recycling bin because it is slower, more cumbersome and quickly outdated.


No-one will worry too much if AI helps us complete the next tax return more accurately, or monitors and controls a building's heating and air conditioning to optimise energy usage – that, after all, is the purpose of deploying smart technology and getting a return on the investment.

However, should humans trust an AI algorithm when it points to strategies for maximising corporate profits (for its owners) that involve replacing the majority of human employees with automation? Should it be in control of automated weapon systems in times of war or political crisis – scenarios where speed might be a prerequisite and the AI makes decisions much faster than humans can? These are questions we now have to face, and for which there are no easy answers.

Given these bleak scenarios, one might well wonder why we would continue down this path – would it not be better to stop all AI research forthwith? Unfortunately, it is never that simple: whether we accept it or not, the AI genie, once out, cannot be put back in the bottle.

The same was said about nuclear technology, which has the potential for colossal destruction; yet its deployment since the end of the Second World War has been largely contained and its power controlled, even though much of its use has been primarily military.

In fact, some of the most difficult challenges faced by humans today – such as combating climate change, pollution and the population explosion, or finding cures for cancer and other critical illnesses – are in essence big data problems that can only be solved by sufficiently capable AI.

This partly explains the large amounts of funding directed at AI research by governments and mega-corporations, which also see the obvious potential of AI in finance, commerce and, of course, defence – a sector that accounts for substantial spending, as AI is now a critical component of the military-industrial complex of most major countries.

Given that AI will continue to advance, our best approach is to look at it positively and work in partnership with the emerging technology for the betterment of society, while retaining the human ability to objectively question and scrutinise the motives of AI researchers and the corporations behind them, and to hold them to account.

We should remember that while machines may have competence, they lack comprehension and are devoid of any consciousness – critical human qualities we will need to bring to the fore to help us adjust to this brave new world and its new paradigm, and to ensure the economic benefits of automation are shared.

For this to happen, we need our young people and workforce to be educated in upcoming technologies such as Industry 4.0, the Internet of Things (IoT), blockchain, cyber security, nanotechnology and synthetic biology; and at the same time to have the softer, emotional-intelligence skills of critical reasoning, curiosity, flexibility, open-mindedness and collaborative working – so they can take on jobs that are yet to be conceived.
