Do we need a better North Star for AI than the Turing Test? – Enterprise Irregulars

Almost seven decades have passed since Alan Turing asked “Can machines think?” It was hard to define what thinking means in 1950, and it remains just as hard today. So Turing took a pragmatic approach and reframed the problem as “Can a computer behave like a human being?”

It was absolutely a fine question to ask in 1950, because it concerned a theoretical future, and it provided a North Star for AI researchers to guide their efforts. Now AI has reached a level where it is practical, and the answer to the question has real consequences.

First – I don’t think machines think at all like we do. Most AI today is machine learning, and what it does is pattern detection at scale. It takes a lot of data for the machine to “learn”. That in itself is an indication that machines today are dumber than even little children.

A toddler who has seen a picture or two of a dog in her storybook can usually understand that the first real dog she sees is similar to the dog she saw in the book. And the odds of her mistaking a dog for a cat are minimal. That is not true for machine learning at all – it needs a lot of labeled data to reach a comparable level, if it reaches it at all.

Second – we don’t make decisions based on patterns alone. We combine newly detected patterns with other information we already know from the past to make a decision. That is not how mainstream AI works today.

But machine learning is amazingly powerful at many things. It does not need to resemble human thinking to be powerful and deliver a lot of value. And, like all of computing, it can also cause a great deal of harm. The time is ripe to turn the debate on the ethics of computing in general, and AI in particular, into actual actions with some urgency.

What is the purpose of AI? Is it to replace humans at scale? Or is it to augment humans?

If its purpose is to augment humans, then isn’t it pure deception to make it pass the Turing Test? Why should it solve problems in a way that makes an observer believe it is human?

For example – if I ask a computer to add two five-digit numbers, it can find the answer in sub-second time. But to make me believe a human is answering, it needs to wait a few seconds, or even a minute, before giving me the answer. What is the real value delivered here?
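To make the absurdity concrete, here is a minimal sketch (the function names and the five-second delay are illustrative assumptions, not from any real chatbot): the only extra “work” needed to seem human is deliberately doing nothing.

```python
import time

def add(a: int, b: int) -> int:
    # The actual computation is effectively instantaneous.
    return a + b

def add_like_a_human(a: int, b: int, delay_seconds: float = 5.0) -> int:
    # To pass as a human, the machine must pretend to struggle:
    # the only thing added here is waiting.
    time.sleep(delay_seconds)
    return a + b

print(add(73824, 91056))  # → 164880, in well under a second
```

The delayed version returns exactly the same answer; everything it adds is wasted time, which is the whole point.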

Humans are imperfect; we are not always logical, nor consistent. That is our default behavior. Computers, on the other hand, are logical and consistent by default. It is a waste of effort to make them behave like humans just to pass the Turing Test. And how ethical would a system be that is fundamentally built to deceive?

Computers can and should be used extensively to solve complex problems. They can do that without having to mimic humans just to trick us.

AI can do the world a lot of good – and that won’t happen if we don’t trust it. For example, there are well-known ethical problems to solve for self-driving cars. If we stop thinking of AI as human-like, we will probably set a realistic bar for it to meet before we are comfortable letting it drive our cars. AI-enabled cars will still have many accidents – but we can then have a logical discussion: is it good enough that, by having more cars depend on AI, we can save many of the lives now lost to DUI and texting while driving? If, in the global aggregate, AI can reduce the total loss of life every year, will we accept it?

There will be many such difficult questions to answer, now and in the future. Can’t we give ourselves more room to solve them by taking away the need for AI to be human-like?

So should we free the world of AI to move past the Turing Test, so that it can focus on solving real problems without the burden of also deceiving us in the process?

(Cross-posted @ Vijay’s thoughts on all things big and small)
