Do machines deserve the same patience when learning as we allow humans?

Recently, we have all been focused on digital ethics, especially in the wake of AI and its related technologies. AI has the power to perform many tasks far more quickly, and with greater accuracy (still questionable), than a human can.

Ethics is the right thing to focus on, because it ensures that the builders of such technology can address questionable ethics, biases and judgements before building a piece of artificial intelligence that could be counterproductive. After all, let's face it, machines and AI need to work for all of humanity, all ethnicities, races, ages, you get the point.

However, as we continue down the route of building standards and practices locally and globally, are we really considering the time we allow machines, and the builders of these machines, to learn, grow and improve? How much time are we prepared to give machines and their creators to make mistakes and learn from them?

Patience With Humans When Learning

Let’s consider an example. Typically, a human starting a new job, say after graduation, will be allowed some time to get to know the systems, processes and people, often on top of an internship. It usually takes 2–3 years, in one or a few job roles, before we start expecting real results and growth, or expecting mistakes not to be made on the job. At least, that is how a graduate scheme works over an extended period.

Even experienced professionals get 3–6 months in any new role, during which they are supported to learn before being expected to deliver results. So my question is: do we allow machines, and the businesses building such AI, the same room to make mistakes, learn and grow from their consumers?

Granted, machines may learn a lot more quickly, so in that case, how much patience should we be prepared to extend to the machine and to the builders of AI?

Typically, you wouldn’t fire an employee without considerable training, performance management and so on.

Patience With Machines When Learning

As the businesses and individuals consuming a piece of AI, are we going to allow it the same allowance? Or is it up to the businesses and technologists building a particular AI or machine learning tool to take that responsibility fully?

If they take that responsibility, how much time and monetary support are we prepared to provide as the businesses and individuals consuming the technology? Patience is expected when dealing with humans, especially in cases of underperformance and mistakes. What about the bots? Do they not get any rights, or any room to learn, once deployed to a client? Should consumers learn more patience? Should there be a process for consumers to raise their concerns and allow sufficient time for the code and the AI to be reworked, redeployed and re-tested?

I understand the counter-argument: technology builders should not deploy tech that has not already been tested and refined. But let’s face it, if we imposed the same constraint on humans, most of us would not have had our first, second and third jobs to learn and grow from. Besides, the builders of machines can run hundreds of checks and balances against test data sets. Code, however, reacts differently to live data because of its size and behaviour; it will encounter things it is not familiar with, and break.
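
To make that last point concrete, here is a minimal, hypothetical sketch (not from the article, and the column values are invented) of how a pipeline that passed every check on its test data can still break the moment live traffic contains something it has never seen:

```python
# Hypothetical sketch: a feature encoder validated on clean training data
# fails when live data contains a category it has never encountered.
from sklearn.preprocessing import OneHotEncoder
import numpy as np

# "Training" data the builders tested and refined against.
train_regions = np.array([["EU"], ["US"], ["EU"], ["US"]])

encoder = OneHotEncoder()  # default behaviour: error on unknown categories
encoder.fit(train_regions)

# Live traffic arrives with a value the encoder has never seen.
live_regions = np.array([["EU"], ["APAC"]])

try:
    encoder.transform(live_regions)
except ValueError as err:
    # In production this surfaces as a failed request, not a failed test.
    print(f"Pipeline broke on unfamiliar input: {err}")

# One way to give the system room to learn: tolerate the unknown value,
# flag it, and feed it back into the next round of training.
tolerant = OneHotEncoder(handle_unknown="ignore")
tolerant.fit(train_regions)
print(tolerant.transform(live_regions).toarray())
```

The point of the sketch is not the specific library call, but the pattern: no amount of checks against a finite test set guarantees behaviour on live data, so some process for surfacing failures, reworking and redeploying is needed, which is exactly where patience comes in.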
