I remember the hubbub surrounding Dan Pink’s book A Whole New Mind, in which he stated that the future (this was back in the early 2000s) would belong to right-brained people, because machines would ultimately automate anything that could be easily documented and that followed a standard process.
His argument was that everything that could be reduced to a defined process would be outsourced or automated, and in most cases he has been proven correct. What remains to be seen from his prediction is whether the “winners” of this automation will be the people with “right-brain” skills – artistic, creative people who were most likely liberal arts majors.
A similar and equally interesting question is now emerging in machine learning and artificial intelligence. I think the people whose jobs are most at risk from machine learning are the deep specialists, those with very deep but narrow knowledge.
As we train machines to interpret data and approach artificial intelligence, many of these applications will involve deep learning focused on very specific problems. Take, for example, breast cancer.
Machine learning and artificial intelligence may soon reach a point where detecting breast cancer from x-rays and sonograms is more consistent and less error-prone than having a doctor do that work.
But unlike doctors, who are often good at many things simultaneously, the AI or ML application that’s good at finding breast cancer will probably not be able to diagnose other health issues as well. For a while at least, AI and ML applications will often be good, but one-trick ponies.
Which raises the question – will we spend the time and effort to train the AI and ML applications in a wide array of very specific, very important and very narrow fields, where the benefits outweigh the cost of training the machine?
And, once all of these machines are trained, who oversees the handoff from one model to another when multiple diagnoses are needed? At what point is a generalist human “good enough” because they can move between different issues more easily, and perhaps more rapidly and effectively, than machines?
Does it pay to be a generalist in an age of ML?
Of course, this is not just an AI or ML issue. Among the technologies powering digital transformation, there is a handful that are highly efficient but narrowly focused.
Most robots, for example, are good at one or two activities that are constantly repeated, whether that action is in 3-D space (picking and placing parts) or in automating data transcription using RPA. IoT devices gather and transmit data effectively, but only the data that the sensors are meant to capture and transmit.
These deep but narrow competencies will eventually create highly productive and efficient but potentially fragile processes. A small shift in focus or needs may expose the fact that these technologies, at least for the foreseeable future, aren’t good at rapidly moving from one input, one data set or one job to another, even when that shift would be trivial for a person.
We humans, however, have evolved to do exactly that. For the most part, we are multi-functional machines, capable of doing a wide variety of tasks without a lot of reprogramming, and we can shift from job to job, task to task relatively quickly.
It seems that as digital transformation takes hold, the ability to adapt, be flexible and shift quickly from one task to another will be important when working with machines and intelligences that are relatively narrow and somewhat rigid in their capabilities.
What happens to the specialist?
What happens to people who have deep knowledge and experience in a specific field that AI or ML can rapidly learn? At first they become the teachers of the technology, helping instruct the AI or ML on misdiagnoses, correcting errors and improving the model.
Then, once the machines become nearly as good as the humans at detecting issues, humans become the explainers, telling people how the machines made their decisions and often defending those decisions. Eventually, as machines become able to explain their decisions and provide sufficient evidence, humans in deep specialties may become much less valuable.
What happens to a deep but narrow specialist once explainability arrives?
Where humans will thrive
Where humans will thrive in this rapidly approaching future is in places where there is little previously documented experience, where situations and models change frequently and without a pattern, where data is messy or missing, where a fast but “good enough” answer will suffice, or where tasks are frequently changing and don’t allow time for re-purposing or retooling.
These needs will still be filled by capable generalists who can apply a lot of intelligence, reasoning, creativity and dexterity to rapidly emerging challenges that haven’t been seen previously or can’t be adequately predicted.
At a time when our education systems are increasingly focused on narrow fields of study, we need a more comprehensive education system that reinforces a number of good skills simultaneously and turns out people able to rapidly shift from one task or skill to another.
Instead of increasingly narrow PhD programs, what we need are robust programs that engage science, math, literature, technology, psychology and other disparate skills to prepare people for the challenges and opportunities they are likely to face as they increasingly work with intelligent machines.