In 2002, Daniel Kahneman was awarded the prestigious Nobel Prize in economics. This feat was even more impressive because Kahneman is a psychologist. Although he was recognized by The Economist in 2015 as the seventh most influential economist in the world, Kahneman holds no formal academic credentials in the field of economics.
How does a psychologist earn such respect from colleagues in a completely different field that, without any formal training in their discipline, he is embraced not just as one of their own but as one of their stars? The answer is rather simple: convincingly demonstrate that one of the prime assumptions of that discipline is completely wrong.
The Importance of Framing
In his recent book, The Undoing Project, Michael Lewis popularized the story of how Kahneman and his close friend and colleague, Amos Tversky, collaborated over the course of several decades to understand the working dynamics of human judgment and decision-making.
They closely examined the ways in which people make decisions under conditions of uncertainty and found that people’s judgments and behavior were consistently at odds with the longstanding assumption of economic theory that people act rationally by seeking to maximize their gains.
The extensive research of the two psychologists revealed that, when making judgments, people were strongly influenced by a dynamic they called “framing.” For example, when subjects were shown the sequence “A |3 C,” they identified the ambiguous middle figure as the letter “B.”
However, when that very same figure appeared in the sequence “12 |3 14,” they identified it as the number “13.” Similarly, Kahneman and Tversky discovered that, contrary to the prevailing utility theory, people could shift from risk-avoiding to risk-seeking behavior simply by being presented with a different description of the same reality.
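The arithmetic behind that shift is easy to check. Below is a minimal Python sketch of the well-known “disease problem” from Tversky and Kahneman’s 1981 Science paper; the scenario’s numbers are theirs, but the code itself is an illustrative reconstruction, not anything they published.

```python
# Expected-value check for the classic "disease problem": two frames,
# identical outcomes. Gain frame: "200 of 600 will be saved" vs. a 1/3
# chance all 600 are saved. Loss frame: "400 will die" vs. a 2/3 chance
# all 600 die. Expressed as lives saved, the pairs are the same programs.

TOTAL = 600  # people at risk

def expected_saved(outcomes):
    """Expected lives saved from (probability, lives_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

programs = {
    "A (gain frame, sure thing)": [(1.0, 200)],
    "B (gain frame, gamble)":     [(1/3, 600), (2/3, 0)],
    "C (loss frame, sure thing)": [(1.0, TOTAL - 400)],
    "D (loss frame, gamble)":     [(2/3, TOTAL - 600), (1/3, TOTAL - 0)],
}

for name, outcomes in programs.items():
    print(f"Program {name}: expected lives saved = {expected_saved(outcomes):.0f}")
# All four print 200, yet majorities pick A over B and D over C:
# the frame, not the expected gain, drives the choice.
```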
This means that framing can lead to errors in decision-making and cause people to behave in ways that clearly do not maximize gains, and in some instances, actually result in unwitting losses. How does this happen?
Dual Thinking Modes
In his best-selling book, Thinking, Fast and Slow, Kahneman asserts that humans engage in two different thinking modes in their day-to-day lives. He refers to these ways of thinking by the nondescript names System 1 and System 2. System 1 is fast thinking, which operates automatically with little or no effort. It is highly proficient at identifying causal connections between events, sometimes even when there is no empirical basis for the connection.
System 2, on the other hand, is slow thinking and involves deliberate attention to understanding details and the complex web of relationships among various components. Whereas System 1 is inherently deterministic and undoubting, System 2 is probabilistic and highly aware of uncertainty and doubt. Needless to say, these two ways of thinking suit very different contexts.
Tversky summed up the human thinking paradox well when he concluded that “Man is a deterministic device thrown into a probabilistic universe.” Given the limited capacity of the human brain, the time it takes to do System 2 thinking is misaligned with the speed needed to make practical decisions.
For example, if you suddenly find yourself in an unfamiliar place late at night in the presence of a complete stranger with no one else in sight, you will need to make some practical decisions rather quickly. Doing a detailed background check on the stranger is not possible, so you will have to make a fast judgment about whether this unknown person is likely friendly, hostile, or indifferent. You will rely upon your experience and intuition to quickly examine the clues in front of you and decide whether to ignore the person, engage in conversation, or flee as fast as you can.
This type of fast thinking is far more common than slow thinking because the high degree of ambiguity and the rapid pace of events that form the context of our day-to-day lives make System 2 thinking highly impractical. That is why Kahneman contends, “the intuitive System 1 is more influential than your experience tells you, and it is the secret author of many of the choices and judgments you make.”
The Prevalence of Unconscious Biases
Utility theory assumes that people are rational agents because, for the most part, we see ourselves as levelheaded people who make informed and deliberate choices. Unfortunately, this perception is more illusion than fact. Kahneman and Tversky discovered two important tendencies about human thinking: “We can be blind to the obvious, and we are also blind to our blindness.”
In other words, despite the confidence we feel from our System 1 thinking, there’s a great deal that we don’t know, and more importantly, a great deal that we don’t know that we don’t know. Within those blind spots, we are naturally prone to hold unconscious biases.
A common example is confirmation bias, which is the tendency to immediately interpret new information in a way that confirms an individual’s preexisting beliefs or opinions. As a byproduct of System 1 thinking, we build mental narratives to make sense of the continuous flow of information and events that we need to rapidly process in our daily lives. These mental narratives are the soil of confirmation bias and can often cause us to make confident decisions that are completely wrong.
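One way to see how costly this can be is with a toy simulation. The update rule and all the numbers below are invented for illustration; they are not drawn from Kahneman and Tversky’s work. The sketch shows an agent that discounts disconfirming evidence ending up confident in a narrative that most of the evidence contradicts.

```python
# Toy model of confirmation bias: belief in a narrative is updated after
# each piece of evidence, but disconfirming evidence is discounted.
# The update rule and all numbers are invented for illustration.

import random

random.seed(7)

def update(belief, supports, discount):
    """Nudge belief toward the evidence; shrink the step when it disconfirms."""
    step = 0.1 if supports else 0.1 * discount
    target = 1.0 if supports else 0.0
    return belief + step * (target - belief)

for discount in (1.0, 0.2):  # 1.0 = unbiased updating, 0.2 = heavy discounting
    belief = 0.5  # starting confidence that the narrative is true
    for _ in range(2000):
        supports = random.random() < 0.3  # 70% of the evidence disconfirms
        belief = update(belief, supports, discount)
    print(f"discount {discount}: final confidence = {belief:.2f}")
# Unbiased updating settles near 0.30; the biased agent hovers around 0.68,
# confidently wrong despite mostly contrary evidence.
```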
Even the most intelligent among us are not immune from this tendency. Medical doctors, like all other professional experts, are prone to construct mental narratives based upon their individual and shared experiences.
However, these narratives, while useful most of the time, can often get in the way of accurate diagnoses. Lewis cites research by the Oregon Research Institute that found that an algorithm was more effective at diagnosing cancer than a group of doctors and outperformed even the single best doctor.
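The models in that line of research were, at bottom, simple weighted checklists: score a handful of cues, sum them, and compare the total to a threshold, applying exactly the same rule to every case. Here is a minimal sketch of the idea; the cues, weights, and threshold are hypothetical, not the institute’s actual model.

```python
# A minimal sketch of a simple linear diagnostic model. Unlike a human
# expert, it weighs the same cues the same way on every case, with no
# narrative, fatigue, or anchoring. Cues, weights, and threshold are
# hypothetical, for illustration only.

CUE_WEIGHTS = {
    "lesion_size": 0.5,        # each cue pre-normalized to the 0-1 range
    "irregular_border": 0.3,
    "patient_age": 0.2,
}
THRESHOLD = 0.75  # hypothetical decision cutoff

def malignancy_score(case):
    """Weighted sum of cues; higher means more suspicious."""
    return sum(w * case.get(cue, 0.0) for cue, w in CUE_WEIGHTS.items())

case = {"lesion_size": 0.8, "irregular_border": 1.0, "patient_age": 0.6}
score = malignancy_score(case)
print(f"score = {score:.2f}")  # 0.5*0.8 + 0.3*1.0 + 0.2*0.6 = 0.82
print("flag for biopsy" if score > THRESHOLD else "routine follow-up")
```

Part of what makes even a model this crude hard to beat is its consistency: it never deviates from its own policy, while human judges routinely do.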
The Great Promise of Artificial Intelligence
One of the reasons that algorithms outperform experts is that algorithms don’t share the limitations of the human brain. Thanks to Moore’s Law, computers have an ever-expanding capacity to store and process information, which means that one of the truly unprecedented benefits of burgeoning artificial intelligence (AI) systems is that we may soon have the ability to do System 2 thinking at System 1 speeds.
In addition to speed, another factor that distinguishes AI is that its underlying structure is networked. Because the aggregation mechanisms in the algorithms of AI networks will have instant access to the full diversity of data representing the independent thinking and local knowledge of many different perspectives, these machine learning systems will meet the criteria identified by James Surowiecki (discussed in detail in an earlier article in this series) for harvesting collective intelligence.
This is important because one of the attributes of collective intelligence is its ability to integrate diverse and even opposing perspectives into workable holistic solutions that move beyond the limits of human biases.
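A toy illustration of why aggregation helps, with all numbers invented: when many estimates of the same quantity are independent and diverse, their individual errors partly cancel in the aggregate.

```python
# Toy illustration of collective intelligence via aggregation: many
# independent, noisy estimates of the same quantity, averaged together.
# All numbers are invented for illustration.

import random

random.seed(42)
TRUE_VALUE = 100.0
N = 1000  # independent "perspectives"

# Each perspective sees the truth through its own independent noise.
estimates = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]

mean_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / N
aggregate = sum(estimates) / N

print(f"average individual error: {mean_individual_error:5.1f}")  # roughly 16
print(f"aggregate estimate:       {aggregate:5.1f}")              # close to 100
print(f"aggregate error:          {abs(aggregate - TRUE_VALUE):5.2f}")
# Independence and diversity do the work here: if the errors were shared
# (correlated), averaging would not cancel them.
```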
Amir Husain, the author of The Sentient Machine: The Coming Age of Artificial Intelligence, makes the point that AI is not just another technology; it is a new form of intelligence. With its vast capacity to store information and its ability to rapidly process and retrieve information at the speed of Google searches, AI doesn’t need to engage in the heuristic shortcuts that are prevalent in System 1 thinking.
Without the limitations of the human brain, AI is far more capable of thinking probabilistically and holistically, and is thus capable of weighing the relative merits of multiple perspectives in a matter of seconds. In other words, the great promise of artificial intelligence is that it could effectively put an end to the flawed human biases that often plague decision-making in our social organizations.
The Evolutionary Leap
For centuries, hierarchical structures have been the near universal template for how we have designed our social organizations. Because top-down hierarchies naturally leverage the individual intelligence of the elite few, they are susceptible to – and even amplify – human bias.
The foundational theory of hierarchical structures is that, by giving the supposedly smartest people who rise to the top of these organizations the authority to command and control the work of others, the organizations will be smarter than they otherwise would be.
Unfortunately, smart people are not immune from human biases, and when they make decisions based on narratives that are out of touch with what’s actually happening, the consequences can be drastic.
The blindness of the financial experts who gave us the Great Recession and the failure of traditional media companies to grasp the significance of the digital revolution are two examples of how narratives that were once useful guides can suddenly become pathways to disaster.
The evolutionary leap in the fundamental fabric of human social organization is the essential element of Digital Transformation. As the context for how humans think and act together rapidly shifts from top-down hierarchies to peer-to-peer networks, and as our social leaders become more proficient at building and leading networks, we will no longer be blind to the obvious. More importantly, we will no longer be blind to our blindness, because the emerging human-machine symbiosis, with its newfound capacity to leverage the extraordinary breadth and incredible speed of collective intelligence, will go a long way toward moving us beyond the hazards of human bias.