Naturally Stupid Versus Artificially Intelligent

Doug Freud

In a 1970 Life magazine article, Marvin Minsky, cofounder of the MIT Artificial Intelligence Lab, predicted, “In three to eight years we will have a machine with the general intelligence of an average human being.”

Forty-seven years later, I still can’t say to my Amazon Echo, “Alexa, damage report!” and learn all the things that my children have screwed up since the last time I asked. While we’ve made significant progress, most artificial intelligence (AI) systems still struggle with tasks that your average four-year-old has mastered (like basic grammar). How could Minsky, who had built some of the earliest neural network learning machines, be so wrong?

Historians will point to significant limits in computing power, the challenge of combinatorial explosion, and the difficulty computer science has had developing new approaches to knowledge representation. All of these are reasonable explanations for why, decades later, we’re only beginning to enter the age of naïve artificial intelligence and are still far from a machine with the average intelligence of a human being. Explaining how Minsky and others could be so wrong requires the red-headed stepchild of the sciences: psychology.

Importing social and cognitive psychology into AI

Despite my lineage (my last name is Freud, after all), Sigmund Freud’s theories have little relevance here and are no longer the prevailing paradigm within the psychological community. Social and cognitive psychology is where the most interesting academic research is taking place, and for AI to advance, insights from these disciplines will be critical.

Daniel Kahneman and Amos Tversky are perhaps the most important psychologists of the last 150 years, and their work spawned a revolution that led to the widespread adoption of behavioral economics. I would summarize their findings by stating: humans are flawed information processors, and we use a variety of heuristics that lead to significant errors in judgment and decision-making. Even the smartest among us, like Minsky, are subject to these errors.

You just can’t fix stupid…and here’s why

In his book Thinking, Fast and Slow, Daniel Kahneman summarizes a lifetime of research and provides a simplified framework for how the brain works. He proposes that there are two systems:

  • System 1 is automatic and processes information quickly, so we make judgments and decisions at a real-time pace. We decide intuitively, rapidly processing the data we encounter, and we can always render a judgment even with incomplete information or significant uncertainty. When System 1 classifies something as abnormal or surprising, System 2 activates.
  • System 2 requires effort and attention, and only then do we use algorithms and processes to slowly embark on a knowledge discovery process (which, when done by humans as smart as Minsky, can lead to first-principles scientific discovery). Before we can activate System 2, we’re subject to the errors produced by System 1’s machinery, and the bias introduced by its heuristics is why we’re all naturally stupid.

Heuristics and biases

In 1974, Kahneman and Tversky published an article in Science (volume 185) entitled “Judgment under Uncertainty: Heuristics and Biases,” in which they describe how people make decisions based on beliefs about the likelihood of uncertain events (for example, an election outcome or the guilt of a defendant). We can think of a heuristic as “a judgmental shortcut that gets us where we typically need to go quickly, but at the expense of introducing bias or error.” The article describes three such heuristics:

  • Representativeness is “the degree to which an event is similar in essential characteristics to its parent population and reflects the salient features of the process by which it is generated.” When people rely on representativeness to make judgments, they’re likely to judge incorrectly, typically because they neglect base rates (see the sketch after this list). The simple fact that something is more representative doesn’t actually make it more likely.
  • Availability relies on the immediate examples that come to mind when evaluating a specific topic, concept, method, or decision. This heuristic operates on the notion that if “it” can be recalled, it must be more important than alternative explanations. Availability causes us to rely too heavily on recent or easily recalled information.
  • Anchoring and adjustment describes the inclination to rely too heavily on the first piece of information offered (the “anchor”) when making decisions. Once an anchor is set, other information is interpreted relative to it, and judgments remain biased toward the anchor.
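
To make the representativeness error concrete, here is a minimal Python sketch of base-rate neglect, the classic failure behind this heuristic. The probabilities are hypothetical, loosely modeled on Kahneman and Tversky’s engineer/lawyer problem: a personality description that “sounds like” an engineer is judged by resemblance alone, while Bayes’ rule also weighs how rare engineers are in the pool.

    # Base-rate neglect: all probabilities below are hypothetical.

    def posterior(prior, p_match_if_eng, p_match_if_law):
        """Bayes' rule: P(engineer | description matches the stereotype)."""
        p_match = p_match_if_eng * prior + p_match_if_law * (1 - prior)
        return p_match_if_eng * prior / p_match

    prior_engineer = 0.30  # base rate: 30 engineers in a pool of 100 people
    p_match_eng = 0.90     # the stereotypical description fits most engineers
    p_match_law = 0.30     # ...but it also fits a fair number of lawyers

    # System 1 judges by resemblance alone: "90% of engineers fit the sketch."
    print(f"By resemblance alone: {p_match_eng:.0%}")

    # System 2 applies Bayes' rule, and the base rate pulls the answer down.
    print(f"With the base rate:   {posterior(prior_engineer, p_match_eng, p_match_law):.0%}")

Resemblance alone suggests 90 percent, while the base rate pulls the true probability down to about 56 percent; System 1 substitutes similarity for likelihood.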

System 1 also uses other heuristics that lead to errors in judgment. With the substitution heuristic, for example, System 1 assigns subjective probabilities by answering a different, simpler question than the one asked, which of course introduces bias and error in the interest of speed and convenience. It is beyond the scope of this blog to go through all of System 1’s heuristics, which leave us naturally stupid despite the best efforts of System 2.

Exceptions to the rule

One should acknowledge that there are significant individual differences in knowledge, skills, and abilities. There are, for example, chess grandmasters who can play 20 games simultaneously and, through their intelligence and time on task (practice), turn what for most humans is a System 2 process into a System 1 task. Even poker, which is significantly more random, has enough regularity that a clever human can internalize a set of automatic rules that will enable them to succeed over time.

In many other disciplines, however, experts can’t predict any better than chance. Think of the 2016 U.S. presidential election, where almost every pundit predicted a Hillary Clinton victory. Even data scientist Nate Silver, with his incredibly sophisticated approach, was unable to predict the outcome. His System 2 capabilities are substantial, but errors in System 1 judgments produce mistakes that surface even in System 2 problem solving. Human predictions fail when there aren’t enough feedback loops, when the world is so chaotic that nothing is predictable, and when the correlations between features and outcomes are weak (the sketch below illustrates this last failure mode).
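
That last failure mode can be made concrete with a toy simulation (my own illustration, not drawn from Kahneman’s work): when a feature is only weakly correlated with the outcome, even the best possible decision rule barely beats a coin flip.

    # Weak feature/outcome correlation caps prediction accuracy.
    # All numbers are illustrative; "signal" controls the correlation.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    signal = 0.1                     # weak link between feature and outcome

    outcome = rng.integers(0, 2, size=n)                       # e.g., win/lose
    feature = signal * (2 * outcome - 1) + rng.normal(size=n)  # mostly noise

    # With symmetric noise, the best possible rule is a threshold at zero.
    prediction = (feature > 0).astype(int)
    accuracy = (prediction == outcome).mean()

    print(f"correlation ~ {np.corrcoef(feature, outcome)[0, 1]:.2f}")
    print(f"accuracy    ~ {accuracy:.2%} (chance is 50%)")

With a correlation of roughly 0.1, accuracy lands near 54 percent; no amount of System 2 sophistication can squeeze a reliable forecast out of that little signal.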

Artificially intelligent

Since our internal cognitive machinery is prone to error and our ability to change it is limited, the next major leap in knowledge discovery will come via automation in machine learning and other AI technologies. In my next blog, we’ll explore the issues associated with the adoption of AI into our work and professional lives.

About Doug Freud

Doug Freud is a global Vice President of Data Science for the SAP Customer Innovation and Engagement Platform team. His academic background is in industrial-organizational psychology, and he has worked in both GTM and professional services roles. He is a proven leader with the ability to manage cross-functional teams that implement innovative solutions. His passion is using data and machine learning to change business processes and create new systems of innovation.