Caution: Your Culture May Eat Your AI Strategy

Imran Khan

Sometime in the near future, a customer will call the help desk for Uber’s self-driving car service and say: “I’ve been waiting for my ride for the last 15 minutes. It is supposed to be on time. I scheduled it for 10 a.m. and now it’s 10:30!… Where is it?”

If you understood the joke, congratulations! You are a person from a “polychronic” culture (more on this later) for whom time is a flexible commodity. Some cultures call it Stretchable Time. What’s so funny about this scenario is that the autonomous car, being precisely aware of time, arrived promptly to pick up the customer; but when the customer showed up 15 minutes late (yet still considered on time in the local culture), the car had already moved on to its next fare.

To handle such situations, the autonomous car must understand the concept of Stretchable Time and either arrive fashionably late, as the local culture does, or wait much longer to allow for it.

How people treat time and scheduling is one of the cultural dimensions covered in Erin Meyer’s extensive cultural study and book, The Culture Map. Her study is about working in a globalized world: cross-border meetings, interactions, negotiations, projects, distributed teams, and virtual work. The joke above got me thinking about the Culture Map and the pertinent questions it raises about artificial intelligence in the Future of Work.

Concept of time: From structured to flexible

That different cultures treat time differently is one of the most common observations made by anyone traveling abroad, whether for work or leisure. You can map cultures (or countries) on a scale from Structured (monochronic) to Flexible (polychronic). On one extreme are the precise Germans and Swiss; on the other end are the flexible Africans, Middle Easterners, and Indians. Americans, some Europeans, and Latin Americans tend to fall somewhere in between.

[Chart 1: Scheduling, from structured (monochronic) to flexible (polychronic)]

A German executive will arrive right on time for a meeting, while the Kenyan executive is likely to be 20 minutes late. A Swiss project manager pursues one thing at a time, while a Chinese project manager juggles multiple balls at the same time. A Japanese shop assistant focuses solely on the customer they’re talking with, while an Indian event planner deals with changes in plans quite often, almost as second nature.

Some of the implications of this research for AI developers include:

  • An autonomous car needs to be aware of these idiosyncrasies in its location. Vehicles may need a regional configuration in their date-time settings (a sketch of what that could look like follows this list). To complicate things further, not everyone in a given location behaves the same way.
  • Mobile assistants could initiate calls based on the time scheduled on a person’s calendar. While this would be fine in monochronic cultures, it would not be prudent in polychronic ones.
  • Scheduling and dispatching service technicians is touted as one of the low-hanging fruits for the AI community. This logic needs to be rethought or reprogrammed depending on where it’s used. A global company that sells and repairs televisions, for example, couldn’t use the same AI-based software logic in every country.
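To make the first point concrete, here is a minimal sketch of what such a regional scheduling configuration might look like. The locale keys, grace periods, and reconfirm flag are all hypothetical placeholders for illustration, not values derived from Meyer’s research:

```python
from dataclasses import dataclass

# Hypothetical per-locale scheduling profiles. The grace periods below are
# illustrative guesses, not calibrated against Meyer's country data.
@dataclass
class SchedulingProfile:
    grace_minutes: int               # how long a vehicle waits past the booked time
    reconfirm_before_dispatch: bool  # ping the rider before sending the car

PROFILES = {
    "de-DE": SchedulingProfile(grace_minutes=5,  reconfirm_before_dispatch=False),
    "en-US": SchedulingProfile(grace_minutes=10, reconfirm_before_dispatch=False),
    "hi-IN": SchedulingProfile(grace_minutes=25, reconfirm_before_dispatch=True),
    "sw-KE": SchedulingProfile(grace_minutes=30, reconfirm_before_dispatch=True),
}

DEFAULT = SchedulingProfile(grace_minutes=10, reconfirm_before_dispatch=False)

def wait_budget(locale: str) -> int:
    """Return how many minutes a dispatched vehicle should wait for its rider."""
    return PROFILES.get(locale, DEFAULT).grace_minutes
```

The same lookup could drive the technician-dispatch logic in the last point: instead of one global tolerance, the scheduler reads its slack from the locale profile.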

Trust: From task-based to relationship-based

Let’s take another of the eight dimensions of the Culture Map: Trusting, or how people in different cultures build trust. This can be mapped on a scale from task-based to relationship-based cultures. On one extreme are the heavily transaction-based countries, led by the United States, Canada, and the Netherlands; on the other end, China, Saudi Arabia, and India are very relationship-based cultures. The United Kingdom and France tend to fall somewhere in between.

[Chart 2: Trusting, from task-based to relationship-based]

Some countries are very legal-system driven and tend to focus on the transactional nature of any interaction; negotiations and business deals move faster there. In other countries, personal relationships matter much more than the transaction. There, having vodka or chai or tea or a meal with your counterpart is important to building a relationship, which leads to trust, which leads to the business outcome. In such cases, business deals are slow to cook but seem to yield better long-term results.

Here are some implications of this research for AI developers:

  • When robots are co-workers, how will people trust them if they cannot have tea with them?
  • Even more difficult: robots in a position of authority, like a boss, a human resources executive, or even a customer. Depending on cultural background, this might be manageable for transactional jobs (like serving in restaurants, though they are unlikely to get tipped). But getting your annual performance evaluation from a robot will be a dampener, no matter how intelligent the AI may be.
  • There is a reason not many large B2B sales happen on websites: the importance of trust based on relationships. It works only if you build relationships in the real world before transacting in the online world (a sketch of how a sales assistant might adapt to this dimension follows this list).
  • Did you know that many people name their robots? Early on, AI makers assumed that customers would use labels like “living room bot” to tell one device from another. But in real life, people name their robotic vacuums and garden tools like a pet or a butler (e.g., “Mr. Jeeves”). Now companies even advertise the fact that you can name your robot. Naming lets you build a trusted relationship with your robot, so you don’t feel intruded upon in your own home.
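To make the B2B point concrete, a conversational sales assistant might read its opening behavior from a trust-style profile before getting down to business. This is a speculative sketch; the locales, weights, and turn labels are illustrative assumptions, not Meyer’s published country mappings:

```python
from dataclasses import dataclass

# Hypothetical trust-style profiles; the positions on the task-based vs.
# relationship-based scale are illustrative assumptions only.
@dataclass
class TrustProfile:
    relationship_weight: float  # 0.0 = purely task-based, 1.0 = purely relationship-based

TRUST_PROFILES = {
    "en-US": TrustProfile(relationship_weight=0.2),
    "nl-NL": TrustProfile(relationship_weight=0.2),
    "zh-CN": TrustProfile(relationship_weight=0.9),
    "ar-SA": TrustProfile(relationship_weight=0.9),
}

def opening_turns(locale: str) -> list[str]:
    """Decide how much rapport-building precedes the transaction."""
    profile = TRUST_PROFILES.get(locale, TrustProfile(relationship_weight=0.5))
    if profile.relationship_weight >= 0.6:
        # Relationship-based cultures: invest in rapport before business
        return ["greeting", "small_talk", "personal_check_in", "business"]
    # Task-based cultures: get to the point
    return ["greeting", "business"]
```

The design choice here is simply that the transaction is gated behind rapport-building turns in relationship-based locales, rather than treating every market as task-based by default.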

Culture pitfalls ahead for artificial intelligence

The entire Culture Map research is a study of how we act in the globalized world. It may also point to potential pitfalls for AI and how we will react to widespread AI usage in the future. It’s safe to say that cultural differences will determine what someone views as an acceptable AI outcome or behavior.

For any AI researcher or user, knowing and respecting these differences is crucial in the implementation and adoption of AI broadly in the global environment.

According to Gartner, 2018 was the year AI became truly integrated into our daily lives. The AI gold rush is driving the creation of new categories of jobs. Gartner also predicts that by 2020, AI will create more jobs than it eliminates. Roles that can be replaced with AI will be replaced, such as cashiers at cashier-less stores. In their place, new roles are being introduced. A variety of industries, including retail, hospitality, and warehouses, will need AI scientists, AI trainers, and navigation specialists for robots. The entire premise of AI is to make our world and our lives better. We cannot allow it to carry our existing prejudices into our future or to steamroll us into a pan-cultural world.

My previous article focused on the notion of ethics in AI. Culture and ethics together shape our values and behaviors, and these are an important part of the discussion about the role AI will play in our lives. What still needs to be discussed is whose ethics, and ultimately whose values, are being embedded in AI. Is it transactional-West versus mythical-East, or American versus everyone else, or are we talking about finer cultural nuances? These are important distinctions that will determine how we co-exist with AI in the future: from strategy to evaluation to adoption. A good question to ask while defining any AI strategy is: “How different is the AI’s behavior from expected cultural/behavioral norms?”

Ultimately, we need to continue the debate over AI and cultures, and enable multiple cultural versions of AI to exist.

Image source: Imran Khan, based on the Culture Map work of Prof. Erin Meyer.

For more about the risks of AI, see “Biased Artificial Intelligence: Can You Fight It?”

A version of this post was earlier published on LinkedIn Pulse.


About Imran Khan

Imran Khan has been driving adoption of innovative technologies by the SAP ecosystem of customers, partners, developers, and ISVs for 19 years. He is a firm believer in using technology to transform people, organizations, and governments. He writes about Digital Transformation, Leadership, and Technologies, and their impact on our lives.